Ethical Considerations for AI Use in Healthcare Research
The utilization of artificial intelligence (AI) in healthcare has seen a significant upsurge [1], with its applications extending to clinical contexts to improve patient services [2]. Additionally, AI’s role in fostering innovation in health and wellness is increasingly evident [3]. While these tools are instrumental in advancing scientific frontiers, it is crucial to address the ethical implications of using AI in research to prevent conflicts within the scientific community and ensure the advancement of knowledge in the respective fields. This paper addresses some of the ethical aspects of employing AI in healthcare research and the feasibility of using it to generate scientific publications.
In recent years, the integration of AI into various fields has accelerated, particularly following the launch of ChatGPT in November 2022 [4,5]. This is notably evident in healthcare and medicine, where the United States healthcare sector is projected to invest up to $150 billion annually in AI by 2026 [6].
Furthermore, these resources have grown in importance across a range of scientific research applications, such as data analysis, literature reviews, and hypothesis development. In data analysis, AI can process and interpret large-scale datasets, revealing trends and insights that conventional techniques might miss. In literature reviews, AI tools can quickly summarize lengthy scientific texts and highlight important findings, trends, and gaps, owing to their ability to understand and synthesize information. They can also help devise original hypotheses, drawing on their training across diverse datasets to suggest novel research questions and possible answers. For instance, a study by Qureshi et al. [7] showed how effectively these tools automate and improve literature reviews, dramatically shortening the time needed for this important research task. In a similar vein, a paper by Zhou et al. [8] investigated how AI can support hypothesis generation by offering innovative, data-driven suggestions that researchers might not have considered. These qualities make such tools potent allies in advancing science and innovation.
In the realm of scientific research, the development and use of AI-powered tools are increasing daily [9], prompting ethical considerations. Key questions arise: Is the use of AI in healthcare research permissible, and if so, to what extent and in which areas? Furthermore, when AI contributes to the production of research material, should the AI be acknowledged in the author list, the reference section, or not at all?
To discuss the ethics of the use of AI in research, it is essential first to reflect on AI ethics more broadly. In 2017, Stephen Hawking raised an alarm about the rapid advancement of AI, proposing the establishment of a distinct regulatory body to oversee AI usage [10]. Recommendations have been made to direct AI’s moral and behavioral frameworks toward positive outcomes, ensuring they align with global norms and regulations. As a result, there is a consensus that the adverse impacts of AI can be regulated and mitigated through universally agreed-upon laws [11].
The publisher of Nature permits the use of AI in research papers, stipulating that it be disclosed in the methods or acknowledgments section, or in another appropriate section if the article lacks these. This policy ensures transparency regarding AI usage. It is noted that the writing patterns of tools like ChatGPT are generally recognizable to experienced reviewers and editors; the text they generate may appear repetitive, redundant, or uninspired, partly because these tools do not have complete access to article texts and cannot always draw accurate conclusions from the available information. However, Nature explicitly disallows listing AI-powered tools as co-authors. Their rationale is that authorship implies active involvement in, and responsibility for, the scientific article’s creation, which large language models (LLMs) like ChatGPT inherently lack [12].
Meanwhile, the American Association for the Advancement of Science, which publishes Science, asserts that original work, which LLMs cannot produce, is fundamental to research integrity. The use of AI-generated material in academic papers is not considered original, as it involves no novel contribution by the researcher, and merely rephrasing AI-generated content could be viewed as plagiarism. Hence, they completely ban any form of AI-generated content in the production of a scientific paper, including text, figures, and tables. In their view, it is unethical to involve machines in formulating research questions and seeking answers, as these activities are regarded as exclusively human endeavors; machines may, at most, play a role in experiment design and result interpretation [5,13,14].
Moreover, recent studies have brought to light significant concerns about racial and demographic biases in AI systems used in scientific research. LLMs are trained on data that frequently reflects societal prejudices and inequalities, which introduces bias into the models. For example, a study by Bender et al. [15] showed that AI models, including those employed in scientific research, can reinforce and even magnify these biases, producing skewed findings and conclusions that unfairly disadvantage marginalized groups. This is especially problematic in the social sciences and healthcare domains, where biased AI can lead to misleading policy recommendations and research findings. To address these biases, the diversity and representativeness of training data must be increased, and strict bias detection and mitigation techniques must be put in place.
However, AI-driven research publishing has the potential to transform the healthcare industry by expediting the sharing of information, strengthening peer review, and increasing the accessibility of scientific publications. AI algorithms can analyze large volumes of research data to identify gaps and emerging trends, enabling more impactful and targeted publications [16]. Furthermore, AI can help speed up publication and improve the quality of published research by automating certain aspects of the peer review process, including plagiarism detection, verification of statistical correctness, and even assessment of the novelty and significance of research findings. Additionally, through better indexing and summarization, AI-driven tools can improve the discoverability of research articles, making it simpler for researchers and healthcare professionals to find pertinent information quickly [17]. These developments may result in more effective knowledge transfer, encouraging creativity and improving patient care.
In light of the above discussions, debate remains regarding whether researchers should be permitted to use LLMs. However, one aspect is unequivocally clear: researchers cannot simply reproduce the output of these tools and present it as their original work for publication. Depending on each publisher’s policies, there may be situations where using AI-powered tools is acceptable, while in other scenarios, it might be strictly prohibited.
Two court decisions on copyright have reinforced the position that AI-powered tools cannot be considered authors, on the grounds of US Code, Title 17, which requires that authors be human [18]. One relevant case is “Thaler v. Perlmutter,” in which an artwork created by AI was denied copyright registration because it lacked the human authorship required under copyright law [19]. Similarly, in “Naruto v. Slater,” a selfie taken by a monkey was denied copyright protection because the monkey was not human [20].
In conclusion, researchers should meticulously review and adhere to the publisher’s guidelines when referencing LLMs in their work. Some publishers provide specific guidance for citing generative AI content, while others do not. It is important to disclose the use of AI tools in the creation of work submitted for publication and to provide accurate, properly referenced citations for any content generated or revised by these tools.
Notes
Conflict of Interest
No potential conflict of interest relevant to this article was reported.