Ethical Considerations for AI Use in Healthcare Research

Article information

Healthc Inform Res. 2024;30(3):286-289
Publication date (electronic): 2024 July 31
doi: https://doi.org/10.4258/hir.2024.30.3.286
1Iranian Research Center for HIV/AIDS, Iranian Institute for Reduction of High Risk Behaviors, Tehran University of Medical Sciences, Tehran, Iran
2Department of Health Information Technology, Khalkhal University of Medical Sciences, Khalkhal, Iran
Corresponding Author: Esmaeil Mehraeen, Department of Health Information Technology, Khalkhal University of Medical Sciences, Khalkhal 5681761351, Iran. Tel: +98-45-32426801, E-mail: es.mehraeen@gmail.com (https://orcid.org/0000-0003-4108-2973)
Received 2024 January 23; Revised 2024 June 9; Accepted 2024 July 5.

The utilization of artificial intelligence (AI) in healthcare has seen a significant upsurge [1], with its applications extending to clinical contexts to improve patient services [2]. Additionally, AI’s role in fostering innovation in health and wellness is increasingly evident [3]. While these tools are instrumental in advancing scientific frontiers, it is crucial to address the ethical implications of using AI in research, both to prevent conflicts within the scientific community and to ensure the advancement of knowledge in the respective fields. This paper addresses some of the ethical aspects of employing AI in healthcare research and the feasibility of its use in generating scientific publications.

In recent years, the integration of AI into various fields has accelerated, particularly following the launch of ChatGPT in November 2022 [4,5]. This is notably evident in healthcare and medicine, where the United States healthcare sector is projected to invest up to $150 billion annually in AI by 2026 [6].

Furthermore, these tools have grown in importance across a range of scientific research applications, such as data analysis, literature reviews, and hypothesis development. In data analysis, AI can process and interpret large-scale datasets, revealing trends and insights that conventional techniques might miss. Because of their ability to understand and synthesize information, AI tools are especially helpful in literature reviews, where they can quickly summarize lengthy scientific texts and highlight important findings, trends, and gaps. They are also adept at devising original hypotheses, drawing on their training across diverse datasets to suggest creative research problems and possible answers. For instance, a study by Qureshi et al. [7] showed how effectively these tools automate and improve literature reviews, dramatically shortening the time this important research task requires. In a similar vein, a paper by Zhou et al. [8] investigated how AI can support hypothesis generation by offering innovative, data-driven suggestions that researchers might not have considered. These qualities make such tools potent allies in the advancement of science and innovation.

In the realm of scientific research, the development and use of AI-powered tools are increasing daily [9], prompting ethical considerations. Key questions arise: Is the use of AI in healthcare research permissible, and if so, to what extent and in which areas? Furthermore, when AI contributes to the production of research material, should the AI be acknowledged in the author list, in the references, or not at all?
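To ground these questions in a concrete example, the following minimal Python sketch shows the kind of usage at issue: asking an LLM to summarize an abstract during a literature review. It is a hypothetical illustration only; the client library (OpenAI's Python SDK), the model name, and the prompt are assumptions, not an endorsement of any particular tool or workflow.

```python
# Hypothetical sketch: asking an LLM to summarize an abstract for a
# literature review. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

abstract = """Artificial intelligence (AI) has seen growing use in healthcare,
from clinical decision support to administrative automation..."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    messages=[
        {"role": "system",
         "content": "Summarize this abstract in two sentences and list "
                    "any stated limitations."},
        {"role": "user", "content": abstract},
    ],
)

# The output is machine-generated text: whether and how it may appear in a
# manuscript depends entirely on the publisher's disclosure policy.
print(response.choices[0].message.content)
```

Even a snippet this small raises the questions above: the generated summary is machine-produced text, and the ethical and editorial status of including it in a manuscript is exactly what the policies discussed below attempt to settle.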

To discuss the ethics of AI use in research, it is essential first to reflect on AI ethics more broadly. In 2017, Stephen Hawking raised the alarm about the rapid advancement of AI, proposing the establishment of a distinct regulatory body to oversee AI usage [10]. Recommendations have been made to direct AI’s moral and behavioral frameworks toward positive outcomes, ensuring they align with global norms and regulations. As a result, there is a consensus that the adverse impacts of AI can be regulated and mitigated through universally agreed-upon laws [11].

Springer Nature, the publisher of Nature, permits the use of AI in research papers, stipulating that it be mentioned in the methods or acknowledgments section, or in another appropriate section if the article lacks these. This policy ensures transparency regarding AI usage. Notably, the writing patterns of tools like ChatGPT are generally recognizable to experienced reviewers and editors: the text they generate can appear repetitive, redundant, or uninspired, partly because these tools do not have complete access to article texts and cannot always draw accurate conclusions from the available information. However, Nature explicitly disallows listing AI-powered tools as co-authors. Its rationale is that authorship implies active involvement in, and accountability for, the scientific article’s creation, a responsibility that large language models (LLMs) such as ChatGPT inherently lack [12].

Meanwhile, the American Association for the Advancement of Science, which publishes Science, asserts that original work, which LLMs cannot produce, is fundamental to research integrity. The use of AI-generated material in academic papers is not considered original, as it involves no novel contribution by the researcher, and merely rephrasing AI-generated content could be viewed as plagiarism. Hence, Science completely bans the use of any form of AI in producing a scientific paper, including its text, figures, and tables. In this view, involving machines in formulating research questions and seeking answers is unethical, as these activities are exclusively human endeavors; machines may play a role only in experiment design and result interpretation [5,13,14].

Moreover, recent studies have brought to light significant concerns about racial and background biases in AI systems used in scientific research. LLMs are trained on data that frequently reflects societal prejudices and inequalities, which leads to biased models. For example, Bender et al. [15] showed that AI models, including those employed in scientific research, can reinforce and even magnify these biases, producing skewed findings and conclusions that unfairly disadvantage marginalized groups. This is especially problematic in the social sciences and healthcare, where biased AI can lead to misleading policy recommendations and research findings. To address these biases, the diversity and representativeness of training data must be increased, and strict bias detection and mitigation techniques must be put in place.
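To illustrate what "bias detection" can mean in practice, the sketch below computes one common fairness metric, the demographic parity difference (the gap in positive-prediction rates between subgroups), on invented data. The subgroup labels, predictions, and warning threshold are all assumptions for demonstration; real audits require domain-specific metrics and acceptance criteria.

```python
# Minimal sketch of one bias-detection check: demographic parity difference,
# i.e., the gap in positive-prediction rates across subgroups.
# All data below is invented for illustration.
from collections import defaultdict

predictions = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]  # model outputs (1 = positive)
groups = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]  # subgroup labels

totals, positives = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    positives[group] += pred

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(f"positive rates by group: {rates}")
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative threshold; real audits need domain-specific criteria
    print("Warning: potential bias detected; inspect training data and model.")
```

A check like this is only a starting point: a nonzero gap does not by itself prove unfairness, and a zero gap does not rule it out, which is why the literature stresses layered mitigation rather than a single metric.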

However, by expediting the sharing of information, strengthening peer review, and increasing the accessibility of scientific publications, AI-driven tools in research publishing have the potential to transform the healthcare industry. AI algorithms can analyze large volumes of research data to find gaps and new trends, enabling more impactful and targeted publications [16]. Furthermore, AI can speed up publication and enhance the caliber of published research by automating aspects of the peer review process, including plagiarism detection, verification of statistical correctness, and even assessment of the novelty and significance of research findings. Additionally, through better indexing and summarization, AI-driven tools can improve the discoverability of research articles, making it simpler for researchers and healthcare professionals to find pertinent information quickly [17]. These developments may lead to more effective knowledge transfer, encouraging creativity and improving patient care.
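As a toy example of the text-similarity screening that underlies automated plagiarism detection, the sketch below compares a submitted sentence against a small corpus using TF-IDF vectors and cosine similarity. The text snippets and the flagging threshold are invented; production plagiarism detectors are far more sophisticated, but the core idea of vectorizing text and scoring overlap is the same.

```python
# Sketch of the text-similarity check underlying automated plagiarism
# screening: TF-IDF vectors compared by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "AI can analyze large volumes of research data to find gaps and trends."
corpus = [
    "Large volumes of research data can be analyzed by AI to identify gaps and emerging trends.",
    "Randomized trials remain the gold standard for evaluating clinical interventions.",
]

# Vectorize the submission together with the reference corpus, then score
# the submission (row 0) against every corpus document (rows 1..n).
vectors = TfidfVectorizer().fit_transform([submission] + corpus)
scores = cosine_similarity(vectors[0], vectors[1:])[0]

for doc, score in zip(corpus, scores):
    flag = "FLAG" if score > 0.5 else "ok"  # illustrative threshold
    print(f"[{flag}] similarity={score:.2f}: {doc[:60]}...")
```

Editorial tools built on this principle surface candidate matches for a human editor; the similarity score alone does not establish misconduct.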

In light of the above discussion, debate remains over whether researchers should be permitted to use LLMs. However, one aspect is unequivocally clear: researchers cannot copy the output of these tools and present it as their original work for publication. Depending on each publisher’s policies, there may be situations where using AI-powered tools is acceptable, while in other scenarios it might be strictly prohibited.

Two recent copyright decisions have sealed the case that AI-powered tools cannot be considered authors, on the grounds of US Code, Title 17, which courts have interpreted as requiring human authorship [18]. One relevant case is “Thaler v. Perlmutter,” in which an AI-generated artwork was denied copyright registration because it lacked human authorship [19]. Similarly, in “Naruto v. Slater,” a selfie taken by a monkey was denied copyright protection because the photographer was not human [20].

In conclusion, researchers should meticulously review and adhere to each publisher’s guidelines when referencing LLMs in their work. Some publishers provide specific guidance on citing generative AI content, while others do not. It is important to disclose the use of AI tools in creating work for publication and to provide accurate, properly referenced citations for any content generated or revised by these tools.

Notes

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

1. Mohammadi S, SeyedAlinaghi S, Heydari M, Pashaei Z, Mirzapour P, Karimi A, et al. Artificial intelligence in COVID-19 management: a systematic review. J Comput Sci 2023;19(5):554–68. https://doi.org/10.3844/jcssp.2023.554.568.
2. Yoon SN, Lee D. Artificial intelligence and robots in healthcare: what are the success factors for technology-based service encounters? Int J Healthc Manag 2019;12(3):218–25. https://doi.org/10.1080/20479700.2018.1498220.
3. Lee D, Yoon SN. Application of artificial intelligence-based technologies in the healthcare industry: opportunities and challenges. Int J Environ Res Public Health 2021;18(1):271. https://doi.org/10.3390/ijerph18010271.
4. Euronews. ChatGPT a year on: 3 ways the AI chatbot has completely changed the world in 12 months [Internet] Lyon, France: Euronews; 2023. [cited at 2024 Jul 20]. Available from: https://www.euronews.com/next/2023/11/30/chatgpt-a-year-on-3-ways-the-ai-chatbot-has-completely-changed-the-world-in-12-months.
5. SeyedAlinaghi S, Abbaspour F, Mehraeen E. The challenges of ChatGPT in healthcare scientific writing. Shiraz E-Med J 2024;25(2):e141861. https://doi.org/10.5812/semj-141861.
6. Safavi K, Kalis B. How AI can change the future of health care [Internet] Boston (MA): Harvard Business Review; 2019. [cited at 2024 Jul 20]. Available from: https://hbr.org/webinar/2019/02/how-ai-can-change-the-future-of-health-care.
7. Qureshi R, Shaughnessy D, Gill KA, Robinson KA, Li T, Agai E. Are ChatGPT and large language models “the answer” to bringing us closer to systematic review automation? Syst Rev 2023;12(1):72. https://doi.org/10.1186/s13643-023-02243-z.
8. Zhou Y, Liu H, Srivastava T, Mei H, Tan C. Hypothesis generation with large language models [Internet] Ithaca (NY): arXiv.org; 2024. [cited at 2024 Jul 20]. Available from: https://arxiv.org/abs/2404.04326.
9. Khedkar S. Using AI-powered tools effectively for academic research [Internet] Princeton (NJ): Editage; 2023. [cited at 2024 Jul 20]. Available from: https://www.editage.com/insights/using-ai-powered-tools-effectively-for-academic-research.
10. Sulleyman A. Stephen Hawking warns artificial intelligence ‘may replace humans altogether’ [Internet] London, UK: Independent; 2017. [cited at 2024 Jul 30]. Available from: https://www.independent.co.uk/tech/stephen-hawking-artificial-intelligence-fears-ai-will-replace-humans-virus-life-a8034341.html.
11. Lupton M. Some ethical and legal consequences of the application of artificial intelligence in the field of medicine. Trends Med 2018;18(4):100147. https://doi.org/10.15761/TiM.1000147.
12. Bushwick S, Mukerjee M. ChatGPT explains why AIs like ChatGPT should be regulated [Internet] New York (NY): Scientific American; 2022. [cited at 2024 Jul 20]. Available from: https://www.scientificamerican.com/article/chatgpt-explains-why-ais-like-chatgpt-should-be-regulated1/.
13. Thorp HH. ChatGPT is fun, but not an author. Science 2023;379(6630):313. https://doi.org/10.1126/science.adg7879.
14. Ciaccio EJ. Use of artificial intelligence in scientific paper writing. Inf Med Unlocked 2023;41:101253. https://doi.org/10.1016/j.imu.2023.101253.
15. Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; 2021 Mar 3–10; Virtual Event, Canada. p. 610–623. https://doi.org/10.1145/3442188.3445922.
16. Vollmer S, Mateen BA, Bohner G, Kiraly FJ, Ghani R, Jonsson P, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ 2020;368:l6927. https://doi.org/10.1136/bmj.l6927.
17. Zimba O, Gasparyan AY. Plagiarism detection and prevention: a primer for researchers. Reumatologia 2021;59(3):132–7. https://doi.org/10.5114/reum.2021.105974.
18. Legal Information Institute. US Code: Title 17 [Internet] Ithaca (NY): Legal Information Institute; 1947. [cited at 2024 Jul 20]. Available from: https://www.law.cornell.edu/uscode/text/17.
19. Casetext. Thaler v. Perlmutter [Internet] San Francisco (CA): Casetext; 2023. [cited at 2024 Jul 20]. Available from: https://casetext.com/case/thaler-v-perlmutter.
20. Casetext. NARUTO v. Slater [Internet] San Francisco (CA): Casetext; 2018. [cited at 2024 Jul 20]. Available from: https://casetext.com/case/naruto-v-slater-2.
