Healthc Inform Res > Volume 30(4); 2024 > Article
SeyedAlinaghi, Mirzapour, and Mehraeen: ChatGPT in Healthcare Writing: Advantages and Limitations
New technologies, including the Internet of Things, Chat Generative Pre-trained Transformer (GPT), artificial intelligence (AI), and telecommunication networks, have effectively motivated and facilitated the learning and enhancement of knowledge for everyone. This enables more efficient and beneficial use of digital resources [1,2]. The continuation of this process not only leads to the adoption of methods with excellent and effective characteristics but also drives innovation in the digital world [3]. ChatGPT enables healthcare users to communicate via accessible platforms using text or voice to address their queries [4].

1. Emergence of ChatGPT

The introduction of ChatGPT into the healthcare industry marks a significant evolution and holds great potential for enhancing medical writing. It can serve as an information source for researchers, which is a crucial application of ChatGPT in this field. ChatGPT is built on transformer architectures that allow it to process medical texts and generate appropriate responses, and its training enables it to discern subtle linguistic differences. In general, ChatGPT can be regarded as a significant AI technology capable of processing requests and delivering responses that closely mimic human interaction [5]. ChatGPT is an advanced large language model trained on extensive text datasets specifically for user interactions. While AI-based language models like ChatGPT have demonstrated remarkable capabilities, their effectiveness in real-world applications, particularly in complex fields like medicine that require high-level cognitive skills, remains to be fully assessed [6]. However, the available evidence suggests that healthcare professionals are optimistic about ChatGPT’s vast potential to enhance clinical decision-making and optimize clinical workflows [7-9].

2. Advantages

Recent studies have demonstrated that ChatGPT can deliver appropriate and relevant responses to a broad array of questions, surpassing previous models in both accuracy and efficiency [10,11]. Furthermore, ChatGPT has proven capable of producing coherent and well-structured text, which is beneficial for tasks such as content creation and summarization [12]. In healthcare writing, ChatGPT offers significant opportunities, including improved data gathering and analysis, enhanced communication and accessibility, and support for researchers across various medical research domains [13]. Additionally, ChatGPT’s advantages include natural language generation and scalability, which facilitate meaningful conversations and rapid processing of requests, allowing a high volume of conversations to be handled simultaneously [5]. Overall, ChatGPT offers benefits such as efficiency, effectiveness, compatibility, high accuracy, cost-effectiveness, content production, and text translation [14].

3. Limitations

Along with its advantages, ChatGPT also has disadvantages. Key challenges include biases stemming from limitations in training data, ethical concerns, technical constraints, and shortcomings in data collection and analysis [13]. Limited emotional intelligence, often conceptualized in terms of the emotional quotient (EQ), is another shortcoming of ChatGPT. EQ refers to the ability to understand, use, and manage one’s emotions positively to relieve stress, communicate effectively, empathize with others, overcome challenges, and defuse conflict [15]. Although emotional cues should be taken into account in conversations and other forms of communication, ChatGPT may respond to such cues only in a limited way, which can make interactions feel flat. This lack of genuine empathy can frustrate users and create an unpleasant experience that deters repeated use of the system [5]. Bias in responses is another notable drawback. It is crucial to acknowledge that the dataset used for training contains inherent biases and inaccuracies, which affect the system’s performance. Because ChatGPT’s responses are based solely on its training data, it may struggle to address more specialized topics accurately, leaving users uncertain about the correctness of its responses [16].

4. Existing Challenges

In today’s world, manuscript writing is a specialized task that is challenging for many people. ChatGPT has been introduced into this field; however, many healthcare professionals consider it neither effective nor useful. This perception has caused concern among doctors, who fear that their students might rely too heavily on this technology, thus putting minimal effort into their manuscript writing [17]. Despite these concerns, it is important to recognize that science is continually advancing. AI, a subject of much debate, has provided tools that support scientific progress and integrate with various aspects of human life, potentially having a significant impact. One notable impact is in the realm of academic publishing, where healthcare professionals can now submit articles that they have finalized after receiving initial editing from AI [18].

5. Ethical Considerations

Pretrained algorithm-based text generation in healthcare writing can result in biased content, misleading or inaccurate information, the omission of less commonly cited results, or data that are outdated, having been gathered before 2021. It is important to recognize that using ChatGPT involves ethical considerations and restrictions, including issues related to credit, plagiarism, and copyright infringement [19]. ChatGPT in healthcare offers a promising platform for standardizing access to medical information, enhancing patient engagement, and ultimately improving health outcomes. Nevertheless, ethical and legal considerations must be carefully evaluated to ensure its appropriate use prior to implementation [20].

6. Conclusion

ChatGPT can assist in healthcare writing, editing, formatting, creating summaries, and preparing medical texts or other content that requires sorting and organizing data. However, it is essential to consider the limitations and ethical concerns associated with this technology in healthcare writing. Despite significant advances in AI, ChatGPT can sometimes misinterpret requests, and such systems require further refinement and training to interpret user intent accurately. Because the medical community often needs information on highly specific topics, manuscript writing remains challenging: although ChatGPT can provide accurate and comprehensive responses, it may not fully understand every scenario, potentially leading to inappropriate answers [5]. Therefore, healthcare researchers should focus on natural language processing research and on the development and evaluation of expert language models tailored for healthcare writing. Further studies are also recommended to explore the future role of AI and ChatGPT in healthcare writing, with a focus on the accuracy of the information produced and the ethical implications.

Notes

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

1. Mohammadi S, SeyedAlinaghi S, Heydari M, Pashaei Z, Mirzapour P, Karimi A, et al. Artificial intelligence in COVID-19 management: a systematic review. J Comput Sci 2023;19(5):554-68. https://doi.org/10.3844/jcssp.2023.554.568

2. Afsahi AM, Alinaghi SA, Molla A, Mirzapour P, Jahani S, Razi A, et al. Chatbots utility in healthcare industry: an umbrella review. Front Health Inform 2024;13:200.

3. Manning LD, Jones JE, Buehlman V, Deal JM, Showalter LJ. A center-based model for self-directed learning in sustainability: engaging campus and community as a living lab. In: Hughes P, Yarbrough J, editors. Self-directed learning and the academic evolution from pedagogy to andragogy. Hershey (PA): IGI Global; 2022. p. 97-118. https://doi.org/10.4018/978-1-7998-7661-8.ch006

4. SeyedAlinaghi S, Abbaspour F, Mehraeen E. The challenges of ChatGPT in healthcare scientific writing. Shiraz E Med J 2024;25(2):e141861. https://doi.org/10.5812/semj-141861

5. Kalla D, Smith N, Samaah F, Kuraku S. Study and analysis of chat GPT and its impact on different fields of study. Int J Innov Sci Res Technol 2023;8(3):827-33.

6. Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J Med Syst 2023;47(1):33. https://doi.org/10.1007/s10916-023-01925-4

7. Hallsworth JE, Udaondo Z, Pedros-Alio C, Hofer J, Benison KC, Lloyd KG, et al. Scientific novelty beyond the experiment. Microb Biotechnol 2023;16(6):1131-73. https://doi.org/10.1111/1751-7915.14222

8. Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology 2023;307(2):e230171. https://doi.org/10.1148/radiol.230171

9. Stokel-Walker C, Van Noorden R. What ChatGPT and generative AI mean for science. Nature 2023;614(7947):214-6. https://doi.org/10.1038/d41586-023-00340-6

10. Johnson D, Goodman R, Patrinely J, Stone C, Zimmerman E, Donald R, et al. Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the Chat-GPT model [Internet]. Durham (NC): Research Square; 2023 [cited at 2024 Oct 1]. Available from: https://doi.org/10.21203/rs.3.rs-2566942/v1

11. Samaan JS, Yeo YH, Rajeev N, Hawley L, Abel S, Ng WH, et al. Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery. Obes Surg 2023;33(6):1790-6. https://doi.org/10.1007/s11695-023-06603-5

12. Ren C, Lee SJ, Hu C. Assessing the efficacy of ChatGPT in addressing Chinese financial conundrums: an in-depth comparative analysis of human and AI-generated responses. Comput Hum Behav: Artif Hum 2023;1(2):100007. https://doi.org/10.1016/j.chbah.2023.100007

13. Alsadhan A, Al-Anezi F, Almohanna A, Alnaim N, Alzahrani H, Shinawi R, et al. The opportunities and challenges of adopting ChatGPT in medical research. Front Med (Lausanne) 2023;10:1259640. https://doi.org/10.3389/fmed.2023.1259640

14. Ray PP. ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber Phys Syst 2023;3:121-54. https://doi.org/10.1016/j.iotcps.2023.04.003

15. Raj P. A literature review on emotional intelligence of large language models (LLMs). Int J Adv Res Comput Sci 2024;15(4):30-4. https://doi.org/10.26483/ijarcs.v15i4.7111

16. Else H. Abstracts written by ChatGPT fool scientists. Nature 2023;613(7944):423. https://doi.org/10.1038/d41586-023-00056-7

17. McGee RW. Annie Chan: three short stories written with Chat GPT [Internet]. Rochester (NY): SSRN – Elsevier; 2023 [cited at 2024 Oct 1]. Available from: https://ssrn.com/abstract=4359403

18. Xu Y, Liu X, Cao X, Huang C, Liu E, Qian S, et al. Artificial intelligence: a powerful paradigm for scientific research. Innovation (Camb) 2021;2(4):100179. https://doi.org/10.1016/j.xinn.2021.100179

19. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell 2023;6:1169595. https://doi.org/10.3389/frai.2023.1169595

20. Awal SS, Awal SS. ChatGPT and the healthcare industry: a comprehensive analysis of its impact on medical writing. J Public Health 2023 Dec 15 [Epub]. https://doi.org/10.1007/s10389-023-02170-2


Copyright © 2024 by Korean Society of Medical Informatics.

Developed in M2community

Close layer
prev next