Healthc Inform Res > Volume 31(2); 2025 > Article
Ju, Park, Jeong, Lee, Kim, Seong, and Lee: Generative AI-Based Nursing Diagnosis and Documentation Recommendation Using Virtual Patient Electronic Nursing Record Data

Abstract

Objectives

Nursing documentation consumes approximately 30% of nurses’ professional time, making improvements in efficiency essential for patient safety and workflow optimization. This study compares traditional nursing documentation methods with a generative artificial intelligence (AI)-based system, evaluating its effectiveness in reducing documentation time and ensuring the accuracy of AI-suggested entries. Furthermore, the study aims to assess the system’s impact on overall documentation efficiency and quality.

Methods

Forty nurses with a minimum of 6 months of clinical experience participated. In the pre-assessment phase, they documented a nursing scenario using traditional electronic nursing records (ENRs). In the post-assessment phase, they used the SmartENR AI version, developed with OpenAI’s ChatGPT 4.0 API and customized for domestic nursing standards; it supports NANDA, SOAPIE, Focus DAR, and narrative formats. Documentation was evaluated on a 5-point scale for accuracy, comprehensiveness, usability, ease of use, and fluency.

Results

Participants averaged 64 months of clinical experience. Traditional documentation required 473.9 ± 319.9 seconds, whereas AI-assisted documentation took 183.0 ± 99.7 seconds, reducing documentation time to roughly 40% of the traditional figure. AI-generated documentation received scores of 3.63 ± 1.29 for accuracy, 4.13 ± 1.07 for comprehensiveness, 3.50 ± 0.93 for usability, 4.80 ± 0.61 for ease of use, and 4.48 ± 0.91 for fluency.

Conclusions

Generative AI substantially reduces the nursing documentation workload and increases efficiency. Nevertheless, further refinement of AI models is necessary to improve accuracy and ensure seamless integration into clinical practice with minimal manual modifications. This study underscores AI’s potential to improve nursing documentation efficiency and accuracy in future clinical settings.

I. Introduction

Generative artificial intelligence (AI)-powered large language models (LLMs) have considerable potential to advance nursing by facilitating learning, enhancing digital literacy, and promoting critical thinking [1,2]. Integrating AI-assisted chatbot technologies into problem-based learning environments can offer nurses valuable practical experience [3]. Nursing practice generally adheres to a structured, patient-centered process that includes five sequential stages: assessment, diagnosis, planning, implementation, and evaluation [4]. Adherence to this process is essential for ensuring safe and cost-effective nursing care. Among these stages, nursing diagnosis involves assessing patients’ health conditions and identifying nursing-related problems.
In contrast to medical diagnoses, nursing diagnoses enable nurses to evaluate patient conditions within their professional scope, identify nursing problems, and develop appropriate care plans [5]. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) mandates the active use of nursing diagnoses as part of accreditation requirements for healthcare institutions in the United States [6]. Furthermore, nursing diagnosis is a globally adopted approach in nursing practice, and the Korean Ministry of Health and Welfare has incorporated it as a key component of institutional accreditation standards [7]. However, formulating nursing diagnoses requires extensive expertise and experience, which makes the process particularly challenging for novice and inactive nurses. Consequently, this process is frequently time-consuming and may suffer from a lack of quality assurance. Moreover, accurate nursing diagnoses demand thorough data collection and analysis; however, workforce shortages often prevent nurses from gathering the required patient information effectively [8].
Various studies report that nursing documentation consumes between 25% and 41% of nurses’ total working hours [7,9,10]. The shift to electronic health records has further increased the documentation workload, and although documentation efficiency typically improves with experience, high attrition rates have been observed during the adaptation period [11].
LLMs are AI language models built on extensive neural networks, often containing billions of parameters [12]. They learn from vast amounts of unlabeled text using self-supervised learning and have demonstrated remarkable performance across multiple tasks, thereby driving innovation in natural language processing research [13]. This advancement underscores the need for general-purpose models capable of addressing a wide range of tasks, rather than relying solely on specialized supervised models [14]. In November 2022, OpenAI released Chat Generative Pretrained Transformer 3.5 (ChatGPT-3.5), based on the GPT-3.5 model, and has subsequently introduced newer versions. ChatGPT is an AI-based LLM that continually improves through supervised and reinforcement learning techniques. Unlike traditional search engines that offer generalized information, ChatGPT has gained notable attention for its ability to generate personalized responses to specific queries [15].
Research on generative AI for medical documentation is expanding rapidly. For instance, some studies have demonstrated that surgical reports, which formerly took over 15 minutes to generate, can now be produced in under 10 seconds using AI [16]. Other studies have shown that AI-generated discharge summaries can reduce the documentation burden on healthcare professionals [17,18]. Applying this AI-driven automation to nursing documentation could help clinical nurses reduce their documentation workload and increase the time available for direct patient care [19,20]. However, the real-world application of generative AI in hospital settings remains in its infancy, and privacy concerns have hindered the large-scale implementation of AI models trained on actual clinical data for nursing and medical documentation.
To address these challenges, this study aims to develop and evaluate a generative AI-based nursing diagnosis recommendation system that utilizes virtual patient data. The objective is to improve the efficiency and feasibility of nursing processes, including diagnosis, intervention, and evaluation. The specific objectives of this study were as follows:
  • 1) Compare the time required for nursing documentation between conventional manual electronic nursing records (ENRs) and AI-assisted nursing diagnosis recommendations.

  • 2) Compare the quality of nursing documentation generated manually by nurses with that produced using the generative AI-based system.

  • 3) Evaluate the AI-generated nursing documentation in terms of accuracy, comprehensiveness, usability, ease of use, and fluency.

II. Methods

1. Participants

The participants in this study were nurses with clinical experience who voluntarily agreed to participate after fully understanding the study’s purpose and procedures. The specific inclusion and exclusion criteria are described below. The required sample size was calculated using the G*Power 3.1 program (https://www.gpower.hhu.de). Assuming a medium effect size (0.5), a significance level of 0.05, and a statistical power of 0.9, a minimum of 36 participants was required. Allowing for an anticipated dropout rate of 10%, 40 participants were ultimately recruited [21,22].
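The stated minimum of 36 can be reproduced with an exact noncentral-t power calculation. The paper does not report the number of tails; a one-tailed paired t-test with these settings matches the reported figure (a two-tailed calculation would require roughly 44 participants). A minimal sketch using SciPy, with function and variable names of our own choosing:

```python
from math import sqrt
from scipy import stats

def paired_t_power(n: int, d: float = 0.5, alpha: float = 0.05) -> float:
    """Power of a one-tailed paired t-test via the noncentral t distribution."""
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha, df)   # critical value under H0
    ncp = d * sqrt(n)                     # noncentrality parameter for effect d
    return stats.nct.sf(t_crit, df, ncp)  # P(reject H0 | true effect d)

# Smallest n reaching 90% power (matches G*Power's paired t-test result of 36)
n = 2
while paired_t_power(n) < 0.9:
    n += 1

# Inflate the sample for the anticipated 10% dropout: recruit N with 0.9*N >= n
n_recruited = round(n / 0.9)
```

With `n = 36`, `n_recruited` works out to the 40 participants actually recruited.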
Participant recruitment was conducted in collaboration with the College of Nursing at Seoul National University and the College of Nursing at Ajou University in South Korea. To facilitate recruitment, the research team provided the institutions with comprehensive details on the study’s purpose, inclusion and exclusion criteria, participation schedule, and other pertinent information. Additionally, the institutions were asked to post a recruitment notice link on their online bulletin boards and social media platforms. Interested individuals could indicate their willingness to participate by using the provided link or QR code.
Applicants were required to complete and submit an application form that included their name, age, gender, prior research participation experience, nursing license status, clinical experience, and contact information. Participant names and contact details were accessible only to the research team. The principal investigator and co-researchers coordinated the participation schedules through individual communication with the applicants. Recruitment and testing were conducted online between August 1, 2024, and August 20, 2024.
Inclusion criteria were: registered nurses aged 21 to 50 years; holders of a valid South Korean nursing license; and at least 3 months of clinical experience in a general hospital or higher-level medical institution. Exclusion criteria were: less than 3 months of clinical experience in a medical institution; and having tested a generative AI-based ENR system within the 4 weeks prior to participating in this study.

2. Procedure

The study procedure consisted of two phases: traditional nursing documentation and generative AI-assisted nursing documentation. All evaluations used a standardized method across all participants. After obtaining informed consent, participants completed a pre-survey to provide their demographic information. Participants were then asked to document ENRs based on a clinical scenario related to a disease they were familiar with, chosen from 110 provided virtual patient scenarios. In the second phase, participants used a generative AI-based nursing diagnosis recommendation system to document nursing records. The time taken for each documentation method was recorded in seconds.

1) Step 1 (Traditional nursing documentation)

In this phase, participants documented ENRs based on a selected clinical scenario using their clinical nursing experience. Participants selected one of the following familiar nursing documentation methods: NANDA (North American Nursing Diagnosis Association), SOAPIE (subjective, objective, assessment, plan, intervention, evaluation), Focus DAR (data, action, response), or narrative documentation.
The study was conducted through Zoom meetings, where participants also completed a questionnaire collecting demographic information such as gender, age, nursing license status, and clinical nursing experience. Individual Zoom sessions were conducted with each of the 40 nurse participants rather than group sessions. Each session began with a 5-minute pre-survey, followed by a 10-minute explanation of the ENR system. Participants then proceeded to complete the documentation tasks. Following completion of both documentation methods, a 5-minute post-survey and a 10-minute individual interview were conducted. The SmartENR Standard version—an ENR system developed by DKMediInfo (https://www.smartenr.com/) for training nursing students and newly licensed nurses—was used to document nursing assessments, diagnoses, interventions, and outcomes for virtual patients (Figure 1).

2) Step 2 (Generative AI-based nursing documentation recommendation)

In this phase, participants documented ENRs using a generative AI-based nursing diagnosis recommendation system that utilizes virtual patient data. This step aimed to evaluate the effectiveness of the AI-assisted nursing documentation approach. Participants who completed Step 1 then used the SmartENR AI version, developed by the research team. They first entered the patient’s basic demographic information and selected a nursing documentation method. Next, nurses provided a brief, one-line description of the patient’s condition in the prompt field. The generative AI model, trained on existing nursing records, then generated a recommended nursing record based on the provided information. The SmartENR AI version is a cloud-based system that integrates the ChatGPT-4.0 API—a large language model generative AI customized for the South Korean nursing documentation environment. It automatically generates ENRs in various formats, including NANDA, SOAPIE, Focus DAR, and Narrative documentation (Figure 2).
The process of using the generative AI-powered system was as follows:
  • - Select a nursing documentation format from the left-side menu.

  • - Enter the basic information of the virtual patient using an option-based input system.

  • - Provide a brief description of the patient’s condition in the designated prompt field and click the “Generate Nursing Record” button.

  • - Review the AI-generated nursing diagnosis recommendations and, if necessary, modify them based on clinical judgment before copying and pasting the content into the ENR system.

  • - Participants were allowed to revise the generated content freely according to their clinical judgment.

  • - The total time was measured from the moment the prompt was entered, through the AI generation and copy-paste process, until the completion of any revisions and the clicking of the save button.
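The request-assembly step described above can be sketched as follows. The actual SmartENR prompt template and field names are not published, so the function, prompt wording, and patient fields below are entirely hypothetical illustrations of how such a chat-completion request might be built:

```python
# Hypothetical sketch only: SmartENR's real prompt and schema are not public.
FORMATS = {"NANDA", "SOAPIE", "Focus DAR", "Narrative"}

def build_messages(fmt: str, demographics: dict, one_line_condition: str) -> list:
    """Assemble a chat-completion message list for one nursing-record request."""
    if fmt not in FORMATS:
        raise ValueError(f"unsupported documentation format: {fmt}")
    system = (f"You are a nursing-documentation assistant. "
              f"Write an electronic nursing record in {fmt} format, "
              f"following South Korean nursing documentation conventions.")
    patient = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    user = f"Patient: {patient}. Condition: {one_line_condition}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

# The resulting list would then be passed to the OpenAI chat-completions API,
# e.g. client.chat.completions.create(model=..., messages=build_messages(...))
msgs = build_messages("SOAPIE", {"age": 67, "sex": "F"},
                      "post-op day 1 after total knee replacement, pain 6/10")
```

Keeping the format choice and the free-text condition in separate inputs mirrors the system's option-based entry plus one-line prompt field.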

3) Step 3 (Usability evaluation)

After completing all nursing documentation tasks, participants completed a usability evaluation. The evaluation involved a survey that assessed the system’s accuracy, completeness, applicability, ease of use, and fluency. The multiple-choice questionnaire included five items.

4) Step 4 (Open-ended questions)

Following the multiple-choice survey, participants answered open-ended questions during individual interviews. Their opinions were solicited on the following three items:
  • - How would you evaluate the nursing records generated by the generative AI system?

  • - What advantages did you perceive in the AI-generated nursing records?

  • - What disadvantages or areas for improvement did you identify in the AI-generated nursing records?

3. Statistical Analysis

Participants who met the inclusion and exclusion criteria and completed at least one session of the program were included in the analysis. Those who discontinued or did not participate were considered unassessable and excluded. Descriptive statistics were used to analyze participants’ demographic characteristics, including age, gender, and nursing experience. These statistics included measures such as mean, standard deviation, frequency, and percentage. The time required to complete nursing documentation and the quality of the records were assessed using the conventional ENR system. The same parameters were then evaluated after using the generative AI-based nursing diagnosis recommendation system. A paired t-test was conducted to compare pre- and post-intervention differences in documentation time and record quality, thereby verifying statistically significant changes.
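The paired comparison above can be sketched as follows. The timing values here are synthetic draws using the reported group means and standard deviations, not the actual study data:

```python
# Illustrative paired t-test on synthetic timing data (seconds).
# The draws below are NOT the study data; they only mimic its summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
traditional = rng.normal(474, 320, size=40).clip(min=60)  # manual ENR times
ai_assisted = rng.normal(183, 100, size=40).clip(min=30)  # AI-assisted times

# ttest_rel tests whether the mean per-participant difference is zero
t_stat, p_value = stats.ttest_rel(traditional, ai_assisted)
```

Because each nurse documented the same scenario under both conditions, the paired (rather than independent-samples) test is the appropriate choice.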

4. Ethical Considerations

This study adhered to the principles of the Declaration of Helsinki to ensure the safety and ethical protection of research participants. The study protocol was reviewed and approved by the Institutional Review Board of a Ministry of Health and Welfare-designated public institution (Approval No. P01-202407-01-049).
The informed consent form provided detailed information on the study’s purpose, procedures, potential risks and benefits, and data privacy measures. The anonymity and confidentiality of the research data were strictly maintained. All personal information collected was securely managed by the research team and was explicitly designated for research purposes only. Participants received a small compensation for their participation.

III. Results

1. Pilot Test Time Measurement

The average age of the participating nurses was 29.4 ± 5.1 years, with most being in their 20s. The average clinical nursing experience was 64.4 ± 62.2 months (approximately 5.4 years), with the largest subgroup reporting between 13 months and 3 years of experience. This 13–36 month subgroup exhibited the largest difference in average documentation time between methods among the experience groups (t = 4.27, p = 0.001).
Nursing records documented using the traditional method required an average of 473.9 ± 319.9 seconds from initiation to clicking the “save” button. In contrast, the generative AI-assisted method, in which nurses entered a similar scenario into a prompt and received a recommended nursing record, took an average of 183.0 ± 99.7 seconds, only 38.6% of the traditional time (a 61.4% reduction; t = 6.85, p < 0.001).
Among the documentation methods, NANDA was the most preferred (n = 12). The SOAPIE method showed the greatest reduction in documentation time when transitioning from the traditional method to the AI-assisted documentation method (t = 4.97, p = 0.001) (Table 1).

2. Usability Evaluation

The usability evaluation of the generative AI-based nursing documentation system, assessed on a 5-point scale, yielded the following results: accuracy, 3.63 ± 1.29; completeness, 4.13 ± 1.07; applicability, 3.50 ± 0.93; ease of understanding, 4.80 ± 0.61; and fluency, 4.48 ± 0.91. Notably, first-year novice nurses rated the system highly for ease of use, noting that the AI-generated nursing records were easy to understand. However, the accuracy of AI-generated records, compared to self-written nursing documentation, received a relatively lower score, as did the applicability score, suggesting that participants felt modifications were necessary before direct implementation (Table 2).
Following the usability evaluation, participants were interviewed about their experiences with the generative AI-based nursing documentation system (Table 3). In response to the question, “How would you evaluate the nursing records generated by the AI?” many participants described them as “detailed and specific.” When asked about the advantages of AI-generated nursing records, common responses included “convenient, concise, and time-saving.” Regarding disadvantages or potential improvements, participants noted that the records “lacked specificity, contained insufficient medical terminology, and felt broadly textbook-like.”

IV. Discussion

This study aimed to develop a generative AI-powered nursing documentation system by training an AI model on nursing records generated using virtual patients. A pilot test was conducted with clinically experienced nurses to compare the documentation time and usability evaluation between traditional electronic nursing documentation and AI-generated nursing documentation.
Through this study, we identified the potential of generative AI to effectively reduce nurses’ workload. In particular, the LLM-based nursing diagnosis recommendation system was shown to enhance nurses’ workflow efficiency and reduce documentation time [2]. The majority of study participants responded positively to the AI-assisted nursing documentation system, with average documentation time falling to 38.6% of that required by traditional methods.
A previous study [16] on surgical records written by physicians reported a 99% reduction in documentation time, decreasing from 7.1 minutes to 5.1 seconds. However, that measurement considered only the AI generation time, and on average, 2.1 edits were required. Similarly, this study found that although AI-generated nursing records were created rapidly, additional time was needed for nurses to compose effective prompts using their clinical knowledge and to transfer the generated content into the ENR system. The time required for editing AI-generated nursing records also varied among nurses. These findings suggest that while generative AI has the potential to significantly reduce documentation time, optimizing prompt design and integrating AI more seamlessly into nurses’ workflow will be crucial for maximizing its efficiency. One key factor contributing to the additional time required in this study, compared to previous research [16], was that most participants were using the practice ENR system for the first time. Had the study been conducted using an ENR system already familiar to the participants, documentation time might have been considerably shorter.
In the usability evaluation, the ease-of-use score was 4.80 ± 0.61 on a 5-point scale, indicating that many participants believed the system would be highly beneficial for novice nurses with less than 1 year of experience. This finding aligns with previous research [17] evaluating AI-generated discharge summaries, where the ease-of-use category received the highest score. These results suggest that generative AI has significant potential for application in healthcare education [1,3]. Currently, the 1-year turnover rate for novice nurses in tertiary hospitals in South Korea reaches 50%, highlighting a major workforce issue [23]. The implementation of generative AI-assisted electronic nursing documentation systems is expected to support nurses in their documentation tasks and potentially reduce the turnover rate among new nurses.
The findings of this study indicate that nurses with 3–5 years of clinical experience documented nursing records significantly faster than those with less than 1 year of experience. However, as clinical experience exceeded 5 years, documentation time gradually increased. This trend was observed in both self-written ENRs and AI-assisted nursing records. Although this study was conducted as a pilot test with a limited number of participants, future research should involve a larger sample size to explore the correlation between clinical experience and documentation efficiency more comprehensively.
We also observed that documentation time varied according to the type of documentation method used. Because different healthcare institutions employ varying electronic medical record (EMR) systems, the structure and format of nursing documentation can differ markedly between hospitals. This inconsistency often necessitates additional training when nurses transition between institutions [7]. To address this challenge, future AI-powered nursing documentation systems should be designed to accommodate multiple formats, ensuring adaptability across diverse clinical settings. Furthermore, robust privacy safeguards must be integrated into the system’s design from the outset to prevent data security breaches [2].
Interviews with participants revealed challenges with entering detailed patient information. For effective clinical implementation, it is crucial to integrate hospital data APIs that allow real-time retrieval of patient data. Access to up-to-date medication regimens, vital sign records, laboratory results, and imaging reports would enable generative AI to generate more precise and contextually relevant nursing documentation, thereby improving both usability and accuracy.
This study has several limitations. First, the AI model was trained solely on nursing records generated from virtual patients rather than real patient data, due to strict privacy regulations that prohibit the use of actual patient records outside healthcare institutions. To overcome this limitation, generative AI models could be deployed on hospital servers for on-premises training, a concept gaining traction with recent technological advancements. Second, the pilot test used a nursing student training ENR system instead of a fully operational hospital EMR system. Future studies should be conducted in real clinical settings using institution-specific EMR systems with professional nurses to enhance the applicability of the findings. Third, because participants selected clinical scenarios with which they were familiar, variability in documentation performance may have arisen from differences in case complexity. Fourth, future research should track and compare the number of edits made by participants to better assess the efficiency and usability of AI-generated documentation.
The future of nursing practice is likely to involve widespread adoption of generative AI to reduce nurses’ workload and promote a more efficient and professional clinical environment. This study provides foundational evidence supporting the integration of generative AI with nursing practice to enhance workflow efficiency.
Ultimately, enhancing the accuracy of AI-generated nursing records by training models on diverse nursing documentation data is critical for effective clinical adoption. Moreover, developing AI systems that produce immediately usable nursing records with minimal modifications will be essential. Such advancements will allow nurses to devote more time to direct patient care, thereby elevating the overall quality of nursing services.

Notes

Conflict of Interest

Hongshin Ju, Minsil Park, Hyeonsil Jeong, Mihyeon Seong, and Dongkyun Lee are current or former employees of DKMediInfo, the company that provides the SmartENR service. This study was supported by funding from the Ministry of SMEs and Startups of the Republic of Korea. To ensure objectivity, more than two external experts reviewed the study, and an independent professional agency was commissioned to conduct and report the service performance evaluation. Other than these disclosures, no potential conflicts of interest relevant to this article were reported.

Acknowledgments

This work was supported by the Technology Development Program (No. RS-2023-00277268) funded by the Ministry of SMEs and Startups (MSS, Korea).

Figure 1
SmartENR training system for electronic nursing records: (A) Korean standard version and (B) English version 2.0. ENR: electronic nursing record.
Figure 2
SmartENR AI version: (A) Korean service and (B) English service. ENR: electronic nursing record, AI: artificial intelligence.
Table 1
General characteristics of nurses participating in the study (generative AI nursing documentation service pilot test) and time measurements
Characteristic  n (%)  Writing time (s): General  Writing time (s): AI support  t (p-value)a
All participants 40 (100) 473.9 ± 319.9 183.0 ± 99.7 6.85 (<0.001)

Gender
 Male 2 (5) 224.5 ± 210.0 175 ± 3.5 0.94 (0.520)
 Female 38 (95) 487.0 ± 321.2 187.9 ± 99.8 6.80 (<0.001)

Age (yr) 29.4 ± 5.1 (23–46)
 20–29 23 (57.5) 498.1 ± 345.9 188.2 ± 110.2 5.09 (<0.001)
 30–39 13 (32.5) 434 ± 202.7 182.8 ± 86.9 5.40 (<0.001)
 40–49 4 (10) 462 ± 526.4 152.6 ± 92.1 1.24 (0.250)

Nursing experience (mo) 64.4 ± 62.2
 6–12 5 (12.5) 667.2 ± 333.3 188.6 ± 98.6 4.01 (0.016)
 13–36 14 (35) 494.3 ± 364.8 152.2 ± 93.6 4.27 (0.001)
 37–60 6 (15) 326.8 ± 191.0 149.2 ± 83.1 3.61 (0.015)
 60–120 11 (27.5) 444.5 ± 216.9 248.8 ± 101.5 3.59 (0.005)
 ≥121 4 (10) 462 ± 526.4 152.8 ± 92.1 1.42 (0.025)

Nursing record methods
 NANDA 12 (30) 585.5 ± 350.1 247.3 ± 118.7 2.57 (0.037)
 SOAPIE 11 (27.5) 470.2 ± 350.7 161.1 ± 74.0 4.97 (0.001)
 Focus DAR 9 (22.5) 378.7 ± 165.4 151.8 ± 73.0 4.09 (0.002)
 Narrative 8 (20) 418.5 ± 366.2 151.1 ± 95.1 3.11 (0.011)

Values are presented as number (%) or mean ± standard deviation (min–max).

AI: artificial intelligence, NANDA: North American Nursing Diagnosis Association, SOAPIE: Subjective, Objective, Assessment, Plan, Intervention, Evaluation, DAR: Data, Action, Response.

a Paired t-test p-value.

Table 2
Comparison of usability evaluation scores for the generative AI-based nursing documentation recommendation system (n = 40)
Category Question Total score Response pointa
Accuracy Does the content of the AI-generated nursing record match the original (self-written) nursing record? 40 3.63 ± 1.29
 Fully matches the original content 11
 No significant deviations from the original, but some ambiguous content is included 15
 Some differences, misinterpretations, or additional content not present in the original 7
 Multiple differences, misinterpretations, or additional content not present in the original 2
 Does not match the original content overall 5

Completeness Does the nursing record include all necessary information that should be documented? 40 4.13 ± 1.07
 Includes all necessary content for nursing documentation 21
 Some minor omissions, but not critical 7
 Some important content is missing 8
 Many essential details are missing 4
 Contains mostly unnecessary information 0

Applicability Can the AI-generated nursing record be used as a final record without significant modifications? 40 3.50 ± 0.93
 Can be used as a nursing record without modification 4
 Usable after minor (1–2) modifications 20
 Usable after some (3–4) modifications 8
 Requires major revisions before use 8
 Not usable even after modifications 0

Ease of understanding Is the content easy for a first-year nurse to comprehend? 40 4.80 ± 0.61
 Fully understandable 35
 Mostly understandable, but some difficult content 3
 Contains multiple difficult sections 1
 Generally difficult to understand 1
 Completely incomprehensible 0

Fluency Are the sentences in the nursing record linguistically well-structured and natural? 40 4.48 ± 0.91
 No structural, grammatical, lexical, or linguistic issues 26
 1–2 minor linguistic issues 10
 3–4 linguistic issues 2
 5–6 linguistic issues 1
 More than 7 linguistic issues 1

a Response point evaluated on a 5-point scale.
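As a consistency check, the means reported above can be recomputed from the response counts in Table 2, weighting each option by its scale point (5 for the best response down to 1):

```python
# Reproduce Table 2 means from the response counts (5-point scale, best -> worst).
counts = {
    "Accuracy":              [11, 15, 7, 2, 5],
    "Completeness":          [21, 7, 8, 4, 0],
    "Applicability":         [4, 20, 8, 8, 0],
    "Ease of understanding": [35, 3, 1, 1, 0],
    "Fluency":               [26, 10, 2, 1, 1],
}
points = [5, 4, 3, 2, 1]

means = {cat: sum(p * c for p, c in zip(points, cs)) / sum(cs)
         for cat, cs in counts.items()}
```

Each category's counts sum to the 40 participants, and the weighted means match the reported scores (e.g., Applicability 140/40 = 3.50, Ease of understanding 192/40 = 4.80).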

Table 3
Results from interviews about the generative AI-based nursing documentation recommendation system
Open-ended questions Interview results
Self-evaluation of AI-generated nursing records
  • -Detailed and specific, innovative

  • -Clearly presents necessary information, making it easy to transcribe directly and significantly reducing time

  • -Generates accurate nursing records

  • -Focuses more on nursing diagnoses rather than symptoms

  • -Requires additional patient information input

  • -Constructs sentences more naturally than manually written records

  • -Perception may vary depending on nursing experience

  • -Contains textbook-like content, which may limit immediate clinical application

  • -Easy to understand with high readability

  • -Contributes to improving nursing workflow efficiency

  • -AI-generated nursing records could be beneficial if sufficient patient information is provided

Perceived advantages of AI-generated nursing records
  • -Convenient, concise, and time-saving

  • -Accurately identifies nursing interventions and actions

  • -Allows for double-checking of self-written records

  • -High sentence completeness, making it appear as if written by a person

  • -Beneficial for novice nurses unfamiliar with nursing documentation

  • -Provides hints for nursing interventions that may not have been initially considered

  • -Helps prioritize nursing tasks

  • -Reduces documentation time, allowing more focus on patient care

Perceived disadvantages and areas for improvement in AI-generated nursing records
  • -Lacks specificity and medical terminology, feels broadly textbook-like

  • -Needs integration with clinical observation records

  • -Would be beneficial to provide multiple nursing diagnosis recommendations

  • -Requires input of more detailed information

  • -Needs more terms commonly used in clinical practice

  • -Not concise enough for quick readability

  • -Insufficient for establishing long-term nursing plans

References

1. Castonguay A, Farthing P, Davies S, Vogelsang L, Kleib M, Risling T, et al. Revolutionizing nursing education through AI integration: a reflection on the disruptive impact of ChatGPT. Nurse Educ Today 2023;129:105916. https://doi.org/10.1016/j.nedt.2023.105916
2. Ball Dunlap PA, Michalowski M. Advancing AI data ethics in nursing: future directions for nursing practice, research, and education. JMIR Nurs 2024;7:e62678. https://doi.org/10.2196/62678
3. Tam W, Huynh T, Tang A, Luong S, Khatri Y, Zhou W. Nursing education in the age of artificial intelligence powered Chatbots (AI-Chatbots): are we ready yet? Nurse Educ Today 2023;129:105917. https://doi.org/10.1016/j.nedt.2023.105917
4. Ahn J, Park HO. Development of a case-based nursing education program using generative artificial intelligence. J Korean Acad Soc Nurs Educ 2023;29(3):234-46. https://doi.org/10.5977/jkasne.2023.29.3.234
crossref
5. Muler-Staub M, de Graaf-Waar H, Paans W. An internationally consented standard for nursing process-clinical decision support systems in electronic health records. Comput Inform Nurs 2016;34(11):493-502. https://doi.org/10.1097/CIN.0000000000000277
crossref pmid
6. Askari M, Kalankesh LR, Asadzadeh A, Yousefi-Rad K. Classification of wearables use cases in the mirror of JCAHO patient safety goals for Hospitals. Res Sq [Preprint]. 2023 Feb 10 https://doi.org/10.21203/rs.3.rs-2552165/v1
crossref
7. De Groot K, De Veer AJ, Munster AM, Francke AL, Paans W. Nursing documentation and its relationship with perceived nursing workload: a mixed-methods study among community nurses. BMC Nurs 2022;21(1):34. https://doi.org/10.1186/s12912-022-00811-7
crossref pmid pmc
8. Kim MY. The factors influencing the nursing practice readiness of new graduate nurses. J Korean Acad Soc Nurs Educ 2023;29(4):395-404. https://doi.org/10.5977/jkasne.2023.29.4.395
crossref
9. Moore EC, Tolley CL, Bates DW, Slight SP. A systematic review of the impact of health information technology on nurses’ time. J Am Med Inform Assoc 2020;27(5):798-807. https://doi.org/10.1093/jamia/ocz231
crossref
10. Gomes M, Hash P, Orsolini L, Watkins A, Mazzoccoli A. Connecting professional practice and technology at the bedside: nurses’ beliefs about using an electronic health record and their ability to incorporate professional and patient-centered nursing activities in patient care. Comput Inform Nurs 2016;34(12):578-86. https://doi.org/10.1097/CIN.0000000000000280
crossref pmid pmc
11. Yen PY, Kellye M, Lopetegui M, Saha A, Loversidge J, Chipps EM, et al. Nurses’ time allocation and multitasking of nursing activities: a time motion study. AMIA Annu Symp Proc 2018;2018:1137-46.
pmid pmc
12. Radford A, Narasimhan K, Salimans T, Sutskever I. Improving language understanding by generative pre-training [Internet]. San Francisco (CA): OpenAI; 2018 [cited 2025 Jan 10]. Available from: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

13. Liu X, Zhang F, Hou Z, Mian L, Wang Z, Zhang J, et al. Self-supervised learning: generative or contrastive. IEEE Trans Knowl Data Eng 2021;35(1):857-76. https://doi.org/10.1109/TKDE.2021.3090866
crossref
14. Masters K, Herrmann-Werner A, Festl-Wietek T, Taylor D. Preparing for Artificial General Intelligence (AGI) in health professions education: AMEE Guide No. 172. Med Teach 2024;46(10):1258-71. https://doi.org/10.1080/0142159X.2024.2387802
crossref pmid
15. OpenAI. ChatGPT: optimizing language models for dialogue [Internet]. San Francisco (CA): OpenAI; 2022 [cited 2025 Jan 14]. Available from: https://openai.com/index/chatgpt/

16. Abdelhady AM, Davis CR. Plastic surgery and artificial intelligence: how ChatGPT improved operation note accuracy, time, and education. Mayo Clin Proc Digit Health 2023;1(3):299-308. https://doi.org/10.1016/j.mcpdig.2023.06.002
crossref pmid pmc
17. Kim H, Jin HM, Jung YB, You SC. Patient-friendly discharge summaries in Korea based on ChatGPT: software development and validation. J Korean Med Sci 2024;39(16):e148. https://doi.org/10.3346/jkms.2024.39.e148
crossref pmid pmc
18. Zaretsky J, Kim JM, Baskharoun S, Zhao Y, Austrian J, Aphinyanaphongs Y, et al. Generative artificial intelligence to transform inpatient discharge summaries to patient-friendly language and format. JAMA Netw Open 2024;7(3):e240357. https://doi.org/10.1001/jama-networkopen.2024.0357
crossref pmid pmc
19. Saban M, Dubovi I. A comparative vignette study: evaluating the potential role of a generative AI model in enhancing clinical decision-making in nursing. J Adv Nurs. 2024 Feb 17 [Epub]. https://doi.org/10.1111/jan.16101
crossref pmid
20. Daungsupawong H, Wiwanitkit V. Role of a generative AI model in enhancing clinical decision-making in nursing. J Adv Nurs 2024;80(11):4750-1. https://doi.org/10.1111/jan.16145
crossref pmid
21. Kim HS, Choi EK, Kim TH, Yun HY, Kim EJ, Hong JJ, et al. Difficulties in end-of-life care and educational needs of intensive care unit nurses: a mixed methods study. Korean J Hosp Palliat Care 2019;22(2):87-99. https://doi.org/10.0000/kjhpc.2019.22.2.87
crossref
22. Serdar CC, Cihan M, Yucel D, Serdar MA. Sample size, power and effect size revisited: simplified and practical approaches in pre-clinical, clinical and laboratory studies. Biochem Med (Zagreb) 2021;31(1):010502. https://doi.org/10.11613/BM.2021.010502
crossref pmid pmc
23. Park S, Lee JL. Research trend analysis of Korean new graduate nurses using topic modeling. J Korean Acad Soc Nurs Educ 2021;27(3):240-50. https://doi.org/10.5977/jkasne.2021.27.3.240
crossref