From the Editor

Artificial Intelligence for Oncology Nursing Authors: Potential Utility and Concerns About Large Language Model Chatbots

Debra Lyon

artificial intelligence, large language model, chatbots, ChatGPT, research, publishing
ONF 2023, 50(3), 276-277. DOI: 10.1188/23.ONF.276-277

Artificial intelligence is a revolution in the computing and data science era that has generated excitement and controversy in many fields, including research and publishing. As we move further into the artificial intelligence era, particularly with the advent of GPT (Generative Pre-trained Transformer) chatbots and other tools built on large language model (LLM) methodology, there are many unanswered questions and concerns about their use in academic and healthcare publishing. Much of the recent discussion focuses on ChatGPT, an LLM chatbot that composes text-based responses that mimic human language in response to prompts (OpenAI, 2022). The proliferation of LLM chatbots has led to intense discussion in academic and editorial circles about defining appropriate usage. There is concern that chatbots may be used to bypass the critical thinking and ethical writing processes that are required in academic publishing. Although there are multiple valid perspectives, early responses in academic circles have focused on limiting or denying authors the use of ChatGPT and other LLMs in research and writing processes. Instead of considering the potential utility of these innovations, much attention has been directed toward developing methods for detecting, tracking, and stopping the use of chatbots. Although current plagiarism detectors are being adapted to identify artificial intelligence–generated text, trying to stay ahead of chatbot technologies may be a rather futile endeavor, given the rapid uptake and potential utility of these tools in the scientific and publishing arenas. As with any revolutionary advance, a high degree of caution is warranted regarding the uptake of new technologies and their integration into standard processes.

Presently, authors may find conflicting guidance from multiple sources, leading to questions about how to proceed without unwittingly committing scientific misconduct. With clearly stated parameters of appropriate usage, there are multiple ways in which ChatGPT and other chatbots may facilitate scholarly publication and increase equity in access to publication. With the caveat that ChatGPT and other LLM chatbots are new tools whose full implications are not yet known, initial evidence suggests that they have multiple potential uses in scholarly writing. ChatGPT can be leveraged to improve the efficiency and accuracy of the writing process. For example, ChatGPT can assist with developing literature reviews, summarizing research findings, and performing language translations (Biswas, 2023). ChatGPT can also help with paraphrasing sentences, translating text, identifying spelling mistakes, correcting grammatical errors, improving clarity and text flow, generating drafts of outlines and abstracts based on the author’s full text, formatting and converting references between different styles, and more (Koo, 2023). For authors who speak English as an additional language, the use of chatbots may facilitate less costly and more rapid development of manuscripts for publication, thus contributing to higher levels of equity in publication for authors from less-resourced countries. These features are particularly relevant for researchers from lower-income countries and others who face disparities in publishing scholarship and research papers. ChatGPT also has potential uses in data analysis, where its pattern recognition capabilities can support tasks such as text classification, sentiment analysis, and entity recognition (Hassani & Silva, 2023). Chatbots may also gain access to data beyond traditional academic databases as future plugins are developed. These expanded data could be particularly useful in areas such as cancer survivorship, where the experiences of patients and their families on blogs and other media need to be brought together with clinical and research perspectives, ultimately providing more integrated views of patients and survivors.
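
To make the data analysis use case above more concrete, the brief sketch below shows one way an author might prompt an LLM, through a vendor's API, to label the sentiment of de-identified survivorship blog excerpts. This is a minimal illustration rather than a recommended workflow: the Python library call, model name, and prompt wording are assumptions for demonstration purposes, and any use involving identifiable patient data would be subject to the privacy and human subjects protections discussed in the next paragraph.

```python
# Minimal, illustrative sketch of LLM-assisted sentiment analysis.
# Assumptions: the OpenAI Python SDK (v1.x) is installed, an API key is
# configured in the environment, and the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

# De-identified, hypothetical survivorship blog excerpts.
excerpts = [
    "Finishing chemotherapy felt like getting my life back.",
    "The fatigue after treatment has been harder than anyone warned me.",
]

for text in excerpts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Classify the sentiment of the passage as "
                           "positive, negative, or mixed. Reply with one word.",
            },
            {"role": "user", "content": text},
        ],
    )
    print(text, "->", response.choices[0].message.content)
```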

Although chatbots hold great potential, multiple ethical and legal concerns must be addressed before ChatGPT and other chatbots are integrated into the scholarly process (Committee on Publication Ethics, 2021). Privacy and human subjects protections must be fully explored prior to using these tools for human subjects research, and using chatbots with protected data is not permitted. There are also intellectual property issues, given that ChatGPT and other chatbots may generate copyright-protected text without indicating correct attribution (Lund & Wang, in press). Importantly, robust discussion about the acceptable uses of chatbots in scholarly writing and the research process must take place now and continue into the future. Authors need guidance about how to use these tools in a scientifically defensible manner. Editors need to consider a middle ground that permits the use of chatbots in appropriate parts of manuscript development and finalization while still holding authors accountable for the integrity of their scholarly processes and products.

For now, we encourage authors to keep abreast of the rapidly expanding literature on the use of chatbots in academic writing and research. At Oncology Nursing Forum, we will continue to increase our awareness of the potential utility of these tools while attending to the ethical, legal, and privacy standards that it is our duty and responsibility to uphold. I remain cautiously optimistic that the use of LLM chatbots and other emerging technologies may help to promote equity for authors and speed the dissemination of research while preserving the tenets of scholarly publication ethics. However, we will remain vigilant and welcome open discussions with our authors as we navigate the advent of these new artificial intelligence technologies.

About the Author

Debra Lyon, RN, PhD, FNP-BC, FAAN, is the executive associate dean and Kirbo Endowed Chair in the College of Nursing at the University of Florida in Gainesville. Lyon can be reached at ONFEditor@ons.org.

References

Biswas, S. (2023). ChatGPT and the future of medical writing. Radiology. https://doi.org/10.1148/radiol.223312

Committee on Publication Ethics. (2021). Artificial intelligence (AI) in decision making. https://doi.org/10.24318/9kvAgrnJ

Hassani, H., & Silva, E.S. (2023). The role of ChatGPT in data science: How AI-assisted conversational interfaces are revolutionizing the field. Big Data and Cognitive Computing, 7(2), 62. https://doi.org/10.3390/bdcc7020062

Koo, M. (2023). The importance of proper use of ChatGPT in medical writing. Radiology, 230312. https://doi.org/10.1148/radiol.230312

Lund, B.D., & Wang, T. (in press). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News.

OpenAI. (2022). Introducing ChatGPT. https://openai.com/blog/chatgpt