OPINIONS, PERSPECTIVES AND COMMENTARY

Artificial Intelligence-Assisted Technology in Medical Manuscript Writing: New Challenges for Reviewers and Editors

Raouf Hajji, MD, PhD1,2,3

1Medicine Faculty of Sousse, University of Sousse, Sousse, Tunisia; 2Internal Medicine Department, Sidi Bouzid Hospital, Sidi Bouzid, Tunisia; 3International Medical Community (IMC), Rome, Italy

Keywords: artificial intelligence, artificial intelligence-assisted technology, large language models, medical manuscript writing

 

Citation: Telehealth and Medicine Today © 2024, 9: 459.

DOI: https://doi.org/10.30953/thmt.v9.459

Copyright: © 2024 The Authors. This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, adapt, enhance this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0.

Received: December 4, 2023; Accepted: January 1, 2024; Published: February 23, 2024

Funding Statement: The author received no support from any organization for the submitted work. No funding was received to assist with the preparation of this manuscript. No funding was received for conducting this study.

Financial and Non-Financial Relationships and Activities: The author is an editorial board member of Telehealth and Medicine Today (THMT) and Blockchain in Healthcare Today (BHTY). The author's Web of Science Researcher ID is I-1448-2019.

Corresponding Author: Raouf Hajji; raoufhajji2013@gmail.com

 

Since the emergence of generative artificial intelligence (AI) in medical writing, there has been a major debate about its potential and risks for biomedical research and publication. Positions among authors, reviewers, and editors diverge between those who consider AI a legitimate author and those who reject its use altogether.

Undoubtedly, AI-assisted technology (AI-AT) is dramatically changing the biomedical research landscape, and its applications in medical manuscript writing cannot be ignored. A better understanding of this technology, its applications, risks, and challenges is therefore vital for the medical publishing world, because it creates new challenges for authors, reviewers, and editors. When reviewing and editing an AI-written manuscript, many questions must be answered: Is it accurate? Is it ethical? Is it unbiased? Is it complete? Does it contain AI hallucinations?

In May 2023, the International Committee of Medical Journal Editors (ICMJE) published updated Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals, detailing its position on the use of AI-AT in medical manuscript writing.1 Clearly, editors and reviewers face new challenges when processing any article written with AI-assisted technology.

Different AI-ATs in Medical Manuscript Writing

Many AI-ATs are available for use at each stage of medical manuscript writing. Five of these are described here.

Large Language Models (LLMs)

Using prompts, models such as Generative Pre-trained Transformer 3 (GPT-3) or GPT-4 can generate human-like text. They assist authors in developing new research ideas, summarizing reference articles, accelerating high-quality writing, revising language and grammar, analyzing data, and reporting in line with research standards. In addition, they can aid authors in writing all sections of a medical manuscript by generating coherent and relevant content.2
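
As a purely illustrative example, the sketch below shows how such assistance might be scripted against a chat-completion API; the model name, system prompt, helper function, and draft sentence are hypothetical choices, not taken from any cited source.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

def revise_paragraph(paragraph: str) -> str:
    """Ask a general-purpose LLM to revise grammar and clarity only."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": ("You are a medical manuscript editor. Improve grammar "
                         "and clarity without adding facts, data, or references.")},
            {"role": "user", "content": paragraph},
        ],
        temperature=0.2,  # low temperature keeps the edits conservative
    )
    return response.choices[0].message.content

draft = ("The patients was followed during 12 month and no adverse "
         "event were reported in the both groups.")
print(revise_paragraph(draft))
```

Even in a constrained setting such as this, the output still requires the author's verification, as the following sections emphasize.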

Chatbots

AI-powered chatbots can provide real-time assistance to medical researchers and authors during the manuscript writing process. They can answer questions, offer suggestions, and provide guidance on formatting, references, and other writing-related queries.

ChatGPT, one of these AI-powered chatbots, is a potent medical writing assistant that can help researchers find new research ideas, generate texts and summaries, and revise and proofread manuscripts. However, authors must be vigilant about how they use it and must understand its limitations and risks. They need adequate prompting skills to avoid AI hallucinations (i.e., situations in which a large language model perceives patterns or objects that are nonexistent or imperceptible to human observers, producing nonsensical or inaccurate outputs).3 It should not, in any case, replace the “human” author or researcher.4,5
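
One common mitigation, for instance, is to ground the prompt in material the author has already verified. The following is a minimal, hypothetical sketch of such prompt construction, independent of any particular chatbot; the helper function is illustrative only, and the example source paraphrases reference 6.

```python
def grounded_prompt(question: str, verified_sources: list[str]) -> str:
    """Build a prompt that restricts the model to author-verified sources."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(verified_sources))
    return (
        "Answer using ONLY the numbered sources below. Cite them as [1], [2], ... "
        "and reply 'not found in sources' if the answer is not present.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )

sources = [
    "Gulshan et al. (2016) validated a deep learning algorithm for detecting "
    "diabetic retinopathy in retinal fundus photographs.",
]
print(grounded_prompt("What has deep learning achieved in retinopathy screening?",
                      sources))
```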

Image Creators

There is an exponential increase in the use of AI image generation techniques, such as Generative Adversarial Networks (GANs), in medical manuscript writing because they can generate high-quality images tailored to the specific needs of a manuscript without direct human intervention. They are quick and efficient, especially when authors need a large number of images or when manual illustration would be inefficient or impractical. Examples include DeepArt (https://creativitywith.ai/deepartio/) and Deep Dream (https://deepdreamgenerator.com/), image-generative tools that transform medical images or illustrations and add a creative and informative element to manuscripts.
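
To make the GAN mechanism concrete, the toy sketch below shows the generator/discriminator pairing that defines such a model; it is untrained and purely illustrative, and real medical image generators are far larger and trained on domain-specific data.

```python
import torch
import torch.nn as nn

# Generator: maps a random latent vector to a synthetic 64x64 grayscale image.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(8, 100)           # batch of 8 random latent vectors
fake_images = generator(noise)        # 8 synthetic images (flattened)
realism = discriminator(fake_images)  # discriminator's scores, in [0, 1]
print(fake_images.shape, realism.shape)
```

During training, the two networks are optimized adversarially: the generator learns to fool the discriminator, while the discriminator learns to distinguish real images from generated ones.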

Because of their high quality and topic specificity, these AI-generated images can amplify the visual appeal of a manuscript and aid in conveying complex medical concepts. However, authors must carefully consider ethical and legal issues, such as copyright and patient privacy concerns, when using AI-generated images in medical manuscripts.6–8

Automated Literature Review

AI algorithms can assist in conducting automated literature reviews by analyzing vast amounts of medical literature. They can extract relevant information, identify key studies, and summarize findings, thereby saving time and effort during the manuscript preparation phase.9–11
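
As an illustration of the retrieval step in such a pipeline, the sketch below queries PubMed through the public NCBI E-utilities API and fetches abstracts that could then be passed to an AI summarizer; the query string is an arbitrary example.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs matching a query via the E-utilities esearch endpoint."""
    r = requests.get(f"{EUTILS}/esearch.fcgi",
                     params={"db": "pubmed", "term": query,
                             "retmax": max_results, "retmode": "json"})
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

def fetch_abstracts(pmids: list[str]) -> str:
    """Fetch plain-text abstracts for a list of PubMed IDs via efetch."""
    r = requests.get(f"{EUTILS}/efetch.fcgi",
                     params={"db": "pubmed", "id": ",".join(pmids),
                             "rettype": "abstract", "retmode": "text"})
    r.raise_for_status()
    return r.text

pmids = search_pubmed("artificial intelligence AND manuscript writing")
print(fetch_abstracts(pmids))  # raw abstracts, ready for AI-assisted screening
```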

More advanced AI tools, such as IBM Watson Discovery, can support a fast and efficient literature review process and identify relevant references, helping researchers generate research ideas and explore potential new treatments.

On 28 November 2023, Boehringer Ingelheim and IBM announced an agreement permitting Boehringer to use IBM’s foundation model technologies to discover novel candidate antibodies for the development of effective therapeutics, accelerating the antibody discovery process through in silico methods.12

Plagiarism Detection

AI-based plagiarism detection tools can help ensure the originality of medical manuscripts by comparing the submitted text with a vast database of published articles. They are useful for authors, reviewers, and editors: they can identify unintentional similarities and ensure adherence to ethical writing practices.13–15
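
The core idea behind such tools can be illustrated with a minimal similarity check. The sketch below compares a submission against a two-document toy “database” using TF-IDF vectors and cosine similarity; the passages and the 0.5 threshold are invented for illustration, whereas commercial systems match against millions of publications with far more sophisticated algorithms.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "database" of previously published passages (placeholders).
corpus = [
    "Deep learning improves detection of diabetic retinopathy in fundus images.",
    "Large language models can draft coherent sections of medical manuscripts.",
]

submission = ("Large language models are able to draft coherent sections "
              "of a medical manuscript.")

# Represent all texts as TF-IDF vectors, then compare the submission
# against every document in the corpus.
vectorizer = TfidfVectorizer().fit(corpus + [submission])
scores = cosine_similarity(vectorizer.transform([submission]),
                           vectorizer.transform(corpus))[0]

for text, score in zip(corpus, scores):
    flag = "possible overlap" if score > 0.5 else "ok"
    print(f"{score:.2f}  {flag}: {text[:60]}")
```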

Potential, Risks, and Challenges of Using AI-AT in Medical Manuscript Writing

Potential

Undoubtedly, AI-AT is a valuable tool in medical manuscript writing and is revolutionizing biomedical research and publication. It increases efficiency through quick, high-quality text generation, decreases the time and cost usually needed for research and publication, and gives authors more time and better opportunities to refine and check their manuscripts. The result is better text produced in less time, with improved productivity. These tools also offer researchers more editing options to improve a text’s clarity and tailor it to the target audience, especially through translation features. Consequently, they make research information more accessible and extend authors’ international reach.

In addition, AI-AT is useful for identifying and revising grammatical, typographical, and syntax errors, and its summarization efficiency supports clear and concise manuscripts that enhance the credibility and impact of the research findings.16

These tools can also be valuable aids for reviewers and editors. Editors and publishers have long used plagiarism-checking software such as iThenticate®, launched by Turnitin®. It uses AI technology to detect potential plagiarism in written work, employing machine learning algorithms to compare submitted content against a vast database of existing publications and identify any similarities. In April 2023, Turnitin launched AI writing detection capabilities across many of its integrity solutions, a milestone in combating the improper use of AI writing tools such as ChatGPT.17

They are also helpful throughout article processing, from editorial assessment and reviewer assignment according to topic, keywords, and expertise, through peer review, revision, copyediting, and proofreading, to final publication. Clarivate’s Reviewer Locator, Reviewer’s Discovery, and Elsevier’s EVISE automatically suggest reviewers based on available peer-review databases and connect to the journal submission management system, integrating the editorial and peer-review processes.18

Notably, while these AI technologies can be valuable aids in medical manuscript writing, they should not replace human authors’ and researchers’ expertise and critical thinking.

Risks

Unfortunately, AI-AT use carries significant risks: inaccuracies, AI hallucinations, research fabrication, and data falsification. These risks can be divided into unintentional mistakes and intentional errors or fraud.

With the capacity to generate realistic fake data and scientifically convincing texts, AI-AT can considerably threaten authentic biomedical research, especially given the lack of AI-detection technologies and the shortage of reviewers and editors with expertise in AI-generated fraud and errors.19,20

AI hallucinations, especially the generation of nonexistent data and references, data inaccuracies, and biases, account for most of the unintentional errors due to AI-AT. They are usually related to a lack of training and expertise in prompt engineering, the use of outdated data, and the absence of expert human verification of AI-generated results and texts.

AI-AT-based research fabrication and data falsification are often intentionally driven by the search for financial gain, potential fame, academic progression, and curriculum vitae boosting.20

Despite its capacity to improve the competitiveness and quality of biomedical research, AI-AT risks distorting the equity of opportunity between researchers who use these tools and those who do not, as it can sabotage and weaken legitimate research.20

Because AI-generated fraudulent research is difficult to detect, there is a greater risk of its outcomes being used to adopt new biomedical guidelines, healthcare policies, standards of care, and interventional therapeutics, which can be highly costly in time, money, and, above all, lives.20

Challenges

Considering AI-AT’s potential and risks in medical manuscript writing and biomedical research, it is crucial to assess these tools and frame their use according to scientific and ethical guidelines.

Authors must follow the new ICMJE updates and the specific journal guidelines.1 Journals must develop new guidelines for authors, reviewers, and editors based on the ICMJE updates, which recommend that journals require authors to disclose any AI-AT use at any step of manuscript production (LLMs, chatbots, image creators). Importantly, both the cover letter and the manuscript should include these disclosure details.

AI-AT cannot be an author or co-author because it cannot take responsibility for accuracy, integrity, and originality. Human authors must assume full responsibility for AI-AT-generated content at any step of a manuscript’s production. Authors must not cite AI as an author, co-author, or reference in their articles, and they must provide proper attribution for all materials used, including AI-AT-generated text and images.

Transparency and human oversight are essential when using AI-AT in biomedical research. Authorship of, and responsibility for, any manuscript lie entirely with the human authors.1 Journals should establish clear communication channels between authors, reviewers, and editors using a unified and centralized system for document sharing, feedback, and version control. They must provide authors, reviewers, and editors with clear guidelines on how manuscripts are assessed and on the criteria for acceptance or rejection of a submission. Reviewers and editors should disclose any conflict of interest and provide transparent, detailed, constructive, and timely feedback.

By implementing collaboration platforms, journals have an opportunity to foster a cooperative environment among all publication actors. These online tools facilitate collaboration on research projects by helping researchers manage tasks, track progress, and share documents and feedback. Such data can help AI models learn the nuances of scientific writing and generate more accurate and relevant content.

Editors and reviewers must update their knowledge of new AI-AT applications in medical manuscript writing to be able to distinguish legitimate from fraudulent research. Accordingly, each journal needs to provide adequate training and workshops for its editorial board to improve their understanding of AI-AT, its potential, and its risks; to help them upgrade their expertise and skills; and to equip them with effective tools that can detect AI hallucinations, inaccuracies, biases, and fraud.

This strong collaboration can improve manuscript quality by ensuring clarity, conciseness, and adherence to journal guidelines. It can enhance accuracy and reliability by choosing adequate methodology, minimizing errors and biases in research findings, and adopting proper data handling and reporting.

This collaboration can increase efficiency and productivity through clear and transparent communication and well-defined roles, thus streamlining the review and publication process and saving time and resources for all parties involved.

A transparent and collaborative research environment fosters trust in the scientific community and the public. Knowing that manuscripts have undergone rigorous expert scrutiny increases confidence in the research findings.

Finally, this collaborative work can also improve training data, as high-quality, collaboratively produced manuscripts provide valuable material for training AI writing tools.

In a robust collaborative environment, researchers can not only produce high-quality and trustworthy biomedical research but also pave the way for the effective integration of AI-assisted writing tools in the medical manuscript writing process, catalyzing faster advances in biomedical research and better healthcare outcomes.

The use of AI-AT in medical writing also raises other ethical issues, including misinformation, privacy concerns, lack of transparency, job displacement, erosion of human creativity, increased plagiarism, altered authorship, and human dependence on the technology.21

With the development of AI-AT in medical writing, new AI text content detectors have been created by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag. A study published in September 2023 in the International Journal for Educational Integrity evaluated the effectiveness of these tools in distinguishing between human-written and AI-generated text. The study used paragraphs generated by ChatGPT models 3.5 and 4, as well as human-written control responses, and applied the detection tools listed above. The findings suggest that the tools are more accurate in identifying content generated by GPT-3.5 than content generated by GPT-4; moreover, when applied to human-written responses, they exhibit inconsistencies and produce false positives. The study emphasizes the need for further development and refinement of AI content detection tools as AI-generated content becomes more sophisticated.22
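
The metrics at stake in such evaluations are straightforward to compute for any detector’s verdicts. The sketch below calculates accuracy and the false-positive rate (human text wrongly flagged as AI-generated) on a small set of invented labels; the numbers are illustrative only and are not the study’s data.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth (1 = AI-generated, 0 = human-written) and the
# verdicts a detection tool returned for the same eight texts.
truth     = [1, 1, 1, 1, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 1, 0, 0]

tn, fp, fn, tp = confusion_matrix(truth, predicted).ravel()

accuracy = (tp + tn) / len(truth)
false_positive_rate = fp / (fp + tn)  # human text wrongly flagged as AI
print(f"accuracy={accuracy:.2f}, false-positive rate={false_positive_rate:.2f}")
```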

Conclusions

It is important to note that while these AI technologies can be valuable aids in medical manuscript writing, they should not replace the expertise and critical thinking of human authors and researchers.

Undoubtedly, the impact of AI-AT on biomedical research will grow. Rather than fearing its effects, we should develop frameworks and guidelines to make better use of this technology to improve clinical research and medicine. The medical research community must implement clear and robust strategies and develop adequate tools for detecting bias, misinformation, and plagiarism, protecting privacy, and guaranteeing transparency.

Strong collaboration between authors, reviewers, and editors is the backbone for high-quality and trustworthy biomedical research. It is imperative to strike a balance between leveraging AI’s potential and upholding human creativity, critical thinking, and ethical considerations in medical writing.

Contributions

The author contributed to each phase of writing and revisions.

References

  1. ICMJE. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals [Internet]. 2023. Available from: https://www.icmje.org/icmje-recommendations.pdf [cited 1 December 2023].
  2. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329:637–9. doi: 10.1001/jama.2023.1344
  3. IBM. What are AI hallucinations? [Internet]. Available from: https://www.ibm.com/topics/ai-hallucinations [cited 1 December 2023].
  4. Lone W, Ho J, Koussayer B, Sujka J. ChatGPT: friend or foe in medical writing? An example of how ChatGPT can be utilized in writing case reports. 2023. Available from: http://creativecommons.org/licenses/by/4.0/ [cited 15 November 2023].
  5. Vintzileos AM, Chavez MR, Romero R. A role for artificial intelligence chatbots in the writing of scientific articles. Am J Obstet Gynecol. 2023 Aug 1;229(2):89–90. doi: 10.1016/j.ajog.2023.03.040
  6. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–10. doi: 10.1001/jama.2016.17216
  7. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Advances in Neural Information Processing Systems 27 (NIPS 2014); 2014. Available from: http://www.github.com/goodfeli/adversarial [cited 22 November 2023].
  8. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019 Apr 4;380(14):1347–58. doi: 10.1056/NEJMra1814259
  9. van Dinter R, Tekinerdogan B, Catal C. Automation of systematic literature reviews: a systematic literature review. Inf Softw Technol [Internet]. 2021;136:106589. doi: 10.1016/j.infsof.2021.106589
  10. Portenoy J, West JD. Supervised learning for automated literature review. CEUR Workshop Proc. 2019;2414:83–91.
  11. Zhang Y, Liang S, Feng Y, Wang Q, Sun F, Chen S, et al. Automation of literature screening using machine learning in medical evidence synthesis: a diagnostic test accuracy systematic review protocol. Syst Rev. 2022;11(1):5–11. doi: 10.1186/s13643-021-01881-5
  12. Boehringer Ingelheim and IBM collaborate to advance generative AI and foundation models for therapeutic antibody development [Internet]. 2023. Available from: https://www.boehringer-ingelheim.com/partnering/human-health-partnering/partnership-ibm-accelerate-new-antibody-therapies [cited 1 December 2023].
  13. Zimba O, Gasparyan AY. Plagiarism detection and prevention: a primer for researchers. Reumatologia. 2021;59(3):132–7. doi: 10.5114/reum.2021.105974
  14. Kolhar M, Alameen A. University learning with anti-plagiarism systems. Account Res. 2021 May 19;28(4):226–46. doi: 10.1080/08989621.2020.1822171
  15. Memon AR, Mavrinac M. Knowledge, attitudes, and practices of plagiarism as reported by participants completing the AuthorAID MOOC on research writing. Sci Eng Ethics. 2020 Feb 17;26(2):1067–88. doi: 10.1007/s11948-020-00198-1
  16. Diaz Milian R, Moreno Franco P, Freeman WD, Halamka JD. Revolution or peril? The controversial role of large language models in medical manuscript writing. Mayo Clin Proc. 2023 Oct 1;98(10):1444–8. doi: 10.1016/j.mayocp.2023.07.009
  17. Young L. iThenticate 2.0: advancing research integrity with AI writing detection [Internet]. 2023. Available from: https://www.turnitin.com/blog/ithenticate-2-0-advancing-research-integrity-with-ai-writing-detection [cited 1 December 2023].
  18. Kousha K, Thelwall M. Artificial intelligence to support publishing and peer review: a summary and review. Learn Publ. 2023 Aug 8;37(1):4–12. doi: 10.1002/leap.1570
  19. Naddaf M. ChatGPT generates fake data set to support scientific hypothesis. Nature. 2023 Nov;623(7989):895–896. doi: 10.1038/d41586-023-03635-w
  20. Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns. 2023;4(3):100706. doi: 10.1016/j.patter.2023.100706
  21. Doyal AS, Sender D, Nanda M, Serrano RA. Chat GPT and artificial intelligence in medical writing: concerns and ethical considerations. Cureus. 2023;15(8):e43292. doi: 10.7759/cureus.43292
  22. Elkhatat AM, Elsaid K, Almeer S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int J Educ Integr. 2023;19:17. doi: 10.1007/s40979-023-00140-5
