Scientific Journals Confront Ethical Controversy Over ChatGPT
Ethical controversy over the new artificial intelligence (AI) tool ChatGPT is roiling the scientific community.1 “There are a range of ethical concerns in relation to the use of AI for journals. There is risk of bias, inaccuracy, and authorship issues,” says Mohammad Hosseini, PhD, a postdoctoral researcher in the department of preventive medicine at Northwestern University.
Multiple papers have listed ChatGPT as an author.2 Some scientists are trying to demonstrate how the AI tool could help researchers draft papers.3 “Using ChatGPT to write or rewrite specific parts of a manuscript without disclosure is a major concern,” Hosseini says.
In response, some journals are setting explicit standards, such as stating that no AI tools can be listed as a credited author on a research paper or requiring researchers to document use of AI tools in the methods sections of papers.4-6 “Once AI systems such as ChatGPT are used, authors’ claim to originality is watered down,” Hosseini adds.
AI also cannot hold copyright. Inaccuracy is another concern. “We cannot know for sure whether authors would always go the extra mile to verify and substantiate information generated by these tools,” Hosseini says.
There are valid concerns that scientists could pass off AI-written text as their own. “As AI systems become more sophisticated, detecting AI-generated text is becoming more complicated,” Hosseini says.
Some researchers may use AI tools to conduct various stages of a literature review. “While various tools and algorithms have been employed in searching, screening, and coding for a few years now, large language model-driven tools can take these to a whole new level,” Hosseini says.
As the body of scholarly knowledge expands, it is becoming almost impossible for a human to monitor and synthesize the literature without using AI tools. “But the more we use these systems to synthesize available knowledge, the more likely we are to be negatively affected by their in-built biases and have our understanding of the world affected by them,” Hosseini says. “We are in a difficult situation in that we cannot do away with AI systems, but we also do not want too much of them.”
There also is the potential for erosion of public trust if AI tools are used irresponsibly in research. “These are real concerns — and they could become more scary once AI tools are more integrated into various facets of research and used more often without proper human verification and oversight,” Hosseini warns.
Understandably, some people are wary of bad actors using AI tools for nefarious or unethical purposes. “That said, these tools can also be used in ways that benefit research efforts, and can offer a range of advantages,” Hosseini offers.
After seeing the wide range of experimentation with ChatGPT posted online, everything from school essays to sonnets about sepsis, Catherine A. Gao, MD, and colleagues informally tested the technology on writing biomedical abstracts. “We wanted to design a study to examine this more objectively and with quantifiable outcomes,” says Gao, an instructor in pulmonary and critical care at Northwestern University Feinberg School of Medicine.
Gao and colleagues asked four members of their biomedical sciences lab to read 10 research abstracts generated by ChatGPT and 10 real abstracts.7 The reviewers correctly identified 68% of the ChatGPT-generated abstracts, but incorrectly flagged 14% of the original abstracts as AI-generated. The reviewers, all accustomed to reading scientific abstracts, reported it was surprisingly difficult to tell the generated abstracts from the real ones. “This indicates the technology is quite good at generating fluent and convincing text. Like all technology, this can be used in both a positive and negative way,” Gao says.
When used responsibly, AI tools can alleviate the burden of writing for busy scientists. The tools also can improve equity for scientists writing in a non-native language. “There are concerns about bias inherent to these models and, of course, there are concerns if the technology is used irresponsibly without careful factual validation by an expert,” Gao notes.
What is unique about ChatGPT is its accessibility and ease of use, allowing it to reach a much wider audience than previous generative text models. “It’s a very interesting time to watch the scientific and academic communities decide on the boundaries of appropriate and optimal use,” Gao says.
Scientists might use AI to help with a literature review. “This is an ethical way to use AI tools, similar to the way we now accept the use of calculators or spell checkers, as long as the responsibility for checking the correctness of the produced text lies with the researcher,” asserts Elisabeth Bik, PhD, a scientific integrity consultant at San Francisco-based Harbers Bik LLC.
On the other hand, ChatGPT or similar tools could be used to produce fake papers intended to mislead, to rewrite an article written by another author, or to generate a unique paper that slips past plagiarism detection software. “Such a tool could also be fed with the description of a fabricated scientific experiment to produce a convincing-looking scientific paper,” Bik says.
ChatGPT can be used to generate unique text based on plagiarized input, or on text describing fake test results. “It might be one more technique that bad actors, such as fraudulent researchers or paper mill companies, could use to produce fake-but-convincing-looking papers,” Bik cautions.
Falsified papers also could be generated based on variations of one manuscript text. For example, a text describing the effect of microRNA-1 on gastric cancer could be rewritten by ChatGPT to make it look like a study of the effect of circular RNA 2 on colon cancer or long noncoding RNA 3 on cervical cancer.
To guard against these possibilities, researchers might need to spend more time trying to reproduce scientific findings to ensure the data are legitimate. This would require a combined effort from authors and scientific publishers, including a willingness to publish reproducibility studies.
“We are now at a point in time where we can no longer trust our gut or our eyes to know if data or scientific papers are real,” Bik says. “That is why we might have to slow down science a bit.”
REFERENCES
1. Liebrenz M, Schleifer R, Buadze A, et al. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit Health 2023;5:e105-e106.
2. Stokel-Walker C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023;613:620-621.
3. Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health 2023;13:01003.
4. [No authors listed]. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023;613:612.
5. Hosseini M, Horbach SPJM. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. [Preprint]. https://doi.org/10.21203/rs.3.rs-2587766/v1
6. Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Account Res 2023;Jan 25:1-9.
7. Gao CA, Howard FM, Markov NS, et al. Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. [Preprint]. https://doi.org/10.1101/2022.12.23.521610