In the age of AI and growing concern in academia over AI-generated essays, a team of researchers from the University of Kansas has reassuring news: an AI text detector for scientific essays that can distinguish between human-written and computer-generated content almost 100% of the time.
In their study, the researchers noted that while several AI detectors are currently available, none of them can reliably be applied to scientific papers. Professor Heather Desaire said that “most of the field of text analysis wants a really general detector that will work on anything,” so her team instead focused on reports written specifically for chemistry journals.
According to Techxplore, the team’s detector was trained on journals published by the American Chemical Society. They collected 100 introductory passages written by professionals, then programmed ChatGPT to write its own introductions based either on journal abstracts or simply on the titles of reports.
When the detector scanned the three categories of text, it correctly identified the human-authored passages 100% of the time, and did the same for reports generated from prompts containing only report titles. Results were nearly as good for reports generated from journal abstracts, with correct identification 98% of the time.
Competing tools do not come close: ZeroGPT performed poorly on science-related reports, and OpenAI’s own classifier failed to correctly identify the authorship of essays an average of 80% of the time.
“This new detector will allow the scientific community to assess the infiltration of ChatGPT into chemistry journals, identify the consequences of its use, and rapidly introduce mitigation strategies when problems arise,” claims Desaire.
In response to the flood of AI-generated content, scientific journals are rewriting their rules regarding article submissions, with most banning AI-generated reports and requiring disclosure of any other AI processes used in composing a report.
Explaining the possible risks of AI use in scientific journals, Desaire warned that overuse of such tools may result in a flood of marginally valuable manuscripts. “They could cause highly cited papers to be over-represented and emerging works, which are not yet well known, to be ignored.”
She added that most concerning to her is the tools’ tendency to “hallucinate” and fabricate facts. Nevertheless, she remains optimistic, saying that editors must now take the lead in detecting AI contamination. “Journals should take reasonable steps to ensure that their policies about AI writing are followed, and we think it is fully feasible to stay ahead of the problem of detecting AI.”