You'd think scientists would know better.

Red Handed

A paper published in the journal Physica Scripta last month became the subject of controversy after Guillaume Cabanac, a computer scientist and integrity investigator, noticed that the phrase "Regenerate Response," the label of a button in ChatGPT's interface, had been copied into the text, seemingly by accident.

Now, the authors have fessed up to using the chatbot to help draft the article, becoming the latest testament to generative AI's worrying inroads into academia.

"This is a breach of our ethical policies," Kim Eggleton, head of peer review and research integrity at IOP Publishing, which publishes Physica Scripta, told Nature.

The paper has now been retracted for not declaring its use of the chatbot — which, depending on your point of view, is either a storm in a teacup or a sign of the future of academia.

Peer Review Paladin

Since 2015, Cabanac has undertaken a sort of crusade to uncover other published papers that aren't upfront about their use of AI tech, which back then was little more than a curiosity. As computers have gone from spitting out veritable gibberish to convincing, human-like compositions, the fight has gotten harder. But this has only steeled the resolve of Cabanac, who's helped uncover hundreds of AI-generated manuscripts.

"He gets frustrated about fake papers," Cyril Labbé, a fellow computer scientist and Cabanac's partner in crime-fighting, told Nature last year. "He's really willing to do whatever it takes to prevent these things from happening."

Those careful to cover their tracks won't leave behind obvious clues like "as an AI language model," though thankfully for sleuths like Cabanac, many still do. He recently uncovered another paper, published in the journal Resources Policy, that contained several of those braindead giveaways. The publisher told Nature this week that it is "aware of the issue" and is investigating the incident.

Beyond that, AI models can often jumble the facts, and may simply be too dumb to accurately regurgitate the math and technical language involved in scientific papers. The Resources Policy study, for instance, contained nonsensical equations, Cabanac found.

ChatGPT can also produce false claims out of thin air, a phenomenon perhaps too generously described as "hallucinating." Case in point: a preprint paper was outed as partially AI-generated last week after a Danish professor noticed that it cited papers under his name that didn't exist.

Overwhelming Numbers

Given how rigorous the peer review process is — or at least should be — it's alarming that AI-made phonies are slipping through the cracks.

Maybe not everyone has caught on. The technology has only recently become ubiquitous, after all. Or, says researcher and fake-paper sleuth David Bimler, peer reviewers simply don't have time to look for stuff like that.

"The whole science ecosystem is publish or perish," Bimler told Nature. "The number of gatekeepers can't keep up."

And that may be the bitter truth. It takes a lot of time and expertise to review papers, but it only takes a few minutes for an AI to churn one out, however shoddy it may be.

More on AI: Every Single State’s Attorney General Is Calling for Action on AI-Generated Child Abuse Materials

