AI Detection Tools Are Having Trouble Detecting Cheaters

Simply paraphrasing LLM-generated content can often deceive the leading detection tools on the market

To many, AI detection tools offer a glimmer of hope against the erosion of truth. They promise to identify the artifice, preserving the sanctity of human creativity.

However, computer scientists at the University of Maryland put this claim to the test in their quest for veracity. The results? A sobering wake-up call for the industry.

Soheil Feizi, an associate professor of computer science at UMD, revealed the vulnerabilities of these AI detectors, stating they are unreliable in practical scenarios. Simply paraphrasing LLM-generated content can often deceive detection techniques used by Check For AI, Compilatio, Content at Scale, Crossplag, DetectGPT, Go Winston, and GPT Zero, to name a few.

“The accuracy of even the best detector we have dropped from 100% to the randomness of a coin flip. If we simply paraphrase something that was generated by an LLM, we can often outwit a range of detecting techniques,” Feizi said.
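To make the attack concrete, the following is a minimal sketch of how one might test a paraphrase attack against an off-the-shelf detector. It is not the UMD researchers' code; the Hugging Face checkpoints and the "paraphrase:" prompt prefix are illustrative assumptions and may need to be swapped for models you actually have access to.

```python
# Minimal sketch of a paraphrase attack on an off-the-shelf AI-text detector.
# NOT the researchers' code: the model checkpoints and the "paraphrase:" prefix
# are assumptions for illustration only.
from transformers import pipeline

# Assumed checkpoints: a RoBERTa-based GPT-2 output detector and a T5 paraphraser.
detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")
paraphraser = pipeline("text2text-generation",
                       model="humarin/chatgpt_paraphraser_on_T5_base")

llm_text = ("Large language models have rapidly changed how students draft "
            "essays, raising new concerns about academic integrity.")

# Score the original LLM output; the label/score format depends on the checkpoint.
print("original:   ", detector(llm_text)[0])

# Paraphrase the same content, then score it again. If the attack succeeds,
# the detector's confidence that the text is machine-generated drops sharply.
rewrite = paraphraser("paraphrase: " + llm_text, max_length=128)[0]["generated_text"]
print("paraphrased:", detector(rewrite)[0])
```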

