Researchers Develop and Test AI Text-Detector Tools

According to researchers at UMD, existing AI text detectors are susceptible to errors that compromise their reliability at the most crucial moments

The explosion of open-source artificial intelligence (AI) tools that use deep learning techniques to generate text, also known as large language models, has raised alarms among government officials, content creators and educators because of how easily the technology can be abused for plagiarism, deception and misinformation.

A number of AI detectors have been released in response to these concerns, but none of them are sufficiently reliable in practical scenarios, says Soheil Feizi, an associate professor of computer science with a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS).

Existing AI text detectors are susceptible to two types of errors, he explains: AI-generated text can slip through undetected, or a false positive occurs when a human’s text is incorrectly identified as AI-generated. Feizi finds the latter especially concerning in educational settings because it’s more likely to happen when English is a student’s second language.

In a study that was covered by The Washington Post, New Scientist, and The Register, Feizi and his graduate students showed that the majority of existing detectors have a high false-positive rate and can be easily evaded.

“You can never reliably say that this sentence was written by a human or some kind of AI, because the distribution between the two types of content is so close to each other,” he says. “It’s especially true when you think about how sophisticated these large language models are becoming.”
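To get a feel for that argument, the short sketch below is an illustration, not the team’s code: it shows how the best achievable detection accuracy, measured as AUROC, collapses toward coin-flipping (0.5) as the statistical distance between human-written and AI-generated text shrinks. The formula follows the bound reported in the team’s paper, “Can AI-Generated Text be Reliably Detected?”

```python
# Illustrative sketch (not the authors' code): as the total variation (TV)
# distance between human and AI text distributions shrinks, the best possible
# detector's AUROC is squeezed toward 0.5, i.e., random guessing.

def best_case_auroc(tv: float) -> float:
    """Upper bound on any detector's AUROC given a TV distance in [0, 1]."""
    return 0.5 + tv - 0.5 * tv ** 2

for tv in (1.0, 0.5, 0.2, 0.05):
    print(f"TV distance {tv:.2f} -> AUROC at most {best_case_auroc(tv):.3f}")
```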

Even though Assistant Professor of Computer Science Furong Huang agrees with Feizi that existing detectors are imperfect, she believes that the key to improving models is providing more data for them to learn from.

Her team’s paper explores how much additional training data is required to advance a detector’s capabilities. It also shows that a holistic approach of analyzing entire paragraphs or documents, as opposed to a single sentence, increases the accuracy of detection.
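The toy example below illustrates that intuition with made-up, hypothetical per-sentence scores rather than the paper’s actual method: averaging a noisy detector score over many sentences separates human and AI writing far more cleanly than judging any single sentence in isolation.

```python
# Minimal illustration (hypothetical scores, not the paper's method): the more
# sentences a detector can pool evidence from, the more reliable its verdict.
import random

random.seed(0)

def sentence_score(is_ai: bool) -> float:
    # Hypothetical detector: AI sentences score slightly higher on average,
    # but the per-sentence distributions overlap heavily.
    return random.gauss(0.55 if is_ai else 0.45, 0.2)

def paragraph_score(is_ai: bool, n_sentences: int) -> float:
    # Average the per-sentence scores over a whole paragraph or document.
    return sum(sentence_score(is_ai) for _ in range(n_sentences)) / n_sentences

for n in (1, 10, 50):
    print(f"{n:>2} sentences: human={paragraph_score(False, n):.2f}  "
          f"ai={paragraph_score(True, n):.2f}")
```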

However, Huang acknowledges that as detectors become more sophisticated, so will the strategies to evade them.

“It’ll be like a constant arms race between generative AI and detectors,” she says. “But we hope that this dynamic relationship actually improves how we approach creating both the generative large language models and their detectors in the first place.”

Tom Goldstein, a professor of computer science and director of the University of Maryland Center for Machine Learning, prefers an alternative detection strategy called watermarking because it’s extremely unlikely to make false accusations.

While watermarking is more commonly known in the context of image copyright protection, the concept is similar for large language models. Because an invisible signal is embedded in the text as it’s being generated, the detection algorithm knows exactly what to look for.

“This makes watermark detection much easier and far more accurate than other types of models,” says Goldstein, who also has an appointment in UMIACS.

In a study that recently won an Outstanding Paper Award at the International Conference on Machine Learning, Goldstein and his coauthors showed that watermarks are a practical tool for combating malicious uses of generative AI models because false positive detections are statistically improbable.
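The sketch below is a simplified toy version of that idea, not the team’s implementation: a shared secret key and the previous word decide whether each word counts as “green,” and a one-sided z-test flags text that contains far more green words than chance would allow, which is what makes false accusations statistically improbable. The key, the green-list fraction and the word-level hashing here are stand-ins; the published scheme operates on the model’s vocabulary and logits during generation.

```python
# Toy sketch in the spirit of "A Watermark for Large Language Models"
# (not the authors' implementation). A secret key plus the previous token
# marks a fraction GAMMA of tokens "green"; watermarked text over-uses green
# tokens, and a z-test quantifies how unlikely that is to happen by chance.
import hashlib
import math

GAMMA = 0.5          # fraction of green tokens expected in unwatermarked text
SECRET = "demo-key"  # stand-in for the shared watermark key (hypothetical)

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(f"{SECRET}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA

def watermark_z_score(tokens: list[str]) -> float:
    """Count green tokens and compare the tally with the GAMMA*T expected by chance."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    t = len(tokens) - 1
    return (hits - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))

words = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(words):.2f}; flag as watermarked only if z is large (e.g., above 4)")
```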

His tweet about the team’s results went viral and has been viewed more than 1.3 million times, prompting interviews and articles in The New York Times, Wired, Communications of the ACM, and more.

However, he acknowledges that watermarking could only be successful in real-world scenarios if developers and companies like OpenAI cooperate on a standard implementation.

Making AI detectors more reliable and trustworthy is just one of many problems that need further research and attention from machine learning experts, says Goldstein, who is also a co-principal investigator of the new Institute for Trustworthy AI in Law & Society (TRAILS).

Funded by a $20 million award from the National Science Foundation and the National Institute of Standards and Technology, TRAILS is a cross-campus collaboration aimed at promoting fairness and trustworthiness in AI by incorporating both users and stakeholders in the development process. The award is being managed by UMIACS, which is also providing technical and administrative support for TRAILS.

In collaboration with Morgan State University and George Washington University, UMD researchers are investigating what trust in AI looks like, how to create technical AI solutions that build trust, and which policy models are effective in sustaining trust.

“We have to accept that these tools now exist and that they’re here to stay,” says Feizi, who is also a member of TRAILS. “There’s so much potential in them for fields like education, for example, and we should properly integrate these tools into systems where they can do good.”

—Story by Maria Herd, UMIACS communications group

Part of this story originally appeared in “Is AI-Generated Content Actually Detectable?” published by the College of Computer, Mathematical, and Natural Sciences (CMNS) in May 2023.

The paper “Can AI-Generated Text be Reliably Detected?” was coauthored by Feizi and computer science graduate students Sriram Balasubramanian, Vinu Sankar Sadasivan, Aounon Kumar, and Wenxiao Wang.

The paper “On the Possibilities of AI-Generated Text Detection” was coauthored by Huang; Research Scientist Amrit Singh Bedi; computer science doctoral students Souradip Chakraborty, Sicheng Zhu and Bang An; and Distinguished University Professor of Computer Science Dinesh Manocha.

The award-winning paper, “A Watermark for Large Language Models,” was coauthored by Goldstein; postdoctoral scholar Jonas Geiping; doctoral students John Kirchenbauer and Yuxin Wen; and Professor Jonathan Katz and Assistant Professor Ian Miers, who both have appointments in the Department of Computer Science and UMIACS.

Feizi, Huang, Goldstein and Manocha are all core members of the University of Maryland Center for Machine Learning, which is supported by UMIACS, CMNS and technology and financial leader Capital One.