To Make Better AI, Stop Tackling Problems ‘Whack-a-Mole’-Style, UMD Researcher Says
Since they became widely available three years ago, artificial intelligence chatbots have been embraced as research assistants, trip planners, writing helpers and more—but they’ve provided some face-palm moments as well.
One company’s bot developed a noticeable anti-Muslim bent, while another’s took to praising Hitler. And then there was the ill-fated attempt to reduce bias that resulted in depictions of, among other historical impossibilities, a rainbow coalition of U.S. Founding Fathers.
Although each problem was addressed, the trend has persisted, and there’s no reason to believe it will stop without thoughtful action, said Philip Resnik, a University of Maryland scholar whose research lies at the intersection of linguistics, computing and artificial intelligence.
“It’s what I call playing ‘whack-a-mole’ when you deal with these things as individual problems to solve,” said Resnik, a professor of linguistics with a joint appointment in the University of Maryland Institute for Advanced Computer Studies. “You fix it here, and then it pops up there, and maybe your corrective action created yet another problem.”
In a recent paper in Computational Linguistics, provocatively titled “Large Language Models Are Biased Because They Are Large Language Models,” Resnik argues that harmful biases in chatbots aren’t indicative of a flaw in the technology, but a reflection of the basic nature of the underlying models that enable these uncanny systems to converse in unscripted, everyday language.
In an interview with Maryland Today, Resnik explained how choices made in the development of large language models unintentionally led to the bias, which he said could have consequences more far-reaching than outrageous AI statements. But new choices, he said, could lead to AI tools that work for everyone.
Read the full article in Maryland Today.