Researchers Say Current AI Watermarks Are Trivial to Remove
A traditional watermark is a visible logo or pattern that can appear on anything from the cash in your wallet to a postage stamp, all in the name of discouraging counterfeiting. You might have seen one in the preview of your graduation photos, for example. In the case of artificial intelligence, though, the concept takes a slight twist, as most things in the field do.
In the context of AI, watermarking allows a computer to detect whether text or an image was generated by artificial intelligence. But why watermark images at all? Generative art is a prime breeding ground for deepfakes and other misinformation. So although they are invisible to the naked eye, watermarks can combat the misuse of AI-generated content, and they can even be integrated into machine-learning systems developed by tech giants like Google. Other major players in the space, from OpenAI to Meta and Amazon, have pledged to develop watermarking technology to combat misinformation.
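The article doesn't describe any company's actual scheme, but the core idea, hiding a machine-detectable signal in content without visibly changing it, can be sketched with a classic toy technique: least-significant-bit (LSB) embedding. Everything below (the pixel values, the mark, the helper names) is an illustrative assumption, not how Google's or anyone else's production watermark works.

```python
# Toy sketch of an "invisible" watermark via least-significant-bit (LSB)
# embedding. Illustrative only -- real systems like Google's SynthID are
# far more sophisticated. All values and helper names here are made up.

def embed(pixels, bits):
    """Hide watermark bits in the lowest bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read back the low bit of the first n pixel values."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 13, 55, 254, 97, 180, 42, 7]   # stand-in for image data
mark   = [1, 0, 1, 1, 0, 0, 1, 0]             # the hidden signature

marked = embed(pixels, mark)
assert extract(marked, 8) == mark                            # detector finds the mark
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))  # change is imperceptible
```

Each pixel shifts by at most one brightness level, which the eye cannot see, yet a detector that knows where to look recovers the signature exactly.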
That’s why computer science researchers at the University of Maryland (UMD) took it upon themselves to examine how easy it is for bad actors to add or remove watermarks. Soheil Feizi, an Associate Professor in UMD's Department of Computer Science, told Wired that his team’s findings confirm his skepticism that there are any reliable watermarking applications at this point. The researchers easily evaded current watermarking methods during testing and found it even easier to add fake watermarks to images that weren’t generated by AI. Beyond testing how easy watermarks are to evade, one UMD team notably developed a watermark that is nearly impossible to remove from content without completely compromising the underlying intellectual property, making it possible to detect when products are stolen.
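The fragility at issue can be illustrated with the same kind of toy least-significant-bit scheme: a tiny, visually invisible perturbation (here, a uniform +1 brightness shift; in practice, re-compression or resizing) wipes out the hidden signal. This is a hedged sketch of the general weakness, not the UMD team's actual attack.

```python
# Toy illustration of why naive watermarks are easy to remove. A signal
# hidden in least-significant bits does not survive tiny perturbations.
# This is NOT the UMD researchers' method -- just the general principle.

def embed(pixels, bits):
    """Hide watermark bits in the lowest bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read back the low bit of the first n pixel values."""
    return [p & 1 for p in pixels[:n]]

mark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed([200, 13, 55, 97, 180, 42, 7, 120], mark)

# "Attack": brighten every pixel by one level -- visually indistinguishable,
# but it flips every low bit and destroys the watermark.
attacked = [p + 1 for p in marked]

print(extract(attacked, 8) == mark)  # False: the hidden mark is gone
```

The same asymmetry cuts the other way: nothing stops an attacker from running `embed` on an ordinary photograph, which is why the researchers found it even easier to plant fake watermarks than to strip real ones.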
Click HERE to read the full article
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.