Inventions of the Year: Deepfake Detection Invention Distinguishes Between Real and Fake Media

Invention of the Year nominee could help prevent spread of deepfake media and misinformation.

This article is republished from research.umd.edu

It has been said that “seeing is believing,” but in the age of social media, viral videos, and artificial intelligence (AI) technology, can we truly believe what we see on the internet? Computer science researchers at the University of Maryland have invented a “Deepfake Detection Tool” to help answer that question.

Advances in computer vision and machine learning have enabled the creation of sophisticated and convincing forgeries of images and videos known as deepfakes. These falsified media are often used maliciously to spread misinformation or commit fraud and other cybercrimes. A November 2020 report by Sensity, a software development company, found that approximately 85,000 harmful deepfake videos had been detected, a number expected to double every six months.

Notable examples of deepfakes include a fabricated public service announcement featuring Barack Obama, co-created by Jordan Peele and BuzzFeed to demonstrate that you cannot trust everything you see online, as well as a forged Instagram video in which Meta CEO Mark Zuckerberg appears to “admit” that Facebook’s true intent is to manipulate its users. Deepfakes pose a significant threat to politics, with the potential to manipulate elections, alter political narratives, weaken the public’s trust in a country’s leadership, and stoke hatred among social groups. Deepfakes can also be used to generate non-consensual pornography.

Dinesh Manocha, the Paul Chrisman Iribe Professor of Computer Science and Electrical and Computer Engineering and a Distinguished University Professor, together with Ph.D. students Trisha Mittal, Uttaran Bhattacharya and Rohan Chandra and Research Assistant Professor Aniket Bera, has invented a method that can detect such deepfake media and, in turn, help prevent the spread of misinformation and fraud. The researchers’ deepfake detection tool analyzes affective cues, such as eye dilation, raised eyebrows, and speaking volume, pace, pitch and tone. Unlike earlier approaches, which rely on a single input, the method incorporates both audio and video, and together these cues provide complementary information that supports stronger inferences about the video in question. The tool analyzes how strongly the affective cues extracted from a video’s audio track correlate with those extracted from its visual track, and uses that correlation to determine whether the video is “real” or “fake.”
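For readers who want a more concrete picture of that correlation step, the short Python sketch below illustrates the general idea only. It is not the team’s model: the feature extractors, the cosine-similarity measure and the decision threshold shown here are hypothetical stand-ins for the learned components the researchers describe.

# A minimal sketch of the audio-visual correlation idea described above.
# This is NOT the researchers' implementation: the two extract_* functions
# and the decision threshold are hypothetical placeholders standing in for
# trained affect models.
import numpy as np


def extract_audio_affect(audio_features: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would infer vocal affect cues
    # (volume, pace, pitch, tone) with a trained network.
    return audio_features.mean(axis=0)


def extract_visual_affect(video_features: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would infer facial affect cues
    # (eye dilation, raised eyebrows, expression) with a trained network.
    return video_features.mean(axis=0)


def affect_correlation(audio_emb: np.ndarray, video_emb: np.ndarray) -> float:
    # Cosine similarity between the two affect embeddings.
    denom = np.linalg.norm(audio_emb) * np.linalg.norm(video_emb)
    return float(np.dot(audio_emb, video_emb) / denom) if denom else 0.0


def is_likely_fake(audio_features: np.ndarray,
                   video_features: np.ndarray,
                   threshold: float = 0.5) -> bool:
    # In a genuine video the emotions conveyed by voice and face should
    # agree; a weak correlation is treated here as a sign of manipulation.
    score = affect_correlation(extract_audio_affect(audio_features),
                               extract_visual_affect(video_features))
    return score < threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.normal(size=(100, 64))   # stand-in per-frame audio features
    video = rng.normal(size=(100, 64))   # stand-in per-frame visual features
    print("likely fake?", is_likely_fake(audio, video))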

The Deepfake Detection Tool was recognized as a finalist for an Invention of the Year Award in the Information Sciences category. The awards were presented on May 3, 2022, at Innovate Maryland, a campus-wide celebration of innovation and partnerships at the University of Maryland.

More information about this invention can be found at: https://go.umd.edu/deepfakedetection

The researchers’ work was supported in part by the Army Research Office.