Determining if video has AI-generated deepfake content 

March 25, 2024
As deepfakes proliferate, video sharing platforms are increasingly looking to detect and remove them. This Purdue technology brings together video and audio, elements that have previously been considered discretely, for more accurate deepfake detection.

Jacob Brejcha

Licensing Associate – Physical Sciences

LAY LANGUAGE
You can’t always believe what you see in a video. Sometimes fraudulent videos are created to damage reputations, spread misinformation, and undermine trust. Purdue University researchers have created and validated technology to better identify deepfakes.

PROBLEM
AI-generated video content of individuals, known as deepfakes, is created with the intention of damaging reputations, spreading misinformation, and undermining trust in institutions.

SOLUTION
Purdue University researchers have developed technology to identify deepfakes by analyzing inconsistencies between lip movements and speech audio. The researchers use a multimodal — audio and visual — approach to achieve robust and reliable detection of deepfakes and determine the legitimacy of a video.
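The core idea of checking agreement between lip movements and speech audio can be illustrated with a minimal sketch. The feature extraction, the cosine-similarity comparison, and the decision threshold below are all illustrative placeholders, not the Purdue model; a real detector would learn these components from data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def audio_visual_sync_score(lip_feats, audio_feats):
    """Mean per-frame agreement between lip-movement and audio features.

    lip_feats / audio_feats: lists of per-frame feature vectors,
    assumed already time-aligned and of equal length (a simplification;
    how such features are extracted is outside this sketch).
    """
    scores = [cosine(v, a) for v, a in zip(lip_feats, audio_feats)]
    return sum(scores) / len(scores)

def is_likely_deepfake(lip_feats, audio_feats, threshold=0.5):
    """Flag a clip as a likely deepfake when audio-visual agreement is low.

    The 0.5 threshold is a hypothetical value for illustration only.
    """
    return audio_visual_sync_score(lip_feats, audio_feats) < threshold

# Toy example: well-aligned features score high (plausibly real);
# mismatched features score low (plausibly manipulated).
aligned = is_likely_deepfake([[1, 0], [0, 1]], [[1, 0], [0, 1]])     # False
mismatched = is_likely_deepfake([[1, 0], [0, 1]], [[0, 1], [1, 0]])  # True
```

The point of the sketch is only the multimodal framing: rather than judging video frames or audio alone, the signal is the consistency between the two streams over time.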

The technology has been validated by training and testing the model on the DeepFake Detection Challenge (DFDC) dataset, comprising over 60,000 training videos and 40,000 validation videos. The model was also tested on real-world videos from the Internet and achieved 98% accuracy. The technology has applications across traditional and social media as well as the cybersecurity industry.

PRIMARY INVESTIGATOR
Aniket Bera, College of Science

INNOVATION DISCLOSURE
Learn More

LICENSING CONTACTS
Email: otcip@prf.org

MEDIA CONTACT
Email: Steve Martin // sgmartin@prf.org
