LAY LANGUAGE
You can’t always believe what you see in a video. Fraudulent videos are sometimes created to damage reputations, spread misinformation, and undermine trust. Purdue University researchers have created and validated technology to better identify these deepfakes.
PROBLEM
AI-generated videos of individuals, known as deepfakes, are created with the intention of damaging reputations, spreading misinformation, and undermining trust in institutions.
SOLUTION
Purdue University researchers have developed technology to identify deepfakes by analyzing inconsistencies between lip movements and speech audio. The researchers use a multimodal (audio and visual) approach to achieve robust, reliable detection of deepfakes and determine the legitimacy of a video.
The technology has been validated by training and testing the model on the DeepFake Detection Challenge (DFDC) dataset, including over 60,000 training videos and 40,000 validation videos. The model was also tested on real-world videos from the internet and achieved 98% accuracy. It has applications across traditional and social media as well as the cybersecurity industry.
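The core idea of the multimodal approach can be illustrated with a minimal sketch: extract per-frame features from the audio track and from the lip region, then flag the video when the two streams disagree. The embedding functions, threshold, and toy data below are hypothetical placeholders, not the Purdue model or the DFDC pipeline.

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def detect_deepfake(audio_embeddings, lip_embeddings, threshold=0.5):
    """Flag a video as fake when per-frame audio and lip-motion
    embeddings are, on average, poorly aligned.

    The 0.5 threshold is an illustrative assumption; a real system
    would learn both the embeddings and the decision boundary.
    """
    sims = [cosine_similarity(a, v)
            for a, v in zip(audio_embeddings, lip_embeddings)]
    return float(np.mean(sims)) < threshold


# Toy data: 10 frames of 8-dimensional features.
rng = np.random.default_rng(0)
real_audio = rng.normal(size=(10, 8))
# Genuine video: lip features track the audio closely.
real_lips = real_audio + rng.normal(scale=0.1, size=(10, 8))
# Deepfake: lip motion is unrelated to the audio.
fake_lips = rng.normal(size=(10, 8))

print(detect_deepfake(real_audio, real_lips))
print(detect_deepfake(real_audio, fake_lips))
```

With aligned streams the mean similarity stays high and the video passes; with unrelated lip motion the similarity collapses toward zero and the video is flagged.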
PRIMARY INVESTIGATOR
Aniket Bera, College of Science
INNOVATION DISCLOSURE
LICENSING CONTACTS
Email: otcip@prf.org
MEDIA CONTACT
Email: Steve Martin // sgmartin@prf.org
The Convergence Center for Innovation and Collaboration 101 Foundry Dr, West Lafayette, IN 47906, 765-588-3470