Though in many cases deepfakes manifest as harmless memes or clever marketing campaigns, deepfake technologies are a growing cultural, political, economic, social, and business risk with the power to cause harm.
The implications of deepfakes are disturbing, ranging from spreading disinformation and damaging the reputations of political and public figures to enabling corporate espionage and cyberattacks. Dedicated deepfake communities and sites are also proliferating, some of which even let consumers commission custom deepfakes.
Deepfakes are images, videos, and audio that look convincingly real but are actually AI-manipulated fabrications. Deep learning, machine learning (ML), and artificial intelligence (AI) techniques are used to create the fake content, such as superimposing a celebrity’s face onto another person’s body so they appear to say or do things they never did, with the purpose of deceiving viewers.
Deepfake technologies are increasingly sophisticated and enable criminals to alter the context of the narrative being told, undermining the authenticity of the information we’re presented with online. With the number of deepfakes online doubling approximately every six months, the question of how to identify them is becoming more critical.
How Is a Deepfake Created?
Deepfake videos are often made using a variational auto-encoder (VAE) and a facial recognition algorithm. A trained VAE encodes images into low-dimensional representations and then decodes those representations back into images.
In practice, it would look something like this hypothetical example:
- A person wants to make a deepfake video of a famous entertainer for a Super Bowl ad
- The person uses an auto-encoder that’s trained on images of the entertainer’s face and another that’s trained on diverse facial images
- The training sets for each auto-encoder can be selected by running a facial recognition algorithm on videos that capture various postures and lighting environments
- Following training, the two auto-encoders are combined to render a realistic video of the entertainer’s face on another individual’s body
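The pipeline above can be sketched in miniature. This is an illustrative toy, not a working deepfake system: the encoder and the two per-identity decoders are untrained random linear maps standing in for deep networks, and all names and dimensions are hypothetical. It only shows the core face-swap trick: encode one person’s frame into the shared latent space, then decode it with the other person’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: flattened 64x64 grayscale faces, 128-dim latent space.
IMG_DIM, LATENT_DIM = 64 * 64, 128

# A shared encoder compresses any face into a low-dimensional representation.
# In a real system this would be a trained deep network, not a random matrix.
encoder = rng.normal(size=(LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

# One decoder per identity, each (in a real system) trained to reconstruct
# only that person's face from the shared latent space.
decoder_entertainer = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
decoder_body_double = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(face: np.ndarray) -> np.ndarray:
    """Map a flattened face image to its latent representation."""
    return encoder @ face

def decode(latent: np.ndarray, decoder: np.ndarray) -> np.ndarray:
    """Reconstruct a face image from a latent vector with a given decoder."""
    return decoder @ latent

# The swap: encode a frame of the body double, then decode it with the
# entertainer's decoder, yielding the entertainer's face in that pose.
frame = rng.normal(size=IMG_DIM)      # stand-in for one video frame
latent = encode(frame)
swapped = decode(latent, decoder_entertainer)

print(latent.shape, swapped.shape)    # (128,) (4096,)
```

The key design point is the shared encoder: because both identities pass through the same latent space, a pose captured from one face can be re-rendered as the other.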
Industry Progress to Detect Deepfakes
According to Kaggle, detecting manipulated media is a technical challenge that demands cross-industry collaboration. Research-driven initiatives have emerged in recent years that aim to automatically detect various manifestations of deepfakes, which are often immensely hard for humans to identify.
The DeepFake Detection Challenge (DFDC), a competition created by AWS, Microsoft, Facebook, the Partnership on AI, and academics, was run on Kaggle and offered a $1 million prize to global researchers who could develop innovative technologies to aid in detecting deepfakes and manipulated media. It garnered over 2,000 participants and generated over 35,000 deepfake detection models.
Detect Fakes is an MIT research initiative that strives to pinpoint methods to counteract AI-generated misinformation, and features videos that invite participants to test whether they can distinguish a deepfake from a real video.
Researchers from UC Berkeley and Stanford created an AI-driven approach to detect lip-sync deepfakes, which is able to identify 80 percent of fakes by detecting misalignment between the shapes of people’s mouths and the sounds they make when they speak.
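The intuition behind that misalignment check can be illustrated with a toy sketch. This is not the Berkeley/Stanford method, just a hypothetical simplification: treat per-frame mouth openness and per-frame audio loudness as two signals, and score how well they move together. Genuine footage should correlate strongly; badly lip-synced footage should not.

```python
import numpy as np

def sync_score(mouth_openness: np.ndarray, audio_energy: np.ndarray) -> float:
    """Correlation between per-frame mouth movement and speech loudness.
    Higher means the mouth tracks the audio; lip-synced fakes drift lower."""
    m = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    a = (audio_energy - audio_energy.mean()) / audio_energy.std()
    return float(np.mean(m * a))

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
speech = np.abs(np.sin(t))  # toy per-frame audio energy envelope

# Real footage: the mouth tracks the audio, plus measurement noise.
real_mouth = speech + 0.1 * rng.normal(size=t.size)
# Fake footage: the mouth moves out of phase with the audio.
fake_mouth = np.abs(np.sin(t + 1.5)) + 0.1 * rng.normal(size=t.size)

print(sync_score(real_mouth, speech) > sync_score(fake_mouth, speech))  # True
```

A real detector works on learned features of mouth shape and phoneme acoustics rather than a single correlation, but the signal being exploited is the same.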
Microsoft released a commercial deepfake detection tool that analyzes video frames and generates a confidence score indicating whether each frame is real or AI-produced. Notably, it was made accessible to various organizations monitoring the 2020 U.S. elections.
Research teams from Intel and the Graphics and Image Computing lab at Binghamton University developed a tool that uses biological signals and data to identify and classify deepfakes with 96 percent accuracy. The tool is based on the idea that while facial videos can be synthesized, subtle physiological signals, like heart rate fluctuations and blood flow that show up as pixel color changes, can’t be easily reproduced.
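The heart-rate idea can be sketched with a minimal example, assuming a fixed frame rate and a pre-extracted face region. This is a hypothetical simplification of remote photoplethysmography, not the Intel/Binghamton tool: average the green-channel intensity of the face in each frame, then look for a spectral peak in the plausible human heart-rate band.

```python
import numpy as np

FPS = 30  # assumed video frame rate

def estimate_pulse_bpm(green_means: np.ndarray, fps: int = FPS) -> float:
    """Estimate heart rate from the average green-channel intensity of the
    face region in each frame. Blood flow causes tiny periodic color shifts;
    a synthesized face generally lacks this coherent periodic signal."""
    signal = green_means - green_means.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    # Restrict to plausible human heart rates (40-180 beats per minute).
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    peak = freqs[band][np.argmax(spectrum[band])]
    return float(peak * 60)

# Toy example: a 10-second clip whose skin tone pulses at 72 bpm.
t = np.arange(10 * FPS) / FPS
green = 120 + 0.5 * np.sin(2 * np.pi * (72 / 60) * t)
print(round(estimate_pulse_bpm(green)))  # 72
```

A detector built on this idea would check whether such a physiologically plausible, spatially consistent pulse exists at all; synthesized faces tend to produce noise or inconsistent signals in this band.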
Though innovations are emerging that could identify deepfakes, most remain in the research or development stages, and some authorities even caution that there may not be a long-term, technically driven solution for deepfakes.
Just as AI is being used to create deepfakes, it’s also a potential tool to detect them and combat the negative and unethical effects of malicious deepfake technologies. As deepfakes become increasingly common, this will be key in mitigating the risks that can arise from manipulated data.
For information about new developments and career opportunities in artificial intelligence and machine learning technologies, check out Simplilearn’s AI and Machine Learning courses that can provide you with the skills you need to become part of this exciting industry.