How deepfake detection is shaping the future of AI
Imagine a future where artificial intelligence (AI) helps you in amazing ways every day! From helping you write stories to generating realistic images and videos that look just like the real world, AI is doing things we once thought only humans could do. It's like magic! But there's more: it can learn, adapt, and get better with every task.
What if you could ask a computer to make a video of something that never happened? Or create smart assistants that can talk just like your grandparents? While this sounds fun, it can also be a little scary when people use these tools to trick us.
Deepfakes are fake videos, pictures, or even voices, made by deep learning models, that look and sound real. What would you do if a fake video of you were uploaded to defame you?
Evolution of deepfakes
Fakery has come a long way, from manual editing to AI-generated synthetic voices, images, and video. At first, AI could only swap faces in pictures or videos. Later, it became smart enough to clone voices using text-to-speech and voice-conversion models. Now, with language models (like those used in chatbots), AI can even fabricate entire conversations! Some models can even swap faces in live video calls. If you get a video call from someone you trust, but it is not really them talking, how will you figure out what is fake and what is real?
Challenges in deepfake detection
As deepfake technology advances, the fakes become more realistic and harder to spot. A newer family of AI models, diffusion models (DMs), is making this even more challenging. Unlike older methods such as GANs (Generative Adversarial Networks), DMs produce highly realistic photos and videos with fewer tell-tale artefacts. Researchers now have to find new ways to spot deepfakes generated by these models, since they behave differently and leave different statistical traces.
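As a toy illustration of those different traces, one line of research examines frequency-domain fingerprints: GAN upsampling layers often leave periodic high-frequency artefacts in an image's Fourier spectrum, while diffusion outputs tend to be smoother. The sketch below is a minimal, hypothetical example of that idea using NumPy; it is not any specific published detector, and the cutoff value is an assumption for illustration only.

```python
# Toy sketch of frequency-domain "fingerprint" analysis, one technique
# researchers use to tell generators apart. Illustrative only; not a
# production detector.
import numpy as np

def log_spectrum(image: np.ndarray) -> np.ndarray:
    """Return the log-magnitude 2D Fourier spectrum of a grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

def high_frequency_energy(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    GAN upsampling often leaves periodic high-frequency artefacts;
    diffusion models tend to have smoother spectra, so fixed thresholds
    tuned for GANs no longer separate real from fake.
    """
    spec = log_spectrum(image)
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance of each frequency bin from the spectrum centre.
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spec[radius > cutoff].sum() / spec.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))  # stand-in for a real grayscale image
    print(f"High-frequency energy ratio: {high_frequency_energy(img):.3f}")
```

A real detector would learn a classifier over many such spectral and spatial features rather than rely on a single hand-set threshold.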
Another big challenge is that detecting deepfakes takes a lot of computing power. For example, analysing a high-quality video with AI takes much longer than just watching it, which makes real-time detection very difficult.
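A quick back-of-the-envelope sketch makes that cost concrete. The per-frame inference time below is an assumed, hypothetical figure; real numbers depend entirely on the detector model and the hardware it runs on.

```python
# Back-of-the-envelope arithmetic for why real-time detection is hard.
FPS = 30                  # typical video frame rate
DURATION_S = 60           # a one-minute clip
MODEL_MS_PER_FRAME = 80   # assumed per-frame inference time (hypothetical)

frames = FPS * DURATION_S
analysis_s = frames * MODEL_MS_PER_FRAME / 1000

print(f"Frames to analyse: {frames}")
print(f"Playback time:     {DURATION_S} s")
print(f"Analysis time:     {analysis_s:.0f} s "
      f"({analysis_s / DURATION_S:.1f}x slower than watching)")
```

At these assumed numbers, a one-minute clip takes well over two minutes to analyse, which is why detection at streaming speed remains an open engineering problem.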
On top of that, balancing detection methods with privacy concerns is tricky. Some people worry that aggressive deepfake detection could accidentally violate privacy, or wrongly accuse innocent people of creating fake content, something that has already happened in court cases.
As per The Guardian, a well-known case involved a woman in Pennsylvania, US, who was accused of faking an incriminating video of teenage cheerleaders to harm her daughter's rivals. She was arrested, publicly ostracised, and condemned for allegedly creating a malicious deepfake. However, further investigation revealed that the video had never been altered in the first place; the entire accusation was based on misinformation. This case highlights the risk of misidentifying real content as fake. Lawyers, too, have begun claiming that genuine videos are deepfakes in order to protect their clients.
The solution: Detecting deepfakes
Researchers are working hard to find ways to spot deepfakes! AI "detectives" can now examine videos frame by frame to find the tiny mistakes that reveal a fake. They check for things like unnatural eye movements or inconsistent lighting: in short, features that defy physics or normal human behaviour. Some AI systems are even trained to check whether the audio matches the speaker's lip movements.
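To make one such cue concrete, here is a minimal sketch of the classic eye aspect ratio (EAR) used for blink detection; unnaturally infrequent blinking was one of the earliest published deepfake tells. The six landmark points are assumed to come from any face-landmark library, and the threshold is illustrative; this shows the geometry of the cue, not a production detector.

```python
# Illustrative sketch: the eye aspect ratio (EAR), a classic blink cue.
# Early deepfakes often blinked unnaturally rarely. Landmark points are
# assumed to come from a face-landmark library; only the geometry is shown.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for six eye landmarks ordered p1..p6 around the eye.

    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    It drops sharply when the eye closes, so dips in EAR mark blinks.
    """
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ear_series, fps: float, threshold: float = 0.2) -> float:
    """Count dips of EAR below `threshold` and scale to blinks per minute."""
    closed = [e < threshold for e in ear_series]
    # A blink starts at an open-to-closed transition.
    blinks = sum(1 for a, b in zip(closed, closed[1:]) if b and not a)
    minutes = len(ear_series) / fps / 60
    return blinks / minutes if minutes else 0.0

# Humans blink roughly 15-20 times a minute; a far lower rate in a
# talking-head video is one (weak) signal that the footage may be fake.
```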
In fact, there have already been successful cases where deepfakes were caught.
For example, as reported by The Times of India, in a recent case, a man fell victim to a deepfake trap where AI-generated explicit videos were created using his likeness. The perpetrators blackmailed him, threatening to leak the fake videos unless he paid them. The victim was so distraught that he almost took his own life before reporting the crime. This case, one of the first of its kind in India, highlights the devastating personal impact deepfakes can have when used maliciously.
Similarly, some deepfake videos of celebrities and political leaders were exposed because AI could spot the fakes before most people noticed anything wrong. Some of the promising detection tools are:
- Sentinel: Focuses on analysing facial images for signs of manipulation.
- Attestiv: Uses AI and digital fingerprinting to detect tampering in videos and media.
- Intel's FakeCatcher: Detects deepfakes in videos in real time by analysing subtle biological signals, such as "blood flow" changes in the pixels of a face.
- WeVerify: Analyses social media images and content for signs of manipulation.
- Microsoft’s Video Authenticator: Can check both images and videos for deepfakes.
- FakeBuster: A tool from the Indian Institute of Technology (IIT) Ropar, released in 2021, that verifies the authenticity of participants in video calls; it was trained on screen recordings of video conferences.
- Kroop AI's VizMantiz: A multimodal deepfake detection framework from a Gujarat-based Indian startup, built for the banking, financial services, and insurance sector as well as social media platforms.
How academic institutions and companies are helping
Many tech companies are jumping in to help detect deepfakes.
Big names like Facebook, Google, and Microsoft are creating tools that can scan videos on their platforms to find fakes before they spread. Microsoft's Video Authenticator is one example. Google's SynthID identifies and watermarks AI-generated content. These companies are also working with researchers to make AI better at catching deepfakes faster.
Further, the Massachusetts Institute of Technology (MIT) launched a website for detecting fake videos, which employs artefact detection using facial analysis, audio-video synchronisation, and audio analysis.
The role of governments
Governments are stepping in to help protect people from the risks posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced in the US Congress, seeking to punish individuals who use deepfakes to cause harm. Many other governments are also working on laws and policies to make it more difficult for people to create misleading fake videos.
However, there’s an important balance to strike. Video generation and face-swapping technologies are tools—they can be used or misused. Instead of banning these advancements, governments should focus on punishing those who misuse them for malicious purposes while encouraging the development of beneficial applications of the technology. Additionally, governments are considering rules that require companies to label AI-generated content clearly. This way, the public can immediately know whether what they’re seeing is real or artificial.
Governments also have a key role in public awareness. They can help educate people to always be cautious and to “verify first” before believing or sharing suspicious videos. By working closely with tech companies and research institutions, governments can ensure that deepfake detection tools are safe, effective, and responsibly used to safeguard public trust and media integrity.
Way forward: Bright future of AI
Deepfakes are just one small problem in the vast ocean of AI challenges. As AI continues to evolve, new hurdles will emerge, but so will new opportunities. The application of AI is advancing at such a pace that it could lead humanity into the next stage of evolution. By addressing issues like deepfakes head-on, we equip ourselves to handle similar challenges that will undoubtedly arise in the future.
Even though deepfakes are a challenge, the future of AI looks incredibly bright: a world where AI helps people create amazing art and movies, or even discover new solutions to complex problems. If we learn to manage the dangers posed by its misuse, like deepfakes, AI will continue to enrich our lives in exciting and transformative ways.
In the end, as AI grows, we must use it for good. If we do that, AI will help us reach a future filled with possibilities we can’t even fathom.
(Rahul Prasad is Co-founder and CTO of Bobble AI, an AI keyboard platform.)
Edited by Kanishk Singh
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)