
How to spot fake videos: Don't get fooled

Fake videos, deepfakes and other digital manipulations are a real threat to all of us today. The time when we could simply trust what we see on video is definitely over. With the advent of AI tools like Sora, Heygen or Synthesia, videos are being created that look very real – but they are not. They often pretend to come from a trusted source or to show a famous personality.

As with other cybersecurity topics, the user is the weakest link in the chain, and basic knowledge is the key to safety. How do you know if it's a scam? And what can you do about it, even if you're not an IT guy?

Don't believe everything you see.

Deepfake technology can create a video in which, for example, Tom Cruise flips a coin (even though he never did) or a politician calls for something he never said. AI tools today can place one person's face into a completely different context, generate a voice that sounds like the original, and do all of this in a matter of minutes. They have no problem with entire scenes, or with generating video from photos, in a way that the average user would struggle to detect. The consequences can be serious – from spreading misinformation to damaging the reputation of specific individuals.

Fake video samples

How to spot a fake video – 5 basic questions

You can spot AI videos by the finer details. Often, these aren’t obvious mistakes. They’re more like little things – strange eye movements, a “stuck” face, a mismatch between facial expressions and voice. When watching a suspicious video, try to focus on these 5 areas:

  • Image – Is the lip movement natural? Is the person blinking normally? Is the face blurry around the edges? AI manipulations often fail in small facial details – for example, eye movement, mouth movement, or light reflections on the skin (a frame-by-frame inspection sketch follows this list).
  • Sound – Does the voice sound robotic or too clean? Does the tone match the emotions you see? Look for inconsistencies in the audio: a mismatch between lip movement and audio is a common sign of a fake video. Also check the background – AI often struggles to render backgrounds and moving objects accurately.
  • Content – Does the video make sense? Is it overly dramatic or shocking?
  • Source – Can you find the same video elsewhere, for example on ČT24, BBC or Reuters? As with other types of content, the credibility of the source is essential.
  • Context – Is the video current? Could it be footage taken from a different place or time?
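If you want to look at the image details more systematically, it helps to pull individual frames out of the clip and examine them one by one. Below is a minimal sketch using Python and the opencv-python package; the file name is just a placeholder for your own suspicious clip.

```python
# A minimal sketch, assuming Python with the opencv-python package installed.
# It saves roughly one frame per second of a suspicious clip so you can inspect
# faces, eyes, mouths and backgrounds frame by frame. The path is a placeholder.
import cv2

VIDEO_PATH = "suspicious_clip.mp4"  # placeholder path

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back to 25 fps if metadata is missing

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep roughly one frame per second of video.
    if frame_index % int(fps) == 0:
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames for manual inspection.")
```

Looking at still frames often makes glitches obvious that are easy to miss at normal playback speed – warped ears, melted jewellery, text in the background that changes between frames.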

Tools to detect fake videos

According to research from the Technical University of Munich, advanced algorithms such as FaceForensics++ can detect manipulations with an accuracy of up to 78 % – significantly more than the average user, whose success rate is only around 50 %.

So if you are not sure, try running the video through a dedicated detection tool, as sketched below:
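Most of these detectors boil down to an image classifier that scores faces as real or manipulated. As a rough illustration, here is a hedged sketch using the Hugging Face transformers image-classification pipeline; the model identifier is a placeholder for whichever deepfake-detection model you choose, not a recommendation of a specific one.

```python
# A rough sketch, assuming the transformers and Pillow packages are installed.
# The model identifier below is a placeholder - substitute a deepfake-detection
# model of your choice from the Hugging Face Hub.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-org/deepfake-detection-model",  # hypothetical model name
)

# Score one of the frames extracted earlier.
results = classifier("frame_0000.jpg")
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")
```

Keep in mind that even the best detectors make mistakes, so treat the score as one more clue, not a verdict.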

What to do if someone sends you a suspicious video

  • Ask yourself: Why is this person sending me this?
  • Try to find the video on trusted platforms.
  • Upload a frame from the video to an AI assistant (Gemini, ChatGPT, etc.) and ask it to analyze it (see the sketch after this list).
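If you prefer to script that last step, the sketch below shows one way to send a single frame to a vision-capable model through the OpenAI Python SDK. The model name, prompt and file name are assumptions, not an endorsement of a particular assistant; Gemini or another service would work in a similar way.

```python
# A minimal sketch, assuming the openai package and an API key in OPENAI_API_KEY.
# The model name, prompt and frame file are assumptions; adjust them to your setup.
import base64
from openai import OpenAI

client = OpenAI()

# Encode a frame extracted from the suspicious video.
with open("frame_0000.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this video frame show signs of AI manipulation, "
                     "such as unnatural faces, lighting or backgrounds?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The assistant's answer is an opinion, not proof – combine it with the source and context checks above before you share or act on the video.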

No need to panic, just be vigilant

The goal isn't to become paranoid. Rather, it's to adopt simple habits that will protect you. Just as we learned not to click on every email from "the bank," we can learn to recognize strange videos. And don't worry—you don't have to be a hacker, just use common sense.

Are you interested in this topic? If so, follow my blog for more practical tips on how to stay safe in the digital world. Or check out our course offerings.