(NEXSTAR) – Your aunt posted a cute video of bunnies jumping on a trampoline. A friend reposted a reporter interviewing someone who receives SNAP benefits. Actress Bette Midler posted about an Orlando crosswalk being repainted with Trump’s alleged birthday letter to Epstein.
It sounds like a pretty normal scroll through social media, but all three of those posts are real examples of fake content.
With artificial intelligence tools improving and growing more accessible, it’s getting harder to know if you can trust your own eyes when it comes to videos in your feed. Want to get better at spotting AI? Here are some tips from experts.
Pay attention to video quality and length
If you see grainy video, that’s a warning sign it might be AI-generated. Fake security camera and night vision videos are better at fooling people into thinking they’re real because the low resolution can help hide flaws that would otherwise be giveaways. You might not notice how a person’s hair moves unnaturally if you can’t really see it on the supposed doorbell camera.
“The three things to look for are resolution, quality and length,” UC Berkeley computer science professor Hany Farid told the BBC. “For the most part, AI videos are very short, even shorter than the typical videos we see on TikTok or Instagram, which are about 30 to 60 seconds. The vast majority of videos I get asked to verify are six, eight or 10 seconds long.”
AI video generators typically limit the length of clips users can make for free, hence the super-short videos. Shorter clips are also less likely to reveal imperfections.
Does it feel real?
When watching a video or looking at a picture made by AI, there may be an uncanny quality to it. Is someone’s skin too perfect and shiny like a doll? Are their movements slightly unnatural?
The camera may also be unusually steady. In one AI-generated video shown as an example by NPR, a police officer yells in the face of an ICE agent. If that were happening, would someone be filming it up close and with a steady hand? The clip is also only 10 seconds, another hint it’s fake.
A couple of years ago, weird hands or nonsensical text were common giveaways of AI-generated content, but the tools are getting better and those telltale flaws are less common.
Look for a watermark
It may sound obvious, but some of the apps used to make AI videos, like Sora, leave a watermark on the final product. It’s not a foolproof method, though, because watermarks can be cropped out.
Do your research
If you’re suspicious of a video or image, you can dig a little deeper and check the metadata. (CNET has instructions on how you can check here.) The metadata should reveal when and where a photo or video was taken – if it was really ever taken.