As society grapples with the rapid advancement of AI and synthetic media, we’ve been asking the wrong question. The focus on whether content is “real or fake” misses the more crucial question: “Is this media deceptive?”
This shift in perspective is essential because we’re witnessing an unprecedented gap between human adaptability and technological advancement, particularly in how we process and verify information.
The landscape of media creation and manipulation has been democratized to an extraordinary degree. What once required nation-state resources and expertise can now be accomplished with $20 monthly subscriptions to AI tools. This accessibility isn’t just changing who can create sophisticated content – it’s fundamentally altering the nature of all digital media.
From Instagram filters that erase pores to word processors that suggest grammar improvements, AI’s fingerprints are increasingly present in even our most “authentic” content. This ubiquity makes traditional detection methods not just unreliable, but potentially counterproductive.
The tendency to focus on detecting “tells” in synthetic media – like distorted fingers or unnatural hair in AI-generated images – creates a dangerous false sense of security. These superficial markers are easily corrected by determined bad actors, while sophisticated deception often bears no such obvious signs. More importantly, this approach fails to address the fundamental challenge: even completely unaltered media can be deeply deceptive when presented in a misleading context.
This challenge is amplified by what can be called the “Four Horsemen of Online Vulnerability.” First, confirmation bias leads people to readily accept information that aligns with their existing beliefs. Second, the emotional tempest of fear, anger, and uncertainty clouds rational judgment. Third, digital naivety leaves many unaware of what’s technologically possible. Finally, sowers of discord exploit these vulnerabilities to create division and confusion.
Perhaps most troubling is the emerging “post-truth” mindset, where people acknowledge content may be synthetic but defend sharing it because it “represents a truth” they believe in. This rationalization was clearly demonstrated in the case of the AI-generated image of a girl with a puppy during Hurricane Helene – when confronted with evidence the image was synthetic, sharers often responded that its literal truth didn’t matter because it represented an emotional or political truth they supported.
Rather than relying on increasingly futile technical detection methods, we need a new framework for evaluating media, one I call the FAIK framework:
- F: Freeze and Feel. Stop and examine what emotions the content is triggering in you.
- A: Analyze. Break down the narrative, its claims, its embedded emotional triggers, and its possible goals.
- I: Investigate. Is this reported across reliable news sources? Who created it, and where did it come from? Which photos, details, and other specifics are verifiable?
- K: Know, Confirm, and Keep vigilant. Settle on what you actually know, confirm it before acting on or sharing the content, and stay alert as the story evolves.
This framework acknowledges a crucial reality: in our modern information environment, the most dangerous deception often comes not from sophisticated technical manipulation, but from simple narrative warfare. A genuine photo from one context can become powerful disinformation when repurposed with a false narrative.
One example I use to demonstrate this comes from 2019, when Turning Point USA shared a carefully cropped photo of empty grocery store shelves taken in the aftermath of the 2011 Japanese earthquake. With every clue to where and when the photo was taken cropped away, the image was repurposed as a warning against socialism. The irony, of course, is that Japan is an intensely capitalist country.
As we move deeper into this era of synthetic and deceptive media, our challenge isn’t primarily technical – it’s cognitive and emotional. We need to develop new mental models for evaluating information that go beyond simple binary determinations of authenticity.
The question isn’t whether something is real or fake, but what story it’s telling, who benefits from that story, and what actions or beliefs it’s designed to propagate. Only by understanding these deeper dynamics can we hope to navigate an information landscape where the line between synthetic and authentic becomes increasingly meaningless.
At the end of the day, we need to ask different questions. Here are a few pithy ways to get at the core issue:
- “People keep obsessing over whether media is synthetic when they should be asking whether it’s deceptive.”
- “The real question isn’t ‘Is this synthetic media?’ but ‘Is this deceptive media?’”
- “Stop asking if it’s synthetic. Start asking if it’s deceptive.”