The Truth Will Set You Free (But First It Has to Survive)
I’ve been thinking a lot lately about truth. Not in some philosophical, navel-gazing way, but in a very practical sense: how do we know what’s real anymore?
There’s been quite a bit of discussion online about an AI safety researcher resigning from Anthropic, warning that the world is “in peril.” And while my first instinct was to roll my eyes at yet another doom-and-gloom headline, the more I read about it, the more the knot in my stomach tightened.
The researcher wasn’t primarily worried about job losses or Skynet becoming sentient. The concerns were more insidious: AI chatbots distorting people’s perception of reality through thousands of daily interactions, automated malicious actors manipulating opinions at scale, and people with mental health issues suddenly having access to sophisticated guidance on creating harmful substances. It’s not the dramatic Hollywood ending we’re headed for—it’s something far more mundane and terrifying.
Someone in the discussion thread made a point that’s been rattling around in my head: “People born before AI will have some knowledge about finding what is true. But the generations to come will be slave to what AI tells them is truth.”
That hit differently. My daughter is 15. She’s growing up in a world where she can’t trust a video, can’t trust a voice recording, can’t even trust a photo anymore. And we haven’t equipped her generation with the tools to navigate this. Hell, we barely understand it ourselves.
The optimists argue that truth is safe long-term, that we’ll adapt, that encryption and verification systems will save us. And maybe they’re right. But then I think about how measles is making a comeback because people “did their own research” on the internet. We had decades of verifiable proof that vaccines work, and yet here we are. If we couldn’t maintain consensus on basic medical science with relatively simple misinformation, what hope do we have against AI-generated content that’s exponentially more sophisticated?
I spent my career in IT. I’ve watched the internet evolve from a tool for sharing knowledge into… well, whatever this is now. I remember thinking high-speed internet would democratize information, that everyone would have access to humanity’s collective wisdom. That optimism feels embarrassingly naive now. We didn’t anticipate social media hijacking our psychology, didn’t foresee how easily false information would spread, didn’t predict how tribalism would be algorithmically amplified for profit.
The discussion about AI reminds me painfully of the early days of social media. Back then, we were told it would connect people, build communities, foster understanding. Now we’re seeing devastating statistics from Gen Z: rising anxiety and depression, shredded attention spans, and political polarization that’s tearing societies apart. And that was just from platforms designed to share photos and status updates.
AI is several orders of magnitude more powerful. We’re not talking about a gradual adjustment here—this is a hard left with no off-ramp, as one commenter aptly put it. The AI companies are full steam ahead, each one racing to one-up the others, venture capital pouring in by the billions, and meanwhile governments are run by people who can barely right-click, let alone comprehend the implications of large language models that can convincingly fake anything.
What really gets me is the acceleration. Someone pointed out that their parents went from black-and-white TVs with six channels to having to contend with robots scamming them with imaginary money. The rate of change is brutal, even for those of us who are tech-literate and curious. For everyone else? It’s overwhelming.
And yet, I’m not ready to embrace the “moon or bust” accelerationist mindset that seems popular in tech circles. The attitude of “it’s inevitable, so we might as well go all in and hope it doesn’t murder us all” is genuinely insane. It doesn’t have to be inevitable. We made choices about social media—mostly bad ones—and we can make different choices about AI.
But we need to make them now. Not after a few high-profile disasters. Not after an entire generation has been psychologically shaped by AI chatbots that flatter and manipulate them. Not after truth becomes so subjective that we can’t agree on basic facts anymore.
The problem is, I’m not sure we have the collective will to pump the brakes. The tech companies certainly don’t want to slow down—they’ve bet their reputations and trillions of dollars on this. They’ve “Cortez’d themselves,” burning their ships, and they’re taking the rest of us with them.
Still, I hold onto a shred of hope. Maybe society will hard-correct. Maybe we’ll see a resurgence of valuing tactile, real-world experiences that AI can’t replicate. Maybe we’ll get better at verification and authentication. Maybe my daughter’s generation will be more skeptical and savvy than we give them credit for.
But we can’t just hope for the best. We need to demand better safeguards, more transparency, and actual regulation that isn’t written by the companies themselves. We need to teach critical thinking and information literacy like our lives depend on it—because they might.
Truth matters. Reality matters. And we’re running out of time to figure out how to protect both in a world where neither can be taken at face value anymore.