When AI Makes the Worst of Humanity Even Worse
I’ve been working in tech for over two decades now, and I thought I’d seen it all. The dot-com bubble, the rise of social media, the shift to cloud computing, the whole DevOps revolution. But this latest saga with Grok AI and sexual deepfakes? This is something else entirely, and not in a good way.
The UK’s privacy watchdog has opened an inquiry into X (formerly Twitter) over Grok AI being used to create sexual deepfakes. And honestly, it’s about bloody time someone in a position of authority started taking this seriously. While we’re watching AI capabilities explode at an almost incomprehensible rate, we’re also seeing the absolute worst applications of this technology emerge just as quickly.
Look, I’m fascinated by AI. I genuinely am. The progression we’ve seen in just the last few years is staggering. But this is exactly the kind of thing that keeps me awake at night when I think about where we’re headed. Someone in the discussion thread put it perfectly: technology just makes us more of what we already are. If you’re a decent person, it amplifies that. If you’re not… well, here we are.
The really infuriating part is how predictable this was. Anyone with half a brain could have seen that powerful generative AI without proper safeguards would be weaponized for harassment and abuse. And yet, here we are, playing catch-up with regulation while real people’s lives are being destroyed by deepfake pornography and worse. The fact that child exploitation material is apparently being generated through this system? That’s not just a bug or an oversight – that’s a catastrophic failure of corporate responsibility.
What strikes me about the online discussion around this is the contrast between different jurisdictions. French police have apparently raided X’s Paris office. The UK is launching inquiries. Meanwhile, there’s a certain cynicism from folks in the US who point out that their government seems entirely uninterested in holding tech companies accountable for anything these days. From where I’m sitting in Melbourne, watching our own government struggle with how to regulate social media and AI, it feels like we’re all just making it up as we go along.
There’s a legitimate debate to be had about free speech and government overreach. I get that. But when someone tries to frame the creation and distribution of non-consensual sexual imagery as a free speech issue, that’s where I draw a hard line. This isn’t about censoring political dissent or stifling creativity. This is about preventing real harm to real people. There’s a world of difference between criticizing the government and creating fake pornographic images of someone without their consent.
The tech industry – my industry – has a responsibility here that we’ve been failing spectacularly to meet. We’ve been so focused on moving fast and breaking things that we haven’t stopped to consider what we’re breaking and who gets hurt in the process. And when I say “we,” I mean it. I’m part of this ecosystem, even if I’m just a DevOps guy keeping servers running rather than building AI models.
From my corner of the IT world, I see the infrastructure that makes all of this possible. The compute resources needed to train and run these models are enormous. The environmental footprint alone should give us pause, but beyond that, every server rack, every GPU cluster, every bit of bandwidth being used to generate and distribute this content represents capacity that could be put to far better use.
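To put a rough number on “enormous”, here’s a back-of-envelope sketch. Every figure in it is an assumption I’ve picked for illustration (a modern datacenter GPU drawing around 700 W under load, a 10,000-GPU cluster, a month of continuous running), not a measurement of any real deployment:

```python
# Back-of-envelope energy estimate for a GPU cluster.
# Every number below is an illustrative assumption,
# not a measurement of any real deployment.

GPU_POWER_W = 700          # rough draw of a datacenter GPU under load
NUM_GPUS = 10_000          # plausible size for a large cluster
HOURS = 30 * 24            # one month of continuous running
PUE = 1.2                  # datacenter overhead (cooling, power delivery)

cluster_mwh = GPU_POWER_W * NUM_GPUS * HOURS / 1e6   # watt-hours -> MWh
total_mwh = cluster_mwh * PUE

HOUSEHOLD_MWH_PER_MONTH = 0.9   # ballpark for an average home

print(f"GPU energy alone: {cluster_mwh:,.0f} MWh")
print(f"With overhead:    {total_mwh:,.0f} MWh")
print(f"~{total_mwh / HOUSEHOLD_MWH_PER_MONTH:,.0f} homes' monthly usage")
```

On those assumptions you land around six gigawatt-hours for a single month, roughly what several thousand homes use over the same period. The point isn’t the exact number; it’s the scale of resource we’re choosing to spend, and on what.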
What frustrates me most is how avoidable this was. Basic content moderation, safety filters, user verification – these aren’t revolutionary concepts. They’re standard practice in any responsible platform. But when you’re racing to be the first to market with the most powerful AI, when you’re more concerned with engagement metrics than human welfare, you get exactly what we’re seeing now.
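None of this is exotic engineering, either. As a minimal sketch of what a “safety filter” means in practice, here’s roughly where such a gate sits in a generation pipeline. Everything in it is hypothetical: the keyword matching is a stand-in for a real trained classifier, and the function names are invented for illustration, not any platform’s actual API:

```python
# Hypothetical sketch of a pre-generation safety gate.
# The keyword lists stand in for a real trained classifier;
# the function names are invented for this example.

BLOCKED = {
    "non_consensual_imagery": ["deepfake", "without consent"],
    "sexual_minors": ["..."],  # real term lists live in the classifier, not here
}

class PolicyViolation(Exception):
    """Raised when a prompt fails moderation; nothing gets generated."""

def moderate_prompt(prompt: str) -> None:
    """Stub check. A real system would call a trained classifier here
    (and log the decision for audit), not match keywords."""
    lowered = prompt.lower()
    for category, terms in BLOCKED.items():
        if any(term in lowered for term in terms):
            raise PolicyViolation(f"prompt blocked: {category}")

def render(prompt: str) -> bytes:
    """Placeholder for the actual model call."""
    return b"...image bytes..."

def generate_image(prompt: str) -> bytes:
    moderate_prompt(prompt)   # refuse first, spend compute second
    return render(prompt)

if __name__ == "__main__":
    try:
        generate_image("a deepfake of a public figure")
    except PolicyViolation as exc:
        print(exc)            # -> prompt blocked: non_consensual_imagery
```

The design point is simply that the check runs in the request path, before any compute is spent on generation, and refuses by default. That’s table stakes, not a research problem.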
One thread of the discussion argued that the UK’s laws around sexual content are overly restrictive in some areas while failing to address real harms in others. That’s a fair critique. Laws written by people who don’t understand technology often miss the mark. But the solution isn’t to throw our hands up and declare everything should be allowed in the name of free speech. The solution is better, more thoughtful regulation that actually targets harmful behavior while protecting legitimate expression.
We need governments that understand technology well enough to write sensible laws. We need tech companies that prioritize safety over growth. We need users who demand better from the platforms they use. And we need consequences – real consequences – for executives who knowingly allow their systems to be used for exploitation and abuse.
Some commenters are calling for executives to be jailed and put on sex offender registries. That might sound extreme, but when your platform is facilitating the creation and distribution of child sexual abuse material, even AI-generated material, that’s exactly the kind of accountability we should be demanding. This isn’t a technical glitch or an unfortunate side effect. This is a fundamental failure of duty of care.
I want to believe in the promise of AI. I really do. The potential for good is enormous – advances in medicine, climate modeling, accessibility, education. But we’re squandering that potential by allowing the worst impulses to run rampant while those responsible hide behind claims of innovation and free speech.
This inquiry from the UK privacy watchdog is a start, but it’s only a start. We need coordinated international action. We need tech companies to be held accountable. And we need to collectively decide that some applications of technology are simply not acceptable, no matter how technically impressive they might be.
The path forward isn’t to ban AI or stop innovation. It’s to build the systems and structures that ensure these powerful tools are used responsibly. It’s to create consequences for those who enable harm. It’s to prioritize human welfare over corporate profits. And it’s to remember that just because we can build something doesn’t mean we should.
We’re at a crossroads with AI technology, and the decisions we make now will echo for decades. Let’s hope we choose wisely.