When the Icarus Class Flies Too Close to the Sun
There’s a story doing the rounds this week that probably shouldn’t surprise anyone paying attention, but here we are. Someone apparently discussed “Luigi-ing” tech CEOs in an online chat — a reference that’s become grim shorthand since the healthcare CEO shooting in the US late last year. The suspect in a plot targeting Sam Altman has been arrested, and the internet has responded with… well, not exactly an outpouring of sympathy for the OpenAI boss.
I want to be really clear upfront: violence is not the answer here, and I’m not going to pretend otherwise. But I’d be lying if I said the broader conversation unfolding online isn’t striking some nerves that I think deserve genuine examination rather than a quick moral dismissal.
Because here’s the thing. When the healthcare CEO was killed last year, something shifted. The media scrambled to find the right notes of outrage, and discovered an audience that simply wasn’t playing along. That’s not a celebration of violence — it’s a symptom of something much deeper and much more uncomfortable. People are exhausted. They’re frightened. And they’re watching a very small group of extraordinarily wealthy men essentially reshape the global economy around their own interests, bragging about it openly, and then retreating to fortified compounds in New Zealand when the temperature rises.
Someone in the online discussion around this story put it well: the tech billionaires have spent years preparing escape hatches because they know exactly what they’re doing. The bunkers, the private islands, the security details — these aren’t the actions of people who believe they’re building a better world. These are the actions of people hedging against the consequences of their own choices.
And Sam Altman sits right at the centre of this tension in a particularly pointed way. Here’s a man who founded OpenAI as a nonprofit, explicitly framing it as a project for the benefit of humanity. The moment regulatory pressure eased, that framing evaporated faster than a Melbourne morning fog. Now he openly talks about AI replacing entire categories of workers, seemingly with more enthusiasm than remorse. The “learn to code” generation — people who did exactly what they were told, who pivoted their careers toward technology because it seemed like stable, creative, meaningful work — are now watching those same doors close in real time, and the person holding them shut is celebrating it.
Working in IT myself, I feel this one in my bones. I’ve spent the better part of two decades building skills, mentoring juniors, keeping up with a field that moves at a punishing pace. The junior developers coming through now are increasingly “vibe coders” — prompting their way through problems they don’t deeply understand, building on foundations they’ve never actually examined. I’m not against AI tools; I use them daily. But there’s a knowledge-succession problem emerging here that nobody in the C-suite seems to want to reckon with. When the people who understood the underlying systems have retired, and the people who replaced them only ever worked through AI intermediaries, and the AI gets something catastrophically wrong — who exactly is going to fix it? That’s not hypothetical hand-wringing. That’s a genuine crisis brewing in slow motion.
The broader historical echoes here are impossible to ignore if you’ve spent any time reading about what actually happens when economic inequality reaches certain thresholds. The French Revolution isn’t just a fun topic for history documentaries — it’s a case study in what occurs when a ruling class mistakes the absence of immediate consequences for permanent safety. The myth of Icarus works too. So does A Tale of Two Cities, which people have apparently been quoting with increasing frequency and grimness.
None of this is to say that shooting people is how we solve structural inequality. It very clearly isn’t. Political organisation, collective bargaining, general strikes, voting out governments that serve billionaire interests — these are the tools that actually build durable change. And yes, I know the eye-rolling response to that: “just vote harder” gets treated as a punchline these days. But the alternative — normalising political violence — historically tends to consume everyone indiscriminately, including the people who thought they were directing it.
What frustrates me most, sitting here in Melbourne watching all this unfold from a comfortable distance that may not stay comfortable, is that the people with the most power to prevent this are apparently the least interested in doing so. UBI discussions, genuine profit-sharing, meaningful retraining programs, regulatory frameworks that treat AI displacement as a social responsibility rather than a quarterly win — these aren’t radical ideas. They’re what functional societies do when technology disrupts labour at scale. The alternative, as history keeps demonstrating, is considerably messier.
The tech billionaires building doomsday bunkers already know which future they’re betting on. Maybe it’s time the rest of us got serious about building a different one.