When AI Engineers Start Studying Poetry: A Sign of Something Bigger
There’s been an interesting pattern emerging lately that’s got me thinking over my morning latte. Engineers and researchers at major AI companies – people at the absolute cutting edge of this technology – are leaving to study philosophy. And poetry. Not to start new ventures or pivot to another tech role, but to genuinely step away and contemplate what they’ve been building.
The latest case involves someone from Anthropic who finished their PhD only a couple of years ago and is now departing to study poetry. Jack Clark, a co-founder of Anthropic, left to pursue philosophy. These aren’t burnt-out junior developers looking for a career change. These are people who’ve been staring into the depths of what these systems can do, and something about that experience has fundamentally shifted their perspective.
The cynical take, of course, is that these folks are already set financially. When you’re sitting on millions or potentially billions in stock options, sure, you can afford to indulge in the humanities. There’s truth to that – and the discussion threads I’ve been reading are full of people pointing out that not every AI engineer is a multi-millionaire walking away from their fortune. Most are earning good money (300-500k USD for ML engineers is commonly cited), but they’re not exactly retiring to study Plato after a couple of years.
But here’s the thing that strikes me: even accounting for the financial cushion, the choice itself is revealing. Philosophy and poetry aren’t exactly the go-to pursuits for tech types looking to decompress. These are disciplines concerned with meaning, ethics, consciousness, and the human condition. It suggests these researchers have encountered something in their work that can’t be solved with better algorithms or more compute power – something that requires wrestling with fundamentally human questions.
The environmental impact of AI training alone should give us pause. The energy consumption of these massive models is staggering, and here in Australia, where we’re already grappling with climate challenges, watching the AI arms race accelerate feels particularly uncomfortable. But it’s not just the carbon footprint that’s concerning. It’s what someone in the discussion threads described as the “strangeness emerging from AI research” – these systems displaying capabilities and behaviors that even their creators find surprising or unsettling.
I work in DevOps, so I’m adjacent to this world rather than in the thick of it. But I’ve been watching the progression closely, and what’s become clear is that we’ve moved past the “will this replace jobs?” question and straight into “how do we even think about what we’re building?” territory. The technical folks I know who work with these systems daily all say the same thing: the improvement curve since December has been unprecedented. This isn’t gradual progress anymore.
What gets under my skin a bit is the sheer pace of it all. We didn’t collectively decide as a society that we were ready for this transformation. Market incentives and competitive pressures are driving decisions that will affect everyone, and the people actually doing the building are discovering in real time that maybe we need to slow down and think harder about what we’re doing. The fact that some of them are literally walking away to study ethics and meaning-making feels less like a set of career moves and more like a warning signal.
There’s an interesting parallel to consider here. When humanity faced questions about human reproductive cloning, we largely stopped. We said “hang on, we haven’t figured out the ethical framework for this yet” and applied the precautionary principle. But with AI, we’re trying to develop the ethics while racing forward at full speed. That’s a fundamentally different approach, and it’s driven entirely by commercial competition rather than thoughtful deliberation.
I’m not convinced that means doom is inevitable – I’m not that pessimistic. But I do think we’re in a weird phase where the people closest to the technology are having profound philosophical crises about what they’re creating, while the rest of us are just getting used to ChatGPT writing our emails. That disconnect is significant.
What would a more responsible approach look like? Probably something that feels impossible in the current environment: slowing down. International coordination on AI development standards. Serious investment in understanding these systems before deploying them everywhere. Actually listening when researchers leave to study philosophy instead of just backfilling their positions and carrying on.
The engineers studying poetry aren’t just taking a break from the grind. They’re telling us something important about what they’ve seen in those latent spaces and emergent capabilities. We’d be wise to pay attention, even if – especially if – we’re not the ones building these systems. Because ready or not, the transformation is coming for all of us, and some of the people who know it best are stepping back to ask the most fundamental human questions about what it all means.
I just hope we collectively figure out some answers before the momentum becomes truly unstoppable.