When the Machines Get Fast but the Meetings Don't
I’ve been watching the AI layoff theatre with growing frustration, and there’s something fundamentally broken about how this whole thing is playing out.
Block cuts 4,000 people and blames AI. Atlassian drops 1,600. Shopify literally tells employees to prove AI can’t do their job before they can get more headcount. The CEO makes the announcement, the stock price nudges upward, and everyone nods along like this makes perfect sense. Except it doesn’t, because six months later, 55% of those same CEOs admit they regret the cuts, and companies like Klarna are quietly rehiring the humans they replaced after their AI-driven customer service quality went off a cliff.
The data tells a fascinating story that completely contradicts the headlines. S&P Global found that 42% of companies abandoned their AI initiatives in 2025, up from just 17% the year before. Think about that for a moment. Nearly half of companies tried to implement AI and gave up. That’s not a story about AI replacing workers – that’s a story about management not understanding what they’ve actually got on their hands.
I’ve spent the better part of my career in DevOps, and I recognise this pattern immediately. It’s the classic case of optimising the wrong bottleneck. AI has absolutely compressed execution time – what used to take a development team weeks can now happen in hours or days. Prototyping is ridiculously fast. Code gets written faster. Content gets produced faster. The actual doing part of work has accelerated dramatically.
But here’s the thing: the coordination layer didn’t speed up at all. The approval chains, the quarterly planning cycles, the review processes, the decision-making frameworks – all of that is still moving at 2015 speeds while the execution layer is running at 2025 speeds. It’s like upgrading to gigabit internet but still using a 56k modem for the last mile.
The bottleneck has flipped from “can we build it fast enough” to “does leadership actually know what to build, and can they keep up with the teams building it?” And instead of addressing that mismatch, companies are cutting the people who got faster while leaving the layer that’s causing the actual slowdown completely intact.
There’s a brilliant counter-example that doesn’t get nearly enough attention. Monday.com automated 100 sales development reps with AI, but instead of firing them, they redeployed them. Their CEO’s reasoning was simple: “Every time we eliminate one bottleneck, a new one emerges.” That’s basically the Theory of Constraints from 1984, and it’s bloody spot-on. The constraint doesn’t disappear – it just moves somewhere else in the system.
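The constraint-shifting idea is easy to see in a toy model. The sketch below treats a workflow as a serial pipeline whose throughput is capped by its slowest stage; the stage names and numbers are purely illustrative assumptions, not data from any company mentioned here.

```python
# Toy Theory-of-Constraints model: a serial pipeline's throughput
# equals the throughput of its slowest stage (units per week).
# Stage names and rates are illustrative assumptions only.

def pipeline_throughput(stages: dict[str, int]) -> int:
    """Return the throughput of a serial pipeline: the minimum stage rate."""
    return min(stages.values())

# Before AI: execution ("build") is reasonably quick; approvals are slow.
before = {"build": 5, "review": 4, "approve": 3}

# After AI: building is 10x faster, but review and approval are untouched.
after = {"build": 50, "review": 4, "approve": 3}

print(pipeline_throughput(before))  # 3 — capped by approvals
print(pipeline_throughput(after))   # 3 — still capped by approvals
```

Making the non-constraint stage ten times faster changes nothing end to end; the bottleneck has simply become more visible, which is the Monday.com point in miniature.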
What Monday.com understood is that when you automate someone’s routine work, you don’t eliminate them – you free them to tackle whatever the new bottleneck is. In their case, it turned out to be quality control, strategic direction, and all the messy human judgment calls that AI still can’t handle reliably.
I’ve seen this play out in smaller teams too. Someone mentioned automating content production workflows, only to discover the bottleneck immediately shifted to editorial review and quality assurance. The people who used to spend days writing drafts became more valuable doing the work AI struggles with – context, nuance, catching assumptions, understanding what “good” actually looks like.
The uncomfortable truth is that AI handles the easy 80% of most knowledge work brilliantly, but the remaining 20% – the judgment calls, the edge cases, the institutional knowledge that lives in people’s heads – is exactly what you lose when you cut headcount. And that 20% is often the difference between mediocrity and excellence.
What really gets under my skin is the intellectual dishonesty of it all. One person put it bluntly: companies are using AI as cover for headcount cuts they wanted to make anyway. The technology becomes the excuse, the stock market rewards the narrative, and six months later, when they quietly rehire, nobody connects the dots publicly.
There’s also the question of what AI actually accelerates. Someone made a sharp observation that AI compresses the median case dramatically but creates more coordination overhead on the tail cases: edge cases, ambiguous requirements, failures – all the messy stuff that humans used to catch mid-stream. Now you get a fast build that sometimes goes confidently in the wrong direction for hours before anyone notices. Whether you come out ahead depends entirely on how common those edge cases are in your particular domain.
The companies succeeding with AI are treating it as a change management problem first and a technology problem second. They’re mapping actual workflows, getting buy-in from people whose jobs will change, defining success metrics before writing a single prompt. They’re recognising that you can’t just drop AI into existing processes like a magic intern and expect miracles.
The ones failing are treating AI adoption like a light switch rather than a process redesign. They’re confused about what the actual bottleneck was. Nobody was bottlenecked on typing speed. They were bottlenecked on figuring out what the right thing to build even was.
Look, I’m fascinated by AI technology – I genuinely am. The capabilities are remarkable and they’re advancing faster than I expected. But I’m also watching companies make catastrophically stupid decisions in the name of “AI strategy,” decisions that reveal a fundamental misunderstanding of where value actually comes from in knowledge work.
The constraint has shifted upstream to clarity, context, and coordination – fundamentally human problems that AI doesn’t touch. The companies that figure this out will redeploy and restructure rather than just cut. The ones that don’t will burn through a quarter or two of “productivity gains,” watch their quality collapse, and quietly reverse course while pretending nothing happened.
The good news? This is correctable. The data on abandoned initiatives and regretful CEOs suggests reality is starting to push back against the hype. The question is whether companies can adapt before they lose the institutional knowledge and human expertise that makes the AI tools useful in the first place.
Because here’s the thing about bottlenecks: you can only move as fast as your slowest constraint allows. Speed up everything else all you want – if leadership can’t keep up with the teams they’re managing, you haven’t solved anything. You’ve just made it more obvious where the real problem is.