When Standing Up Means Something (Even If It's Complicated)
There’s been quite a bit of chatter online lately about Anthropic’s decision not to renew their partnership with certain government agencies, and honestly, it’s given me a lot to think about during my morning brew this week.
The thing that strikes me most is how quick we are to either completely lionise or utterly condemn companies when they make these kinds of decisions. I’ve been reading through various discussions, and it’s fascinating how polarised people are. Some are celebrating Anthropic as heroes standing up to power, while others are pointing out their existing contracts with companies like Palantir and saying it’s all performative nonsense.
Here’s the thing though – and this is where I get a bit ranty – demanding perfect moral purity before we acknowledge any positive action is exactly how we end up with no one drawing any lines at all. Someone made a good point that you don’t need clean hands to draw a line in the sand. And they’re right. If we only celebrate ethical stands from companies with spotless records, we’ll be waiting forever. There are no spotless companies in tech. Hell, there are no spotless companies, period.
Look, I’m not naive. I know Anthropic has defence contracts. I know they’re a for-profit company trying to navigate an incredibly complex landscape of government relationships, investor expectations, and actual AI safety concerns. But here’s what matters: they put a real revenue stream at risk with this decision. In an environment where most tech companies are falling over themselves to court government contracts regardless of the administration, that actually means something.
The cynic in me – and there’s plenty of that after twenty-something years in IT – wants to question their motives. Maybe it’s just good PR. Maybe they’ve done the calculations and decided this stance will win them more customers than it loses them in government contracts. But you know what? Even if that’s true, it still creates an incentive structure where companies benefit from taking ethical stands. That’s not nothing.
What really frustrates me is seeing people dismiss this entirely because it’s not perfect. Someone mentioned that Anthropic has worked on defence projects involving Iran. Yes, and that’s concerning. But there’s a world of difference between assisting with targeted military operations, where humans make the final calls, and domestic mass surveillance or fully automated weapons systems. The nuance matters here.
I’ve been a Claude subscriber for over a year now, and I’ll be honest – the usage limits have been increasingly annoying. There are times I’ve had to supplement with other services just to get my work done. But watching this unfold, seeing a company actually willing to risk something for a stated principle… it makes me more inclined to stick with them, even when I’m frustrated about hitting my token limit mid-week.
The contrast with other major AI companies is stark. We’ve seen how quickly principles can evaporate when there’s government pressure or lucrative contracts on the table. So when a company does push back, even imperfectly, even with mixed motives, even while maintaining other controversial partnerships – it deserves recognition.
This isn’t about putting Anthropic on a pedestal. They’re not saints. But they’ve demonstrated that there are some lines they won’t cross, at least not right now, at least not publicly. In the current climate, where we’re watching democratic norms get stress-tested daily, that matters more than we might want to admit.
The real test will be whether they maintain this stance when the pressure increases. Talk is cheap, and so are one-off decisions. Consistency over time – that’s what will separate genuine ethical commitment from clever marketing. But we won’t know unless we give them the space to prove it, rather than dismissing it out of hand because they’re not perfect.
Maybe I’m being too generous. Maybe in six months I’ll be writing a post about how disappointed I am that they caved. But right now, I’d rather live in a world where we acknowledge when companies take any ethical stand, however imperfect, than one where we create such impossible standards that no company even bothers trying.
At the end of the day, we get the corporate behaviour we incentivise. If we only criticise and never acknowledge, we shouldn’t be surprised when companies decide ethical considerations aren’t worth the trouble.