The Great Local AI Misunderstanding
I’ve been watching an interesting phenomenon unfold online lately, and it’s left me equal parts amused and frustrated. There’s this persistent belief floating around anti-AI circles that if the big tech companies collapse or stop developing AI models, then somehow all AI capabilities will just… vanish. Like it’s some kind of cloud-based subscription service that gets shut off when the bills aren’t paid.
It’s a genuinely baffling misunderstanding of how technology actually works.
The reality is that I’m sitting here right now with several models downloaded on my machine – my trusty MacBook – that run entirely offline. No internet connection required. No OpenAI servers. No Microsoft data centres burning through electricity. Just silicon, code, and some GGUF files sitting in my local storage. I could literally disconnect my ethernet cable right now, and these models would keep working just fine.
The whole concept seems to be utterly eldritch to some people, to borrow a phrase I saw recently. Someone tried to explain this in a discussion thread, pointing out that if you have a runtime like llama.cpp or Ollama, and you’ve downloaded a model, you can run it regardless of what’s happening with the big tech companies. You could unplug your wifi entirely and it would still work. Yet people struggle to grasp this fundamental concept.
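To make that concrete, here's a minimal sketch of what running an already-downloaded model looks like, assuming the llama-cpp-python bindings are installed and a quantised GGUF file is sitting in local storage (the model path below is purely illustrative):

```python
# A minimal sketch of offline inference with llama-cpp-python.
# Assumes: pip install llama-cpp-python, and a GGUF model already downloaded.
# Nothing here touches the network at inference time.
from llama_cpp import Llama

# Hypothetical local path; any quantised GGUF model works the same way.
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

output = llm("Q: What is a GGUF file? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```

Ollama wraps the same idea in a friendlier package: once a model has been pulled with `ollama pull`, `ollama run` keeps working with the network disconnected.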
Part of the problem, I reckon, is that we’ve done such a thorough job of abstracting technology away from users that the underlying mechanics have become invisible. The whole “cloud” terminology hasn’t helped – it’s encouraged magical thinking. People genuinely don’t understand that “the cloud” is just someone else’s computer. When you frame it that way, it becomes obvious that the same workloads can run on your computer, scaled appropriately.
What really gets me is the internal contradiction in some of these arguments. People insist that AI is a massively resource-intensive technology that's destroying the environment (and, fair enough, training large models does consume significant power), yet they simultaneously can't conceive of it running on a personal computer. The two only fit together if you conflate training with inference. Yes, training GPT-4 required enormous computational resources. But running a quantised 7B parameter model for inference? That's a completely different story.
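To put rough numbers on that gap, here's a back-of-the-envelope sketch of the memory needed just to hold a 7B model's weights at a few quantisation levels. It ignores the KV cache and runtime overhead, so treat the figures as loose lower bounds rather than exact requirements:

```python
# Rough weight-memory estimate for a 7-billion-parameter model at
# different quantisation levels. Ignores KV cache and runtime overhead.
params = 7_000_000_000

for label, bits_per_weight in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gib = params * bits_per_weight / 8 / (1024 ** 3)
    print(f"{label}: ~{gib:.1f} GiB of weights")

# Prints roughly: FP16 ~13.0 GiB, 8-bit ~6.5 GiB, 4-bit ~3.3 GiB.
# A 4-bit 7B model fits comfortably in a modern laptop's memory.
```

That's the difference in a nutshell: the same class of model that took a data centre to train can be served to one person from a few gigabytes of RAM.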
I’ve spent enough time in DevOps to know that scale matters enormously. Those data centres that AI companies talk about aren’t serving one person at a time; they’re handling millions of concurrent users. When you’re just running a model for yourself, the requirements drop dramatically. It’s the difference between a website built to handle peak traffic from millions of users and a development server running on localhost.
The thing that frustrates me most about these discussions is the lack of intellectual curiosity. People have decided AI is bad, therefore they’re not interested in learning how it actually works. The position becomes almost religious: impervious to evidence or explanation. Someone in a discussion thread described anti-AI views as being held like articles of faith, and honestly, that tracks with what I’ve seen.
I’m not suggesting everyone needs to become an expert in transformer architectures or understand the intricacies of quantisation. But there’s a difference between not knowing the technical details and actively refusing to understand the basic premise. When you’re arguing passionately about something, don’t you have a responsibility to at least understand the fundamentals?
Here’s the thing that should worry the anti-AI crowd: if their entire strategy is based on hoping the commercial AI companies fail and take AI capabilities with them, they’ve already lost. The models are out there. The techniques are documented. The code is open source. Even if every AI company went bankrupt tomorrow (which won’t happen – at worst, there’d be consolidation), the technology isn’t going anywhere. Microsoft, Google, and Meta aren’t going to suddenly forget how to build these systems. And more importantly, individuals can already run them at home.
Look, I’ve got my own concerns about AI. The environmental footprint of training large models is real and worth discussing. The impact on creative industries deserves serious consideration. The potential for misinformation and manipulation is genuinely worrying. But you can’t have an honest conversation about any of these issues if you’re starting from a fundamental misunderstanding of what AI even is or how it works.
The genie is out of the bottle. It’s not going back in. We need to be having conversations about how to manage this technology responsibly, how to ensure it’s developed sustainably, and how to protect people from its potential harms. But we can’t do that if one side of the conversation thinks AI is some kind of dark magic that only exists in corporate data centres.
Maybe what we need is better tech literacy across the board. Not everyone needs to be able to configure a local LLM server, but perhaps understanding that technology runs on physical hardware you can own would be a good starting point. The alternative is having increasingly disconnected conversations where people talk past each other because they’re literally not working with the same understanding of reality.
And that’s not particularly useful for anyone, is it?