
Meta's Ray-Ban Glasses and the People We Never Think About


There’s a story doing the rounds this week that I can’t stop thinking about, and it’s not really about the glasses. Well, it starts with the glasses — Meta’s Ray-Ban smart glasses recording people in bathrooms, in intimate moments, capturing banking details — all of it being piped through to AI trainers who were then fired when they had the audacity to speak up about it. Over a thousand workers, gone, after blowing the whistle on what they were being asked to review.

But here’s the thing that actually got under my skin: those workers were based in Kenya. They were being paid next to nothing — reports suggest around $2 a day in some cases — to spend their working hours reviewing some of the most intimate and disturbing footage imaginable. No proper support, no real employer (because the contracting chain is deliberately structured so that nobody is actually responsible for anything), and absolutely no protection when they decided their conscience outweighed their paycheque.

There’s actually a joke in Kenya that AI stands for “African Intelligence.” Read that again and let it sit with you for a moment.

I work in IT. I’ve been around long enough to understand how outsourcing works, and I’ll be honest — the industry has always had a comfortable way of not looking too hard at the human cost buried in those supply chains. You spin up a contract, the work gets done somewhere distant and cheap, and nobody at the top has to think too hard about the conditions. But what’s happening with AI training labour is something else entirely. These aren’t people doing rote data entry. They’re being exposed to genuinely traumatic content, day after day, with no meaningful mental health support and a wage that’s frankly insulting given what’s being asked of them.

A piece by 404 Media — which has been doing some genuinely excellent journalism on this stuff lately — goes into detail about how these workers are organising and fighting back. It’s worth your time to read. One comment I saw online summed it up well: these people came forward despite how little they were being paid, not because of it. That takes real courage.

And then Meta fires them and the story conveniently becomes about the contractor being “scummy” rather than about the company that commissioned the work, set the terms, and benefited from every hour of it.

This is the part that genuinely frustrates me about the current AI boom. Everyone — and I include myself here, because I find this technology genuinely fascinating — gets caught up in the capabilities, the pace of progress, the philosophical questions about what it all means for humanity. And meanwhile, the actual human infrastructure holding it all together is invisible. It’s workers in Kenya, in the Philippines, in India, reviewing content that would give most of us nightmares, so that the models can be “safe” and “aligned” and ready for us to marvel at on our iPhones.

It’s not entirely unlike the supply chains behind our cheap electronics, or fast fashion, or half the stuff we scroll past on the very platforms built on this labour. We’ve just found a new way to make exploitation feel like innovation.

The glasses themselves are almost a side issue at this point. Though let’s be clear — normalising always-on cameras worn on someone’s face in public spaces is genuinely alarming. I love the idea of useful AR glasses, the same way I love the idea of plenty of technology in the abstract. Accessibility features, navigation, memory aids for people with cognitive impairments — there are real, meaningful use cases. But the execution, by companies with Meta’s track record, in the absence of any serious privacy regulation? No thanks.

Australia has been slowly, imperfectly, trying to grapple with some of these privacy questions. We’ve seen the ongoing debate around facial recognition, the data retention laws, and more recently the push around social media age restrictions. But we’re still miles behind where we need to be, and frankly, so is most of the world, when it comes to the labour rights dimension of AI development.

If there’s anything constructive to take from all of this, it’s that the workers in Kenya who spoke up actually did something remarkable. They had the least power in the entire chain — no job security, no union protections, no direct relationship with the company ultimately profiting from their work — and they still said this isn’t right. That matters. And the journalists at outlets like 404 Media who are actually doing the legwork to tell these stories matter too.

The least we can do is pay attention, support independent journalism that covers this stuff, and stop letting the sheer coolness of the technology distract us from asking who’s actually paying for it.