Face Scans Just to Chat Online? No Thanks.
Something’s been gnawing at me this week. I stumbled across a discussion online about how more and more apps are quietly rolling out facial verification — not just government services or banking, but social platforms, dating apps, even community spaces. And the question someone raised stuck with me: are we just normalising this now?
Judging by the general mood of that conversation, the short answer is: yes. And that should bother all of us a lot more than it apparently does.
Look, I get it. Bots are everywhere. Catfishing is real. Kids lie about their age online constantly. These are genuine problems worth solving. But there’s a massive difference between solving a problem and using a problem as cover to hoover up your biometric data forever. That distinction seems to be getting lost in the shuffle, and it’s being lost very conveniently for the companies involved.
The thing that really struck me in the discussion was how someone articulated what I'd been feeling but couldn't quite put into words: proving you're a real human and handing over your identity are two completely separate things. We're letting companies bundle them together because it's easier for them, not because it's necessary. Privacy-preserving schemes — zero-knowledge proofs among them — already exist: there are projects that can attest you're a real, living adult without the service ever storing a single piece of your biometric data on some server waiting to be breached. The technology is there. Companies just aren't using it, because frankly, the data itself is probably more valuable to them than whatever problem they're pretending to solve.
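To make that separation concrete, here's a toy sketch in Python of how attestation-style verification keeps identity and proof apart. Everything here is illustrative and hypothetical — the function names, the key handling, all of it — and I've used a symmetric HMAC purely to keep the example self-contained, where a real system would use public-key signatures or an actual zero-knowledge proof so the verifying service can't mint its own tokens. The point is the shape of the flow: the platform receives one signed bit ("over 18"), never the ID document, the birthdate, or a face scan.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical: a key held only by a trusted attester (e.g. a one-off,
# on-device ID check). In a real deployment this would be an asymmetric
# keypair, not a shared secret.
ATTESTER_KEY = secrets.token_bytes(32)

def issue_attestation(is_adult: bool) -> dict:
    """Attester checks the ID once, then signs only the predicate.
    A random nonce stops the token doubling as a persistent identifier."""
    claim = {"over_18": is_adult, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(token: dict) -> bool:
    """The service learns exactly one bit: adult or not.
    No name, no birthdate, no biometrics ever touch its servers."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

token = issue_attestation(True)
print(verify_attestation(token))  # prints: True
```

Notice what the platform ends up holding if it's breached: a signed boolean and a random nonce. There's nothing to steal, which is exactly the property centralised face-scan databases can never have.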
And that’s the part that makes me genuinely angry.
Think about what’s actually happening here. You do a face scan. That data sits on a centralised server. That server — whether it’s a dating app, a chat platform, or whatever — becomes a target. We’ve watched Optus, Medibank, and a dozen others get absolutely cleaned out in data breaches here in Australia over the past few years. Millions of Australians had their personal details exposed. Now imagine that same level of negligence, but the stolen data is your face. Your biometrics. Something you literally cannot change. You can cancel a credit card. You cannot get a new face.
Someone in the discussion made the excellent point that AI is making identity theft trivially easy, and we’re simultaneously making it trivially easy for bad actors to acquire the most sensitive identifiers we have. We’re running full speed in the wrong direction.
What frustrates me as someone who’s worked in IT for a long time is that this isn’t even a hard technical problem. The solutions exist. Privacy-preserving verification is a real, working concept. But it requires companies to choose user privacy over data accumulation, and that’s a choice most of them simply aren’t incentivised to make. So instead we get security theatre dressed up as safety features, and most people just tap “allow” and move on.
I’ve started thinking about which services I actually need versus which ones I’ve just gotten into the habit of using. My homelab has been sitting there underloved for a while now, and honestly, conversations like this one remind me why self-hosting certain things is worth the effort. Not everyone has that option — and I want to be careful not to sound like a tech bro going “just run your own server, mate” — but for those of us who can, it’s worth revisiting.
The broader issue here is about power and who holds it. Every face scan is another small surrender, another piece of yourself handed over to a corporation — often one headquartered somewhere outside Australia’s jurisdiction, with its own privacy laws, or lack thereof. One person in the discussion made the sensible move of emailing their local MP about exactly this, pointing out that handing biometric data to foreign companies creates real sovereign risk. That’s exactly the kind of pressure that needs to be applied, because individual opt-outs, while satisfying, don’t change the systemic problem.
The hopeful bit — and there is one — is that the pushback is real and getting louder. People are deleting apps, writing to representatives, and actively looking for alternatives. The conversation about zero-knowledge proofs and privacy-preserving verification is moving from niche technical forums into more mainstream awareness. That matters. Regulation can follow public pressure, and we’ve seen Australia move on digital privacy issues before when enough noise is made.
So yeah, delete the app if it asks for a face scan and can’t justify why it genuinely needs one. Write to your MP. Ask companies harder questions. And maybe, if you’ve been putting off that homelab project like I have — this week feels like a good time to finally get started.