OpenAI Wants to See Your Bank Account. Hard Pass.
There’s a particular kind of tech announcement that arrives dressed as a feature and leaves you feeling vaguely mugged. OpenAI’s push to connect ChatGPT to your bank account is one of those.
The pitch, as these things usually go, is reasonable on the surface. Let the AI see your transactions and it can help you budget, spot patterns, nudge you toward better financial habits. Useful, maybe, for some people. I don’t dismiss that entirely. Financial literacy is genuinely poor for much of the population, and tools that help people understand where their money goes aren’t inherently evil.
But.
The company asking for this access has a business model built on data. It is not a neutral party. It is not a public institution with accountability mechanisms and a mandate to serve you. It is a private company that has already attracted multiple class-action lawsuits, and whose relationship with user data is, to put it gently, contested. Handing that company a live feed of your spending history isn’t a feature. It’s an inventory.
Someone in a discussion I read put it well: this isn’t about hiding money. A bank account isn’t for hiding things. It’s for protecting private funds. The “nothing to hide” framing, which always seems to surface in these conversations, fundamentally misunderstands what privacy is for. Privacy isn’t about guilt. It’s about autonomy. It’s about the simple right to have a part of your life that isn’t being processed, scored, and fed back into a system you don’t control.
And that system doesn’t stay static. What’s benign data today can become a liability under a different government, a different regulatory environment, a different set of priorities. We’re not exactly living through a period that inspires confidence in institutions remaining stable and well-intentioned. The political weather has changed in places we didn’t expect it to, and it’s changing faster than the legal frameworks designed to protect people.
There’s also the creeping-mandatory problem. These things launch as optional. They stay optional for a while. Then, slowly, the product experience degrades for non-participants. Then the feature becomes load-bearing for other features you do want. Then opting out becomes friction, and friction becomes the quiet tax on people who actually read the terms. I’ve watched this pattern play out with location data, with health tracking, with social login. I have no reason to think it ends differently here.
My bank, for what it’s worth, sent me a survey recently about AI features in banking. I said no to everything. It felt small and probably futile. But it was the only vote I had, so I used it.
The thing I keep coming back to is this: I’m genuinely interested in AI. I think large language models are technically fascinating. I use them. I’m not a refusenik. But interest and trust are different things, and right now the organisations building these tools have not earned the kind of trust that justifies access to the most sensitive financial data most people possess. The technology is moving faster than the accountability structures around it, and in that gap, a lot of things can go wrong.
I don’t know how this particular story ends. Maybe the feature flops. Maybe regulation catches up. Maybe in five years we’ll look back and it’ll seem like a minor skirmish in a much larger argument about who owns the data trail of your life.
That argument is the one worth paying attention to.