Encrypted by Default: Inside Lumo’s Privacy-First AI Revolution

Written by Ronald Shabazz

Most of us don’t think twice about what happens after we hit send. You ask ChatGPT a question about your business strategy, your health condition, your marriage—and boom, you get an answer in seconds. That’s the win we see. But that’s also where most of us stop looking.

Last October, Stanford researchers dropped a study that made me actually uncomfortable. They dug through the privacy policies of six major AI companies—Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI—and found something I’d suspected but never seen spelled out so clearly: all six companies use your conversations to train their models by default. Anthropic, which makes Claude, even quietly updated its terms of service to lock this in. That’s not paranoia. That’s how the business works now.

I started thinking about what this actually means. You’re not having a private conversation with an AI. You’re feeding it. And what you feed it shapes what it becomes—and how it gets used to sell you things later.

Here’s where it gets weird

Jennifer King, who led the Stanford research, walked through a scenario that stuck with me. Imagine you ask an AI for low-sugar dinner ideas because you’re watching your blood pressure. That seems harmless. But the algorithm infers you’re health-vulnerable. That inference ripples through the company’s ecosystem—you start seeing ads for medications, maybe your insurance company finds out, maybe your rates go up. It’s not a conspiracy. It’s just how data flows when companies own everything.

That’s the second-order effect nobody thinks about when they’re asking their chatbot for quick answers.

Then there’s Lumo

Proton released Lumo in 2025, and I’ll be honest—I was skeptical at first. Another privacy-focused AI tool? Great, another thing that works worse but makes you feel better. Except it doesn’t work worse. It’s just… different.

The basic deal is simple: Lumo doesn't keep readable copies of your conversations. Your device encrypts everything before it's saved, and Proton can't decrypt your chats even if it wanted to. The company literally cannot use your data to train models, because the architecture doesn't allow it. That's not a promise. That's a lock.
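If you've never seen what that pattern looks like, here's a minimal sketch of client-side encryption in Python using the cryptography library. This is a hypothetical illustration of the general zero-access idea, not Proton's actual implementation:

```python
# Hypothetical sketch of zero-access (client-side) encryption.
# This illustrates the pattern, not Lumo's real code.
from cryptography.fernet import Fernet

# The key is generated and kept on YOUR device; the server never sees it.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

# Everything is encrypted before it leaves the device...
message = "Low-sugar dinner ideas for someone watching their blood pressure?"
ciphertext = cipher.encrypt(message.encode())

# ...so a server can store the ciphertext but can't read it or train on it.
print(ciphertext)                            # opaque bytes without the key
print(cipher.decrypt(ciphertext).decode())   # only your device can do this
```

The lock isn't policy language buried in a terms-of-service document; it's math. Without the key that stays on your device, the stored blob is noise.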

Its training data comes from curated sources up through October 2023, not from harvesting your words for profit. Yeah, it can't browse the internet on its own. Yeah, it would rather tell you it doesn't know something than hallucinate an answer. Some people find that annoying. I find it refreshing.

The new stuff matters

Lumo just hit version 1.3, and they added something called Projects. Honestly, this is the feature that made me actually use it consistently. You create encrypted workspaces for different projects—a client portfolio, a book you’re writing, research for a business plan—and Lumo remembers everything: your files, your conversations, the specific tone and focus you want. You stop repeating yourself. The AI actually has context.

It’s small, but it’s the difference between a tool that feels like a toy and one that actually fits into how you work.
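Conceptually, a project is just a bundle of context that gets encrypted as a unit before it's stored. Here's a hypothetical sketch of that pattern, again using Python's cryptography library; the field names and structure are my own invention, not Lumo's:

```python
# Hypothetical sketch of an encrypted project workspace.
# An illustration of the pattern, not Lumo's real data model.
import json
from cryptography.fernet import Fernet

project_key = Fernet.generate_key()   # one key per project, kept on-device
cipher = Fernet(project_key)

workspace = {
    "name": "book-draft",
    "files": ["outline.md", "chapter1.md"],
    "chat_history": ["What tone should chapter 1 take?"],
    "instructions": "Keep the tone conversational and first-person.",
}

# The whole workspace is encrypted as one blob before it's stored, so the
# context the AI "remembers" between sessions stays unreadable to the server.
blob = cipher.encrypt(json.dumps(workspace).encode())
restored = json.loads(cipher.decrypt(blob).decode())
assert restored["instructions"].startswith("Keep the tone")
```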

Why this matters now

Here’s what bothers me most about the Stanford findings: none of the six companies even tried to hide this. They just buried it in dense policy documents nobody reads. The researchers recommend federal privacy regulation, affirmative opt-in instead of opt-out, and filtering personal information from chats by default. That won’t happen anytime soon.

So we’re left choosing. Every time you type something into an AI, you’re making a choice about what happens to your words. Most of the time, you’re trading privacy for convenience without knowing it. Lumo is one of the few tools where that trade isn’t baked in.

It won’t hallucinate answers. It won’t browse the web without your permission. It won’t remember everything perfectly. But it’ll keep what you tell it actually private. In an ecosystem where the default is surveillance, that’s becoming less of a luxury and more of a necessity.
