Andrés only asks about the weather
Andrés is Venezuelan. He's been working at a neighborhood fruit shop for years. One day I asked him how his family was doing back home, during the regime's worst moments.
"It’s always sunny in my country," he told me.
I didn't get it. I kept asking. And then he explained: "I can only talk to my family through WhatsApp because calls don't work well. But you have to be very careful about what you write. We don't know if someone can read the chats. What we do know is that they can arrest anyone at any time, and the first thing they do is open their phone. If you don't give the PIN, it's slaps and a cell until you do. And if they find something they don't like on WhatsApp, with any luck, it's a beating and a few days in a cell. With bad luck, that person just disappears."
"That’s why, when I talk to them, I basically just ask about the weather. If they answer, at least I know they're alive."
Andrés isn't a criminal. He has nothing to hide. But he lives in a world where a single phrase in a chat can destroy the life of someone he loves.
You don't need to be a criminal to need privacy
Think about a lawyer talking to a client about a defense strategy. The conversation is legitimate and legal, but it contains information that, taken out of context, could be devastating. That lawyer has a professional and legal duty to keep that conversation confidential.
Think about a young couple. She lives with her parents. They have intimate, completely legitimate conversations that belong to their most private sphere. They have the right to keep those words off any server that could be hacked, sold, or legally subpoenaed.
Think about a freelancer talking to their accountant about optimizing taxes. They might be on one side of the line or the other—that's their business. If they were sitting in an office, no one would hear that conversation. Why should it be any different if they're talking from a distance?
Or think about a journalist in Iran, with missiles falling around them, trying to communicate with their newsroom in Paris. Or an immigrant in Madrid talking to parents who stayed behind.
All these people need privacy. None of them are criminals.
The trap of perfect encryption
In 2018, the FBI began secretly running a company that sold encrypted mobile phones. The brand was called Anom, and it was marketed as the most secure option available. Over three years, more than 12,000 devices were distributed across more than 100 countries. Users spoke with total confidence.
What they didn't know was that every message also reached FBI servers. Every word. Every photo. Every plan.
In June 2021, Operation Trojan Shield went public: more than 800 arrests across 16 countries, in one of the largest coordinated police operations ever mounted.
It wasn't a technical failure. The encryption was real. The technology worked. The problem was who was behind it and what they stood to gain.
This isn't an isolated case. For over 50 years, the Swiss company Crypto AG sold encryption machines to more than 120 governments. What no one knew until 2020 was that Crypto AG was secretly owned by the CIA and the German intelligence service. The machines worked, but with a deliberate weakness that allowed their real owners to read everything.
Iran, India, Pakistan, the Vatican, Latin American military juntas. They all trusted it. None of them asked why a neutral Swiss company was so interested in selling them encryption.
The question you should always ask
If someone offers you something and you don't understand what they're getting in return, don't trust it. Not because everyone has bad intentions—but because understanding the business model is the most basic way to evaluate whether you can trust a service.
When you use WhatsApp, you know what Meta gets: your data, your habits, your attention to sell ads. You might agree with it or not, but at least you understand the trade-off.
But when someone offers you an encrypted communication service—completely free, no ads, no subscription, and no visible business model—the question isn't whether the encryption is good. The question is: who is funding this and why?
What really matters
There are signs that help evaluate a privacy tool. Open source, security audits, European jurisdiction. These are all positive. But none of them are an absolute guarantee.
Open source means someone can review what the app does. But let's be honest: 99.9% of users will never read a line of code. And history is full of massive vulnerabilities that lived for years in open-source projects anyone could have reviewed, without anyone noticing; Heartbleed sat in OpenSSL, one of the most widely deployed libraries on the internet, for two years before it was found.
Security audits are valuable. But audits cost money, and money is the simplest way to buy influence. An audit says the code was clean on the day it was reviewed. It says nothing about what was changed afterward.
You can have the best code in the world, audited and open, but if your data passes through a server—even for a second, even if encrypted—someone has physical access to that server. And that someone could be in a country where a judge, a government, or a big bribe can open any door.
What really protects you isn't a promise that "we don't read your data." What protects you is an architecture where your data never leaves your hands. Where there’s no server to compromise, no backup to leak, and no backdoor to open.
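To make that concrete, here is a minimal sketch of the principle in Python, using the PyNaCl library (my choice purely for illustration; it doesn't describe any particular app). Keys are generated on each person's device and never leave it, and the only thing that would ever transit a network is ciphertext that nobody in between can open:

    # Sketch only: real messengers add forward secrecy, key verification,
    # and metadata protection on top of this basic shape.
    from nacl.public import PrivateKey, Box

    # Each device generates its own key pair locally. Private keys
    # never leave the devices; only public keys are exchanged.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts on her device with her private key and Bob's public key.
    alice_box = Box(alice_private, bob_private.public_key)
    ciphertext = alice_box.encrypt(b"It's always sunny in my country")

    # This ciphertext is all a relay, a backup, or a wiretap would ever see.
    # Bob decrypts on his device with his private key and Alice's public key.
    bob_box = Box(bob_private, alice_private.public_key)
    plaintext = bob_box.decrypt(ciphertext)
    print(plaintext.decode())

The detail that matters isn't the library. It's that at no step is a plaintext message or a private key handed to a third party, so there is no server whose compromise can reveal the conversation.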
Trust isn't a gift
Anom users trusted it because the product worked. Crypto AG clients trusted it because the brand was respectable. Andrés doesn't trust WhatsApp, but he has no alternative.
Trust in a privacy tool can't be based on it "working well." It has to be based on understanding who is behind it, what they gain, and what happens to your data if that company closes tomorrow, changes owners, or receives a court order from a country that isn't yours.
Next time someone recommends a secure messaging app, don't look at the features or design first. Look at who’s paying for it. If the answer doesn’t convince you, look for another one.