Ben Thompson of Stratechery recently wrote a blog post arguing that open, federated messaging systems are inherently less secure than their closed counterparts. This claim is patently false, and here is why.
Closed systems such as Facebook Messenger and WhatsApp give you “managed” end-to-end encryption - they generate and store both the public and the private key for you. In managed encryption, the public key is kept on the system owner’s servers, while the private key is stored on your device under the direct control of the closed application. You then have to trust that they will create and manage these keys properly and, most importantly, securely - which, given how blasé Facebook has been with your personal information recently, certainly isn’t a given.
In contrast, open systems can offer full, meaningful end-to-end encryption with your own keys. Even though this requires more effort from users, who have to generate and manage their own keys, the result is more trustworthy. For non-technical users, open messaging systems will manage keys on your behalf just like the closed ones do, but with a much higher degree of transparency about what is actually done with them.
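To make the key-management difference concrete, here is a minimal, hypothetical sketch of what an open client can do entirely on-device: generate key material locally and derive a short, human-readable fingerprint that two users can compare out of band (in person, over a call) to verify each other’s keys. The function names are illustrative, and random bytes stand in for a real asymmetric key pair.

```python
import hashlib
import secrets

def generate_identity_key() -> bytes:
    # Stand-in for a real asymmetric key pair generated on the device itself,
    # never on a provider's server.
    return secrets.token_bytes(32)

def fingerprint(public_key: bytes) -> str:
    # Derive a short, human-comparable digest of the public key, grouped into
    # blocks of four hex characters for easy reading aloud.
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

key = generate_identity_key()
print(fingerprint(key))
```

Because the fingerprint is derived purely from the public key, two users who read the same blocks to each other know they hold each other’s genuine keys - no trust in the provider required.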
In the end, a “walled garden” messenger is only as secure and trustworthy as its owner. In open systems, you can choose between multiple clients, some of which are open source and can be publicly audited for hidden tracking mechanisms and privacy violations. If you choose not to trust a provider, or they prove themselves untrustworthy, you can switch to one of the many other providers offering the same service.
In closed systems, you can only use the system owner’s applications, whose code and actual behaviour cannot be publicly verified. Following on from the Cambridge Analytica news, who can really assure you that a messaging company does not have backdoors to decrypt and analyze your communications?
Privacy is not a business model: it is a human right.
Privacy is not a business model, but a fundamental right that individuals enjoy, and one that is necessary to preserve freedom and democracy. This is even more true in our hyper-connected society, which, without sufficient privacy protections, is rapidly turning into a world of generalised surveillance.
Even when respecting user privacy carries a cost for a business, granting privacy only when it does not impact revenue is unacceptable. That is not how things should work in an ethical society. Privacy rights are fundamental, and a business that does not accept them should not be a socially acceptable business.
Ultimately, what we really need is privacy, not "privacy of a certain sort" as the post calls it. Closed systems can only provide privacy up to a certain point, limited by how much trust you can place in the system’s owner. This is especially true for companies from parts of the world that have shown little respect for privacy, whether through common business practices or under governmental pressure.
Every one of us in tech has a social responsibility to be fully transparent and ethical with regard to users’ personal data. We all want to run commercially successful enterprises, but that success cannot be paid for by infringing our right to privacy.
So, any company in this industry, including the big over-the-top players, should now be expected to stop collecting and monetizing user information immediately, unless it has the user’s proper consent. Our experience of the last few years shows that openness and privacy can and do go hand in hand commercially, and it’s time the likes of Facebook learned this lesson too.