EFF Secure Messaging Scorecard

Which apps and tools actually keep your messages safe?

In the face of widespread Internet surveillance, we need a secure and practical means of talking to each other from our phones and computers. Many companies offer “secure messaging” products—but are these systems actually secure? We decided to find out, in the first phase of a new EFF Campaign for Secure & Usable Crypto.

About

For years, privacy and security experts worldwide have called on the general public to adopt strong, open-source cryptography to protect our communications. The Snowden revelations have confirmed our worst fears: governments are spying on our digital lives, grabbing up communications transmitted in the clear.

Given widespread government surveillance, why don’t people routinely use tools to encrypt their communications? Wouldn’t we all communicate a little more freely without the shadow of surveillance?

It boils down to two things: security and usability. Most of the tools that are easy for the general public to use don't follow security best practices such as end-to-end encryption and open source code. Messaging tools that are really secure often aren't easy to use: everyday users may have trouble installing the technology, verifying its authenticity, or setting up an account, or they may accidentally use it in ways that expose their communications.

EFF, in collaboration with Julia Angwin at ProPublica and Joseph Bonneau at the Princeton Center for Information Technology Policy, is launching a campaign for secure and usable crypto. We are championing technologies that are strongly secure and also simple to use.

The Secure Messaging Scorecard examines dozens of messaging technologies and rates each of them on a range of security best practices. Our campaign focuses on communication technologies, including chat clients, text messaging apps, email applications, and video calling tools. These are the tools everyday users need to communicate with friends, family members, and colleagues, and we need secure solutions for them.

We chose technologies that have a large user base, and thus a great deal of sensitive user communications, in addition to smaller companies that are pioneering advanced security practices. We hope our scorecard will spur a race to the top, driving innovation around strong crypto for digital communications.

Methodology

Here are the criteria we looked at in assessing the security of various communication tools.

1. Is your communication encrypted in transit?
This criterion requires that all user communications are encrypted along all the links in the communication path. Note that we are not requiring encryption of data that is transmitted on a company network, though that is ideal. We do not require that metadata (such as user names or addresses) is encrypted.
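As a concrete, hypothetical illustration of the client-to-server link, the Python sketch below refuses to send anything except over a verified TLS connection. The host chat.example.com and the port are placeholders, not any listed service, and real clients would of course do much more:

    import socket
    import ssl

    # Build a TLS context that verifies the server's certificate and
    # hostname and refuses outdated protocol versions.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    # Wrap the TCP connection before any message bytes are written, so the
    # link between client and server carries only ciphertext.
    with socket.create_connection(("chat.example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="chat.example.com") as tls_sock:
            tls_sock.sendall(b"hello over an encrypted link")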

2. Is your communication encrypted with a key the provider doesn't have access to?
This criterion requires that all user communications are end-to-end encrypted. This means the keys necessary to decrypt messages must be generated and stored at the endpoints (i.e., by users, not by servers). The keys should never leave the endpoints except with explicit user action, such as to back up a key or to synchronize keys between two devices. It is fine if users' public keys are exchanged using a centralized server.
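To illustrate where the keys live, here is a minimal sketch using the Python cryptography library. The choice of X25519 with HKDF and AES-GCM is an assumption made for the example, not a description of any listed tool; the point is that both key pairs are generated at the endpoints, and the provider only ever sees public keys and ciphertext:

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each endpoint generates its own key pair locally; private keys never
    # leave the devices. Only the public halves travel through the server.
    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Alice combines her private key with Bob's public key to derive a
    # shared secret; Bob can derive the same secret, the provider cannot.
    shared_secret = alice_private.exchange(bob_private.public_key())
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"e2e sketch").derive(shared_secret)

    # The provider relays only this ciphertext and holds no key to read it.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

    # Bob independently derives the same key and decrypts on his device.
    bob_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"e2e sketch").derive(bob_private.exchange(alice_private.public_key()))
    assert AESGCM(bob_key).decrypt(nonce, ciphertext, None) == b"meet at noon"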

3. Can you independently verify your correspondent's identity?
This criterion requires that a built-in method exists for users to verify the identity of correspondents they are speaking with and the integrity of the channel, even if the service provider or other third parties are compromised. Two acceptable solutions are:
  • An interface for users to view the fingerprint (hash) of their correspondent's public keys as well as their own, which users can verify manually or out-of-band.
  • A key exchange protocol with a short-authentication-string comparison, such as the Socialist Millionaires' protocol.
Other solutions are possible, but any solution must verify a binding between users and the cryptographic channel that has been set up. For the scorecard, we are simply requiring that such a mechanism be implemented; we are not evaluating its usability or security.
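As an illustration of the first option above, a fingerprint display can be as little as a hash of the public key, grouped for out-of-band comparison. The sketch below is only illustrative; the SHA-256 choice and the grouping are assumptions, not a required format:

    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        # Hash the public key and group the hex digits so two people can
        # read the value aloud or compare it on screen, out-of-band.
        digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    # Each user checks that the fingerprint their app shows for a
    # correspondent matches what the correspondent sees for their own key;
    # a mismatch means the channel is not bound to the expected keys.
    print(fingerprint(b"example public key bytes"))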


4. Are past communications secure if your keys are stolen?

This criterion requires that the app provide forward secrecy: all communications must be encrypted with ephemeral keys that are routinely deleted (along with the random values used to derive them). It is imperative that these keys cannot be reconstructed after the fact by anybody, even with access to both parties' long-term private keys; this ensures that if users choose to delete their local copies of correspondence, those messages are permanently deleted. Note that this criterion requires criterion 2, end-to-end encryption.
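To show what "ephemeral" means here, the sketch below (again using the Python cryptography library; authentication and the per-message re-keying used by real protocols such as OTR are omitted) derives a session key from throwaway key pairs and then discards them, leaving nothing from which the key could be rebuilt later:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Fresh, ephemeral key pairs are generated for this session only;
    # long-term identity keys would merely authenticate the exchange.
    alice_ephemeral = X25519PrivateKey.generate()
    bob_ephemeral = X25519PrivateKey.generate()

    shared_secret = alice_ephemeral.exchange(bob_ephemeral.public_key())
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"session").derive(shared_secret)

    # Once the session's messages are encrypted, the ephemeral private keys
    # and the shared secret are discarded. Nothing that remains, including
    # both parties' long-term private keys, can reconstruct session_key, so
    # previously recorded ciphertext stays unreadable.
    del alice_ephemeral, bob_ephemeral, shared_secret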

5. Is the code open to independent review?

This criterion requires that sufficient source code has been published that a compatible implementation can be independently compiled. Although a free/open-source license is preferable, we do not require the code to be released under any specific license. We only require that all code that could affect the communication and encryption performed by the client be available for review, in order to detect bugs, back doors, and structural problems.

Note: when tools are provided by an operating system vendor, we only require code for the tool and not the entire OS. This is a compromise, but the task of securing OSes and updates to OSes is beyond the scope of this project.

6. Is the crypto design well-documented?
This criterion requires clear and detailed explanations of the cryptography used by the application. Preferably this should take the form of a white paper written for review by an audience of professional cryptographers. The documentation must cover the following:
  • Which algorithms and parameters (such as key sizes or elliptic curve groups) are used in every step of the encryption and authentication process.
  • How keys are generated, stored, and exchanged between users.
  • The life cycle of keys and the process for users to change or revoke their keys.
  • A clear statement of the properties and protections the software aims to provide (implicitly, this tends to also provide a threat model, though it's good to have an explicit threat model too). This should also include a clear statement of scenarios in which the protocol is not secure.

7. Has there been an independent security audit?
This criterion requires that an independent security review has been performed within the 12 months prior to evaluation. This review must cover both the design and the implementation of the app and must be performed by a named auditing party that is independent of the tool's main development team. Audits by an independent security team within a large organization are sufficient. Recognizing that unpublished audits can be valuable, we do not require that the results of the audit have been made public, only that a named party is willing to verify that the audit took place.

See: https://www.eff.org/secure-messaging-scorecard
 