Frequently Asked Questions
General Questions
- What is a “Secure Computing Interface” (SCI)? +
The Secure Computing Interface (SCI) is a computing peripheral that mediates human interactions with the main computer (laptop, phone, etc.). It manages sensitive data and security functions, in addition to normal input/output interactions with the computer. In more human terms, it’s a device that manages things like passwords or encrypted messages. It connects to your existing computer, but is not actually part of it.
- What does an SCI look like? +My very first hacked-together version of it, from years ago, was a USB dongle that sat between the keyboard and the computer. I called it a “reverse key logger” because it encrypted what was typed rather than recording it. Later, I embedded that function directly in a keyboard. I have since been developing that version of the product for public release. Future versions could be packaged in a wristwatch, possibly replacement screens or cases for smartphones, and, soon, a Bluetooth headset. But the core idea is to embed the cryptography engine into a physical device that sits between the human and the computer (or phone). The peripheral should be open-source (hardware and software), standards-compliant, interoperable, and, ideally, a form factor that already exists and that people use.
- Doesn’t something like this already exist? +To some extent. There are some great products out there now (Yubikey, Mooltipass, Trezor, Precursor, etc.) that solve certain aspects of the problem. Of those, Precursor has the potential to be the best, and I’m a huge fan of their work. What I think is important in a solution is that it serves multiple core security needs that humans have, that it works with their existing technology, and, ideally, that it reduces friction where possible. My approach is to put related core security functions together in the peripheral that the human already uses to access them. Or, more simply: do the encryption where they type, talk, or see, rather than putting it all in one CPU basket with the other eggs.
- How is it different than a TPM? +A Trusted Platform Module (TPM) is typically a security chip that, if implemented properly, helps the platform provider ensure that their software runs securely. That sounds great, in theory. The core issue, however, aside from implementation failures and scope limitations, is that it serves the company that provides the platform, not the human user. It’s both complex and masked from the user, by design. In an ideal world, the user benefits from the company maintaining control of their device, but the world is not ideal. A secondary issue is that it’s embedded in the computer or phone and can’t be used with another device. A TPM chip is not transferable to a new device or across platform providers.
- Why is this more secure than my computer/phone? +A few reasons, first and foremost being simplicity. Complexity is the enemy of security and reliability. We designed a system, from the hardware up, dedicated to a small set of functions, which makes it far easier to make reliable and verifiable. It’s also easier to leverage electromechanical security features in the system’s architecture that can’t easily be hacked in software. The system also employs trusted open-source cryptography libraries combined, in a belt-and-suspenders approach, with a hardware cryptography module. And lastly, the entire system, from hardware to software, is open source, which makes auditing and testing easier and more transparent. It’s designed and manufactured in the US, with careful selection of components with regard to supply-chain security. The final element, of course, is third-party audits, which are expensive, but part of the planned launch of the product.
- What problems will this type of peripheral solve? +The current design supports authentication and text cryptography. In simple terms, it keeps your passwords and two-factor credentials secure (and portable), and allows you to encrypt notes or text messages. It keeps these functions private, even if the computer or phone you are using is hacked. It can also be used for a variety of other edge use cases, such as signing public messages (e.g., signed Twitter posts) or one-way reporting. Future versions will support voice and picture cryptography, as well as other remote authentication functions. A secondary benefit is that it’s portable. It can be used with any host (computer, phone, tablet, etc.) that accepts a keyboard. It can move from your home computer to your work laptop, from an iPhone to an Android tablet, and all of your passwords and encryption keys move with it.
- What problems won’t it solve? +This particular implementation will keep what you say private, but not who you say it to (or how it is delivered, with what frequency, etc.). It won’t keep your session cookies from being hijacked. It won’t stop websites or ISPs from tracking your internet usage and search terms. It won’t stop adversaries from fooling you into logging in to a fake website with your real password. It won’t stop the cloud services themselves from being hacked. It won’t protect your computer or phone from being hacked. Basically, there are a lot of other problems to solve. We have another product line in development that will sit between your computer/phone and the outside world, which will help with some of these other problems. But this first product is meant to do a small set of tasks and do them reliably. This tool simply gives you a degree of control over what you say (type) and helps you prove who you are. You can choose to keep messages private, even from the platform providers. You can prove that you are indeed the one that wrote a public message, even if the platform is hacked. You can more reliably prove who you are, and a platform can’t lose (or confiscate) all of your credentials.
- Why do we need an SCI? +A few reasons, counting down:
4) Government surveillance laws (or, inversely, the lack of privacy laws) can work against you. The Big Tech companies who control the software and hardware are driven by laws and market forces. If you are a journalist in Hong Kong, a protester in Canada, an environmental activist in the EU, a humanitarian worker in Ukraine, or a woman looking for an abortion in certain states in the US, the laws may work against you. The Big Tech companies have to give you up to whoever makes the laws, right or wrong. On the other hand, market forces and the lack of privacy laws may be such that you don’t even have the option to buy your privacy from advertising networks and data brokers. These can be used against you by governments or other bad actors (foreign or domestic) as well.
3) Big Tech does not have your best interests at heart. They care about security, primarily from their own point of view. They need to comply with laws, serve advertisers or partners, and reduce support costs. That means they need to ensure that they control your computer and that it only runs their software or, at worst, software they have approved (ideally, only after it has paid their gatekeeper tax). Your privacy and security are, in actuality, secondary concerns, despite the best intentions of many Big Tech employees. In today’s market, even when you are paying, you are still the product, not the customer. And yes, that includes Apple as well.
2) Complex systems are unreliable at scale, over time. Adversaries can, do, and will continue to crack the complex computing systems that people use. While software security is improving, it can’t keep pace with the change and growth of the ecosystem. Moreover, you generally can’t solve flaws from an abstraction level above them (i.e., you can’t “app” your way out of an OS compromise). Worse still, the main vendor may not control the hardware or network stack the devices run on, much less the supply chain that builds them. In short, even if Apple really, truly wanted to protect your privacy, NSO Group, criminals, and other adversaries can, and will, find cracks to exploit.
1) Humans have an inherent right to privacy. In the US, the framers of the Constitution made an explicit point of this, in the context of the technology of the time. Other countries may have similar laws (although often with more clauses about government exceptions). Regardless of the robustness of the legal framework in any given country, the reach of current technology has far exceeded the grasp of protection laws. The right of a human to have a private conversation with another is akin to their right to breathe “community” air. It was previously unimaginable that it could actually be controlled or monitored by third parties. Technology is rapidly enabling levels of surveillance and control that weren’t conceivable in the past. Protection laws are a band-aid at best, operating at a higher level of abstraction than the technological reality. Like TPMs, laws primarily serve their makers, benefiting you as a secondary effect (hopefully), and then only if implemented correctly and completely. And that is ignoring bad actors who aren’t deterred by laws.
- Why should I trust you with my data? +You don’t need to. We don’t have access to your data, nor do we want it. The point of implementing this as a hardware peripheral is that the user owns the device and the data on it. Completely. The device should be portable and work with whatever technology they have, without asking our permission or that of the platform providers. It doesn’t require internet access to work, nor does it contain any networking chips or firmware. The implementation of the device, both hardware and software, is open-source and verifiable. We know that most users won’t have the technical capacity to audit the system themselves, which is why we will be working with external third parties to audit and verify the system. We have also attempted to mitigate supply-chain attacks and upstream risks in its design. And lastly, we are very open about who we are, how we operate, and what our interests are.
Our business model is selling products and services to our customers, not their data.
Product Questions
- What if someone steals my SCI? +The internal data store is always encrypted. In order to access the authentication or cryptography functions, the user must first type in their passphrase, which is then used to decrypt data as needed. The passphrase is not stored in plain text anywhere on the device, and it’s wiped from memory as soon as the user is done using it. Because the input mechanism is embedded in the system, an adversary can’t slip in a key logger, or read the passphrase from memory or system calls, as they can on a computer or phone. In this way it’s also more secure against theft than USB keys or smart card chips. In short, an adversary needs to steal your keyboard and know your passphrase in order to get any of the data stored on it. The user can also export encrypted back-up files of their data (and import back-ups). That feature can be used to sync multiple devices, in addition to recovery from loss or theft. And of course, because it’s open, standards-based cryptography, this also gives the user the ability to import their data into other systems if desired.
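The unlock flow described above can be sketched in a few lines. This is a stdlib-only illustration of the general technique (passphrase-stretched key, with only a random salt persisted on the device); the actual device’s KDF, parameters, and storage format are not specified in this document, so names like `derive_key` are hypothetical:

```python
import hashlib
import secrets

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a typed passphrase into a 256-bit data-store key.

    Only the random salt is persisted; the passphrase itself is never
    written anywhere, and the derived key is held in RAM only while needed.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

# Device provisioning: generate and store a salt, never the passphrase.
salt = secrets.token_bytes(16)

# At unlock time: re-derive the key from whatever the user types.
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32

# The same passphrase and salt always yield the same key;
# a wrong passphrase yields a completely different one.
assert derive_key("correct horse battery staple", salt) == key
assert derive_key("wrong passphrase", salt) != key
```

A thief who has the hardware but not the passphrase holds only the salt and ciphertext, which is why physical theft alone does not expose the stored credentials.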
- Sounds cool! Can I get that as an app? +Errr.. no. There is no way for us to safely provide a full-featured SCI as software that runs on a Big Tech device. We will be offering “helper” software that streamlines some functions, improves integration, or enables certain features, but nothing that compromises the core security of the SCI. To date, the security promises made by software vendors have always had hidden assumptions and exception clauses about their reliability: assumptions and exceptions that adversaries continue to exploit. We won’t make those same mistakes. But...
- If I’m encrypting messages, does the other party also need an SCI? +They should, otherwise an adversary can simply compromise their device to read the messages. A big part of the strength of end-to-end hardware encryption is that it’s actually end-to-end hardware encryption... That being said, there are scenarios where the risk may be required, acceptable, or mitigated by other means. In those cases, since the encryption is standards-based and open-source, you could have software (yes, even an app) perform the decryption functions on the other end, assuming you shared the key with the counterparty. One common, theoretical example may be something like a corporate email archiving gateway that needs to decrypt the messages for compliance reasons. We will provide a software reference implementation for people or organizations that choose to do that. And possibly other packaging or services to support customers who need it.
Technical Questions
- What cryptography does it use? +The Anigma double encrypts using two different methods. The first is an open-source software library called LibHydrogen, a well-respected and audited library that supports secret-key encryption and public-key signing. This encryption is performed on the internal system’s Arm Cortex-M4 MCU. The second pass of encryption is performed in a hardware chip from Infineon. This security chip provides AES-256 encryption as well as other functions. By using two separate processes for encryption, the risk of flaws or backdoors is reduced.
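The layering idea above can be sketched as follows. This is emphatically not LibHydrogen or the Infineon chip’s AES-256; it is a stdlib-only toy (an HMAC-SHA256 counter-mode keystream standing in for each cipher) that only demonstrates how two independently keyed passes compose, and why compromising one layer’s key is not enough:

```python
import hashlib
import hmac

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher: HMAC-SHA256 in counter mode.

    Stands in for ONE encryption layer. Do not use for real data;
    the real device uses vetted ciphers (LibHydrogen, AES-256).
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Two independent keys: one per layer (software library / hardware chip).
sw_key, hw_key = b"S" * 32, b"H" * 32
nonce = b"example-nonce-01"

plaintext = b"meet at the usual place"
layer1 = keystream_xor(sw_key, nonce, plaintext)  # first (software) pass
layer2 = keystream_xor(hw_key, nonce, layer1)     # second (hardware) pass

# Decryption peels the layers off in reverse order.
assert keystream_xor(hw_key, nonce, layer2) == layer1
assert keystream_xor(sw_key, nonce, layer1) == plaintext
```

The design point is that an adversary must defeat both independent implementations, from different vendors and codebases, to recover the plaintext.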
- What is its source of entropy? +The internal system MCU (ATSAMD51) provides a built-in hardware TRNG. The hardware security chip (OPTIGA Trust M) also provides a built-in hardware TRNG. These two sources are combined and whitened prior to use as an entropy source. The two hardware vendors, Microchip and Infineon, are US and German companies respectively. By using different hardware sources from different vendors, and then post-treating the data, the risk of doping or backdoor attacks on these hardware sources is reduced.