
Can we trust private keys?

Ronald Chen January 9th 2023

Trusting a private key? That doesn't make sense at first: the key pairs in public-key cryptography are typically generated by oneself and are thus inherently trusted. Right?

Public-key cryptography is no longer in the realm of techies. The general public now uses it to hold cryptocurrencies. A private key can easily be compromised by malware on the machine that holds it. We download all sorts of executables, and while we are supposed to vet them, most people don't know how.

In theory, app stores and package managers can protect us, but it only takes a single bad author/release/dependency to get compromised.

Our only real protection is a highly restrictive operating system such as iOS. But can we even trust that?

What can we even trust?

Let's consider what it takes to trust that a private key we generated hasn't been compromised.

We'll start with macOS, which ships OpenSSH's ssh-keygen. How do we know it isn't leaking our private key over a covert channel? How do we know it didn't generate a weak private key?

Not trusting our operating system is stepping into the realm of paranoia, but let's say we are that extreme. The source code for OpenBSD is available. We'll build it from scratch and then run ssh-keygen.

Bootstrap problem

But wait, how can we build an operating system if we don't trust any operating system to build it?

Maybe we can sidestep this issue and rewrite ssh-keygen to run as a UEFI app. UEFI is the code that launches the bootloader, which in turn launches the operating system, but one can also write standalone UEFI applications that run without any operating system.

We don't need an operating system, but this hasn't solved the bootstrapping problem. How can we compile our UEFI app without an operating system?

What if we hand-wrote the assembly?

In theory, we could hand-write ssh-keygen in assembly to resolve the bootstrapping problem. We would probably write only enough to implement the underlying key-generation algorithm.
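The underlying algorithm really is small. As a rough sketch of what we'd be hand-translating, here is elliptic-curve key generation in pure Python over secp256k1 (the curve used by Bitcoin, chosen here only because its parameters are well known): pick a random scalar d, then compute the public point Q = d·G with textbook double-and-add. This is illustrative only, it is not constant-time and should never be used for real keys.

```python
# Minimal sketch of elliptic-curve key generation over secp256k1.
# Textbook double-and-add, NOT constant-time -- illustration only.
import secrets

# Published secp256k1 domain parameters
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    """Add two points on y^2 = x^3 + 7 over GF(P); None is the identity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # p1 == -p2
    if p1 == p2:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P     # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def keygen():
    d = secrets.randbelow(N - 1) + 1  # private key: random scalar in [1, N-1]
    return d, ec_mul(d, G)            # public key: Q = d * G

d, Q = keygen()
assert (Q[1] ** 2 - (Q[0] ** 3 + 7)) % P == 0  # public key lies on the curve
```

Even this toy version leans on `secrets` for randomness, which leans on the operating system, which is exactly the trust chain we were trying to escape.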

Let's say we did. Can we trust our private key yet?

Software secured, but...

So far, we've only talked about software, but can we even trust our hardware? How do we know our hardware doesn't contain a backdoor, or isn't forwarding every instruction over a covert channel? Modern integrated circuits are so small and complex that it would be infeasible to inspect them.

What about older computer chips?

We are only generating a single private key, so it doesn't matter if it takes a while on slower hardware. What if we picked an old computer chip and inspected it under a microscope before using it? Even an old 4-bit processor such as the Intel 4004 is already insanely complicated to verify.

What if we just built a computer?

Ben Eater has shown how to build an 8-bit computer using only logic gates on a breadboard. We could build one and implement the key-generation algorithm ourselves.

But one more problem...

Can we even trust the algorithm?

NIST recommends specific parameters for elliptic-curve cryptography, but can we even trust them? People have scrutinized those parameters and found no definitive proof of a backdoor. Still, no amount of evidence can prove the absence of a backdoor, nor can we infer a backdoor must exist just because NIST was involved.
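What we can check is that the published numbers are at least internally consistent. Here is a hedged pure-Python sketch using the NIST P-256 constants as published in FIPS 186: verify the base point lies on the curve and that multiplying it by the published group order lands on the identity. Passing these checks says nothing about how the seed behind the parameters was chosen, which is precisely the problem.

```python
# Sanity-check the published NIST P-256 (secp256r1) domain parameters.
# Passing proves internal consistency only -- it cannot prove the
# seed behind these numbers was chosen honestly.
P = 2**256 - 2**224 + 2**192 + 2**96 - 1  # field prime
A = P - 3                                 # curve coefficient a = -3 mod P
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # group order
G = (0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296,
     0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5)

def on_curve(pt):
    """True if pt satisfies y^2 = x^3 + A*x + B over GF(P)."""
    if pt is None:
        return True  # the point at infinity
    x, y = pt
    return (y * y - (x**3 + A * x + B)) % P == 0

def ec_add(p1, p2):
    """Point addition; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

assert on_curve(G)          # the base point is on the curve
assert ec_mul(N, G) is None # its order divides the published N
```

Both checks pass, and yet that is exactly the article's point: arithmetic can confirm the parameters are self-consistent, not that they are free of a backdoor.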

The problem with trust is its necessity

So can we trust private keys? Technically no.

Have we been compromised? Probably not.

While we don't have any real security, we can at least feel safe knowing that probably nobody cares about us specifically, yet. If they did, then no amount of technology would save us from a $5 wrench.

Do you want to find that pragmatic approach to security? You're in luck, Battlefy is hiring.


