
A Ledger Nano {X, S} re-implementation

I love small hardware devices, and in particular those aimed at making my life easier.

Among them, there’s the Ledger Nano series of products.

They act as small hardware security modules with just one objective: store your cryptocurrencies safely, away from the internet.

Ledger Nano {X, S} devices — which I’ll refer to as “Nanos” from now on — brought lots of excitement to the community back in the day, because users could tailor their wallets to their own needs by installing apps.

Apps hold all the nifty bits that let you sign and broadcast transactions, while the operating system acts as a broker for cryptography-intensive applications: apps ask the OS for signatures or public keys.

Not gonna lie, this is a killer feature, and even today few competitors offer a comparable usability enhancement.

There’s nothing wrong with Ledger products — I own a Nano S/X and one of the old “glorified smart card” models — besides the fact that the operating system embedded in them is not yet completely open-source.

I’m saying “yet” here as an act of faith more than anything else, since the community has been asking for full codebases for years now.

Ledger partially obliged by releasing bits of the Nano S firmware, and apps are completely open-source.

I’ve been dabbling in hardware security for some time, so I figured I could try my hand at designing a Ledger Nano S/X-compatible hardware wallet or at least a subset of it!

If this statement sounds daunting, that’s because it is.

Goals #

Aiming at 100% compatibility is frankly past my pay grade for now, so I decided to work targeting the following goals:

  1. have a sensible architecture laid out
  2. be able to easily simulate everything on a Linux machine
  3. support just one chain, and broadcast a transaction on a live network

So it’s more like a fancy PoC than a complete, user-friendly solution.

Buyers beware πŸ˜„.

Point two was quickly taken care of by re-using parts of the libusbgx-based code I wrote for fidati.

Point three is easily accomplished: I chose Cosmos since at the time of writing this post I work in the field and have a pretty good knowledge of how things work, including the nasty bits related to transaction signing.

Where do I start? #

With those goals in mind, I had to start somewhere.

Since my aim was being compatible with existing Ledger tooling, all I needed to do was speak the same language and adopt the same conventions.

Two obvious facts came to my mind:

  • Nanos communicate via USB, usually
  • No driver needed

Those facts place Ledger devices in the magic realm of USB HID devices, which is great since I have experience with them already.

Nano X can interact with a client device via Bluetooth, but that functionality is out of this project’s scope.

Every USB HID device has a descriptor, so the first step was obtaining the one Nanos use to identify themselves to hosts.

With help from lsusb, this was quickly accomplished:

This verbose description translates to the following byte array:

var LedgerNanoXReport = []byte{
        // ... HID report descriptor bytes omitted ...
}

Thanks to fidati I had most if not all the code needed to let the USB host know a Ledger Nano X was connected on the bus.

The next step was understanding the Nanos’ “language”: the USB HID framing used during the communication phase.

Going black-box here would’ve taken me weeks1, so I took a detour to the hacker’s best friend, Google.

Sure enough, this search yielded a Ledger GitHub repository containing JavaScript code used to speak with Nanos.

After a couple of hours I was able to talk with a blatantly dumb ledgerjs-based test script, and hence virtually able to have a chit-chat with many giants of the cryptocurrency space like Keplr and MetaMask.

In essence the HID framing Ledger implemented is a simple session-based framing with two kinds of frames, which I titled HIDFrame and HIDFrameNext2:

type HIDFrame struct {
        ChannelIDInner   uint16   // 2 bytes
        TagInner         uint8    // 1 byte
        PacketIndexInner uint16   // 2 bytes
        DataLengthInner  uint16   // 2 bytes
        DataInner        [57]byte // 57 bytes
}

type HIDFrameNext struct {
        ChannelIDInner   uint16   // 2 bytes
        TagInner         uint8    // 1 byte
        PacketIndexInner uint16   // 2 bytes
        DataInner        [59]byte // 59 bytes
}

A HIDFrame identifies the beginning of a new session.

Depending on application data, a command can be executed in the span of one HIDFrame and zero or more HIDFrameNext.

Since there’s no way of knowing what kind of HID frame is contained in a USB HID data byte array, the device must keep track of the current session and deny spurious packets as they come.

A session ends when the device responds to a command, either successfully or not.

Now that framing is done, I moved my attention to applications.

Apps, or something similar #

As I said earlier Ledger devices differentiate themselves by being able to support many cryptocurrencies, in the form of client apps.

Cosmos is the one I chose, so I went over the ledgerjs codebase again to understand how the communication happens.

HID framing encapsulates a dialogue between the app running on the device and the client on the USB host.

Even though the client initiates the communication, the device is the session arbiter, tasked with enforcing only one live session at a time.

Once a session has been established and the client sent over all data, an application gets chosen based on the session tag — represented by the TagInner field in HIDFrame — and the real processing happens.

After this phase finishes, the HID framing layer packs the response up accordingly and sends the data back.

The protocol is very simple, and suits low-power devices perfectly.

Unfortunate news: apps can do whatever they want.

Usually apps will communicate via APDU packets, but there’s no strict guarantee that this will happen… You could even use Protobuf if you really wanted.

The Cosmos application uses APDU, so the job was kind of easy: read the client code, implement the server code on the other side.

It also contains just three commands:

  1. GetVersion, which returns the Cosmos app version.
  2. SignSecp256K1, which signs a byte blob with the internal private key and selected derivation path.
  3. GetAddrSecp256K1, which returns the bech32-formatted address for the given human-readable part at the selected derivation path.

Interestingly enough, the official Cosmos application refuses to process any command that involves a derivation path different from m/44'/118'/0'/*/ — you can change just the account index.

This leads to Cosmos SDK-based projects building their own Ledger apps just because they chose a coin type different from 118, even though the cryptography involved in address derivation and signature is the same.

Go figure… Anyway.

Ledger doesn’t let apps access the private key directly but instead exposes a signature API which can be used to work with an opaque “secure element”.

Signatures are expected to be ASN.1 encoded in DER format, and luckily that’s exactly what the btcec Go library exposes: nice!

Bech32 encoding has also been handled by standard libraries - same ones used by Cosmos SDK under the hood - to guarantee compatibility.

Enclaves and hardware #

Up until this point all the development happened on my Linux computer, and revolved around getting to know Nanos a bit better.

Now the fun part begins!

Nanos have a kinda neat architecture in which two chips are present: one holds and operates on private keys, while the other executes the main OS, handles the buttons and screen, and so on3.

For this PoC to be viable we have to discard this architecture though, because I don’t have the skills or the money to design and produce such a PCB.

I resorted to the usual suspects: a USB Armory Mk. II single-board computer, and Tamago.

While the Mk. II has either one or two secure elements on board — depending on the version you have — I wanted to experiment further with my design.

Enter GoTEE, which in essence allows you to execute Tamago unikernels in ARM TrustZone mode — pretty cool if you ask me!

The idea then is simple:

  • run a Tamago unikernel in Normal World, which will handle just USB
  • another unikernel in Secure World, which will do literally everything else

Normal World is completely locked down: no UART, no cryptographic accelerators, no DCP; it can’t even log stuff without going through Secure World. We want two separate environments, in which one of them is the undisputed king.

To recap, here’s what happens:

sequenceDiagram
    actor B as Bob
    participant KW as Keplr Wallet
    participant NW as Normal World
    participant SW as Secure World
    B ->>+ KW: RequestSignature(transaction)
    Note left of KW: Keplr waits until the whole flow finishes
    activate NW
    KW ->>+ NW: USBSend(payload)
    NW ->> NW: USBParse(payload)
    NW ->> NW: BuildSignRequest(payload)
    NW ->> SW: SendSignRequest(payload)
    deactivate NW
    activate SW
    SW ->> SW: ParseSignRequest(payload)
    SW ->> B: AskConfirmation(transaction)
    Note right of B: User interacts through on-device PIN pad/screen
    B ->> SW: Confirm(transaction)
    SW ->> SW: Sign(transaction)
    SW ->> NW: Return(signedTransaction)
    deactivate SW
    activate NW
    NW ->> NW: USBAssemble(signedTransaction)
    NW ->> KW: USBSend(signedTransaction)
    deactivate NW
    KW ->>- B: ReturnTransactionDetails()

For the purpose of this PoC key derivation happens in a deterministic way: there’s a slice of bytes used as entropy, and every device will yield the same private key4.

The secure enclave for this project should provide a trustable execution space for sensitive code paths, but should also be a source of enough cryptographically-secure entropy to deterministically generate safe private keys.

On top of that, the device must be in a verified, consistent, and secure state from boot onwards: if it booted, then it must be secure and was not tampered with.

Admittedly, if my end goal were to release a commercially-available device to compete against Ledger or Trezor, solving the supply-chain trust problem should’ve been my first thought. Considering this system will be a hobbyist toy at best, I ultimately decided to leave this problem out of scope: users will have to source a capable device themselves and compile/flash their own firmware.

This consideration will not stop me from daydreaming a full-blown verified boot chain though πŸ˜„.

Many issues should be kept in mind when thinking of verified boot chains, namely:

  • Who holds the private signature keys?
  • Is this individual savvy enough to keep them safe?
  • Why that individual in particular?
  • Any 0-day going around that targets your hardware platform?

And many more.

The rest of this post describes an ideal situation where the keyholder is savvy enough, and the hardware was designed by God themselves - quite unrealistic but fun to reason about.

The USB Armory Mk. II is capable of verified boot, where users are in charge of holding root signature keys and fusing them onto their SoCs — there’s plenty of documentation explaining how to do that and what to pay attention to.

This means the user is in charge of choosing what boots on their own device and what doesn’t: what a powerful feeling!

The boot process consists of the following elements:

  1. the i.MX6 SoC BootROM
  2. armory-boot , first-stage bootloader
  3. Secure World firmware
  4. Normal World firmware

The user will need the following sets of key pairs:

  1. i.MX6 BootROM key pair, used to sign armory-boot
  2. a minisign key pair to authenticate the armory-boot configuration file
  3. another minisign key pair to verify Normal World firmware

In this context we’re trusting no bad actors were involved in the fabrication of our SoC, hence the BootROM is used as-is.

Shortly after verification the device starts booting the application processor firmware, which in this case is the armory-boot bootloader.

armory-boot reads its configuration file off disk, checks it for integrity by verifying its cryptographic signature and proceeds to boot Secure World firmware only if:

  • the configuration file signature is correct
  • the Secure World firmware yields the same SHA-256 hash as the one specified in the configuration file

Once booted, Secure World sets up the stage for Normal World and checks its signature before loading and executing it.

Normal World firmware is embedded in the Secure World one right now, but since this might change in the future I thought it best to check it anyway, just to have processes already in place for when the situation changes.

flowchart
    BR(i.MX6 BootROM)
    AB(armory-boot)
    SW(Secure World firmware)
    NW(Normal world firmware)
    EXEC(((Ready!)))
    FAIL{Boot failure}
    BR -- OK signature --> AB
    BR -- bad signature --> FAIL
    AB -- bad signature --> FAIL
    AB -- OK signature --> SW
    SW -- OK signature --> NW
    SW -- bad signature --> FAIL
    NW --> EXEC

Private key management #

At this point we have a fully verified boot stack, and we can mostly trust the hardware is running the software we’re expecting.

How does the device:

  1. Generate safe entropy?
  2. Store secrets safely?

i.MX6 provides facilities to do all sorts of fancy cryptography stuff, like generating cryptographically-secure entropy: point 1 is solved.

Point 2 is harder to reason about: storing secrets is hard, but good hardware choices pay off in the long term.

The i.MX6 family of SoCs found on the USB Armory Mk. II contains a terrific piece of silicon called the OTPMK, which can be seen as a hardware-based secret key: you cannot read the OTPMK directly; it can only be used through the i.MX6 DCP subsystem.

If configured correctly, the DCP encryption subsystem works off a slice of SoC-internal RAM that is not available to the Tamago Go runtime, meaning that everything written there is for DCP eyes only.

Adding to this, the OTPMK is not available unless the device is in a verified-boot state.

The Tamago library contains all the methods needed to interact with DCP and the OTPMK, making building safe systems on top of hardware primitives quite easy.

Generating a BIP-32 secret could be done by combining the hardware RNG with the security properties of DCP and the OTPMK.

Once enough entropy has been gathered to generate said secret, the user inputs a sufficiently secure password through a keyboard — conveniently wired to Secure World only — which is then encrypted using the OTPMK as key.

The newly-yielded data is used as the key to encrypt the secret generated before: either you know the original password, or you cannot access the signature capabilities of the device.

flowchart
    S(Secret entropy)
    P(User password)
    KEK(Key encryption key)
    HWRNG(Hardware RNG)
    OTPMK(OTPMK)
    DCP(DCP AES function)
    SDCP(DCP AES function)
    USK(Encrypted BIP-32 secret key)
    HWRNG -- generates --> S
    OTPMK --> DCP
    P --> DCP
    DCP -- generates --> KEK
    KEK -- Encryption key --> SDCP
    S -- Payload --> SDCP
    SDCP --> USK

Decrypting the BIP-32 key means allowing the password holder to broadcast transactions with said private key.

flowchart
    P(User password)
    KEK(Key encryption key)
    OTPMK(OTPMK)
    DCP(DCP AES function)
    SDCP(DCP AES function)
    USK(Encrypted BIP-32 secret key)
    DUSK(Decrypted BIP-32 secret key)
    OTPMK --> DCP
    P --> DCP
    DCP -- generates --> KEK
    KEK -- Decryption key --> SDCP
    SDCP --> DUSK
    USK --> SDCP

To prevent password brute-force attacks an exponential back-off slowdown algorithm can be implemented in Secure World.

Conclusions and further work #

This design represents the first iteration of my toy Ledger Nano {X, S} re-implementation.

It’s a fun project, full of challenges and “treasures” to be found in the form of safe design choices.

You can find the codebase here if you want to play around with it.

Right now I’ve shifted my focus towards other security-related topics like the xous OS, and I will most probably try my hand at re-designing this concept on top of it.

A keyboard add-on is planned, but my PCB design skills are sadly near-zero, so presumably I’ll end up using SSH as a means of communication, given it’s available essentially everywhere nowadays.

As usual, patches and security model-breaking commentary are welcome! πŸ˜„

  1. I’m not great at reverse engineering stuff… yet! ↩︎

  2. Yay naming! ↩︎

  3. Separation of concerns rocks! ↩︎

  4. Did I mention this thing is not ready for production? Don’t email me crying when somebody steals your tokens! ↩︎