In Defense Of (My Attack On) Hardware Wallets

December 28th 2020

Robert Spigler

1.0  Introduction

First off, I don’t ‘hate’ hardware wallets, nor their developers.  What I hate is security theater, and scams.  I hate security theater and scams because security, privacy, and voluntary human action are the guiding principles of all of my work.

I don’t hate the developers working on these projects, and I don’t mean to disrespect anyone discussed or linked to in this post.  Some of these developers have been involved in the Bitcoin community for nearly a decade, and have been contributing great work in addition to their hardware wallets.  But I do disagree with these developers, and I hope to make my argument clear here.

I started this research back in August of 2020, after receiving an invitation from Advance Tech Podcast [1] to discuss hardware wallets and Yeti [2] – the bitcoin sovereign custody solution built by JW, Will Weatherman, and myself.

Note: Unlike hardware wallet vendors, we make no money whether or not users decide to run our software.  It is free/libre open source software.  Yeti is a minimal Python script and UI layer run on top of Core.  Our goal is that eventually Core will pull in enough advances that Yeti will not be needed at all to achieve the easy, secure, offline, multisig that Yeti enables.  Core is currently working on this, and a lot of work has already been merged.  Some of the remaining work can be looked at here [3][5].  For this article, I will not mention our software until the very end, trying to keep it as strictly unbiased as possible.  My only goal is to advance the security practices of the Bitcoin community.

I also donate monthly to many Bitcoin Core developers, as well as support other important projects in the community to advance security and privacy (such as porting Qubes to the Power ISA) [6].  I have absolutely no financial gains from any of this. I do this because I believe in the goals of our community.

For the structure of this article, I will start with the security issues of Hardware Wallets (Section 2.0) (which I will at times abbreviate as HWWs), refute arguments made in defense of Hardware Wallets (Section 3.0), move into solutions (Section 4.0), and end with pros/cons for those solutions (Section 5.0).

2.0  Security Vulnerabilities of Hardware Wallets

First, Hardware Wallet security vulnerabilities.  There are numerous problems with Hardware Wallets. I will start with describing physical security issues (Section 2.1), then dive into the historical technical issues (Section 2.2).  (I suggest skipping this if you are not technically inclined, but please note how long the list is).  Finally, I will discuss the inherent architectural issue with Hardware Wallets themselves (Section 2.3).


2.1  Physical (as in ‘real world’) Security Issues

When you order a Hardware Wallet, you are giving your private information to a company that you ultimately have to trust.  This trust shows itself in many surprising ways – physical (as in ‘real world’, or ‘meatspace’) and technical (hardware/firmware/software).  By ‘meatspace’ security issues, I mean that when ordering a Hardware Wallet, you must provide the company with your physical address, name, phone number, and email address.  While there are a number of solutions for receiving packages without providing this vital information (I suggest reading Jameson Lopp’s ‘A Modest Privacy Proposal’ [7]), the best ones cost a lot of money – and if you are going to such lengths to protect your physical privacy, you should be taking your cybersecurity just as seriously (by not using a HWW).
You might be saying to yourself, “these are trusted security companies with great standing in the Bitcoin community, I’m sure they will keep my information safe!”  However, this isn’t just hypothetical: Ledger suffered exactly this kind of data breach in July of this year.  9,500 customers had their first and last names, addresses, phone numbers, and emails exposed, in addition to almost a million other email addresses.

This is incredibly dangerous. There are real, physical security risks associated with owning Bitcoin.  Jameson Lopp has a repo [8] that documents known physical attacks against Bitcoin owners.  It includes muggings, stabbings, home invasions, kidnappings, torture, and murder, for as little as $1,000 worth of crypto to as much as a few million dollars.  This happens all over the world, with most of these attacks happening in the United States, Canada, and Western Europe.

And this was just one case, in which a HWW vendor was hacked.  At any time, a HWW vendor could decide to sell this data, there could be a change in leadership, or a government could force legal action.  With Bitcoin Core, there is no such data to be collected, no matter what force is applied.  Segwit2x proved that Bitcoin Core has a decentralized development process that cannot be corrupted by force [9][11].

With this data leaked, stolen, or sold, you are very vulnerable.  Adversaries can now target you with phishing emails, texts, or phone calls; they could target your home, or perform advanced social engineering attacks.  An adversary can very easily scrape social media and public records to learn even more information about you.  This opens you up to more targeted attacks, such as SIM swapping [12].  SIM swapping allows for account takeovers by targeting weaknesses in two factor authentication implementations.  Attackers can imitate you, take over your email, exchange, and trading accounts, and transfer money that you thought was in your control.  There have been millions of dollars worth of bitcoin stolen through SIM swap attacks, some from pretty well known people in the industry [13].

Note:  Interestingly enough, it is now 2020 EOY, and we have seen attacks on users who had their information leaked in this database.  I started researching this around August for the podcast, and now in December we are seeing many attacks in the wild, just as I predicted at the time.  While these attacks are currently less complex than those described above – most being carried out through simple phishing emails – they still lead to complete loss of funds if successful [14].  Major bitcoin advocates continue to downplay the seriousness of the situation [15].

Note #2:  On December 20th, a database from the Ledger leak containing 1,075,382 emails and 272,853 orders with full details (emails, addresses, phone numbers) was leaked (over 28 times larger than what Ledger initially claimed to have lost).  Apparently, the database had been making the rounds on hacker forums for nearly $100,000 – if you were doubting how valuable this information can be.  I won’t be linking to the leak, so as not to contribute to its spread.

Note #3:  As of December 23rd, there have been reports of SIM Swapping attacks, and targeted emails towards users threatening physical attacks.  This situation is likely going to continue to get worse.

2.2  Historical Technical Vulnerabilities

Let us now get into the lengthy history of security vulnerabilities of Hardware Wallets.  The most well-known and used HWWs are Trezor [16], Ledger [17], BitBox [18], and Coldcard [19].  KeepKey is another HWW which has been around for a long time; however, it is a near-complete clone of Trezor, so there is no reason to discuss it separately.

Hardware Wallets are advertised as enabling a user to securely store private keys and sign transactions even when the user’s laptop is insecure.  I will demonstrate throughout this section (and discuss further in the later section on inherent architectural issues (Section 2.3)) how this is fundamentally impossible.  Because of this, I will be focusing first on remote attack vulnerabilities (Section 2.2.1), which are Hardware Wallets’ single selling point.  I am doing this to demonstrate that the claim from HWW vendors – that HWWs enable you to sign securely from a malicious machine – is completely bogus.

Afterwards, I will focus on the (numerous) physical vulnerabilities disclosed regarding how private keys can be extracted from HWWs once physical access is attained (Section 2.2.2).  This opens up attack vectors such as Evil Maid attacks and Supply Chain attacks.  No hardware wallet is immune to this, regardless of holographic stickers or secure elements (even though they may advertise otherwise).

If your computer is malicious, it is very likely that either the software wallet you are using is malicious, or the data that the wallet is sending to the HWW is being intercepted and changed to malicious data – this is called a Man in the Middle (MITM) attack.  (I believe BitBox is the only HWW that tried to prevent MITM attacks over the USB connection by encrypting the data; however, they used unauthenticated encryption.)
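To see why unauthenticated encryption is not enough, here is a minimal sketch.  It uses a toy hash-based keystream as a stand-in for a real stream cipher such as AES-CTR, and is not BitBox’s actual protocol – the point is only the general property: without a MAC, flipping bits in the ciphertext flips the same bits in the plaintext, so a MITM on the cable can rewrite an address field without ever knowing the key.

```python
# Toy illustration (NOT BitBox's actual protocol): any stream cipher,
# including AES-CTR, XORs a keystream with the plaintext.  Without
# authentication (a MAC), an attacker who knows the message layout can
# flip chosen bits in transit.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Stand-in keystream derived from the key; real designs use AES-CTR.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

key = b"shared-usb-secret"
msg = b"send to address AAAA"
ct = bytearray(encrypt(key, msg))

# MITM on the USB link: flip ciphertext bytes to rewrite the address
# field ("AAAA" starts at byte 16) without knowing the key.
for i, (old, new) in enumerate(zip(b"AAAA", b"BBBB")):
    ct[16 + i] ^= old ^ new

tampered = encrypt(key, bytes(ct))
print(tampered)  # b'send to address BBBB'
```

An authenticated mode (e.g. AES-GCM, or encrypt-then-MAC) would make this tampering detectable.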

2.2.1  Remote Attacks

In 2014, Trezor had to fix a vulnerability where a malicious wallet/computer could send to the HWW a specially crafted transaction (one with a malicious ScriptSig) which would cause a buffer overflow and extract the private key [20].

A similar type of vulnerability happened ~7 months later (maliciously crafted transaction not confirmed on the HWW screen), except this time it would contain a change output owned by the attacker [21].

In 2018, BitBox was examined by Saleem Rashid, where he essentially called them “irresponsible maverick[s] with no regard for domain separation”.  They implemented BIP-32 so poorly that requesting the master public key of the public wallet revealed the master private key of the ‘hidden’ wallet; and requesting the master public key of the hidden wallet revealed the master private key of the public wallet.  In addition, the device did not have a screen, making confirmation of what the device was signing impossible (and IMO, worthless).  The designers tried to fix this by pairing it to an (insecure) smartphone as a second factor; however, there was no proper authentication process, and the pairing process was vulnerable to MITM attacks [22].

Trezor uses a USB cable to transfer data from the device to/from the computer.  This opens it up to many attacks, most of which we will get into in Section 2.3 (architectural issues).  However, it also makes Trezor more vulnerable to remote attacks, such as the one discovered in May of 2018 by Christian Reitter. Specially crafted USB packets could trigger a buffer overflow and lead to code execution [23].

In November of 2018, Sergey Lappo disclosed a vulnerability in Ledger’s hardware wallet that allowed an attacker to replace the change address with the attacker’s own address instead of the wallet’s derived address, without any confirmation or verification on the device.  Depending on the UTXO distribution in the addresses, even just spending a small amount could lead to near total loss of funds [24].

In addition, Ledger runs proprietary firmware, so no serious user should even consider running it, nor should any serious expert recommend it (although many surprisingly do) [25], [26] – but I will continue to include their vulnerabilities in this post for your information.

A vulnerability that affected Trezor & Ledger [27], BitBox [28], [29], and later Coldcard as well [30] is that attackers could create a change or receive address that, while still owned by the user, would be derived on an arbitrary keypath that was neither properly limited nor verified.  Essentially, the user would receive coins on an address that they would be unable to find and spend from.  The attacker could then hold the path to those funds ransom.  Rather than the keypath tree looking like:

m/44' /0' /0' /0 /0

It could look something like:

m/44' /0' /0' /519486735 /755295795     (without the user being aware).
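The reason this amounts to a ransom is the size of the search space.  A rough back-of-the-envelope sketch (the gap limit of 20 is a common wallet default, not a universal constant):

```python
# Why an unknown keypath is effectively unrecoverable: each BIP-32
# derivation level has 2**31 non-hardened indices.
INDICES_PER_LEVEL = 2 ** 31

# A wallet recovering funds typically scans only standard paths,
# e.g. m/44'/0'/0'/{0,1}/{0..gap_limit}:
gap_limit = 20
standard_scan = 2 * (gap_limit + 1)   # external + change chains

# The attacker's path m/44'/0'/0'/519486735/755295795 hides the coins
# among all combinations of the last two (unhardened) levels:
attacker_space = INDICES_PER_LEVEL ** 2

print(standard_scan)    # 42 addresses checked in a normal recovery
print(attacker_space)   # 4611686018427387904 candidate keypaths
```

A normal recovery checks a few dozen addresses; finding the attacker’s chosen path by brute force would require searching on the order of 2^62 derivations.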

Later, in October of 2019, it was disclosed that Trezor had a change output address vulnerability similar to Ledger’s, even though Ledger’s was disclosed nearly an entire year prior.  To be fair, this one was slightly more complicated (and only applied to their newer model, Trezor T).  Trezor at least had some checks on the change output, but if the attacker added a malicious input to the transaction, it enabled the attacker to bypass those incomplete checks, set the change output as a 1-of-2 multisig address controlled by both the attacker and the user, and then quickly transfer the user’s funds solely to the attacker’s control. It was a critical vulnerability – it allowed the attacker to steal all funds in the user’s account except for the transaction’s send amount [31].

And then, just a few months later, it was found that once again there was another change output vulnerability – this time with Coldcard.  Coldcard was not properly validating the transaction, and by using maliciously crafted script opcodes, an attacker could trick a user into sending change to an attacker-owned address [32].

In the beginning of 2020, it was discovered that Trezor never fully patched their 1-of-2 multisig change output vulnerability disclosed in October of 2019, and it had to be fixed again [33].

Ledger supports multiple different currencies, and claims to do so securely by ‘isolating’ individual apps for each currency.  However, researcher Monokh discovered in May 2020 that an attacker can prompt the Ledger device for a Litecoin/Bitcoin testnet/Bitcoin Cash/etc (list low value currency here) transaction, have the user confirm the transaction for that other currency, and actually spend the high value Bitcoin transaction, all without the user being aware [34].  Numerous different scenarios could end up stealing all your Bitcoin – trading on an exchange, trying out a new service, working as a developer with testnet coins, etc.  Even more frustrating, Ledger downplayed the issue, the researcher reached out for updates numerous times, and the issue remained unfixed for months (the researcher had previously contacted Ledger about the privacy related aspect of the vulnerability 18 months prior – if Ledger had examined it, perhaps they would have discovered and patched this more serious theft related vulnerability).  Only after the researcher finally publicly disclosed this, did Ledger do anything about it.

Near the end of 2020, developer and security researcher Benma disclosed that Coldcard was also vulnerable to the above isolation vulnerability (despite Coldcard originally stating otherwise).  Coldcard doesn’t support any altcoins, but they were vulnerable to sending real bitcoin instead of testnet bitcoin, despite showing and confirming to the user otherwise.  This vulnerability has been public for 4 months [35], a fix still hasn’t been released [36], and believe it or not, Coldcard just doesn’t believe social engineering is an attack vector that needs to be protected against [37].

This will never end.  There will be even more vulnerabilities discovered in 2021. Your funds will never be safe in a hardware wallet.

If you have made it this far, let us continue.


2.2.2  Physical Vulnerabilities

Here, I mean ‘physical’ as ‘hardware’, not as ‘meatspace’.  This is about how private keys can be extracted from HWWs once physical access is attained.  Physical access to a device is ‘game-over’, as detailed in Microsoft’s ‘10 Immutable Laws of Security’ - Law #3: “If a bad guy has unrestricted physical access to your computer, it's not your computer anymore” [38].  This is a security issue with all hardware devices – computers, laptops, and hardware wallets.  Honest security means recognizing this issue, being honest with consumers, and making the time and money required to break into the hardware and boot process as significant as possible.  Security theater means lying to customers and advertising your device as somehow being above Law #3, when in fact it is incredibly vulnerable and simple to break into once physical access is achieved.  Unfortunately, all HWWs have chosen to travel the path of security theater, instead of honest security.

If the device is easy to attack physically, users are vulnerable to losing their private keys through supply chain attacks, evil maid attacks, or plain physical theft.  A supply chain attack is one in which a malicious actor delivers a malicious device in place of the legitimate device.  There currently is no way to verify whether or not you have a genuine hardware wallet.  An evil maid attack is one that requires the user to activate the device and input some information before/after the attacker gains physical access.  Physical theft requires stealing the device after the user has generated a private key on it.  Common techniques include side channel attacks and fault attacks.  While physical attacks are typically more complex and difficult to carry out than the remote attacks discussed previously, they are serious and should be considered in your threat modeling.  Evil maid attacks and physical thefts require more targeting than supply chain attacks, which can target users indiscriminately.

In March 2015, Jochen Hoenicke published the first successful physical attack against HWWs.  Previously, these discussions had been mostly theoretical.  Specifically, this attack required theft of the device after the user had generated their private keys.  Using a cheap oscilloscope (~$70), a side channel attack was performed by analyzing power consumption while generating public keys (therefore, the PIN was not needed).  The private key was successfully recovered [39].

Two years later, another theft attack on Trezor devices was disclosed.  The seed is saved in flash memory, but copied to RAM when in use.  Flash memory survives restarts, but RAM does not.  Trezor allows you to install your own custom firmware, but doing so requires a restart (to place the device into bootloader mode) and will erase the flash memory as well.  It turns out an attacker can install malicious firmware and, instead of restarting the device, perform a soft restart by shorting the device.  This keeps its RAM contents readable, and therefore its seed as well [40].

In the beginning of 2018, Trezor had a supply chain vulnerability. The microcontroller used in the Trezor (STM32F205) had undocumented write-protection flaws, essentially rendering them useless. This allowed an attacker to replace the bootloader through a malicious firmware update.  If the device was intercepted en route to you, or you purchased from a malicious reseller, this would allow the attacker to infect your device.  Trezor fixed this by having the newest firmware verify the authenticity of the bootloader, (the bootloader already checked the signature of the firmware).  They also implemented write protection through another unit of the chip [41].
Note: Tamper-evident seals are not enough to verify the authenticity of a device, unlike what is described in the linked post.  Also, even though the bootloader may be verified, there is no way to verify whether or not another supply chain attack has taken place – such as by inserting malicious chips. See Section 2.3 – architectural vulnerabilities.

In February of 2018, it was discovered that the firmware of the BitBox could be downgraded to older versions.  This opened devices up to the previously fixed remote BIP32 implementation hack [22].  To make it worse, the firmware version check happens in the bootloader, and the bootloader can’t be updated.  That means for all existing devices, there’s no fix.  This can only be fixed in newer shipped versions [42].

One month later, Saleem Rashid published one of the most infamous supply chain attacks against Ledger.  Ledger advertises that their devices are tamper proof, that there is no need for anti-tampering stickers, and that you can ‘safely’ purchase their devices from second-hand sellers, eBay, etc, because they have a secure element that will verify that the device is genuine.  So how does this actually work?

First off, Ledger uses a dual architecture - a general purpose microcontroller (MCU) and a secure element (SE).  The MCU acts as a proxy between the computer and the SE.  So when the SE verifies the MCU’s firmware, it asks the non-secure MCU to send over its contents.  The problem is, if the MCU is compromised, nothing prevents it from sending over legitimate code to the SE while running malicious code.  There are ways to make this difficult, but not to prevent it.  In addition, the use of a Secure Element requires closed source firmware, so you can’t even audit its code.
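A toy model makes the flaw concrete (all names here are hypothetical; this is a sketch of the architecture, not Ledger’s actual protocol).  If the SE can only ask the MCU to report its own firmware, a compromised MCU simply keeps a copy of the genuine image to present on request:

```python
# Sketch of the MCU/SE attestation flaw: the secure element cannot read
# MCU flash directly, it can only ask the MCU what it contains.
import hashlib

GENUINE_FW = b"genuine mcu firmware v1"
GENUINE_HASH = hashlib.sha256(GENUINE_FW).hexdigest()

class CompromisedMCU:
    """Runs malicious code but keeps a copy of the genuine image."""
    def __init__(self):
        self.running = b"malicious firmware"
        self.stored_copy = GENUINE_FW   # attacker ships the real image too

    def report_firmware(self) -> bytes:
        # Whatever the SE asks, answer with the clean copy.
        return self.stored_copy

def secure_element_verify(mcu) -> bool:
    # The SE hashes what the MCU *claims* to be running.
    return hashlib.sha256(mcu.report_firmware()).hexdigest() == GENUINE_HASH

mcu = CompromisedMCU()
print(secure_element_verify(mcu))   # True: device attests as "genuine"
print(mcu.running)                  # b'malicious firmware'
```

The check passes, the “genuine device” badge appears, and the malicious code keeps running.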

While this may sound very theoretical, Saleem proved it.  He was able to install malicious firmware on the MCU and still have the SE authenticate the device, and ultimately demonstrated a supply chain attack and an evil maid attack.  Despite what Ledger stated, neither required malware on the target’s computer, or for the target to confirm a special transaction.  Ledger unfortunately did not surprise us this time – despite many methods of communication and months of waiting, Ledger continued to downplay the seriousness of this critical vulnerability, and the researcher had to release it publicly himself, without a bounty awarded [43].

A few months later, Saleem set his eyes back on the BitBox.  Like Ledger, BitBox uses a dual architecture approach.  However, BitBox uses a general MCU and a tamper resistant storage chip (ATAES132A), which is only used for storing keys and passwords, not for running Bitcoin specific code.  However, while that is what BitBox said they did, what their code actually did was entirely different.  The ATAES132A surprisingly had all of its security protections disabled.  While all secrets were encrypted, the encryption key was placed in the non-secure MCU, and the encrypted contents were written to the disabled secure chip.  So – how to access the encryption key?  Well, it turns out factory resets wiped the secrets, but not the encryption key.  This led to reuse of encryption keys across users.  By attaching invasive probes, an attacker can back up the victim’s encrypted private keys, reset the device with their own password, write the victim’s encrypted private keys back, and successfully operate the device.  BitBox’s first patch for this was vulnerable to a MITM attack (it asked the secure chip for random bytes, not the MCU).  Saleem never bothered to review their second fix, as it was just a minor modification that looked poorly implemented, he had been poorly paid for his findings, and there were “elementary flaws in the high-level design of the device” [22].
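The core of that reset flaw can be sketched in a few lines (hypothetical names, toy XOR “encryption” standing in for the real cipher): because the encryption key survives a factory reset, any ciphertext backed up before the reset can be written back and used afterwards.

```python
# Toy model of the BitBox reset flaw: the encryption key lives on the
# plain MCU and is NOT wiped by a factory reset, so ciphertexts backed
# up before the reset remain usable after it.
import hashlib

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher; encryption and decryption are the same XOR.
    stream = (hashlib.sha256(key).digest() * (len(data) // 32 + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

class Device:
    def __init__(self):
        self.mcu_enc_key = b"per-device-key"   # the bug: never wiped
        self.secure_chip = {}                  # protections disabled

    def store_seed(self, seed: bytes):
        self.secure_chip["seed"] = xor_crypt(self.mcu_enc_key, seed)

    def factory_reset(self):
        self.secure_chip.clear()               # secrets gone, key survives

dev = Device()
dev.store_seed(b"victim seed words")
stolen_ct = dev.secure_chip["seed"]            # attacker probes the bus

dev.factory_reset()                            # attacker sets their own password
dev.secure_chip["seed"] = stolen_ct            # writes ciphertext back
recovered = xor_crypt(dev.mcu_enc_key, dev.secure_chip["seed"])
print(recovered)   # b'victim seed words'
```

Had the key been wiped (or derived per-user), the backed-up ciphertext would have been useless after the reset.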

In August of 2018, Trezor had to push an additional fix for its first supply chain attack six months prior (the write protection flaws).  It turns out the previous update could be circumvented via clever use of the system configuration (SYSCFG) registers [44].

In December of 2018, Dmitry Nedospasov, Josh Datko, and Thomas Roth presented at 35C3 a number of vulnerabilities in hardware wallets at a presentation known as Wallet.Fail.  One of these was a physical theft vulnerability in Trezor wallets.  They disclosed that it was possible to downgrade the security of the MCU by downgrading the read protection from RDP2 (no access) to RDP1 (ability to read RAM) through a glitching attack.  But, the seed is stored in the flash, not RAM.  However, during an upgrade (which does not require the PIN), the device retains the seed by momentarily copying it to RAM.  So, all an attacker has to do is enter the bootloader, start a firmware upgrade, have the seed copied into RAM, stop the upgrade process, and glitch the Trezor to downgrade the MCU from RDP2→RDP1, which makes the RAM readable and the seed accessible [45].

The Wallet.Fail presenters also disclosed a supply chain vulnerability in Ledger’s devices.  Instead of verifying the firmware on each boot, the device wrote a constant to a specific memory address after verification, and on subsequent boots merely checked for the constant.  There was a protection against simply writing in the constant itself; however, the chip had an interesting memory map that allowed mapping the flash to another virtual memory area that would map back to the same physical memory.  This allowed the attacker to successfully install their own malicious firmware and write in the constant to pass the verification.  What was Ledger’s response?  “Don’t worry, your crypto assets are still secure on your Ledger device”, and they even went as far as to state that installing custom firmware “is actually a feature” (although later stating that “This bug has been solved in the next firmware version”) [46].  Ledger makes the further argument that the Wallet.Fail team were only able to compromise the MCU, not the Secure Element, which is what holds the private keys and authenticates the device.  However, the MCU is responsible for displaying data to the user via the screen, and receiving data via the confirmation buttons.  The Wallet.Fail team also made the same argument that Saleem made (and proved) earlier that year – if the MCU is compromised and acts as the proxy between the computer and the SE, it should be very easy to lie to the SE, and the entire security model therefore fails.  “We did not bother to fully reverse engineer [this] because we didn’t need to” [47].  They released a proof of concept that runs once then replaces itself with the genuine firmware, so that future checks come back clean.
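The design flaw is easy to model (a sketch, not Ledger’s actual code; the constant was reportedly 0xF00DBABE): once verification writes a flag, every later boot trusts the flag instead of re-verifying the firmware itself.

```python
# Sketch of the "verify once, then only check a flag" boot design.
import hashlib

TRUSTED_HASH = hashlib.sha256(b"genuine firmware").hexdigest()
MAGIC = 0xF00DBABE   # constant written after one successful verification

flash = {"firmware": b"genuine firmware", "verified_flag": None}

def first_boot():
    # The only time the firmware is actually hashed and checked.
    if hashlib.sha256(flash["firmware"]).hexdigest() == TRUSTED_HASH:
        flash["verified_flag"] = MAGIC

def later_boot() -> bool:
    # Flaw: only the flag is checked, never the firmware itself.
    return flash["verified_flag"] == MAGIC

first_boot()
# Attacker replaces the firmware; the flag survives (or is rewritten via
# the memory-mapping trick described above).
flash["firmware"] = b"malicious firmware"
print(later_boot())   # True: malicious firmware passes the boot check
```

Verifying the firmware hash on every boot, rather than caching the result, closes this particular hole.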

In the beginning of 2019, an extension of the Wallet.Fail Trezor attack was revealed by Colin O’Flynn.  The attack used electromagnetic fault injection to leak secret information via USB descriptors.  Rather than using the JTAG connection and dumping from RAM, Colin used the USB connection (thereby also avoiding evidence of tampering) and dumped directly from flash (where the seed is stored) [48].

Also early in 2019, it was disclosed that Coldcard was vulnerable to a physical theft attack.  This was achieved by performing a MITM attack between the secure chip and the general purpose MCU.  This allowed the attacker to bypass the MCU failed attempt counter and brute force the PIN, as the PIN attempt counter located on the secure chip was not being authenticated [49].

A couple months later, Trezor was found to be vulnerable to a PIN attack as well.  The issue was that while each PIN digit was compared in constant time, the digits were checked in sequence (processed one by one).  Through a side channel attack, an attacker was able to deduce the valid PIN by reading the power consumption as the device compared the presented PIN with the valid PIN [50].
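The general lesson is that secret comparisons must not take time (or draw power) proportional to how much of the secret matches.  A simplified illustration of the bug class – not Trezor’s exact code – contrasted with Python’s constant-time comparison:

```python
# Vulnerable pattern vs. constant-time comparison for secret values.
import hmac

def leaky_pin_check(entered: str, actual: str) -> bool:
    # Vulnerable: bails out at the first mismatching digit, so timing
    # or power traces reveal how many leading digits were correct.
    for a, b in zip(entered, actual):
        if a != b:
            return False
    return len(entered) == len(actual)

def constant_time_pin_check(entered: str, actual: str) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches.
    return hmac.compare_digest(entered.encode(), actual.encode())

print(leaky_pin_check("1234", "1239"))          # False
print(constant_time_pin_check("1234", "1234"))  # True
```

Even the constant-time version only fixes the comparison itself; a real device must also avoid data-dependent power draw elsewhere in the check.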

In April of 2019, Christian Reitter discovered a side channel attack that all major HWWs were vulnerable to (Trezor, Ledger, Coldcard, and BitBox) [51].  This attack involved the OLED displays on the devices.  They display information one pixel row at a time and require a lot of energy to do so, which opens the devices up to a side channel attack.  Christian found there was a direct correlation between the number of illuminated pixels on each row and the total power consumption of the device.  Through statistical analysis, the seed words and/or PIN combination could be discovered.  Since the attack has to occur while the device is displaying the sensitive secrets, it would likely occur via a malicious USB cable – through a supply chain attack or an evil maid attack.
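A toy simulation shows the idea (the pixel counts and power model here are entirely made up; the real attack required careful measurement and statistics): if per-row power grows with the number of lit pixels, an attacker who precomputed the expected draw for every candidate word can match a measured trace to the word displayed.

```python
# Simulated pixel-count/power correlation (all numbers hypothetical).
import random

random.seed(7)

# Made-up lit-pixel counts for a few candidate seed words:
GLYPH_PIXELS = {"abandon": 23, "zoo": 11, "legal": 17}

def measure_row_power(word: str) -> float:
    # Simulated trace: base draw + per-pixel cost + small noise.
    return 1.0 + 0.05 * GLYPH_PIXELS[word] + random.uniform(-0.01, 0.01)

def guess_word(trace: float) -> str:
    # Attacker matches the trace against precomputed expected power.
    return min(GLYPH_PIXELS,
               key=lambda w: abs(1.0 + 0.05 * GLYPH_PIXELS[w] - trace))

secret = "legal"
trace = measure_row_power(secret)
print(guess_word(trace))   # 'legal'
```

In practice the candidate set is the 2048-word BIP39 list and the signal is noisier, which is why statistical analysis over multiple rows/traces was needed.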

In July, the Ledger Donjon team disclosed a devastating physical attack against all Trezor devices and clones.  The vulnerability allows an attacker with physical access to the device to retrieve the master seed, using only cheap tools that can be bought from any electronics store and basic electronics techniques.  The total cost of the attack is only $100 and 5 minutes of time to execute.  This seems to be a fundamental bug in the STM32F205 chip, and is therefore unable to be patched [52].  Using a long passphrase could protect you from this, since it encrypts the seed.  However, this introduces multiple other issues through its use of BIP39 (which Bitcoin Core doesn’t use, for these reasons).  BIP39 uses PBKDF2, a weak key derivation function, with an iteration count that is set too low.  There are also multiple implementations available, with versioning issues.  In addition, you now have to store multiple pieces of sensitive data (the seed, PIN, and passphrase).
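To make the PBKDF2 point concrete: BIP39 derives the 64-byte seed with PBKDF2-HMAC-SHA512 using only 2048 iterations and the salt "mnemonic" plus the passphrase.  A minimal sketch with the standard library (real implementations also NFKD-normalize the strings; error checking omitted):

```python
# BIP39 seed derivation per the spec: PBKDF2-HMAC-SHA512, 2048 rounds.
# 2048 iterations is orders of magnitude below modern KDF guidance,
# which is why a weak passphrase can be brute-forced cheaply.
import hashlib

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,
        dklen=64,
    )

# The first BIP39 reference test-vector mnemonic:
mnemonic = " ".join(["abandon"] * 11 + ["about"])
seed = bip39_seed(mnemonic, "TREZOR")
print(len(seed))   # 64
```

An attacker who has extracted the (unencrypted) seed words only needs to grind passphrase guesses through these 2048 rounds, which commodity GPUs do at very high rates.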

Disclosed this year was a supply chain attack on Coldcard devices.  Coldcard allows users to load their own firmware.  This is by design, and documented on their website: "We have so much internal protection for the master secret, that we feel it's safe to allow potentially hostile firmware..."  Coldcard protects against this by having firmware not signed by Coldcard light up a red light and a "Danger" warning screen.  However, this only happens the first time the device is run with that firmware.  This would allow an attacker to flash malicious firmware, load the device, power off, and then hand it to a user as a fresh device - since the warnings would only be presented on the first run of the firmware.  Coldcard did think of this, and their defense was the prevention of any device reset.  The user in this case would receive a device that suspiciously already had a wallet set up, since the firmware would already have been loaded.  However, this protection was not done properly.  It was discovered that if malicious firmware is loaded on the device and then sets the PIN to zero, the device resets back to a blank device.  On the next start-up, the Coldcard will then walk the user through the normal setup procedure [53].  This exposes all Coldcard users to a serious supply chain attack.

Coldcard does not acknowledge this vulnerability, instead claiming that it is part of their threat model.  This is amusing, since they did have to remove the documentation stating that "there is no way to clear main PIN" [54].  In their own blog post on the vulnerability, Coldcard stated that "If the plan is to make a trojan-horse device, an attacker would have to successfully open and then close the bag without damaging it while hoping the user doesn’t upgrade their firmware upon receipt" [55].  It is true that the supply chain security of Coldcard devices has now been reduced to only that of the plastic bag (which is essentially zero).  However, even when updating with genuine firmware, the currently installed malicious firmware could lie to you about the genuine firmware update.  That is not a reasonable ‘fix’.  It would require a patched bootloader for future devices, which has not occurred.
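The interaction of the two behaviors – a one-time warning plus a PIN-zero reset – is what makes the trojan-horse device possible.  A toy state machine (hypothetical names, not Coldcard’s actual code) shows the sequence:

```python
# Sketch of the Coldcard reset flaw: the "Danger" warning fires only on
# the first run of unsigned firmware, and setting the PIN to zero wipes
# the device back to a factory-fresh look while the firmware stays.
class ToyColdcard:
    def __init__(self):
        self.firmware = "factory"
        self.firmware_ran_once = False
        self.pin = None

    def flash_unsigned(self, fw: str):
        self.firmware = fw
        self.firmware_ran_once = False

    def boot(self) -> str:
        if self.firmware != "factory" and not self.firmware_ran_once:
            self.firmware_ran_once = True
            return "DANGER: unsigned firmware"   # shown only this once
        return "normal boot"

    def set_pin(self, pin: int):
        self.pin = None if pin == 0 else pin     # the bug: full reset,
                                                 # but firmware remains

attacker_device = ToyColdcard()
attacker_device.flash_unsigned("evil firmware")
attacker_device.boot()            # attacker absorbs the one-time warning
attacker_device.set_pin(0)        # device now presents as brand new
print(attacker_device.boot())     # 'normal boot' -- victim sees no warning
print(attacker_device.firmware)   # 'evil firmware'
```

The victim walks through a normal first-time setup on firmware the attacker controls.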


2.3  Inherent Architectural Issues

There are two main takeaways from all of these vulnerabilities discussed above – peer review and node security.  You cannot underestimate the importance of using the reference implementation (Bitcoin Core), and you ultimately need to trust the computer that your full node is on.  Hardware wallets will never, by design, fulfill these requirements.  It is inherent in their architecture.

Most, if not all, of the remote vulnerabilities discussed were strictly due to poor code review.  These simple yet devastating types of vulnerabilities (remote private key extraction, attacker controlled addresses, ransom attacks, etc) have never occurred in Bitcoin Core, and for as long as it remains the reference implementation, likely never will.  Bitcoin Core has a decentralized development process that is very conservative, prioritizes security, is supported by hundreds of developers, and has wide community and business review all over the world, with no single point of failure.  On the other hand, these hardware wallet projects have a handful of developers at most, with a single project lead who can merge commits with essentially no community review.  These projects can push poorly written code, or even malicious code, without it being noticed for months or even years at a time.  With Bitcoin Core, there is no individual developer a government or attacker can target.

What you have to understand about HWWs is that even when used with Core through something like HWI (Hardware Wallet Interface [56]), hardware wallets remove private key generation and transaction signing from all the peer review that Core gets.  And as going through all the examples previously shows, this is very dangerous.

Even the physical attacks discussed are made more difficult when used with Core, as Core has written libsecp256k1 [57], an optimized cryptographic library that is constant-time and constant-memory (making side channel attacks much more expensive and complex), and that is so well audited that the Core developers have actually discovered bugs in other libraries used by thousands of projects, like OpenSSL [58] and the GCC compiler [59].  Also, because Core is a software application that can run on any generic hardware, you don’t need to give away any personally identifiable information to set up a secure wallet.  For example, you can walk into a store, buy a laptop with cash, and download Core over Tor.  Unlike with hardware wallets, which are Bitcoin-specific and purchased online, the risk of supply chain attacks with generic hardware is much less significant.

The single selling point of hardware wallets – that they enable you to securely sign transactions from a malicious computer – is a lie.  Anyone who knows anything about Bitcoin or cybersecurity in general can tell you this.  This is because Bitcoin’s security model fails without a full node, and you cannot trust a software application (the node) if you cannot trust the computing levels beneath it – that is, the operating system (OS), kernel, bootloader, BIOS, CPU, and hardware.  Just like Bitcoin, these need to be open source, well audited, and built with a security first mindset.  If one level is vulnerable or malicious, it corrupts all levels above it.

The vulnerability that best demonstrates why a secure full node/computer is needed is the segwit vulnerability disclosed by Saleem in March of 2020 [60].  Think of this type of attack as a MITM attack, with the software wallet/computer acting as a malicious proxy between the hardware wallet and the bitcoin network.  In this hypothetical, a user has two UTXOs of 15btc and 20btc, and wants to create a transaction spending 20btc.  Since the (malicious) software wallet/computer creates the transaction and broadcasts it to the network, while the hardware wallet merely confirms/signs it, this attack is easy to carry out.  The malware on the computer creates a transaction consuming the 15btc UTXO completely, plus a fake 5btc supposedly from the 20btc UTXO.  This looks like a 20btc spend from the user’s point of view.  After the HWW signs the transaction, the computer returns an error like “Whoops, broadcast error, please try again” (this does sometimes happen for legitimate reasons, so it wouldn’t be too suspicious).  When asking the user to sign again, the malware this time has the user sign the 20btc UTXO.  From the user’s POV, it looks like the same transaction (since it is the same amount).  The malware now combines the signed 15btc input from the first transaction and the signed 20btc input from the second, and sends the entire 35btc from the user’s wallet.  The 20btc goes to the user’s intended recipient, and the 15btc can be ransomed from the user, or profit-shared with a miner.  The fix for this was very controversial, as some wallets did not have the data necessary to stop this one specific vulnerability, which locked up users’ funds and led to incompatibilities across wallets.
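The flow above can be sketched in a few lines of Python.  This is an illustrative model only (no real Bitcoin transactions or signatures); the names and amounts simply mirror the hypothetical: a malicious host collects signatures for what looks like the same 20btc spend across two "attempts", then combines both signed inputs.

```python
# Illustrative sketch (not real Bitcoin code) of the attack described above:
# a malicious host gets BOTH UTXOs signed across two apparent attempts at a
# single 20 BTC spend, then combines them into one 35 BTC transaction.

from dataclasses import dataclass

@dataclass(frozen=True)
class Utxo:
    txid: str
    amount: float  # BTC

def user_confirms(displayed_amount: float) -> bool:
    # The hardware wallet can only verify what the host claims about input
    # amounts; the user just sees "spend 20 BTC" both times.
    return displayed_amount == 20.0

utxo_a = Utxo("aaaa", 15.0)
utxo_b = Utxo("bbbb", 20.0)

# Attempt 1: host lies that utxo_a (15) plus a fake 5 makes a 20 BTC spend.
assert user_confirms(utxo_a.amount + 5.0)
signed_inputs = [utxo_a]          # device signed the 15 BTC input
# Host reports "broadcast error, please try again"...

# Attempt 2: host presents the real 20 BTC input; looks identical to the user.
assert user_confirms(utxo_b.amount)
signed_inputs.append(utxo_b)      # device signed the 20 BTC input

total_spent = sum(u.amount for u in signed_inputs)
print(total_spent)  # 35.0 -- the user only ever authorized what looked like 20
```

The point of the sketch is that nothing the device displays is inconsistent with an honest 20btc spend; only knowledge of the real input amounts (which requires a trusted node) would expose the lie.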

However, one needs to think broader.  This is actually an issue with using a full node on an insecure machine, and there ultimately is no fix for it (except not using a hardware wallet, and using a full node in a sane manner).  Here is another attack that demonstrates this: a simple, what I call ‘twice-spend’ attack (since double-spends mean something else entirely in Bitcoin) [61].  If an attacker controlling your node sends you a transaction to sign, and you sign and broadcast it, they can simply tell you it has failed, and even show that to you in your UI.  They can then send you another (different) transaction for the same amount for you to sign, essentially stealing bitcoin from you.  This is a much-simplified version of Saleem’s attack, yet with absolutely no fix for people who can’t trust their computer (which, remember, is the only selling point of HWWs).  If you can’t trust your computer, you ultimately cannot know when a broadcast fails, when a transaction sends, or whether bitcoin you have received are real.

Ultimately, supporters and makers of hardware wallets do admit this point when pressed on the matter.  NVK from Coldcard stated, "We can't prevent you from signing something that you want to sign...if you are of that level of concern, you can have a separate laptop that isn't connected...you could have a segregated laptop, which you should by the way." [62].

You should be suspicious of companies that continue to lie to their consumers – refusing to acknowledge vulnerabilities, reward researchers, or be honest with the public.  Besides the numerous hacks that have occurred, another area of false security is HWW ‘airgaps’.  Trezor and Ledger use a USB cable, the BitBox plugs directly into your computer using its USB connector, and the Coldcard uses an SD card.  These are not airgaps, but rather direct lines for transferring unlimited, unverifiable data.  This is incredibly dangerous, especially when the HWW is advertised as safe to connect to a malicious machine.  Both USB and SD cards involve proprietary firmware, many attack vectors, and complicated protocols [63], [64].  The device has to parse the provided information by loading drivers, parsing partition tables, and mounting/parsing filesystems – all of which is a ton of untrusted input processing.

There are two requirements for an airgap – the information transferred must be verifiable, and the method of transfer must be as bandwidth-constrained as possible.  You want as little data as possible to be able to cross between the computers, so that an attacker has an incredibly difficult time getting effective malware across the gap.  With bitcoin and QR codes, we only need to, and only are able to, transfer a few hundred bytes at a time.  This means that an attacker can only squeeze a couple of bytes of malicious code in on top of the data being transferred.  In addition, QR codes are open source, and their contents are verifiable by the end user (they can be scanned on another device first, before scanning on the airgapped machine, while confirming visually that they have not changed).  With USB and SD cards, gigabytes of data can be transferred.  To keep this in perspective, malicious code is usually measured in kilobytes (1 GB = 1,000,000 KB = 1,000,000,000 bytes).  So for hardware wallets, attackers have plenty of space, and there is absolutely no way for an end user to verify the transfer of data, as there are all sorts of ways for malicious data to hide.
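The bandwidth point can be made concrete with a short sketch.  The capacity figure below is the real binary limit of a version-40 QR code; the chunking and checksum scheme is hypothetical, illustrating how a QR transfer forces data into small, individually inspectable frames:

```python
# Hypothetical sketch of the constrained-bandwidth argument above: QR frames
# are tiny and inspectable, unlike an opaque multi-gigabyte USB/SD transfer.
# QR_MAX_BYTES is the version-40 binary capacity; the chunking scheme and the
# 4-byte integrity tag are illustrative, not a real standard.

import hashlib

QR_MAX_BYTES = 2953  # max binary payload of a version-40 QR code

def chunk_for_qr(payload: bytes, chunk_size: int = 500) -> list[bytes]:
    """Split a payload into small frames, each carrying a short digest so a
    second verification device can check that nothing changed in transit."""
    frames = []
    for i in range(0, len(payload), chunk_size):
        body = payload[i:i + chunk_size]
        digest = hashlib.sha256(body).digest()[:4]  # 4-byte integrity tag
        assert len(digest) + len(body) <= QR_MAX_BYTES
        frames.append(digest + body)
    return frames

psbt = b"psbt\xff" + b"\x00" * 1200   # a ~1.2 kB dummy payload
frames = chunk_for_qr(psbt)
print(len(frames))  # 3 frames, each small enough to inspect
```

With frames this small, an attacker has only a handful of spare bytes per transfer; a USB stick or SD card gives them gigabytes of hiding room and no way for the user to audit any of it.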

Note:  For Yeti, we have had to change from QR codes to CD-R (read-only CDs).  These are just a dumb/blank medium with no filesystem or firmware.  Because Bitcoin Core does not support reading QR codes yet [65], using them would have required us to write much of that code ourselves, which would introduce an attack vector for our users.  Our analysis was that CDs, while larger in size than QR codes and not verifiable, are still far superior to USB/SD cards due to the lack of complex firmware and filesystems.  It remains our goal to get a QR code standard accepted and merged into Bitcoin Core.  Research into a possible standard is ongoing [66].

Let's pretend for a moment that HWWs actually were properly airgapped, and let’s disregard the numerous vulnerabilities discussed.  Even then, there is still a significant hack that could occur and leak private keys: a chosen or biased nonce attack.  Without getting too deep into the math/cryptography: when signing bitcoin transactions, besides your random private key, a pseudorandom nonce is required, which is generated from the message being signed and the private key.  It is pseudorandom in the sense that while deterministic, the output looks random from any observer’s point of view.  However, if the nonce is ever reused, if its implementation is biased in any way, if an attacker is able to gain information about it, or if an attacker is able to choose it, the signatures broadcast on the blockchain leak information about the user's private keys, which can then be reconstructed by the attacker.  This would lead to complete fund loss [67].
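To see how little an attacker needs, here is the classic nonce-reuse recovery worked through with toy values.  The group order is secp256k1's real one, but the "signatures" use a simplified ECDSA s-equation with made-up d, k, and r (in real ECDSA, r is derived from k, so it repeats whenever k repeats):

```python
# Sketch of why nonce reuse is fatal: given two ECDSA signatures made with
# the SAME nonce k, anyone can solve for the private key with modular
# arithmetic. Toy values, not a real curve implementation.

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def sign(m: int, d: int, k: int, r: int) -> int:
    # Simplified ECDSA s-value: s = k^-1 * (m + r*d) mod n
    return pow(k, -1, n) * (m + r * d) % n

d = 0xC0FFEE        # "private key"
k = 0x1234567       # the reused nonce
r = 0xABCDEF        # r depends only on k, so it repeats when k repeats

m1, m2 = 1111, 2222
s1, s2 = sign(m1, d, k, r), sign(m2, d, k, r)

# An observer sees (r, s1, m1) and (r, s2, m2) on-chain and recovers everything:
# s1 - s2 = k^-1 * (m1 - m2), so k = (m1 - m2) / (s1 - s2) mod n
k_rec = (m1 - m2) * pow(s1 - s2, -1, n) % n
# s1*k = m1 + r*d, so d = (s1*k - m1) / r mod n
d_rec = (s1 * k_rec - m1) * pow(r, -1, n) % n

assert k_rec == k and d_rec == d
print(hex(d_rec))  # the "private key" falls out of public data
```

A *biased* (rather than reused) nonce takes more signatures and lattice techniques rather than two lines of algebra, but the end state is the same: the private key is recoverable from public signatures alone.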

The problem with HWWs is that there is no way to know that the nonce is not biased.  You have to trust the developers and vendors, and hope there hasn't been a supply chain attack.  There have been some proposed solutions, none yet implemented, that could fix this (such as sign-to-contract commitments) [68].

Let’s continue to make assumptions, for the benefit of hardware wallet vendors.  All of the vulnerabilities, remote and physical, have been fixed.  No more will occur.  You are able to purchase and receive the device (somehow) without giving away personal information.  Somehow, they gain about a hundred developers, a decentralized development process, a strong community (that took Bitcoin 10 years to build), and better cryptographic libraries (that can run on restricted MCUs). They switch to QR codes. They instruct users to only use the device with a segregated, Bitcoin-only computer as a full node.  (At this point, I’m not sure what benefits the HWW has for the user).  Even if this all were true, as bitcoin-specific hardware, hardware wallets would still represent serious risks of supply chain attacks, which ultimately are impossible to fix.

As an example of the risks of purchasing application specific hardware, let’s look to Bitmain, the largest producer of Bitcoin mining hardware.  They were able to ship a backdoor in their firmware, which was discovered in April of 2017 [69].  If activated, it would have allowed Bitmain to link specific machines to customers, and remotely shut down up to half of Bitcoin's hash power at the time.

Also in April of 2017, Greg Maxwell reverse engineered Bitmain ASICs and revealed that their chips contained an undocumented and undisclosed circuit design that enabled the covert ASICBOOST attack against Bitcoin's proof of work.  This interfered with the protocol, blocking the Segregated Witness update, and gave Bitmain an estimated $100 million per year profit advantage, as well as a centralizing effect [70], [71].  Both of these went unnoticed for significant periods of time, even though Bitmain produced a massive percentage of the bitcoin hashing power hardware.  You don't think your favorite bitcoin hardware vendors could do the same?

Vendors currently try to fix this with (laughable) attempts like tamper-evident seals.  These, however, don't remove the trust in the vendors themselves, and are removable using everyday items like hair dryers.  In addition, there are already counterfeit seals and devices online, and different versions of seals on vendors' websites, which makes verification confusing (and, in my opinion, impossible) for users [47].  Another attempt at supply chain security is to use secure elements.  However, as discussed extensively in the vulnerabilities section, secure elements can be bypassed by malicious firmware.  Furthermore, SEs come with quite the downside: they are closed source.  Verifying the operations of the device is usually illegal (without signing an NDA and paying a lot of money) and quite difficult.  Even without malicious firmware or secure elements, software and firmware are ultimately dependent on the hardware they run on.  As Saleem warns, “I cannot repeat this enough: if you do not verify the physical hardware, it is game over.” [43].  It is incredibly easy for an attacker or the vendors themselves to add a malicious chip that would compromise the security of the entire device.  As hardware wallets are bitcoin-specific devices, they are the perfect target for attackers.

Besides this, the firmware comes pre-installed on all of the devices.  That means that even if you are one of the few individuals auditing the HWWs’ codebases, you have to trust that the code you have audited is the code that is actually on the device.  To prove this without any trust involved, there must be a process called ‘deterministic builds’ [72].  This allows users to compile the code, hash it, and check that the output matches the one the developers built.  However, Trezor and BitBox are the only vendors that allow for this process, and it is only secure if other community members participate.  Trezor continues to have issues with their build process [73], and BitBox has essentially no participants [74].  In addition, none of the HWW vendors offer bootstrappable builds, which protect against malicious compilers/toolchains (Bitcoin Core does) [75].
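The deterministic-build check itself is conceptually simple; what makes it secure is independent participation.  A minimal sketch (the builder outputs and digests here are made up for illustration):

```python
# Minimal sketch of the deterministic-build check described above: several
# independent builders compile the same tagged source, hash their output
# binaries, and compare. The binaries here are dummy byte strings.

import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend these are firmware binaries produced by three independent builders
# from the same source tag.
builder_outputs = [b"firmware-v1.0-bytes"] * 3

digests = {sha256_hex(b) for b in builder_outputs}
reproducible = len(digests) == 1
print(reproducible)  # True only if every builder got a bit-identical binary
```

The check only catches tampering if many unaffiliated people actually run it and publish their digests, which is exactly why a build process with "essentially no participants" provides no real assurance.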

In conclusion, hardware wallets should be treated as a serious attack on the Bitcoin ecosystem as a whole.  There is no way to know whether or not there is currently a system wide attack on Bitcoin users occurring, just waiting to execute until a certain threshold of users and funds are vulnerable.  This is not paranoia, and I am not the only person talking about this.  Gregory Maxwell is one of the most knowledgeable Bitcoin developers around, and he recently wrote a scathing review of the industry [76]:

I don't think very highly of hardware wallets. They're opaque, largely unauditable. Most are crapped up with sketchy altcoin support that forces them into objectively less secure cryptographic code and makes them harder to review. They're an extremely attractive target for supply chain attacks….The badness of the supply chain vulnerability is so severe that I just cannot recommend a hardware wallet except for casual low/moderate value use where it doesn't really matter what security properties you use...

For the moment the situation isn't quite dire because the thieves are busy with low hanging fruit, and haven't started e.g. flooding ebay/amazon with nearly indistinguishable backdoored clones. Yet. (or maybe they have, and Jan 1st, everyone with one is going to have their funds taken all at the same time. :( )...

There is no perfect solution but "just use a hardware wallet" has an astronomical vulnerability to counterfeit goods/supply-chain interception-- one that is potentially large enough to be a systemic risk to the [whole] ecosystem...

What happens when someone sinks a million dollars into setting up clone manufacturing lines for several popular hardware wallets, and saturates distributors, amazon comingling/etc. with nearly indistinguishable fakes? I believe the only reason that we're not yet seeing that at scale is because for the moment you can compromise hardware wallets by adding a slip of paper to the box "We've selected a random 12 word seed for you, keep it safe"


Peter Todd is another security expert and long time, leading contributor to Bitcoin Core. When Peter was asked how he would steal from Trezor or Ledger users if he were a developer working for them, his response was “I hate to say this, but this would be a very easy thing. All I would be worried about is getting caught. If my goal is purely to go steal the Bitcoins, you’d just go push your software update that backdoors the random number generator for instance, or backdoors the signing algorithms so it creates broken signatures...I think it’s easy to do and probably easy to get away with it too” [77].

3.0  Arguments Made in Defense of Hardware Wallets

In this section, I will discuss why common arguments made in defense of hardware wallets are invalid.  One of the most common comments I hear is that using a multisig setup with devices from varying hardware vendors would solve all of the vulnerabilities discussed, as you would not be dependent on any single firmware, hardware, or developer.  This is just factually incorrect – using multiple insecure devices does not magically create a secure scheme.  If you built your house out of cheap windows, cheap insulation, and a cheap foundation, it is by definition a poorly made home.  While a HWW multisig setup would be more secure than a single-sig HWW, it still does not represent a sane or secure private key management policy.  (Scoring a 30% on an exam may be better than scoring a 20%, but both are F’s.)  There are so many long-term and inherent issues/vulnerabilities with hardware wallets that finding overlapping vulnerabilities or attack vectors across devices is an incredibly easy task.

Even if this weren’t the case, hardware wallets have such poor code review that they can’t even properly implement multisignature wallets.  Benma, a hardware wallet developer with BitBox, recently wrote a post titled “How nearly all personal hardware wallet multisig setups are insecure”, claiming that “if you use hardware wallets in a Bitcoin multisig [setup, you] are likely to be exposed to remote theft or ransom attacks...Multisig using multiple hardware wallets is often used as a security upgrade for personal funds previously held in a single-signature wallet. In reality, it often achieves the opposite when it comes to remote attacks.” [78].

Hardware wallets also implement their own ‘standards’ that have no actual acceptance in the wider community [79], and sometimes don’t even have any documentation [80].  This is incredibly dangerous, as it circumvents the consensus process that standards should go through.  In fact, the situation has gotten so bad that websites have appeared to document which HWWs implement which ‘standards’, in what ways, and with what documentation [81].  Having just your seed phrase is not enough to recover your funds.

Another common defense of hardware wallets I hear is that the (numerous) physical attacks aren’t worth considering, because the device’s passphrase, or ‘25th word’, prevents them.

There are a number of issues with the so-called ‘25th word’.  One is that for Trezor devices, the passphrase is entered on the computer and not confirmed on the device, which would allow an attacker to MITM this data and send a different passphrase to the device, which would then receive coins into a separate wallet that the attacker could ransom [82].  Humans are also notoriously bad at creating strong passwords, especially when they must be entered on a small device with only two buttons rather than typed on an actual keyboard.  This also creates a scenario where multiple sensitive pieces of information now have to be saved (the seed, PIN, and passphrase), greatly increasing complexity.  Memorizing passwords is also absolutely not recommended.  It is for these very reasons that Bitcoin Core has always defaulted to not encrypting the user’s private keys.  David Harding, the co-author of the Bitcoin Optech newsletters, writes, “there's an open question between experts about whether or not the use of wallet encryption in typical user wallets saves more money than it loses...Some experts believe the number of occasions where a wallet file has fallen into an attacker's hands without that attacker getting direct access to the user's computer is small compared to the large number of occasions where a user has forgotten their passphrase, so it's on average safer not to use encryption.”  Wladimir J. van der Laan, the maintainer of Bitcoin Core, supported the same message by writing, “I've received way more sob stories about people losing their wallet passphrase than about stolen funds”, and declared that he is “partial to not encrypting by default”, since encryption only really protects funds when “people have physical access to my PC but won't use it to install a keylogger/backdoor” [83].

The final defense of hardware wallets I often hear is that they have a reduced attack surface.  While it is true that HWWs are not fully fledged computers running the nearly 30 million lines of code that make up the Linux kernel, with a complete GRUB bootloader, coreboot BIOS, and all peripherals (including firmware) – this is a very elementary way of viewing attack vectors.  Threat analysis is much more than counting lines of code, as the in-depth analysis in the previous sections clearly demonstrates.  However, this argument does bring up some good points, such as the fact that the Linux kernel is monolithic and doesn’t have great hardening [84].  Yeti (the solution I propose in the next section) solves this not by reverting to poorly designed hardware wallets, but by recommending a combination of secure operating systems with certain special attributes, and by running them in a multisignature offline setup.


4.0  Solutions

We have talked about all the issues, so what is the solution?  I have demonstrated here that, without a doubt, hardware wallets are not it.  Our requirements are: running the reference implementation (Bitcoin Core) in an offline, multisignature setup, with a true airgap and minimal software dependencies.  The hardware used needs to be wiped clean, running an open source operating system, and single-purpose (used only for storing/sending/receiving your bitcoin).  So how do we achieve this?  Through Yeti [2], a protocol that enables users to secure generic hardware, install Linux and Bitcoin Core, and set up airgapped multisignature wallets – all using the reference implementation.

As discussed above, in scenarios where the greatest security is needed, we advise users to install several different operating systems: Qubes [85] (using the Whonix Workstation and Whonix Gateway AppVMs), GNU Guix [86], and Debian [87].  Qubes minimizes the trusted computing base, helps with the bloated-kernel issue, and isolates peripherals, while Whonix [88] enables the best hardening of any distro and makes DNS leaks impossible (even malware with root privileges can’t discover the true IP address – as long as there’s no exploit against Xen or Tor).  GNU Guix is an OS working to reduce the ‘trusting trust’ attack [89] by minimizing the size of its binary seed bootstrap and working towards a universal, full-source bootstrap [90].  Finally, Debian is a major Linux distribution that can run on the Power ISA, the only open source instruction set architecture with an open source firmware/schematics implementation available on the market.

More documentation is on its way in reference to this protocol (how to secure the coordinating/broadcasting device, set up a secure signing location, in-depth guides and tutorials for downloading, verifying, and installing software, etc.).  A nice way to visualize the setup: in regards to protocol, it is essentially the same system as a multisignature hardware wallet private key management scheme, except the hardware wallets are replaced with offline laptops running Bitcoin Core.  Is this even possible?  Yes.  You can run Bitcoin Core on a computer/laptop that has network access disabled.  You then use PSBTs (BIP174) to pass partially signed transactions across the airgap (QR codes), until passing back to the online device (the full node; Bitcoin Core on a networked machine) to broadcast.  This isn't fully possible in Core yet (especially in the GUI), so Yeti is used as a thin layer on top, with the goal that Core is one day all that is needed.  As Core gets updated, the amount of code in Yeti decreases.
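The PSBT round trip can be modeled abstractly.  This sketch is not a real BIP174 implementation (real PSBTs are a binary format handled by Bitcoin Core); it only models the roles: the online node drafts, each offline signer adds a partial signature across the airgap, and the online node finalizes once the multisig threshold is met.

```python
# Abstract model of the PSBT (BIP174) flow described above. Signer names
# and the threshold are illustrative; real PSBTs carry actual signatures
# and are finalized by Bitcoin Core, not a counter.

from dataclasses import dataclass, field

@dataclass
class Psbt:
    threshold: int                        # e.g. 2 for a 2-of-3 multisig
    partial_sigs: set = field(default_factory=set)

    def add_signature(self, signer: str) -> None:
        # Happens on an offline machine; the PSBT travels over the airgap.
        self.partial_sigs.add(signer)

    def finalize(self) -> bool:
        # Happens back on the online node, just before broadcasting.
        return len(self.partial_sigs) >= self.threshold

psbt = Psbt(threshold=2)
psbt.add_signature("offline-laptop-A")
assert not psbt.finalize()                # one signature is not enough
psbt.add_signature("offline-laptop-B")
print(psbt.finalize())  # True -- ready to broadcast from the online node
```

Note the security property the model captures: the online, network-facing machine never holds a private key; it only ever sees unsigned drafts and completed signatures.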

This may sound like overkill for some scenarios, and I both understand and agree with that.  What I have described here is our most secure level of account, for the most extreme of scenarios with the largest of funds.  Just like with traditional fiat money, you most likely have a wallet you keep small amounts of cash in, a spending account, a savings account, and a safety deposit box.  With Bitcoin, you are going to want the same partitioning, with different levels of security.  You are not going to want all of your funds in the most secure setup, because of usability concerns.  It is also a security risk: physically accessing your signing devices frequently makes it more likely that you give away their locations if you are under surveillance.  So you want to limit that process as well.

At Yeti, we have different Levels of sovereign custody schemes that balance security and usability in order to recreate the types of accounts our users are used to.  In my upcoming posts, I will describe those options in more depth, the differences between them, and the theory behind them.


5.0  Advantages and Disadvantages of the Solution

The strength of this solution is obviously security.  However, there are weaknesses, which should be discussed.  The two major ones are space and time.  Laptops take up much more physical space than hardware wallets do.  Typically in multisignature protocols, it is suggested that the cosigners be spread geographically, which traditionally involves giving each cosigner a hardware wallet.  It is more difficult to hand a cosigner an entire laptop than a small hardware wallet.  This could be solved by giving the cosigners only a backup of the cosigner keys (Yeti uses descriptors and Core’s WIF format, written in the NATO alphabet with a checksum).  However, you would then be required to perform a full re-download/installation of the software and restore the wallet any time you wanted to send bitcoin.  (This is not necessarily recommended, but it is supported – especially if you don’t want the cosigners to know what you are giving them.  It is mitigated by the fact that such a secure setup should only be spent from once or twice a year.)
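The point of a checksummed, NATO-alphabet transcription is that a copying mistake is caught at restore time instead of silently corrupting a key backup.  A hypothetical sketch of the idea (the partial NATO table and the Base58Check-style 4-byte double-SHA256 checksum are illustrative; Yeti's exact format may differ):

```python
# Illustrative sketch of a checksummed transcription backup: encode key
# material in the NATO alphabet and append a short checksum so any
# transcription error is detected on restore. Table and checksum scheme
# are assumptions for illustration, not Yeti's actual format.

import hashlib

NATO = {"a": "Alfa", "b": "Bravo", "c": "Charlie", "d": "Delta",
        "e": "Echo", "f": "Foxtrot", "1": "One", "2": "Two"}

def checksum(data: bytes) -> bytes:
    # Base58Check-style tag: first 4 bytes of double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]

def to_words(secret: str) -> list[str]:
    words = [NATO[ch] for ch in secret]
    words.append(checksum(secret.encode()).hex())
    return words

def restore(words: list[str]) -> str:
    *body, tag = words
    rev = {v: k for k, v in NATO.items()}
    secret = "".join(rev[w] for w in body)
    if checksum(secret.encode()).hex() != tag:
        raise ValueError("checksum mismatch -- transcription error")
    return secret

backup = to_words("abc12")
print(restore(backup))  # round-trips to "abc12"
```

Any single mistranscribed word makes `restore` fail loudly rather than hand back a wrong key.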

The other downside is time.  Hardware wallets are easy: the UI is nice, and setup is fast.  But as shown, they’re not secure.  Yeti takes longer to set up, but time does not have to mean confusion.  We work really hard on user experience, because we believe user experience is not only abstractly important, but important as a component of security: good usability means there is less of a chance that users will make a fatal error.  In addition to us, there is already an entire new Bitcoin Design community [91] that is creating a Bitcoin Design Guide and receiving new grants to help develop Core’s user experience.

However, we recognize that we cannot turn people into security experts overnight, and we do not want to drive users to custodian options as a side effect of exposing the security theater of hardware wallets (as custodians also represent an existential threat to the Bitcoin ecosystem).  This is why we have worked hard on the user experience.  During installation, Yeti will explain every step of the way what is going on, and what you are to do.  As mentioned, I will be releasing more in depth posts like this one. And the Yeti team will be releasing videos explaining the process.  As Grubles from Blockstream said, “That hardware wallets are easier for noobs just means that we need to make the old laptop UX easier.” [92].

A common complaint about this solution is expense – the belief that purchasing laptops as offline signing devices must be expensive.  This is not true.  You can find laptops at Best Buy for $150–200, which is not drastically more than most hardware wallets (the BitBox and Coldcard both cost around $120), for a drastically more secure system.

6.0  Conclusion

Bitcoin Core is the spec of Bitcoin.  Bitcoin Core currently is Bitcoin.  If you want to follow Bitcoin, and you want to run well-reviewed code, you run Bitcoin Core.  Bitcoin Core has more developers, more peer review, more fuzzing, and more research than any other wallet.  It is not even close.  That security gain is far undervalued.  In comparison, all hardware wallets have been vulnerable to remote attacks, all as a result of a lack of peer review.  All are vulnerable to supply chain attacks.  All have inherent architectural vulnerabilities.

Besides Yeti, there currently is no other secure, offline, airgapped, HD multisig wallet built on top of Bitcoin Core with minimal software dependencies.  Every dependency is a potential attack surface; outside of Core, you should only use battle-tested software.  It is simply inappropriate to recommend hardware wallets when you cannot confirm whether a device is genuine, when they run essentially non-peer-reviewed code, falsely advertise the ability to work with malicious nodes, have no ability to properly set up multisignature wallets, and have a long history of life-ending, bitcoin-stealing vulnerabilities that are often hand-waved away as insignificant.


[1]    A. Moxin, “Yeti Cold and Bitcoin Core With JW Weatherman, Will and Robert Spigler.”

[2]    “Yeti Cold.”

[3]    Sjors, “Coordinate multi-sig wallet · Issue #18142 · bitcoin/bitcoin,” GitHub, Feb. 13, 2020.

[4]    fanquake, “offline / multisig UX · Issue #56 · bitcoin-core/gui · GitHub,” GitHub, Aug. 14, 2020.

[5]    sipa, “Basic Miniscript support in output descriptors by sipa · Pull Request #16800 · bitcoin/bitcoin,” GitHub, Sep. 03, 2019.

[6]    R. Spigler, “Port Qubes to ppc64 [2 bitcoin bounty] · Issue #4318 · QubesOS/qubes-issues,” GitHub, Sep. 17, 2018.

[7]    J. Lopp, “A Modest Privacy Protection Proposal,” Cypherpunk Cogitations, Sep. 29, 2018.

[8]    J. Lopp, jlopp/physical-bitcoin-attacks. 2020.

[9]    A. van Wirdum, “The Long Road to SegWit: How Bitcoin’s Biggest Protocol Upgrade Became Reality,” Bitcoin Magazine, Aug. 23, 2017.

[10]    A. van Wirdum, “NO2X: Breaking Bitcoin Shows No Love for the SegWit2x Hard Fork in Paris,” Bitcoin Magazine, Sep. 12, 2017.

[11]    A. van Wirdum, “Now the SegWit2x Hard Fork Has Really Failed to Activate,” Bitcoin Magazine, Nov. 17, 2017.

[12]    “SIM swap scam,” Wikipedia. [Online]. Available:

[13]    S. Coonce, “The Most Expensive Lesson Of My Life: Details of SIM port,” Medium, May 20, 2019.

[14]    6102, “With domains like this, how the hell are users expected to get this right?,” @6102bitcoin, Dec. 14, 2020. (accessed Dec. 28, 2020).

[15]    Andreas M. Antonopoulos, “Don’t overreact to the phishing scams that target hardware wallet buyers Hardware wallets are some of the best mechanisms we have to store crypto. Compromising a website database is not at all the same as compromising the security of the hardware wallet.,” @aantonop, Dec. 13, 2020.

[16]    “Trezor Hardware Wallet (Official).”

[17]    “Hardware Wallet - State-of-the-art security for crypto assets,” Ledger.

[18]    “BitBox hardware wallet by Shift Crypto,” ShiftCrypto.

[19]    “Coldcard Wallet – Hardware Wallet - The Most Trusted and Secure Hardware Wallet,” ColdCard.

[20]    prusnak, “enable stack protector · trezor/trezor-firmware@524f2a9,” GitHub, Jul. 31, 2014.

[21]    prusnak, “set multisig_fp_mismatch when non-multisig input is encountered · trezor/trezor-firmware@137a60c,” GitHub, Feb. 25, 2015.

[22]    S. Rashid, “Breaking into the (Digital) BitBox,” Saleem Rashid, Nov. 26, 2018.

[23]    C. Reitter, “Trezor One dry-run recovery vulnerability,” invd blog, Dec. 09, 2019.

[24]    S. Lappo, “How (not) to lose your life savings while paying for a coffee with your Ledger Hardware Wallet,” Sergey’s blog.

[25]    B. Commons, “#SmartCustody,” Smart Custody, 2019.

[26]    “Casa | Secure Storage Solutions for Bitcoin.” (accessed Dec. 28, 2020).

[27]    L. Champine, “A Ransom Attack on Hardware Wallets,” Sia, Mar. 01, 2019.

[28]    S. Crypto, “BitBox Desktop App 4.5.0 with Firmware 6.0.2 Release,” Medium, Mar. 08, 2019.

[29]    S. Crypto, “BitBox Desktop App 4.6.0 with Firmware 6.0.3 Release,” Medium, Mar. 28, 2019.

[30]    TheCharlatan, “A ransom attack on Coldcard’s change and keypath verification – TheCharlatan – Reproducibility Matters,” TheCharlatan.

[31]    benma, “A theft attack on Trezor Model T,” Medium, Nov. 17, 2019.

[32]    dgpv, “coldcard-multisig-change-vuln.txt,” GitHub.

[33]    P. Rusnak, “Details of firmware updates for Trezor One (version 1.9.0) and Trezor Model T (version 2.3.0),” Medium, Apr. 17, 2020.

[34]    Monokh, “Ledger App Isolation Bypass,” Monokh, Aug. 04, 2020.

[35]    benma, “Coldcard isolation bypass,” benma’s blog, Nov. 24, 2020.

[36]    “Coldcard/firmware,” GitHub.

[37]    “Testnet Considered Useful,” Coinkite.

[38]    “Ten Immutable Laws Of Security (Version 2.0),” Microsoft, Jun. 16, 2011.

[39]    J. Hoenicke, “Extracting the Private Key from a TREZOR.”

[40]    SatoshiLabs, “Fixing physical memory access issue in TREZOR,” Trezor, Aug. 18, 2017.

[41]    SatoshiLabs, “TREZOR One: Firmware Update 1.6.1,” Trezor, Mar. 21, 2018.

[42]    benma, “bootloader: disallow firmware downgrades · digitalbitbox/mcu@350c7a8,” GitHub, Mar. 05, 2018.

[43]    S. Rashid, “Breaking the Ledger Security Model,” Saleem Rashid, Mar. 20, 2018.

[44]    prusnak, “setup: disable SYSCFG registers · trezor/trezor-firmware@fdd5cbe,” GitHub, Aug. 27, 2018.

[45]    SatoshiLabs, “Details of Security Updates for Trezor One (Firmware 1.8.0) and Trezor Model T (Firmware 2.1.0),” Trezor, Mar. 06, 2019.

[46]    “Still Got Your Crypto: In Response to wallet.fail’s Presentation,” Ledger, Dec. 28, 2018.

[47] - 2018. 25:15; 7:00

[48]    C. O’Flynn, “Glitching Trezor using EMFI Through The Enclosure,” Colin O’Flynn.

[49]    L. Ninja, “Hardware Wallet Review: COLDCARD Wallet - Short PIN brute-force attack,” Crypto Lazy Ninja, Mar. 15, 2019.

[50]    V. Servant, M. San Pedro, and C. Guillemet, “Breaking Trezor One with Side Channel Attacks,” Ledger Donjon, Jun. 17, 2019.

[51]    C. Reitter, “OLED Side Channel - Summary October 2019,” invd blog, Oct. 29, 2019.

[52]    K. Abdellatif, C. Guillemet, and O. Hériveaux, “Unfixable Seed Extraction on Trezor - A practical and reliable attack,” Ledger Donjon, Jul. 01, 2019.

[53]    TheCharlatan, “A practical supply chain attack on the Coldcard,” TheCharlatan.

[54]    peter-conalgo, “Link to blog · Coldcard/firmware@e1fb05d,” GitHub, May 13, 2020.

[55]    “Supply Chain Trust Minimized,” Coinkite, Mar. 02, 2020.

[56]    “bitcoin-core/HWI,” Bitcoin Core, GitHub.

[57]    “bitcoin-core/secp256k1,” Bitcoin Core, GitHub.

[58]    sthz, “sthz comments on Bitcoin core code was tested so thoroughly that devs uncovered a bug in OpenSSL (used in 35% of all websites). Repost,” Reddit.

[59]    sipa, “memcmp with constants that contain zero bytes are broken in GCC,” GitHub, Sep. 23, 2020.

[60]    P. Rusnak, “Details of firmware updates for Trezor One (version 1.9.1) and Trezor Model T (version 2.3.1),” Medium, Jun. 03, 2020.

[61]    “Irreversible Transactions - Bitcoin Wiki,” Bitcoin Wiki.

[62]    NVK (Rodolfo), Rebuttal to JWWeatherman on Coldcard/Coinkite Security of Hardware. 2020.

[63]    “On Hacking MicroSD Cards,” bunnie:studios.

[64]    C. Cimpanu, “Here’s a List of 29 Different Types of USB Attacks,” BleepingComputer, Mar. 13, 2018.

[65]    luke-jr, “QR Code scanner · Issue #9913,” GitHub, Mar. 03, 2017.

[66]    W. McNally and C. Allen, “Uniform Resources (UR),” GitHub, Jul. 09, 2020.

[67]    S. Snigirev, “Hardware wallets can be hacked, but this is fine,” Medium, Jan. 05, 2019.

[68]    P. Wuille, “[bitcoin-dev] Overview of anti-covert-channel signing techniques,” Mar. 03, 2020.

[69]    A. van Wirdum, “Bitmain Can Remotely Shut Down Your Antminer (and Everyone Else’s),” Bitcoin Magazine, Apr. 26, 2017.

[70]    A. van Wirdum, “Breaking Down Bitcoin’s ‘AsicBoost Scandal,’” Bitcoin Magazine, Apr. 11, 2017.

[71]    W. WhalePanda, “ASICBoost, the reason why Bitmain blocked Segwit.,” Medium, Apr. 06, 2017.

[72]    “Reproducible builds,” Wikipedia, Dec. 11, 2020.

[73]    prusnak, “Fix deterministic build for Core release firmware · Issue #1170,” GitHub, Aug. 05, 2020.

[74]    “digitalbitbox/bitbox02-firmware,” GitHub.

[75]    C. Dong, “Bitcoin Build System Security,” Breaking Bitcoin 2019, Amsterdam, 2019.

[76]    non_fingo, “Opinion regarding security,” Reddit.

[77]    V. Costea, “S4 E7: Peter Todd on Hardware Wallets, Security & Proofmarshall,” Bitcoin Takeover, Feb. 04, 2020. 0:44:28

[78]    benma, “How nearly all personal hardware wallet multisig setups are insecure,” ShiftCrypto, Nov. 05, 2020.

[79]    “Comments:BIP 0039,” GitHub.

[80]    Christopher Allen, “Today I learned that there is no BIP or SLIP docs specifying how the m/48’ HD derivation works for bitcoin multisig. This was apparently agreed upon by @ElectrumWallet , @Ledger , @Trezor & Copay and now used by @COLDCARDwallet & others. But many important details missing!,” @ChristopherA, Apr. 21, 2020.

[81]    “Wallets Recovery.”

[82]    benma, “A ransom attack on Trezor’s and KeepKey’s passphrase handling,” benma’s blog, Sep. 02, 2020.

[83]    Sjors, “Slight improve create wallet dialog,” GitHub, Sep. 18, 2020.

[84]    “Kernel Self Protection Project - Linux Kernel Security Subsystem.”

[85]    “Qubes OS: A reasonably secure operating system,” Qubes OS.

[86]    “GNU’s advanced distro and transactional package manager — GNU Guix.”

[87]    “Debian -- The Universal Operating System.”

[88]    Whonix, “Whonix™ - Software That Can Anonymize Everything You Do Online.” (accessed Dec. 28, 2020).

[89]    K. Thompson, “Reflections on Trusting Trust,” Commun. ACM, vol. Volume 27, p. 3, Aug. 1984.

[90]    “Bootstrappable builds.”

[91]    “Join Bitcoin Design on Slack,” Slack. (accessed Dec. 28, 2020).

[92]    grubles, “That hardware wallets are easier for noobs just means that we need to make the old laptop UX easier.,” @notgrubles, Dec. 03, 2020. (accessed Dec. 28, 2020).

[93]    TheCharlatan, “List of Hardware Wallet Hacks.”


©2020 by Robert Spigler.