# Introduction

Bitcoin and other blockchains, such as Blockstream’s Liquid, depend on a digital signature algorithm called ECDSA. Verifying these signatures constitutes the vast majority of the computational work done by Bitcoin full nodes, and creating signatures (and deriving signing keys using BIP32) constitutes the vast majority of the work done by wallets when creating Bitcoin transactions.

Since Bitcoin transactions often require many signatures, and the network processes many millions of such transactions, the performance of these algorithms is critical. However, there is a tension between the goals of high performance and *constant-timeness*: a constant-time algorithm takes the same amount of time regardless of the secret keys it manipulates, so that timing measurements cannot leak secret information.

Core to the signing and key-derivation algorithms is a mathematical computation called a *modular inverse*. There are two well-known ways in the literature to compute modular inverses: using a technique called an *exponentiation ladder*, which is very slow, or using Euclid’s extended GCD algorithm, which is fast but hard to make constant-time. Currently, libsecp256k1 always uses an exponentiation ladder when configured for use within Bitcoin Core.
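To make the trade-off concrete, here is a small Python sketch of both approaches. This is illustrative only: the prime `p` and the function names are hypothetical stand-ins, not libsecp256k1’s API, and secp256k1’s actual moduli are specific 256-bit primes operated on in multi-limb form.

```python
# Sketch of the two textbook approaches to computing a modular inverse.
# Assumption: p is any odd prime (a small Mersenne prime stands in for
# the 256-bit modulus used by secp256k1).
p = 2**31 - 1

def inv_ladder(a, p):
    # Fermat's little theorem: a^(p-2) = a^-1 (mod p).  The sequence of
    # squarings and multiplications depends only on p, never on a, which
    # is why ladders are easy to make constant-time (though Python's own
    # pow is not a constant-time implementation).
    return pow(a, p - 2, p)

def inv_xgcd(a, p):
    # Extended Euclidean algorithm: much faster in practice, but the
    # number and size of the quotient steps depend on a -- exactly what
    # makes constant-time implementations hard.
    r0, r1, t0, t1 = p, a % p, 0, 1
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return t0 % p
```

Both functions return the same value; they differ only in how their running time relates to the input.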

In “Fast constant-time gcd computation and modular inversion,” Daniel J. Bernstein and Bo-Yin Yang develop a new extended GCD algorithm. This algorithm, referred to as “safegcd,” was recently implemented for libsecp256k1 by Peter Dettman. The algorithm works by iterating a simple computation step until a termination condition is reached. The algorithm can be made constant-time by always iterating this step a fixed number of times, even if it is “done” after fewer steps, as long as we can be sure that our fixed number is enough for all inputs. This works because once the termination condition is reached, all values of interest remain unchanged by additional iterations.
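As a rough illustration of that fixed-iteration pattern, here is a Python toy in the style of the paper’s “divstep” iteration. It is a sketch under heavy simplification: it computes only a plain GCD, whereas the real safegcd also tracks a transition matrix to recover the modular inverse, operates on 256-bit inputs, and uses branch-free arithmetic.

```python
def divstep(delta, f, g):
    # One simple step in the style of Bernstein-Yang (f is assumed odd).
    # Each step preserves gcd(f, g) and eventually drives g to 0.
    if delta > 0 and g % 2 == 1:
        return 1 - delta, g, (g - f) // 2
    elif g % 2 == 1:
        return 1 + delta, f, (g + f) // 2
    else:
        return 1 + delta, f, g // 2

def gcd_fixed_iterations(f, g, iterations):
    # Constant-time pattern: always run a fixed number of steps.  Once g
    # reaches 0, further steps leave f unchanged, so overshooting is
    # harmless -- provided the fixed bound really suffices for all inputs.
    delta = 1
    for _ in range(iterations):
        delta, f, g = divstep(delta, f, g)
    return abs(f)  # once g == 0, |f| is the gcd
```

For tiny inputs a few dozen steps are plenty; the hard question addressed in the rest of this post is exactly how many steps suffice for *all* 256-bit inputs.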

While it is clear that the safegcd algorithm computes an extended GCD if it reaches the termination condition, determining how many iterations are needed for all inputs of a given size (256-bit inputs in the case of secp256k1) is far more challenging to establish.

Pieter Wuille devised a new method of computing a bound on the number of iterations needed for the safegcd algorithm to terminate by computing a series of convex hulls. This new method immediately reduced the bound on the number of iterations needed from 741 to 724. What’s more, Wuille and Gregory Maxwell later discovered a variant of Bernstein’s safegcd algorithm that their computations showed terminates in at most 590 iterations! This variant would improve libsecp256k1’s ECDSA signing speed by 25% and signature verification speed by 15% over the exponentiation-ladder-based approach that is currently used. The challenge, then, was to prove these new bounds correct.

# Guaranteeing Termination

While it is exciting to have even faster safegcd variants, relying on a novel termination argument involves some risk. If there were an error in the program that performs this convex hull analysis, then the computed bounds could be incorrect. If our code then relied on these incorrect bounds, the safegcd implementation would sometimes not reach its termination condition. If the termination condition is not reached, the modular inverse function returns incorrect values, and the digital signature verification function relying on it would fail to operate correctly. A fault in such a fundamental cryptographic operation would be devastating for libsecp256k1 and for the Bitcoin Core software that relies upon it. In fact, Bernstein has advised on Twitter against using these bounds with the algorithm, because there was (until now) no formally verified proof that they are sufficient.

To guarantee that Wuille’s termination argument is correct, we implemented the convex hull algorithm in Coq. Coq is a software proof assistant based on dependent type theory. As such, it is simultaneously a mathematical deduction system and a functional programming language. This enables a unique kind of reasoning perfectly suited for this task: *proof by reflection*.

The proof by reflection method works by implementing a functional program within the proof assistant to perform some kind of analysis, in our case, using convex hulls to bound the number of iterations needed to reach the termination condition. Then we use the mathematical deduction system to prove a lemma stating that this program’s analysis is correct.
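As a toy illustration of the idea (shown here in Lean for brevity; the actual development described in this post is in Coq), proof by reflection lets the proof checker *run* a decision procedure rather than check a hand-built derivation:

```lean
-- The proposition below is decidable, so `decide` asks the kernel to
-- evaluate the deciding computation and accept the result as a proof.
-- The certified correctness of that decision procedure plays the same
-- role as our correctness lemma for the convex hull program.
example : Nat.gcd 9 6 = 3 := by decide
```

In our case the “decision procedure” is the convex hull analysis, and the proposition it decides is that a given iteration count suffices.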

Putting these two pieces together lets us prove a theorem that states that the modified safegcd function terminates in 590 iterations. The proof assistant verifies this theorem by executing the functional program and deducing from the correctness lemma that this theorem holds. In our implementation, the execution of the analysis within the Coq proof assistant ran in only 2.5 days on commodity hardware.

# Summary

A new fast, constant-time extended GCD algorithm is under development for libsecp256k1. This algorithm would dramatically improve the performance of transaction signing (which is often done on very weak hardware) and of signature verification (which must be performed tens of millions of times by Bitcoin full nodes). However, verifying the correctness of this new algorithm was an open problem.

To verify the correctness of this novel algorithm, we created a formal proof of its correctness in the Coq proof assistant. We took advantage of Coq’s ability to perform proof by reflection to execute a convex hull computation within the proof assistant and verify that this computation is correct.

Those interested in verifying the proof themselves can find the Coq code in safegcd-bounds PR #7.

*Note: This blog was originally posted at https://medium.com/blockstream/a-formal-proof-of-safegcd-bounds-695e1735a348*