Cyclotomic fields

The nth cyclotomic field is the field \mathbb{Q}[\zeta] generated by a primitive nth root of unity, ζ. It is an example of a number field, consisting of algebraic numbers, and its dimension is φ(n) when regarded as a vector space over \mathbb{Q}. If n is odd, then the nth cyclotomic field is identical to the 2nth cyclotomic field, so we’ll henceforth assume without loss of generality that n is even.

The intersection of \mathbb{Q}[\zeta] with the ring of algebraic integers is exactly the ring \mathbb{Z}[\zeta] generated by ζ. The first few examples of rings of this form are:

  • n = 2: \zeta = -1, so \mathbb{Z}[\zeta] is just the ring of ordinary integers \mathbb{Z}.
  • n = 4: \zeta = i, so \mathbb{Z}[\zeta] is the ring of Gaussian integers (red, below-left).
  • n = 6: \mathbb{Z}[\zeta] is the ring of Eisenstein integers (blue, below-right).

These three rings admit unique factorisation, meaning that every nonzero element can be uniquely expressed as a product of primes, up to reordering and unit migration. This turns out to be true for precisely the following thirty values of n:

2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30,
32, 34, 36, 38, 40, 42, 44, 48, 50, 54, 60, 66, 70, 84, 90

The corresponding values of φ(n) are:

1, 2, 2, 4, 4, 4, 6, 8, 6, 8, 10, 8, 12, 12, 8,
16, 16, 12, 18, 16, 12, 20, 16, 20, 18, 16, 20, 24, 24, 24

Note that, in particular, the maximum dimension is φ(n) = 24, attained only by the three largest values of n (70, 84, and 90). These rings, viewed as additive groups, are therefore free Abelian groups of rank 24.

And this is where it gets exciting: David Madore noticed that the existence of order-70 and order-84 elements in the Conway group Co0 gives rise to highly rotationally symmetric projections of the Leech lattice into the complex plane. Here’s the order-84 projection applied to the first shell:

Projection of the first shell of the Leech lattice into the plane with order-84 rotational symmetry, by David Madore

And here’s the order-70 counterpart:

Projection of the first shell of the Leech lattice into the plane with order-70 rotational symmetry, by David Madore

By suitably scaling and rotating either of these projections, we can arrange for the image to contain the respective nth roots of unity. If we view the resulting projection as a map from \mathbb{Q}^{24} to \mathbb{Q}[\zeta], we can see that it is surjective (because its image contains the roots of unity, and these span the field) and therefore bijective (because the dimensions match).

In particular, that means that the projection remains injective when restricted to the Leech lattice, and its image must be a rank-24 Abelian subgroup of \mathbb{Q}[\zeta] which is closed under multiplication by roots of unity. Given that it has a finite integral basis, we can further scale/rotate the projection such that the images are all algebraic integers and therefore form an ideal in \mathbb{Z}[\zeta].

Thanks to these rings having class number 1, every ideal is principal, so this ideal must simply be a scaled/rotated copy of \mathbb{Z}[\zeta] itself! That is to say, when n is 70 or 84, we can identify the points in the Leech lattice with the algebraic integer elements of the cyclotomic field!

Working in the opposite direction, we can construct the Leech lattice starting from one of these cyclotomic fields by defining an inner product on its algebraic integer elements:

\langle u, v \rangle = tr(\alpha u v^{\star})

where α is a particular element of the ring (different choices yield different lattices), and tr is the trace operation (sum of all Galois conjugates).
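
To make this concrete, here is a small numerical Python sketch of the trace form. The choice α = 1 below is just a placeholder (it yields some lattice, not necessarily the Leech lattice); the particular α required for the Leech lattice is not derived here.

import math, cmath

N = 84   # cyclotomic index; 70 works equally well (both have phi(N) = 24)

def embeddings(coeffs):
    """All phi(N) Galois conjugates of sum_i coeffs[i] * zeta^i, numerically."""
    zeta = cmath.exp(2j * math.pi / N)
    return [sum(c * zeta ** (k * i) for i, c in enumerate(coeffs))
            for k in range(N) if math.gcd(k, N) == 1]

def trace_form(alpha, u, v):
    """Numerical value of tr(alpha * u * conj(v)), with ring elements given as
    integer coefficient vectors in powers of zeta."""
    return sum((a * x * y.conjugate()).real
               for a, x, y in zip(embeddings(alpha), embeddings(u), embeddings(v)))

alpha = [1] + [0] * 23   # placeholder alpha = 1
u = [0, 1] + [0] * 22    # u = zeta
print(round(trace_form(alpha, u, u)))   # tr(zeta * conj(zeta)) = tr(1) = phi(N) = 24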

There are similar constructions of the Leech lattice for some other values of n, where the class number is greater than 1 (and therefore unique factorisation does not apply), but the constructions could potentially require using a proper ideal rather than the whole ring of algebraic integer elements. For instance, Craig discovered a construction for n = 78, and Eva Bayer-Fluckiger gave a more thorough exploration.

The cases of n = 70 and n = 84 are arguably the most interesting, because they endow the Leech lattice with a compatible ring structure where unique factorisation holds: the nonzero lattice points form a multiplicative monoid isomorphic to the direct product of the unit group of \mathbb{Z}[\zeta] (which contains the cyclic group of order n generated by the roots of unity) and the free commutative monoid on countably many generators, one for each prime up to units.

Of these, the n = 84 ring is my favourite: it contains the Gaussian, Eisenstein, and Kleinian integers as subrings, and indeed is the smallest cyclotomic ring to do so.

Posted in Uncategorized | Leave a comment

Urbit for mathematicians

The three months since the last cp4space article have largely been spent learning about an interesting project called Urbit. My attention was drawn to its existence during a talk by Riva-Melissa Tez (formerly of Intel) on the importance of continued improvements to the computational efficiency of hardware.

A decent amount has been written on the Internet about Urbit, but it tends to either be non-technical, explaining the importance of the project, or technical but written in highly unconventional project-specific jargon. Instead, we’ll depart from both of these by examining the novel internals of Urbit that are most likely to be of interest to mathematicians and theoretical computer scientists.

Before we can do so, however, it is worth asking what Urbit is. This is often a source of confusion, because it is an ecosystem of multiple interacting components designed to mesh well together. If you were to metaphorically run 2-means on the set of Urbit components, they would cleanly separate into the operating system (Urbit OS) and identity system (Urbit ID).

The most salient components of Urbit OS are:

  • Arvo: a deterministic operating system kernel implemented in a pure functional programming language (Hoon).
  • Nock: a very low-level functional programming language. It’s roughly halfway between a minimalistic Lisp dialect and the SKI combinator calculus. In the Urbit ecosystem, Nock is the machine code for the virtual machine in which the Urbit OS runs.
  • Vere: a software implementation (written in C) of the Nock virtual machine, capable of running the VM inside a Unix environment.
  • Hoon: a higher-level functional programming language in which the Urbit OS is directly written. This ultimately compiles down to Nock.

The identity system, Urbit ID, is a hierarchical system of integer addresses (analogous to IPv4 addresses) called Azimuth points. Azimuth points are securely associated to public keys (either mutably or immutably), and communication between machines running Urbit is built upon this public key infrastructure.

Azimuth points and their hierarchy

As mentioned, Azimuth points are non-negative integers. They are divided into five levels:

  • Points in [0, 2^8 – 1] are called galaxies.
  • Points in [2^8, 2^16 – 1] are called stars.
  • Points in [2^16, 2^32 – 1] are called planets.
  • Points in [2^32, 2^64 – 1] are called moons.
  • Points in [2^64, 2^128 – 1] are called comets.

These can cumulatively be represented by the datatypes uint8, uint16, uint32, uint64, and uint128, respectively. These datatypes possess standard ‘restriction maps’ familiar from using a programming language such as C:

uint64 → uint32 → uint16 → uint8

where, for instance, a 32-bit integer is coerced into a 16-bit integer by taking the lower half (or, equivalently, reducing modulo 2^16). There are two elegant ways to impose a ring structure on each of these 2^n-bit datatypes:

  • Integers modulo 2^(2^n), which form a ring;
  • Nimbers less than 2^(2^n), which form a finite field.

The first choice is the familiar one (in the C programming language, the operators {+, *, -} coincide with modular arithmetic in Z/(2^2^n)Z). In the second choice, ring addition is bitwise XOR, and ring multiplication is slightly more complicated to define.

For each of these two choices, the aforementioned restriction maps are ring homomorphisms.
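
For the modular-arithmetic choice this is easy to check empirically; here is a tiny Python sketch of the uint32 → uint16 case (the nimber field structure isn't exercised here):

import random

def restrict(x, bits=16):
    """Coerce to a narrower type by keeping the low bits, e.g. uint32 -> uint16."""
    return x % (1 << bits)

# Reduction mod 2^16 commutes with addition and multiplication mod 2^32,
# i.e. the restriction map is a ring homomorphism from Z/(2^32) to Z/(2^16).
for _ in range(1000):
    a, b = random.randrange(1 << 32), random.randrange(1 << 32)
    assert restrict((a + b) % (1 << 32)) == (restrict(a) + restrict(b)) % (1 << 16)
    assert restrict((a * b) % (1 << 32)) == (restrict(a) * restrict(b)) % (1 << 16)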

Now, how do these restriction maps relate to Urbit? They impose a partial order on the set of Azimuth points, where x ≤ y if x is the image of y under a restriction map. The Hasse diagram of this partial order is a forest of 256 trees, each of which has a galaxy at its root. This Hasse diagram determines, for each Azimuth point that’s not a galaxy or comet, a unique parent of that Azimuth point. Most planets, for instance, have stars as parents.

The parenthood relationship for comets is different: the parent of a comet is its image under the restriction map uint128 → uint16, which must be one of five stars willing to host comets. Comets are direct hashes of public keys (so, unlike planets, can’t be transferred between different owners).

Planets, stars, and galaxies are tradable as non-fungible tokens on the Ethereum blockchain. This gives a modifiable decentralised mapping from each Azimuth point (32 bits and easily memorable) to an Ethereum address (160 bits and cryptographically secure). As such, it’s possible to trustlessly verify the owner of an Azimuth point and therefore set up a secure communication channel with that Azimuth point; all communications between machines running Urbit OS are established in this manner.

Names of Azimuth points

The integer representing an Azimuth point is converted into a human-readable name (called an ‘@p’) through the following process:

  1. For planets and moons (not for stars and galaxies), a Feistel cipher is applied to the lowest 32 bits of the integer. This is a reversible permutation designed to obfuscate the relationship between the name of a planet and its parent star, reflecting the fact that planets are permitted to move freely between stars.
  2. The bytes of the resulting (enciphered) integer are each converted to a three-letter syllable, using a separate lookup table for even-indexed bytes (suffixes) and odd-indexed bytes (prefixes). Each syllable consists of a generalised vowel flanked by two consonants.
  3. The syllables are written in big-endian, with a hyphen between each 16-bit word and a tilde at the beginning.

An example of a 32-bit planet name is ~sorreg-namtyv, which belongs to the departed founder of Urbit.

A more comprehensive description of the process, including the two 256-element syllable lookup tables, is given in this StackOverflow answer.
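
As a rough Python sketch of steps 2 and 3 (the Feistel step is omitted, and the two placeholder tables below merely stand in for the real 256-entry syllable tables given in that answer):

PREFIXES = ['p%02x' % i for i in range(256)]   # placeholders, not the real syllables
SUFFIXES = ['s%02x' % i for i in range(256)]   # placeholders, not the real syllables

def assemble_name(value, n_bytes):
    """Assemble an @p-style name from an (already enciphered) integer."""
    data = value.to_bytes(n_bytes, 'big')
    syllables = []
    for pos, byte in enumerate(reversed(data)):      # pos 0 = least significant byte
        syllables.append(SUFFIXES[byte] if pos % 2 == 0 else PREFIXES[byte])
    syllables.reverse()                              # write the name big-endian
    words = [''.join(syllables[i:i + 2]) for i in range(0, len(syllables), 2)]
    return '~' + '-'.join(words)

print(assemble_name(0x0100, 2))        # a star: a single prefix+suffix word
print(assemble_name(0xdeadbeef, 4))    # a planet: two words joined by a hyphen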

In addition to the pronounceable ‘@p’ identifier, there is a visual encoding called a sigil. This stores each byte of the (enciphered!) 32-bit integer in a separate quadrant of a 2-by-2 arrangement, again using a pair of 256-element lookup tables. The sigil for the planet ~firmer-hatful is shown here:

sigil for ~firmer-hatful

Gavin Atkinson describes the design decisions involved in creating this system of sigils. Planets with circular sigils and/or meaningful @p names (e.g. ~timber-walled) tend to be in greater demand, and therefore more expensive, due to the combination of higher scarcity and aesthetic appeal.

The lookup tables mapping bytes to quadrant tiles were populated manually (by drawing 512 tile designs), so the Urbit sigils can be thought of as partially procedurally generated. [A fully procedurally-generated system may also have worked, such as the system in this Wolfram blog post from 2009, but this is unlikely to change now.] This is, nonetheless, highly consistent with the @p names; the syllable lookup tables were also manually populated.

Nock and its emulator

We proceed to turn our gaze from Urbit ID to Urbit OS.

Nock is a low-level pure functional programming language described here. Code and data are simply binary trees (called nouns) whose leaves are nonnegative integers. The Nock interpreter is a unary function * whose domain and codomain are both the space of nouns; its semantics are defined recursively in terms of itself and other unary functions.

My favourite of these functions is the tree-addressing operator, /, which is defined as follows:

/[1 a]              a
/[2 a b]            a
/[3 a b]            b
/[(a + a) b]        /[2 /[a b]]
/[(a + a + 1) b]    /[3 /[a b]]
/a                  /a

In particular, if we let L(x) and R(x) refer to the left and right subtrees of a tree x, then we have:

  • /[1 x] = x
  • /[2 x] = L(x)
  • /[3 x] = R(x)
  • /[4 x] = L(L(x))
  • /[5 x] = R(L(x))
  • /[6 x] = L(R(x))
  • /[7 x] = R(R(x))

and so forth. In general, /[n x] can be evaluated (as in the Python sketch after this list) by:

  • expressing n in binary (for example, 1729 is “11011000001”);
  • removing the leading ‘1’ (in this example, giving “1011000001”);
  • replacing each ‘0’ with ‘L’ and ‘1’ with ‘R’ (in this example, “RLRRLLLLLR”);
  • taking x and sequentially applying each of the characters to it (in this example, giving R(L(L(L(L(L(R(R(L(R(x)))))))))), noting that the outermost function is the least significant bit).
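
Here is a minimal Python sketch of both the recursive definition and this binary-expansion procedure, with nouns modelled as nested 2-tuples whose leaves are integers:

def slot(n, noun):
    """Return /[n noun] using the recursive definition."""
    if n == 1:
        return noun
    if n in (2, 3):
        return noun[n - 2]                     # 2 -> left subtree, 3 -> right subtree
    return slot(2 + (n & 1), slot(n >> 1, noun))

def slot_by_bits(n, noun):
    """The same thing via the binary expansion of n, as described above."""
    for bit in bin(n)[3:]:                     # drop '0b' and the leading '1'
        noun = noun[int(bit)]                  # '0' -> left, '1' -> right
    return noun

tree = (((1, 2), (3, 4)), ((5, 6), (7, 8)))
assert slot(6, tree) == slot_by_bits(6, tree) == (5, 6)   # /[6 x] = L(R(x))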

If we enumerate the nodes in an infinite binary tree according to the natural number required to address the subtree rooted at that node using the / operator, this agrees with a breadth-first traversal of the tree. This is the same enumeration (the salmon spiral below) that relates the Calkin-Wilf sequence to the Calkin-Wilf tree, shown in this illustration by David Eppstein:

The operator / is useful enough that Lisp implementations typically provide the special cases of this function for all 2 ≤ n ≤ 31, referring to them as {car, cdr, …, cddddr}.

Nock, if evaluated naively, would be incredibly inefficient: decrementing an integer n takes time proportional to n. The emulator therefore uses a clever idea called jets: runtime optimisations where the emulator recognises known Nock formulas (such as the standard decrement function applied to an integer) and calls a native C implementation to decrement the integer directly.

The Urbit project has succeeded in implementing an entire operating system kernel as a function in this language, and it is performant in practice owing to the use of jets. The Hoon compiler (which compiles programs written in the higher-level language Hoon into the low-level Nock language) generates code that will be successfully pattern-matched by these jets.

With jets, it’s possible to go even lower-level than Nock. Urbit doesn’t do this because there’s not really much point in doing so, except to give a formal definition of Nock in terms of an even simpler language:

Aside: implementing Nock in the SK combinator calculus

If we’re opting for absolute simplicity, why not go the whole way? It’s possible to implement Nock in the untyped lambda calculus or in the SK combinator calculus.

It would be tempting to try the following method of encoding nouns:

  • The nonnegative integers at the leaves of noun trees can be represented using Church numerals: n is the higher-order function which maps its argument f to its n-fold functional composition, f^n.
  • The ‘cons cells’ [a b] can be represented as Church pairs.

However, unfortunately there isn’t a way to query whether a noun is an integer or a pair. As such, we’ll need to use a slightly more complicated encoding of Nock nouns where we ‘tag’ each noun with its type:

  • encode_noun([a b]) := pair(0, pair(encode_noun(a), encode_noun(b)))
  • encode_noun(n) := pair(1, church_numeral(n))

The unary functions used to define the Nock interpreter can now be themselves implemented as combinators, up to and including the Nock interpreter * itself.

For example, the question-mark operator can be implemented by taking the first element of the Church pair (which is the Church numeral 0 or 1) and then ‘boxing’ it as a Nock noun instead of a ‘bare’ Church numeral:

  • ?(x) := pair(1, first(x))

where the ‘pair’ and ‘first’ functions are defined in the Wikipedia article.

The incrementation operator needs to unbox the integer, take its successor, and then rebox it, like so:

  • +(x) := pair(1, succ(second(x)))

Technically, we’ve cheated here, because according to the Nock definition, +(x) needs to also be defined on ordered pairs as well as integers. In particular, we need to use first(x) to extract a Church numeral specifying whether x is an ordered pair or an integer and branch on that: if the Church numeral is 1, then we can just return pair(1, succ(second(x))); if the Church numeral is 0, then we need to return +(x). The self-reference can be removed from this recursive definition by means of a fixed-point combinator.

The remaining operators are all recursive, so they’ll similarly require the fixed-point combinator in their implementations.
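
Here is a minimal Python sketch of the tagged encoding, with plain lambdas standing in for combinator terms; the fixed-point machinery needed for the genuinely recursive operators is omitted:

ZERO   = lambda f: lambda x: x
SUCC   = lambda n: lambda f: lambda x: f(n(f)(x))
ONE    = SUCC(ZERO)
PAIR   = lambda a: lambda b: lambda s: s(a)(b)
FIRST  = lambda p: p(lambda a: lambda b: a)
SECOND = lambda p: p(lambda a: lambda b: b)

def church(n):
    c = ZERO
    for _ in range(n):
        c = SUCC(c)
    return c

def unchurch(c):
    """Convert a Church numeral back to a Python int (for inspection only)."""
    return c(lambda k: k + 1)(0)

def encode_noun(noun):
    """Tag cells with 0 and atoms with 1, as described above."""
    if isinstance(noun, tuple):
        a, b = noun
        return PAIR(ZERO)(PAIR(encode_noun(a))(encode_noun(b)))
    return PAIR(ONE)(church(noun))

wut = lambda x: PAIR(ONE)(FIRST(x))            # the ?-operator: 0 for a cell, 1 for an atom
lus = lambda x: PAIR(ONE)(SUCC(SECOND(x)))     # the +-operator (atom case only)

print(unchurch(SECOND(lus(encode_noun(41)))))      # 42
print(unchurch(SECOND(wut(encode_noun((1, 2))))))  # 0, i.e. 'yes, it is a cell'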

It would be a fun exercise to attempt to write a jet-propelled emulator for the SK combinator calculus, including optimisations such as internally representing Church numerals as integers (with a library such as GMP to allow arbitrarily large integers) and using jets to call the GMP implementations of arithmetic operations when appropriate.

By using the ‘compress identical subtrees’ idea, familiar from Hashlife and libraries for manipulating binary decision diagrams, we avoid excessive encoding overhead and enable constant-time recognition of (standard implementations of!) functions such as the decrement operator.

Posted in Uncategorized | 3 Comments

A curious construction of the Mathieu group M11

Previously, we discussed which regular polytopes have vertex-sets that occur as proper subsets of the vertex-set of another regular polytope in the same dimension. In particular, when there is a Hadamard matrix of order 4k, the (4k−1)-dimensional simplex can be inscribed in the (4k−1)-dimensional hypercube. Moreover, the converse also holds.

Upon noticing that there exists a unique order-12 Hadamard matrix up to isomorphism, Daniel Sebald asked what the symmetry group is of the 11-dimensional simplex inscribed in the 11-dimensional hypercube. It can be shown that this is exactly the smallest sporadic group, M11, which has order 11 \times 10 \times 9 \times 8 = 7920.

Another object with a related automorphism group is the perfect ternary Golay code. This is a linear subspace of \mathbb{F}_3^{11} consisting of 729 codewords with the property that any pair of distinct codewords differ in at least 5 coordinates. (Consequently, if you’re given a codeword with ‘errors’ in at most 2 coordinates, you can uniquely recover the original codeword.)

This turns out to be no coincidence! If we label the elements of the field of three elements as {−1, 0, 1}, then exactly 24 of the 729 codewords have no ‘0’-coordinates. These 24 codewords can be regarded as a subset of the vertices of the hypercube {−1, 1}^11, and geometrically they form a pair of mutually dual 11-dimensional simplices inscribed in the hypercube! Eleven dimensions are hard to visualise, so here is a three-dimensional analogue:

Cardboard stella octangula by Joseph Myers

The automorphism group of each individual simplex is the symmetric group S11, but the automorphism group that preserves the bounding hypercube is the Mathieu group M11. If you include both simplices, the groups are doubled (C2 × S11 and C2 × M11, respectively), where the C2 factor is the centre consisting of the identity matrix and its negation.
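
If you’d like to check the ‘24 zero-free codewords’ claim computationally, here is a short Python sketch. It builds the code as the length-11 cyclic code generated by x^5 + x^4 − x^3 + x^2 − 1, one of the two degree-5 factors of (x^{11} − 1)/(x − 1) over \mathbb{F}_3; any equivalent version of the ternary Golay code would do.

from itertools import product

# Generator polynomial x^5 + x^4 - x^3 + x^2 - 1 over F_3, as a coefficient
# list for x^0 .. x^10 (so -1 becomes 2).
g = [2, 0, 1, 2, 1, 1, 0, 0, 0, 0, 0]

def cyclic_shift(word, k):
    return word[-k:] + word[:-k] if k else word[:]

basis = [cyclic_shift(g, k) for k in range(6)]     # g, x*g, ..., x^5*g

codewords = set()
for coeffs in product(range(3), repeat=6):         # all 3^6 = 729 combinations
    codewords.add(tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % 3
                        for i in range(11)))

assert len(codewords) == 729
assert min(sum(1 for a in w if a) for w in codewords if any(w)) == 5   # minimum distance 5
print(len([w for w in codewords if 0 not in w]))   # expect 24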

Posted in Uncategorized | 1 Comment

Assorted topics

This is a digest of things that have happened this month, but which are individually too small to each warrant a separate cp4space post. Firstly, there have been a couple of exciting results in the field of combinatorics:

  • The Erdős-Faber-Lovász conjecture is proved for sufficiently large hypergraphs by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus.
  • A special case of another conjecture by Erdős has been established by Maria Chudnovsky, Alex Scott, Paul Seymour, and Sophie Spirkl.

Also, the functional analysis paper I coauthored with Tomasz Kania has been accepted by the journal Studia Mathematica. This is my first ever article in a peer-reviewed mathematics journal, finally giving me a finite Erdős number (specifically 4)! Many thanks go to the anonymous reviewer for his or her feedback.

The article on the Stockfish NNUE architecture reached the top position on Hacker News, resulting in a disproportionate amount of cp4space traffic (indeed, more views in a single hour than is typical in an entire week).

One quadrillion objects

The Catagolue census of objects arising from random 16-by-16 soups in Conway’s Game of Life has surpassed 10^15 objects. The total number of soups explored in this census is 46 trillion. One of these soups, discovered by Dylan Chen, takes a record-breaking 52513 generations to stabilise.

The parallel GPU-assisted soup search has examined nearly four times as many soups, specifically 164 trillion soups, but due to the search methodology not every object is censused. (Specifically, only soups that last sufficiently long or produce high-period oscillators or rare spaceships are rerun on the CPU and censused; ‘boring’ soups are discarded by the prefiltering stage on the GPU.)

Recent highlights of the GPU soup search include two variants of a period-7 oscillator, one by me and the other by Rob Liston. There was also a 50093-generation soup by Liston which held the longevity record for one week before being surpassed by Dylan Chen’s aforementioned 52513-generation soup.

Taken together, the CPU and GPU searches have simulated 210 trillion random 16-by-16 soups; you can view the collective results here.

BN curves where n has low Hamming weight

We previously discussed Barreto-Naehrig curves and the problem of trying to find curves where the (prime) number of points n on the elliptic curve has a low Hamming weight.

If x is the sum of two powers of 2, the Hamming weight of n is guaranteed to be at most 35, and heuristics based on the prime number theorem suggest that there should be infinitely many such values of x for which p and n are both prime. One such value is x = 2^{250} + 2^4.

The situation is different for Hamming weights strictly below 35. Instead of a two-parameter family such as x = 2^a + 2^b, there appear to only be a finite collection of one-parameter families, and the same heuristics suggest that there are only finitely many examples. In particular, the largest such x that I could find was x = 33 \times 2^{267}, for which the Hamming weight of n is 34.

A particularly nice choice of x (in that it gives a reasonable security level without being too large) is x = 47 \times 2^{56}. The resulting values of n and p are both 252-bit primes, and the Hamming weight of n is 35. Here are the values in hexadecimal:

x = 0x2f00000000000000
p = 0xa787d240000000039081c0000000000cf180000000000011a00000000000001
n = 0xa787d240000000039081c00000000009b520000000000011a00000000000001

If you’re willing to have a smaller bit-length, then x = 17 \times 2^{43} provides a 194-bit prime where the Hamming weight of n is merely 29. Also, because x is congruent to 1 (mod 3), it follows that p is congruent to 4 (mod 9) and cube-roots can be computed efficiently in \mathbb{F}_p as described in Appendix B of the paper introducing BN curves:

x = 0x880000000000
p = 0x2de124000000565c800000006c60000000003300000000001
n = 0x2de124000000565c800000005148000000003300000000001
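
These values can be reproduced in a few lines of Python from the quartic Barreto-Naehrig parametrisation of p and n (the formulas appear in the earlier cp4space post on BN curves, reproduced further down this page):

x = 0x880000000000                             # 17 * 2**43
p = 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1
n = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1
print(hex(p))                                  # should match the value of p above
print(hex(n))                                  # should match the value of n above
print(p.bit_length(), bin(n).count('1'))       # expect 194 bits and Hamming weight 29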

The security level is quite mediocre, though: it only offers 96-bit security against Pollard’s rho algorithm for discrete log.

Posted in Uncategorized | Leave a comment

Meagre sets and null sets

There are two competing notions for describing a subset of the real numbers as being ‘small’:

  • a null set is a subset of the reals with Lebesgue measure zero;
  • a meagre set is a countable union of nowhere-dense sets.

Both of these properties are downward-closed: an arbitrary subset of a null set is itself a null set, and an arbitrary subset of a meagre set is again a meagre set. Moreover, countable unions of meagre sets are meagre, and countable unions of null sets are null.

These two notions of a set being ‘small’ are also wholly incompatible.

In particular, there exist fat Cantor sets, nowhere-dense closed subsets of the unit interval which have positive Lebesgue measure arbitrarily close to 1. If you take a countable union of these sets (say, with Lebesgue measures of 1/2, 3/4, 7/8, 15/16, and so forth), the result is a meagre subset of [0, 1] with Lebesgue measure 1. If you take a countable union of translates of this set, each one occupying [n, n+1] for each integer n, the result is a meagre subset of the reals whose complement is a null set.

Stated more succinctly, there is a meagre set A (the one we’ve just constructed) and a null set B (its complement) such that A and B are disjoint and their union is the whole real line.

Moreover, A is an F_{\sigma} set (countable union of closed sets) and B is a G_{\delta} set (countable intersection of open sets). It’s possible to prove that every meagre set is a subset of a meagre F_{\sigma} set, and likewise every null set is a subset of a null G_{\delta} set. This turns out to be an ingredient of the proof of…

The Erdős-Sierpiński duality theorem

From what has been mentioned so far, there seems to be some abstract ‘duality’ between null and meagre sets. Erdős and Sierpiński proved, conditional on the continuum hypothesis, a beautiful result that makes this precise:

There exists an involution (self-inverse bijection) f on the set of reals such that {f(x) : x in C} is null if and only if C is meagre, and {f(x) : x in D} is meagre if and only if D is null.

This involution f is highly discontinuous, being constructed using transfinite induction up to 2^{\aleph_0} (which, by assuming the continuum hypothesis, is also equal to the first uncountable cardinal \aleph_1). Shingo Saito describes the construction in detail.

Posted in Uncategorized | Leave a comment

Keep your public keys private

Yes, the title sounds very counterintuitive. After all, don’t digital signature schemes require the general public to know your public key so that they can verify your signatures?

That is correct, but importantly they don’t need to know your public key until the very moment that you actually want to sign something. Instead, you can publish a hash of your public key long beforehand, and only publish your public key at the moment you digitally sign the message. The verifier then checks that the digital signature is valid using that public key, and then also checks that the public key is consistent with the hash you published.

The pseudonymous inventor of Bitcoin, Satoshi Nakamoto, must have realised this when he or she wrote the whitepaper. A Bitcoin address (to which you send coins) is a RIPEMD-160 hash of a SHA-256 hash of the elliptic curve public key. Importantly, this hash is not the same thing as the public key itself.

Why does this matter? It turns out that the hash offers a much stronger level of security than the elliptic curve:

In particular, if an attacker knows only your Bitcoin address, then there’s no easier way for an attacker to steal your coins than by brute-force: generate a random private key, derive the public key, hash it to obtain an address, and see if it matches. It would take an expected 2^160 iterations of this approach to steal your coins.

Note that the attacker will almost certainly end up with a different private key from the one you created, but it will ‘coincidentally’ be able to unlock the coins in the same address. This is because the 160-bit space of addresses is 2^96 times smaller than the 256-bit space of elliptic curve keys.

What if an attacker knows your elliptic curve public key? Then, using the Pollard rho algorithm, it ‘only’ takes on the order of 2^128 iterations to determine your private key. That’s still humongous; even with all of the computing power currently available on the planet, it would take millions of years to reverse-engineer your private key.

So why should you be concerned? There are two reasons:

  • There’s a polynomial-time quantum algorithm for breaking elliptic curve discrete logarithm. It uses the same ‘period-finding’ subroutine as Shor’s factorisation algorithm. It’s still far beyond current quantum computing technology, with the best published upper bound requiring 128 billion Toffoli gates to break a 256-bit elliptic curve discrete logarithm, but quantum computing progress has been accelerating in recent years.
  • More of a concern is that Pollard’s rho algorithm might not be the best classical algorithm for solving the elliptic curve discrete logarithm problem. Prime factorisation, for example, became much easier as recently as 1996 when the General Number Field Sieve was invented (also by Pollard). It’s plausible that a secret governmental organisation has a faster method of cracking elliptic curve discrete logarithm.

If you’re unconvinced that this second reason is even remotely plausible, and strongly believe that Pollard rho is obviously the best possible algorithm for solving elliptic curve discrete logarithm, then you should ask yourself why serious cryptographers such as Joe Silverman are bothering to develop alternative methods.

(Before you dismiss this as an argumentum ad verecundiam, note that this is not purporting to be an argument that elliptic curve discrete logarithm is definitely insecure. Rather, it’s an argument that there’s no known proof that it is secure, because if such a proof did exist, then there would have been no point in Silverman proposing an approach that is guaranteed to fail.)

So, 2^128 iterations should be regarded as ‘an upper bound, which is tight assuming that there will be no academic, industrial, or governmental progress on finding faster algorithms’. And if a large-scale fault-tolerant quantum computer is developed, be very afraid…

On the other hand, cryptographic hash functions such as SHA-256 and RIPEMD-160 (both of which are composed to derive the Bitcoin address from the public key) are designed to thoroughly mush the input in as chaotic a manner as possible*. They have no nice mathematical structure by design, so it’s very unlikely that there’s a better approach than the 2^160-iteration brute-force algorithm described above.

*that’s not to say that hash functions are just kitchen sinks of arbitrary operations. They’re still very carefully designed to have good dispersion properties, be resistant to a bunch of different cryptographic attacks, produce statistically random output, and achieve these goals efficiently (in terms of speed in software/hardware implementations).

The take-home message

To reiterate:

  • if you share your public key, then your security level is “hopefully 128 bits, but maybe some organisation has found a faster method and is keeping it quiet”;
  • if you don’t share your public key until you absolutely need to (when signing a transaction), then your security level is “almost definitely 160 bits”.

You should feel much more confident if your Bitcoin is in an address where the world doesn’t know your public key. This means that whenever you spend any Bitcoin from an address (inevitably revealing the public key in the process), you should empty the entire address and send the remaining balance to a fresh unused address.

You’re still revealing your public key, but only very briefly: as soon as the block containing that transaction is confirmed, it becomes incredibly computationally difficult to rewrite the history. In essence, it only gives an evil adversary a maximum of a few minutes to try to break the discrete logarithm problem.

Many wallet implementations create fresh addresses for every transaction, so my recommendation is to use one of those (e.g. the Trezor hardware wallet).

Since ‘not reusing addresses’ is already known to be best practice, you might be tempted to ask:

Is this advice really necessary?

Apparently so.

At the time of writing, someone has 94258 Bitcoin* in a regular (pay-to-public-key-hash) address which has revealed its public key. So, if you are reading this and are the owner of 1P5ZEDWTKTFGxQjZphgWPQUpe554WKDfHQ, then I’d recommend moving the balance to a fresh address imminently.

*that’s about 3 billion dollars.

For example, if you look at one of the recent outgoing transactions, the ‘sigscript’ is the following:

473044022100ff4fa243701eea0bdba8615a8bc4f01df387
aae215a90fe382eaf5405ee0ad73021f6f814e02b88e1fbb
1b0d5099970da6cf426ef58027c4b99b1afb633bb5b62801
21024b83426cf9bff257261d87f2f2858b51b2eea756c0123c7e05bc0a007425c9f2

The last line here (68 hexadecimal characters, i.e. 34 bytes) contains 0x21 (meaning ’33’) followed by the 33-byte public key (a point on the elliptic curve). That is to say, there’s currently 3 billion dollars resting in an oft-reused Bitcoin address with an exposed public key, protected only by the difficulty of the elliptic curve discrete logarithm problem.
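
To make the relationship between the public key and the address concrete, here is a short Python sketch (hashlib’s ‘ripemd160’ requires OpenSSL support) which takes that 33-byte public key and computes the RIPEMD-160-of-SHA-256 digest that the address encodes:

import hashlib

# The compressed public key: the 33 bytes after the 0x21 length prefix above.
pubkey = bytes.fromhex(
    '024b83426cf9bff257261d87f2f2858b51b2eea756c0123c7e05bc0a007425c9f2')

digest = hashlib.new('ripemd160', hashlib.sha256(pubkey).digest()).digest()
print(digest.hex())
# Base58Check-decoding the address 1P5ZEDWTKTFGxQjZphgWPQUpe554WKDfHQ (not done
# here) should recover this same 20-byte hash.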

That’s a very large and unnecessary bet to be making against the collective innovative capability of the world’s algebraic geometers and cryptographers…

Posted in Bitcoin | Leave a comment

The neural network of the Stockfish chess engine

Last time, we briefly mentioned the high-level differences between Stockfish and Leela Chess.

To recap, Stockfish evaluates about 100 million positions per second using rudimentary heuristics, whereas Leela Chess evaluates 40 000 positions per second using a deep neural network trained from millions of games of self-play. They also use different tree search approaches: Stockfish uses a variant of alpha-beta pruning, whereas Leela Chess uses Monte Carlo tree search.

An important recent change to Stockfish was to introduce a neural network to evaluate the positions in the search tree, instead of just relying on hardcoded heuristics. It’s still much simpler than Leela Chess’s neural network, and only slows down Stockfish to exploring 50 million positions per second.

The real cleverness of Stockfish’s neural network is that it’s an efficiently-updatable neural network (NNUE). Specifically, it’s a simple feedforward network with:

  • a large (10.5M parameters!) input layer, illustrated below, that can utilise two different levels of sparsity for computational efficiency;
  • three much smaller layers (with 17.5k parameters in total) which are evaluated densely using vector instructions;
  • a single scalar output to give a numerical score for the position, indicating how favourable it is for the player about to move.

Everything is done using integer arithmetic, with 16-bit weights in the first layer and 8-bit weights in the remaining layers.

The input layer

Let’s begin by studying the first — and most interesting — layer. Here’s an illustration I made using Wolfram Mathematica:

The inputs to the layer are two sparse binary arrays, each consisting of 41024 elements. It may seem highly redundant to encode a chess position using 82048 binary features, but this is similar to an approach (called ‘feature crosses’) used in recommender systems.

What are the two sparse binary arrays, and why do they have 41024 features? My preferred way of interpreting these two arrays is as the ‘worldviews’ of the white king and the black king. In particular, the differences are:

  • Coordinate systems: because black and white pawns move in opposite directions, the two players need to ‘see the board from opposite angles’. This already happens in regular (human) chess because the two players are seated opposite each other at the chessboard, so one player’s ‘top’ is the other player’s ‘bottom’. If you imagine each player numbering the squares from top to bottom, left to right, then any physical square would be called n by one player and 63 − n by the other player.
  • Piece types: instead of viewing the pieces as black or white, a player sees each piece as either ‘mine’ or ‘theirs’. The 10 non-king piece types are thus {my pawn, their pawn, my knight, their knight, my bishop, their bishop, my rook, their rook, my queen, their queen}.

The reason for these ‘player-relative coordinate systems’ is that it means that Stockfish can use the same neural network irrespective of whether it’s playing as white or black. The network uses both your king’s worldview and your enemy king’s worldview for evaluating a position, because they’re both highly relevant (you want to protect your own king and capture your enemy’s king).

So, why does each worldview have 41024 features? It can be seen as an outer product (or tensor product) of:

  • a 64-element feature encoding the position of the king whose worldview this is, in their own coordinate system. This is ‘one-hot encoding’, where exactly one of the 64 entries is ‘1’ and the other 63 entries are ‘0’.
  • a 641-element feature encoding, for each of the 64 × 10 ordered pairs (square, piece-type), whether or not that square is occupied by that piece. The 641st element is unused, and is (according to the Chess Programming Wiki) apparently a result of the network being ported from Shogi to chess.

Each of the two 64-by-641 outer product matrices* is then flattened (the matrix is ‘reshaped’ into a vector with the same entries) to yield the corresponding 41024-element ‘sparse worldview’. In the input layer, each of the two 41024-element sparse worldviews are then affinely transformed to form a 256-element ‘dense worldview’.

*Important note: the 41024-element sparse binary arrays are never explicitly materialised, either as a 64-by-641 matrix or as a 41024-element vector. The Stockfish NNUE effectively ‘fuses’ the construction of these sparse vectors with the subsequent affine transformation (described below), updating the 256-element dense worldviews directly when the configuration of pieces on the chessboard is modified.

There are two levels of sparsity which are utilised when computing this affine transformation from \mathbb{R}^{41024} to \mathbb{R}^{256}, allowing the network to be efficiently evaluated many times in a tree search:

  • the 41024-element implicit vectors are themselves sparse: the number of nonzero elements is equal to the number of non-king pieces on the board.
  • moving a piece typically changes very few of the entries of the vector: if it’s a regular non-king move, only 2 entries change; if it’s a non-king move with capture, then 3 entries change.

It’s this second aspect which warrants the name ‘efficiently updatable’: when a move is made (or unmade, since we’re doing a tree search), we only need to add/subtract a few 256-element matrix columns from the resulting ‘dense worldview’ to update it.

Unless a king is moved, this (2 or 3 vector additions/subtractions) beats summing all of the matrix columns corresponding to nonzero entries (up to 30 vector additions), which in turn unconditionally beats doing a regular dense matrix-vector multiplication (41024 vector additions). That is to say, the second-level sparsity is about 10 times more efficient than the first-level sparsity, which is in turn about 1000 times more efficient than naively doing a dense matrix-vector multiplication.
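
Here is a toy NumPy sketch of the idea, with random weights and simplified feature bookkeeping (this is not Stockfish’s exact HalfKP indexing or file format, just the shape of the computation):

import numpy as np

N_FEATURES, N_DENSE = 41024, 256
rng = np.random.default_rng(0)
W = rng.integers(-50, 50, size=(N_FEATURES, N_DENSE), dtype=np.int16)   # first-layer weights
bias = rng.integers(-100, 100, size=N_DENSE).astype(np.int32)

def fresh_accumulator(active_features):
    """Full refresh (needed after a king move): sum one column per active feature."""
    acc = bias.copy()
    for f in active_features:
        acc += W[f]
    return acc

def update_accumulator(acc, removed, added):
    """Incremental update on a non-king move: only 2 or 3 columns change."""
    for f in removed:
        acc -= W[f]
    for f in added:
        acc += W[f]
    return acc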

The two dense worldviews are concatenated according to which player is about to move, producing a 512-element vector, which is elementwise clamped to [0, 127]. This elementwise clamping is the nonlinear activation function of the input layer, and (as we’ll describe) the hidden layers use a similar activation function. We can think of this as a ‘clipped ReLU’, which is exactly what the Stockfish source code calls it.

The remaining layers

The two hidden layers each use 8-bit weights and 32-bit biases. The activation function first divides the resulting 32-bit integer by 64 before again clamping to [0, 127], ready to be fed into the next layer. The output layer also uses 8-bit weights and 32-bit biases, but with no nonlinear activation function.

The first hidden layer takes 512 inputs (the clamped concatenated worldviews) and produces 32 outputs. The second hidden layer takes those 32 values as inputs, and again produces 32 outputs. The output layer takes those 32 values as inputs, and produces a single scalar output.

Since these subsequent layers are applied to dense vectors, they can’t use the same ‘efficiently updatable’ approach as the input layer; that’s why they’re necessarily substantially smaller. They can, however, use hardware vectorisation instructions (SSE/AVX) to apply the linear transformation and activation function.
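
For completeness, here is a toy NumPy version of the dense part, again with random weights (the real engine loads trained parameters and uses hand-vectorised kernels rather than generic matrix routines):

import numpy as np

rng = np.random.default_rng(1)

def dense_layer(x, out_dim, activation=True):
    """8-bit weights, 32-bit biases; the activation divides by 64 and clamps to [0, 127]."""
    W = rng.integers(-64, 64, size=(out_dim, x.size), dtype=np.int8)
    b = rng.integers(-1000, 1000, size=out_dim).astype(np.int32)
    y = W.astype(np.int32) @ x.astype(np.int32) + b
    return np.clip(y // 64, 0, 127).astype(np.int8) if activation else y

x = rng.integers(0, 128, size=512).astype(np.int8)   # clamped concatenated worldviews
h1 = dense_layer(x, 32)
h2 = dense_layer(h1, 32)
score = dense_layer(h2, 1, activation=False)[0]      # scalar evaluation of the position
print(int(score))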

This scalar output is then further postprocessed using other Stockfish heuristics, including taking into account the 50-move rule which isn’t otherwise incorporated into the evaluation function.

Observe that the neural network doesn’t actually have complete information, such as whether a pawn has just moved two squares (relevant to en passant), whether a king is able to castle, and various other information pertaining to the rather complicated rules for determining a draw. This is okay: the network is only being used as a cheap approximate evaluation for a position; when deciding what move to make, Stockfish performs a very deep tree search and only uses this approximate evaluation in the leaves.

Equivalently, you can think of this as being a massive refinement of the approach of ‘look a few moves ahead and see whether either player gains a material or positional advantage’, using a neural network as a much more sophisticated position-aware alternative of crudely ‘counting material’.

This is a neural network, so the weight matrices and bias vectors of the layers were learned by training the network on millions of positions and using backpropagation and a gradient-based optimiser. Of course, for supervised learning, you need a ‘ground truth’ for the network to attempt to learn, which seems somewhat circular: how do you even gather training data? The answer is that you use the classical version of Stockfish to evaluate positions using the deep tree search, and use that as your training data.

In theory, you could then train another copy of the NNUE using the NNUE-enhanced Stockfish as the evaluation function, and then iterate this process. Leela Chess does the same thing: its current network is trained on positions evaluated by using deep lookahead with the previous network as the leaf evaluation function. Note that the construction of the training data is orders of magnitude more expensive than training the network with this data, as you’re doing thousands of evaluations of the previous network (owing to the deep tree search) to construct each item of training data for the new network.

Further reading

The network is described and illustrated on the Chess Programming Wiki, which also has tonnes of references to forum discussions and other references. The first description of an NNUE was a Japanese paper by Yu Nasu (who suggested it for the board game Shogi instead of chess); the paper has since been translated into English and German. There’s also the Stockfish source code, which is very well organised (there’s a directory for the NNUE) and clearly written.

Posted in Chess | 9 Comments

Rigid heptagon linkage

In January 2000, Erich Friedman considered the problem of finding a rigid unit-distance graph G containing a regular heptagon as a subgraph. That is to say, the graph is immersed in the plane such that:

  • every edge of G must have unit length;
  • there exists some ε > 0 such that any ε-perturbation of the vertices of G, other than one induced by an isometry of the plane, results in at least one of those edges having non-unit length;
  • G contains a subgraph H isomorphic to a 7-cycle, such that the vertices and edges of H are those of a regular heptagon.

Last year, Watanabe Masaki discovered a solution where G has only 59 edges:

Watanabe Masaki’s 59-edge solution (blue lines represent edges)

On 19th December, Ed Pegg asked on Math StackExchange whether the following 42-edge configuration (invariant under an order-7 cyclic group of rotations, same as the heptagon) is rigid:

Ed Pegg’s 42-edge candidate graph

Jeremy Tan and William R. Somsky independently and concurrently proved that Pegg’s graph is indeed rigid, and that some of the vertices are redundant: it can be reduced to a 35-edge subgraph with the same rigidity property, significantly improving on Masaki’s upper bound of 59:

Jeremy’s 35-edge subgraph

Tan won the race by a narrow margin, posting a proof whilst Somsky was still writing up his solution. The proof also mentions that this graph is minimal (no proper subgraph is a solution), but it is an open problem whether the graph is minimum (has the smallest number of edges amongst all solutions).

This raises the question: can you find a solution with 34 or fewer edges?

In the other direction, what lower bounds can you find? A minimal example must be a Laman graph and therefore have 2n − 3 edges where n is the number of vertices. Each additional vertex can only be adjacent to at most 2 of the original vertices (by unit-distance-ness), so we have:

2n - 3 = (\text{number of edges}) \leq \binom{n-7}{2} + 2(n-7) + 7

Writing m = n − 7 for the number of additional vertices, this simplifies to m(m − 1) ≥ 8, which gives a lower bound of 4 new vertices (11 vertices total) and hence 2 × 11 − 3 = 19 edges. The gap between my rather weak lower bound of 19 and Jeremy Tan’s upper bound of 35 is vast; can we narrow it in either direction?
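
If you want to experiment with candidate graphs numerically, one convenient tool is the rigidity matrix: if a planar framework’s rigidity matrix has rank 2n − 3, the framework is infinitesimally rigid and therefore rigid (the converse can fail, so this is only a sufficient test). Here is a small Python sketch, demonstrated on a unit triangle and a unit 4-cycle:

import numpy as np

def rigidity_rank(points, edges):
    """Rank of the rigidity matrix: one row per edge, one column pair per vertex."""
    pts = np.asarray(points, dtype=float)
    R = np.zeros((len(edges), 2 * len(pts)))
    for row, (i, j) in enumerate(edges):
        d = pts[i] - pts[j]
        R[row, 2*i:2*i+2] = d
        R[row, 2*j:2*j+2] = -d
    return np.linalg.matrix_rank(R)

def is_infinitesimally_rigid(points, edges):
    return rigidity_rank(points, edges) == 2 * len(points) - 3

tri = [(0, 0), (1, 0), (0.5, 3**0.5 / 2)]
print(is_infinitesimally_rigid(tri, [(0, 1), (1, 2), (2, 0)]))          # True
sq = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_infinitesimally_rigid(sq, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # False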

Other news

Siobhan Roberts has published an article in the New York Times for the 50th anniversary of Martin Gardner’s initial publication of Conway’s Game of Life. The article mentions several recent discoveries, including the period-21 spaceship discovered by John Winston Garth using the search program ikpx2 described in a previous post.

Around the same time the article was published (and therefore too late to be mentioned therein), Dylan Chen discovered a 3c/7 spaceship he named ‘Anura’. It is the second elementary 3c/7 spaceship to be discovered (after Tim Coe’s spaghetti monster) and by far the smallest (313 cells, as opposed to 702 for the spaghetti monster).

Dylan Chen’s new 3c/7 spaceship, Anura.

Posted in Uncategorized | Leave a comment

Barreto-Naehrig curves and cryptographic pairings

There’s a very elegant cryptographic construction discovered by Barreto and Naehrig in a 2005 paper. It is beautiful from a pure mathematical perspective, but also has an impressive application: it was* part of the ingenious mechanism by which Zcash supports publicly-verifiable private transactions.

‘Publicly-verifiable private transactions’ sound paradoxical: they’re transactions where the inputs and outputs are cryptographically obfuscated (so no-one can see how much money is changing hands), but where it’s still possible for a member of the public to verify that all inputs and outputs are nonnegative and that the sum of the inputs equals the sum of the outputs (so there’s no ‘cheating’ happening).

If you’re not amazed by this, then I haven’t explained it properly: all account balances are encrypted, and the amounts of money changing hands in a transaction remains completely encrypted, but the transactions include a certificate which proves that the inputs and outputs (which are encrypted) satisfy nonnegativity and conservation of money. No information about the inputs and outputs is revealed, other than the fact that the transaction doesn’t ‘cheat’. This is an example of a ‘zero-knowledge proof’, and I still find it completely and utterly surprising that these things are even possible.

The rest of this article will attempt to explain one of the key mathematical ingredients (a construction of cryptographic pairings) and then briefly outline how it fits into these zero-knowledge proofs used in Zcash.

*Zcash changed their curve in 2017 from a Barreto-Naehrig curve to a BLS curve. BLS curves end up being marginally more efficient (higher level of security for a given size of elliptic curve), which is why the programmers made the change. The principle is still the same, namely constructing an elliptic curve which supports ‘pairings’, but there are various quantitative differences between Barreto-Naehrig and BLS curves. The reason for concentrating on Barreto-Naehrig curves is that they’re somewhat simpler and more aesthetically pleasing.

What are elliptic curves?

[If you know about the elliptic curve group law and the Cayley-Bacharach theorem from algebraic geometry, feel free to skip this section.]

The points on an elliptic curve (cubic curve with no singularities) can be made into an Abelian group. In particular, we define an operation by the following:

  • take an elliptic curve Γ;
  • choose one of its points of inflection and take that to be the ‘identity point’, 0;
  • assert that P + Q + R = 0 whenever there exists a line which intersects the curve at P, Q, and R. (Note that if there’s a repeated point, e.g. P = Q, then the line must have a ‘double intersection’ (point of tangency) at that point. Similarly, P + P + P = 0 if and only if P is a point of inflection; that’s why the identity point 0 must be one of the points of inflection.)

Traditionally, people often study elliptic curves in Weierstrass normal form, where the equation of the curve is y^2 = x^3 + ax + b. The identity point 0 is then typically chosen to be the point ‘at infinity’ where the curve intersects the line at infinity on the projective plane.

To show that this works as a definition, we firstly need to see that it’s well defined. In particular, given two points P and Q, how do we know that the third intersection R with the line ℓ through P and Q is uniquely determined?

In particular, ℓ is a linear equation, so we can use it to express x in terms of y or vice-versa. Substituting into the elliptic curve equation Γ gives a cubic equation in one variable. We know that this cubic equation has two roots (those corresponding to P and Q), so we can divide the cubic by those linear factors to determine the third root (corresponding to R).

Note that we didn’t assume that the ambient field was algebraically closed. This is important, because cryptographers use elliptic curves over finite fields, and a finite field cannot be algebraically closed.

This gives the following procedure for ‘adding’ two points:

  • draw a line through P and Q and let it intersect the curve again at R;
  • draw a line through R and 0 and let it intersect the curve again at S.

Then S is the sum of P and Q. This operation is commutative (interchanging P and Q does not affect the result), but how do we know that it’s associative? In particular, given the following diagram from a talk I gave in Winchester a couple of years ago, how do we know that the elliptic curve (blue), orange line, and green line all mutually intersect at the bottom-left?

It’s painful to verify this algebraically, but it follows immediately from the Cayley-Bacharach theorem. We’ve previously discussed this theorem, along with several miscellaneous applications to Euclidean geometry.

Elliptic curve cryptography and the discrete logarithm problem

There are two reasons why elliptic curve cryptography requires the use of a finite field instead of the real or complex numbers. One reason is practicality: there are uncountably many reals, most of which cannot be described, and therefore they cannot feasibly be represented on a computer.

The other reason is security: the Weierstrass elliptic function allows you to construct an isomorphism between the elliptic curve and a torus (specifically, the complex plane quotiented out by an appropriate lattice), and the ‘discrete logarithm problem’** on a torus is trivial; you can solve it efficiently using continued fractions.

**given two points, A and B, determine an integer m such that mA = B, where mA := A + A + … + A just means ‘the point A added to itself m times’.

On the other hand, for elliptic curves over a finite field, there is no known way to efficiently solve the elliptic curve discrete logarithm problem on a classical computer; this is how HTTPS and Bitcoin digital signatures remain secure. On a quantum computer you can use Shor’s algorithm, but you’d need a fault-tolerant 2330-qubit quantum computer with 128 billion Toffoli gates to break the 256-bit curve ‘secp256k1’ used for Bitcoin digital signatures, and that seems to be beyond the current technological capabilities of large commercial*** organisations.

***I doubt governments have this technology either. After all, they use Excel spreadsheets as a database for tracking the spread of a pandemic.

Anyway, note the following asymmetry:

  • It is very easy, given a large number m and a point G, to compute the point mG: using a ‘double-and-add’ procedure, you can do it in log2(m) ‘doubles’ and H(m) ‘additions’, where H(m) is the number of ‘1’-bits in the binary expansion of m. This procedure was how Ancient Egyptians multiplied ordinary integers.
  • On the other hand, it’s infeasible to go the other way. Given points G and mG, there is no known classical algorithm to reverse-engineer this to extract the original number m, without performing an amount of work proportional to the square-root of the number of points on the elliptic curve. Bitcoin’s elliptic curve has roughly 2^256 points, so it would take about 2^128 operations to steal a private-key using the Pollard rho algorithm.

G is just a global ‘base point’ on the elliptic curve Γ which (together with the curve itself) is a public parameter of the cryptosystem. The order n of G (smallest integer such that nG = 0) must be prime. Then we have an isomorphism from [the additive group of] \mathbb{F}_n to the elliptic curve:

\mathbb{F}_n \rightarrow \Gamma

m \mapsto mG

which is cryptographically irreversible.

[Note: the prime order n of the base point G is not the same as the prime order p of the field in which the elliptic curve itself lives. If G is a generator of the curve, then p and n will be relatively close but not necessarily equal. In the case where they are equal, the elliptic curve is called anomalous, and it has inherent weaknesses.]

Elliptic curve digital signatures

The existence of this one-way function enables various important cryptographic primitives to be built on top of elliptic curves, such as the aforementioned elliptic curve digital signature algorithm (ECDSA) used by HTTPS and Bitcoin. In ECDSA, Alice’s private key is some integer m \in \mathbb{F}_n, and her public key is the corresponding point mG on the elliptic curve Γ.

To sign a message M, Alice firstly computes a cryptographic hash z of M. In general, a cryptographic hash is a fixed-length bitstring (for SHA-256, it consists of 256 bits). In this case, we interpret z as an element of \mathbb{F}_n by interpreting it as a binary number and reducing modulo n.

Then, Alice computes a single-use cryptographically secure random number k, also in the field \mathbb{F}_n, and reveals the following:

  • the abscissa (x-coordinate) r of the curve point kG;
  • the value s = k^{-1}(z + rm) \mod n.

Neither r nor s is allowed to be zero; if this happened (incredibly unlikely!), Alice should generate a new value k and try again. These data (r, s) together form the digital signature for the message M. Bob can verify that Alice created the signature by computing the cryptographic hash z and checking that the curve point (r/s)(mG) + (z/s)G has abscissa r. This only requires Bob to know the public-key mG, not the private-key m.

ECDSA key-pairs can be reused to sign many messages, but you must generate a different random number k each time you sign a message. Otherwise, it’s possible for someone to determine your private-key. Indeed, several bitcoins were stolen by attackers as a result of a poor random number generator on early versions of the Android operating system.
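
Here is a toy Python implementation of the whole signing/verification procedure over the textbook curve y^2 = x^3 + 2x + 2 over \mathbb{F}_{17}, whose group has prime order n = 19 with generator G = (5, 1). These are toy parameters only; real deployments use a standard curve such as secp256k1, but the structure of the code is identical.

import hashlib, secrets

p, a, b = 17, 2, 2            # toy curve y^2 = x^3 + 2x + 2 over F_17
G, n = (5, 1), 19             # generator and its (prime) order

def inv(x, m):
    return pow(x, -1, m)      # modular inverse (Python 3.8+)

def add(P, Q):
    """Elliptic-curve point addition; None plays the role of the identity 0."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def hash_to_int(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

def sign(m_priv, msg):
    z = hash_to_int(msg)
    while True:
        k = secrets.randbelow(n - 1) + 1                # fresh random k each time
        r = mul(k, G)[0] % n                            # abscissa of kG
        s = inv(k, n) * (z + r * m_priv) % n
        if r and s:
            return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = hash_to_int(msg)
    P = add(mul(z * inv(s, n) % n, G), mul(r * inv(s, n) % n, pub))
    return P is not None and P[0] % n == r

m_priv = 7                         # Alice's private key m
pub = mul(m_priv, G)               # her public key mG
sig = sign(m_priv, b'hello')
print(verify(pub, b'hello', sig))  # True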

Cryptographic pairings

A cryptographic pairing on an elliptic curve Γ is a bilinear map from Γ × Γ’ to [the multiplicative group of] some field F, where Γ’ is another elliptic curve isomorphic to Γ and related by a ‘twist’ (explained here). That is to say that:

\langle aP, bQ \rangle = \langle P, Q \rangle^{ab}

where P, Q are curve points on Γ, Γ’ (respectively) and a, b are integers. Note that the existence of a cryptographic pairing means that the elliptic curve discrete logarithm (hard) on Γ can be transported to the ordinary discrete logarithm (not quite as hard for a given size) on the field F. As such, the field F needs to be substantially larger than the curve Γ, lest it be the Achilles heel of the cryptosystem.

The field F is a finite field \mathbb{F}_{p^k}, whose characteristic p matches that of the ambient field \mathbb{F}_p in which the elliptic curve Γ lives. The minimal degree k for which a pairing exists is called the embedding degree of the elliptic curve.

For Bitcoin’s curve, the embedding degree is humongous (comparable to the number of points on Γ), which makes the pairing impossible to use. On the other hand, if k were very small (e.g. 1 or 2), the discrete logarithm in F would be much weaker than the discrete logarithm in Γ, so you’d need a massive elliptic curve to attain a desired level of security, and that would come at a computational cost.

The Barreto-Naehrig curves are a family of elliptic curves with a good embedding degree k = 12, so you can (for example) have a 256-bit elliptic curve with a 3072-bit embedding field F. This is what Zcash previously used, but it transpires that the discrete logarithm in a 3072-bit field is potentially slightly weaker than the intended security level. This means you’d want to use a slightly larger elliptic curve (384 or 512 bits), with a correspondingly larger 4608- or 6144-bit embedding field F, respectively.

Details of the Barreto-Naehrig construction

The size of a Barreto-Naehrig curve is parametrised by an integer x. The values p and n are quartic polynomials in the value x:

  • p = 36 x^4 + 36 x^3 + 24 x^2 + 6 x + 1
  • n = 36 x^4 + 36 x^3 + 18 x^2 + 6 x + 1

Observe that the difference between them is only 6 x^2, which is slightly lower than the square-root of p (or n), consistent with Hasse’s bound. The validity of the construction relies on the fact that n is a factor of the 12th cyclotomic polynomial evaluated at p − n = 6 x^2:

In[5]:= InputForm@Factor@Cyclotomic[12, 6 x^2]

Out[5]//InputForm= 
(1 - 6*x + 18*x^2 - 36*x^3 + 36*x^4)*(1 + 6*x + 18*x^2 + 36*x^3 + 36*x^4)

We need to choose a value of x such that these two numbers are prime; the first few such positive values of x are:

1, 5, 6, 7, 20, 78, 82, 123, 166, 169, 173, 202, 257, 295, 308, 321, 420, 438, 448, 460, 487, 543, 596, 650, 720, 798, 810, 811, 833, 845, 869, 872, 921, 981, …

which apparently isn’t yet in the OEIS. (I’ll add it.)

Of course, those values of x are far too small for cryptography. If you want a 256-bit elliptic curve, then you’ll want to choose x a little below 2^63, since p ≈ 36 x^4. By the prime number theorem, a randomly chosen x of this size gives prime p and n with probability proportional to 1/log(x)² (each of p and n is roughly x^4, so each is prime with probability about 1/(4 log x)).
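Here’s a quick Python sketch (using sympy’s primality test; bn_p and bn_n are my own names) that reproduces the small values above; the same loop, started just below 2^63, is how you’d hunt for cryptographic-size parameters:

from sympy import isprime

def bn_p(x): return 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1
def bn_n(x): return 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1

# the sequence listed above
print([x for x in range(1, 1000) if isprime(bn_p(x)) and isprime(bn_n(x))])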

After you’ve chosen a suitable x which passes both primality checks for p and n, you need to build the curve itself. Rather like Bitcoin’s elliptic curve secp256k1, the coefficient ‘a’ in the equation y^2 = x^3 + a x + b is zero. [Note: the parameter x is not the same as the coordinate x; they just happen to have the same name.]

To determine b, you just keep trying successive values b = 1, 2, 3, … until you find one that works, as described in the algorithm in their paper.
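Sketching that loop in Python (this is my paraphrase of the construction, not a transcription of their algorithm, and find_b is my own name): since a = 0, a candidate b gives a point (1, √(b+1)) on the curve whenever b + 1 is a square mod p, and if n times that point is the identity then the curve order is forced to be exactly n (n is prime, and by Hasse’s bound the order can’t be a larger multiple of n). This reuses the ec_add/ec_mul helpers from the ECDSA sketch.

from sympy.ntheory import sqrt_mod

def find_b(p, n):
    b = 0
    while True:
        b += 1
        y = sqrt_mod(b + 1, p)          # None if b + 1 is a non-residue
        if y is None:
            continue
        G = (1, y)                      # a point on y^2 = x^3 + b
        if ec_mul(n, G, p) is None:     # nG = identity, so the order is n
            return b, G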

Once you have the curve, how do you compute the pairing? There’s an algorithm by Miller (1986) for efficiently computing Tate/Weil pairings on arbitrary elliptic curves, and a paper by Devegili, Scott, and Dahab (2007) describes an optimised implementation of Tate and Ate pairings specifically for Barreto-Naehrig curves.

Interestingly, the paper makes the following comment:

Furthermore, for efficiency reasons in the pairing computation it is desirable to generate curves of prime order n such that n has a low Hamming weight. Constructing such curves for k = 12 or φ(k) > 4 is still a research problem.

The best choice of parameter I could find using the Barreto-Naehrig construction was x = 3(2^75 + 1), which results in n having 312 bits of which only 36 are nonzero.
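This claim is easy to verify directly:

x = 3 * (2**75 + 1)
n = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1
print(n.bit_length(), bin(n).count('1'))    # 312 bits, 36 of which are set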

Why are pairings useful?

They’re useful because they allow more kinds of computation to be performed on encrypted data.

Simply put, in the same way that an elliptic curve supports addition of numbers that have been encrypted as points, a pairing supports multiplication of encrypted numbers. It’s somewhat restricted, because the ‘product’ of two points belongs to F instead of Γ (i.e. it has a different type from those of the two multiplicands), so you can’t directly compute an encrypted product of three or more encrypted numbers. This is why pairings fall short of fully homomorphic encryption.

Despite this constraint, it’s still possible to take your desired computation (expressed as an arithmetic circuit) and compile it into a system of constraints that can be verified using pairings.

There’s an excellent explanation of zk-SNARKs here, which pedagogically illustrates this property of a pairing using the following diagram:

Illustration by Maksym Petkus from his article, Why and How zk-SNARK Works: Definitive Explanation

Petkus’s explanation abstracts away the particular choices of cryptographic primitives (the words ‘elliptic curve’ being mentioned only once in the whole article), but it’s useful additional context to know that the ‘Source set’ above is the elliptic curve Γ and the ‘Output set’ is the much larger embedding field F.

In addition to Petkus’s explanation, I’d strongly recommend also reading this series of blog posts by Ariel Gabizon.


Shallow trees with heavy leaves

There are two very different state-of-the-art chess engines: Stockfish and Leela Chess Zero.

  • Stockfish searches many more positions (100 000 000 per second) and evaluates them using computationally cheap heuristics. The tree search methodology is a refinement of alpha-beta pruning.
  • Leela Chess Zero, a descendant of DeepMind’s AlphaZero, searches far fewer positions (40 000 per second), using a deep convolutional neural network (trained through millions of games of self-play) to evaluate positions. Its search methodology is also different, namely Monte Carlo tree search.

The neural network employed by Leela Chess Zero is itself fascinating, using an architecture similar to the SE-ResNet image classification model. This residual tower of squeeze-and-excitation layers feeds into separate ‘value’ and ‘policy’ heads, for evaluating the position and deciding what to do next, respectively.

However, I want to talk more about the general strategy of searching far fewer positions and expending more effort on each position. In particular, this major difference between Stockfish and Leela Chess Zero is reflected in two of the many search programs used to find spaceships in Conway’s Game of Life and related cellular automata:

  • The program ntzfind, originally written by a pseudonymous author ‘zdr’ and enhanced by Matthias Merzenich, Aidan Pierce, and Tom Rokicki, is a depth-first tree search which uses a huge in-memory lookup table to find all possible choices for the next row based on previous rows.
  • The new program ikpx2, adapted from the program I wrote to find the first elementary knightship, is more analogous to Leela Chess Zero in that its search tree is much smaller, but the amount of work done at each node is much greater.

In particular, ikpx2 uses a SAT solver to perform a deep lookahead to determine whether the current partial spaceship can be fully extended for several* more rows, and is therefore not close to a ‘dead end’. By comparison, ntzfind can only rule out a partial by performing a depth-first traversal of the entire subtree.

*this is configurable, but a typical value is 30.

The SAT solvers used are Armin Biere’s kissat and CaDiCaL. Kissat is more optimised than CaDiCaL, but doesn’t support incremental solving yet. As such, ikpx2 tries to learn (using basic reinforcement learning, specifically a multi-armed bandit model) which SAT solver is more appropriate for a given task. Theoretically, we could add additional state-of-the-art SAT solvers such as Mate Soos’s cryptominisat, and use a more powerful reinforcement learning model (such as a neural network) to evaluate the subproblem and decide which solver to use.
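The solver-selection logic is essentially a two-armed bandit. Here’s a generic sketch of the idea (not ikpx2’s actual code; SolverBandit and its methods are hypothetical names), using the UCB1 rule with ‘negative wall-clock time’ as the reward:

import math

class SolverBandit:
    def __init__(self, solvers):                 # e.g. ['kissat', 'cadical']
        self.solvers = solvers
        self.counts  = {s: 0   for s in solvers}
        self.mean    = {s: 0.0 for s in solvers}

    def choose(self):
        for s in self.solvers:                   # try every solver once first
            if self.counts[s] == 0:
                return s
        total = sum(self.counts.values())
        # UCB1: exploit the solver with the best average reward, but keep
        # exploring solvers that have been tried comparatively rarely
        return max(self.solvers, key=lambda s: self.mean[s]
                   + math.sqrt(2 * math.log(total) / self.counts[s]))

    def update(self, solver, solve_time):
        reward = -solve_time                     # faster solve = higher reward
        c = self.counts[solver] = self.counts[solver] + 1
        self.mean[solver] += (reward - self.mean[solver]) / c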

Having this ‘shallow tree with heavy leaves’ means that the current search progress can easily fit entirely within memory. Also, parallelism is easy: we keep a priority queue of tasks (partial spaceships) and have many CPU threads which do the following:

  1. remove a task from the queue;
  2. solve it using SAT solvers;
  3. communicate the results back to the main orchestrating thread.

The results that are sent back to the main orchestrating thread are partials that have been extended (by 30 or so rows). A small initial segment of those rows is added to the in-memory tree as nodes; the remaining rows are not, as otherwise we’d end up with the same full unpruned search tree used by ntzfind and our advantage would be completely annihilated.
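In Python-flavoured pseudocode, the orchestration has roughly this shape (a sketch only, not ikpx2’s actual code; extend_with_sat is a hypothetical stand-in for the SAT-based lookahead):

import queue, threading

tasks   = queue.PriorityQueue()      # (priority, partial spaceship) pairs
results = queue.Queue()              # extended partials, back to main thread

def extend_with_sat(partial):
    raise NotImplementedError        # placeholder for the SAT lookahead step

def worker():
    while True:
        priority, partial = tasks.get()          # 1. remove a task
        extended = extend_with_sat(partial)      # 2. solve it with SAT solvers
        results.put(extended)                    # 3. report back
        tasks.task_done()

for _ in range(12):                  # one worker per CPU thread, say
    threading.Thread(target=worker, daemon=True).start()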

What happens to the remaining rows? Instead of merely discarding them, we take the entire extended partial and simulate it in the cellular automaton! Sometimes these partials evolve into an object that the main search couldn’t have found: for example, the tail might have a higher period than the rest of the spaceship, or leave a trail of debris behind. This idea seems to have first been explored by Paul Tooke in 2001, before being rediscovered 20 years later in the context of ikpx2.

Results

One of the early new ikpx2 search results using Paul Tooke’s idea was this partial, which evolves into a period-528 puffer engine:

Similarly, here is a novel period-21 spaceship found by John Winston Garth. He ran ikpx2 to look for 2c/7 spaceships; within 3 days of running on 12 CPU threads, it had both rediscovered David Eppstein’s weekender and found a wholly new period-21 attachment that clings to its tail:

The user ‘iNoMed’ then discovered that two of these could interact to produce an exhaust that can be perturbed by a nearby weekender to emit a forward-moving glider every 42 generations:

Another feature of ikpx2 is its ability to find objects which have different components of different symmetries. Here, for example, is a spaceship with a symmetric head, asymmetric thorax, and symmetric abdomen. The tail is high-period, again having arisen from simulating a partial rather than from direct SAT solving:

Somewhat disappointingly, ikpx2 has not succeeded in finding any new spaceship velocities in the standard Life rules. I tested it on the same input as the original ikpx used to find Sir Robin; ikpx2 was able to find the same result approximately 50x faster (in terms of core-hours elapsed).

There was a recent near-miss, which would be a (2,1)c/7 spaceship were it not for a single extra cell born in generation 7 (the left configuration, with or without the indicated green triplet, gives rise to the right configuration with the frustrating red tetraplet):

The existence of near-misses such as this one makes me hopeful that it will eventually find a (2,1)c/7 spaceship given more time.

Other cellular automata

Unlike the original version of ikpx, this version is able to search a wide family of related cellular automata. In particular, it extracts the set of prime implicants from the cellular automaton rule (regarded as a 9-input 1-output boolean function) and uses that to encode the mechanics of the rule into the SAT problems.
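For illustration, here’s how one might tabulate a rule as a 9-input boolean function in Python and ask sympy for a sum-of-products cover (a sketch only: ikpx2 itself computes the full set of prime implicants and converts them into CNF clauses, rather than calling sympy):

from itertools import product
from sympy import SOPform, symbols

cells = symbols('c n0:8')              # centre cell plus its 8 neighbours
minterms = []
for bits in product((0, 1), repeat=9):
    centre, live_neighbours = bits[0], sum(bits[1:])
    # Conway's Life: born on 3 neighbours, survives on 2 or 3
    if live_neighbours == 3 or (centre == 1 and live_neighbours == 2):
        minterms.append(list(bits))
print(SOPform(cells, minterms))        # an implicant cover of the rule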

For example, two invocations of ikpx2 were sufficient to find the following growing knightship in the rule Day&Night: one invocation to find the fast (2,1)c/5 frontend which stretches a thick oblique line, and another invocation to find a slower (2,1)c/6 tail which consumes it:

Here’s an 11c/22 spaceship in a nonstandard rule found by lemon41625, which is an even better demonstration of the symmetry switching. The discovery consists of odd-symmetric, asymmetric, and even-symmetric components with a high-period exhaust:

Source code

The source code for ikpx2 is open-source (released under an MIT licence) so you can experiment with it yourself. It’s currently x86_64-specific, because it has apgsearch as a dependency (in order to quickly simulate the partials and examine whether they evolve into anything remotely interesting).
