Hamming backups: a 2-of-3 variant of SeedXOR

SeedXOR is an approach for splitting a Bitcoin wallet seed, adhering to the BIP39 standard, into N ‘parts’ (each the same size as the original) which are each a valid BIP39 seed. It is used by the COLDCARD wallet and preferred over more complicated Shamir secret sharing schemes on the basis that:

  • the SeedXOR calculations are simple to do, such that they can be performed with pencil and paper without the aid of a computer;
  • each of the N parts of a SeedXOR backup is itself a valid BIP39 seed, so they can contain ‘decoy wallets’ and thereby steganographically hide the fact that they’re even part of a SeedXOR backup at all!

On the other hand, Shamir secret sharing enables ‘M-of-N’ backups, where only M parts are necessary to reconstruct the original seed (instead of all N parts).

Here we present a ‘2-of-3’ variant of SeedXOR which retains many of the desirable properties of the original SeedXOR:

  • the seed X is used to construct three ‘parts’ (each the same size as the original seed, as in SeedXOR), called A, B, and C;
  • as in SeedXOR, each of these three parts is indistinguishable from a standard BIP39 seed (giving rise to plausible deniability);
  • this is backwards-compatible with SeedXOR, in that if you have all three parts, then you can reconstruct X by simply taking A ⊕ B ⊕ C (and recomputing the checksum);

However, unlike a regular SeedXOR backup, this modified variant allows for full recovery of the original seed even when one of the three parts is missing:

  • given any two of the three parts, the seed X can be recovered.

This new variant is called a ‘Hamming backup’ because the construction is based on an extended Hamming code. Generating a Hamming backup and recovering a seed from a Hamming backup can be done using only the XOR operation, as with SeedXOR. It’s slightly more complicated than SeedXOR in its construction (which is inevitable, because this implements a 2-of-3 scheme instead of an N-of-N scheme), but still possible to do with pencil and paper using only the XOR operation.

In particular, we firstly regard the 256-bit seed X as the union of two 128-bit values, X1 and X2, and similarly regard the parts A, B, and C as the unions A1+A2, B1+B2, C1+C2. We’ll explain later how to split a BIP39 seed in this manner.

The linear (bitwise XOR) relations that we impose between the different 128-bit values are summarised in the following diagram. (The analogous diagram for ordinary SeedXOR would be a single circle containing N+1 values, namely the seed X together with the N parts A, B, …)
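
[Diagram omitted: three pairwise-overlapping circles, together containing the eight 128-bit values A1, A2, B1, B2, C1, C2, X1, X2. The top-right circle contains A1, A2, X1, C2; the other two circles are obtained by cycling A → B → C → A.]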

This diagram has three circles, each of which has 4 values inside (in the case of the top-right circle, these are A1, A2, X1, C2) and 4 values outside (in this case, B1, B2, C1, X2). If four values are either all inside the same circle or all outside the same circle, then we impose the relation that they all must bitwise-XOR to zero. For example:

  • A1 ⊕ A2 ⊕ X1 ⊕ C2 = 0
  • B1 ⊕ B2 ⊕ C1 ⊕ X2 = 0

and analogously for the other two circles:

  • B1 ⊕ B2 ⊕ X1 ⊕ A2 = 0
  • C1 ⊕ C2 ⊕ A1 ⊕ X2 = 0
  • C1 ⊕ C2 ⊕ X1 ⊕ B2 = 0
  • A1 ⊕ A2 ⊕ B1 ⊕ X2 = 0

Equivalently, for each of these quartets, any one of the four values is given by the bitwise XOR of the other three.

Further linear relations can be derived from combining existing relations; for example, X1 = A1 ⊕ B1 ⊕ C1 and X2 = A2 ⊕ B2 ⊕ C2. Indeed, there is a group-like closure property: if you take any 3 of the 8 values in the diagram and bitwise-XOR them together, the result of the calculation is one of the 8 original values.

We’ll now discuss how these relations enable the seed X to be recovered from any two of the three parts {A,B,C} of a Hamming backup. Then we’ll discuss how to actually produce a Hamming backup {A,B,C} from an existing seed X together with a source of secure randomness (specifically, by generating A randomly and then deterministically deriving B and C from A and X).

Recovering a seed from a Hamming backup

The relations have the property that, given parts A and B, we can recover the seed X as follows:

  • X1 = A2 ⊕ B1 ⊕ B2 (they’re all inside the left circle)
  • X2 = A1 ⊕ A2 ⊕ B1 (they’re all outside the lower-right circle)

This can be done in three applications of XOR, rather than four, by precomputing the common subexpression A2 ⊕ B1. As such, this is only 50% slower than recovering a seed from a 2-of-2 SeedXOR backup, rather than twice as slow.

Moreover, the whole diagram has order-3 cyclic symmetry, so you can do the same thing with parts B and C, or with parts C and A. That is to say, you can recover the original seed with any two of the three parts.
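
For concreteness, here is a minimal C++ sketch of this recovery step (the struct layout and sample values are ours, purely illustrative; a real implementation would parse the halves out of BIP39 words as described later):

#include <cstdint>
#include <cstdio>

// One 128-bit half (e.g. A1), stored as two 64-bit words.
struct Half { uint64_t hi, lo; };

Half operator^(Half a, Half b) { return { a.hi ^ b.hi, a.lo ^ b.lo }; }

int main() {
    // Halves of parts A and B (arbitrary placeholder values).
    // Note: A and B must be used in these roles; swapping them yields part C instead.
    Half A1 = {0x0123456789abcdefULL, 0x0f1e2d3c4b5a6978ULL};
    Half A2 = {0x1111222233334444ULL, 0x5555666677778888ULL};
    Half B1 = {0x9999aaaabbbbccccULL, 0xddddeeeeffff1234ULL};
    Half B2 = {0x2468ace013579bdfULL, 0xfdb97531eca86420ULL};

    Half t  = A2 ^ B1;   // common subexpression, computed once
    Half X1 = t ^ B2;    // X1 = A2 xor B1 xor B2 (inside the left circle)
    Half X2 = t ^ A1;    // X2 = A1 xor A2 xor B1 (outside the lower-right circle)

    printf("X1 = %016llx%016llx\n", (unsigned long long)X1.hi, (unsigned long long)X1.lo);
    printf("X2 = %016llx%016llx\n", (unsigned long long)X2.hi, (unsigned long long)X2.lo);
}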

Important note: when using two parts to recover the seed X, the parts must be used in the correct order (swapping the roles of A and B, for instance, would result in us generating the remaining part C instead of the desired secret X). If we made this mistake and recovered C instead of X, all is not lost: X can then be derived by straightforwardly XORing together A, B, and C.

Constructing a Hamming backup

How does one construct a Hamming backup of an existing seed X? The first part, A, should be securely uniformly randomly generated. This will ensure that no information about X is leaked to an attacker who possesses only one of the parts {A, B, C}. (Like SeedXOR, one-time pads, and Shamir secret sharing, this is an information-theoretically secure construction, rather than a construction that depends upon the computational intractability of some problem.)

The remaining parts of the backup can be derived by moving anticlockwise around the diagram, obtaining each value by XORing the previous two values (in cyclic order around the diagram) together with one of the two halves of the seed X. Specifically, we compute:

  • B1 = A1 ⊕ A2 ⊕ X2;
  • B2 = A2 ⊕ B1 ⊕ X1;
  • C1 = B1 ⊕ B2 ⊕ X2;
  • C2 = B2 ⊕ C1 ⊕ X1;
  • A1 = C1 ⊕ C2 ⊕ X2;
  • A2 = C2 ⊕ A1 ⊕ X1;

The last two lines here are redundant — they compute the values A1 and A2 with which we already started — but are useful as a validity check in case you made errors whilst computing B1, B2, C1, and C2. An independent check is to load {A,B,C} into a COLDCARD hardware wallet, pretending that it’s a regular 3-of-3 SeedXOR. If the Hamming backup has been created correctly, this should load the original seed X = A ⊕ B ⊕ C.
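
Putting the whole construction together, here is a self-contained sketch with placeholder values (in reality A must come from a secure randomness source, and the BIP39 checksums must be recomputed):

#include <cstdint>
#include <cassert>

struct Half { uint64_t hi, lo; };
Half operator^(Half a, Half b) { return { a.hi ^ b.hi, a.lo ^ b.lo }; }
bool operator==(Half a, Half b) { return a.hi == b.hi && a.lo == b.lo; }

int main() {
    // The seed being backed up, and part A (placeholders; A must be
    // uniformly random in a real backup).
    Half X1 = {0xdeadbeefcafef00dULL, 0x0123456789abcdefULL};
    Half X2 = {0xfeedfacefeedfaceULL, 0xfedcba9876543210ULL};
    Half A1 = {0x0f0f0f0f0f0f0f0fULL, 0xf0f0f0f0f0f0f0f0ULL};
    Half A2 = {0x1234123412341234ULL, 0x5678567856785678ULL};

    // Derive B and C by walking anticlockwise around the diagram.
    Half B1 = A1 ^ A2 ^ X2;
    Half B2 = A2 ^ B1 ^ X1;
    Half C1 = B1 ^ B2 ^ X2;
    Half C2 = B2 ^ C1 ^ X1;

    // The two redundant steps, used as validity checks on A.
    assert((C1 ^ C2 ^ X2) == A1);
    assert((C2 ^ A1 ^ X1) == A2);

    // SeedXOR backwards-compatibility: X = A xor B xor C.
    assert((A1 ^ B1 ^ C1) == X1 && (A2 ^ B2 ^ C2) == X2);

    // 2-of-3 recovery from parts B and C (the cyclic analogue of A and B).
    assert((B2 ^ C1 ^ C2) == X1 && (B1 ^ B2 ^ C1) == X2);
}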

BIP39-friendly way to split X into X1 and X2

If we only cared about the machine implementation of Hamming backups, then it would be straightforward to split the 256-bit value X (i.e. the entropy part of the seed, with no checksum) into an initial 128-bit segment and a final 128-bit segment.

However, part of the reason for favouring SeedXOR and Hamming backups over more complicated Shamir schemes is the ability to perform the calculations relatively quickly using pencil and paper without the aid of a computer.

The BIP39 standard appends an 8-bit checksum to the 256-bit seed (to create a 264-bit value) and then splits it into 24 words of 11 bits:

  • word 1: [11 seed bits]
  • word 2: [11 seed bits]
  • word 3: [11 seed bits]
  • […]
  • word 23: [11 seed bits]
  • word 24: [3 seed bits + 8 checksum bits]

These are each represented either as English words (from a 2048-element wordlist) or as 3-digit hexadecimal numbers where the first digit encodes 3 bits and the other two digits each encode 4 bits, making up the total of 11.

The hexadecimal representation is used when computing SeedXOR manually; a 16×16 lookup table is provided for applying the bitwise XOR operation to hexadecimal digits.

So that Hamming backups can be conveniently computed using the same mechanisms used by ordinary SeedXOR, we propose splitting the 24-word seed X into the 128-bit components X1, X2 as follows:

  • X1 contains words 1–11 followed by the first two hex digits of word 12;
  • X2 contains words 13–23 followed by the first hex digit of word 24 and the third hex digit of word 12.

On paper, we would write the hexadecimal digits of X1 directly above those of X2, like so:
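
[The handwritten layout is omitted here; schematically, writing wi.dj for the j-th hex digit of word i, the two rows are:

X1:  w1.d1 w1.d2 w1.d3 | w2.d1 w2.d2 w2.d3 | … | w11.d1 w11.d2 w11.d3 | w12.d1 w12.d2
X2:  w13.d1 w13.d2 w13.d3 | w14.d1 w14.d2 w14.d3 | … | w23.d1 w23.d2 w23.d3 | w24.d1 w12.d3  ]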

Observe that only the first hex digit of word 24 is included (since the other two are checksum digits). Also, the first two digits of word 12 contribute to X1, whereas the third digit contributes to X2.

Crucially, the hex digits in the seed have been arranged into the two rows in such a way that each 4-bit digit in the X1 row appears above a 4-bit digit in the X2 row, and each 3-bit digit in the X1 row appears above a 3-bit digit in the X2 row. This means that performing bitwise XOR operations between different rows doesn’t ever yield an invalid value (such as causing the first hexadecimal digit in a word to be 9, which is outside the valid range 0..7).

Relationship with Shamir secret sharing

Hamming backups are isomorphic to a special case of Shamir secret sharing, where the degree of the polynomial is 1 (so it’s a 2-of-N scheme) and the construction operates over the field GF(2^2) of four elements, applied coordinatewise: each pair of bits (one from X1 and the corresponding bit from X2) is treated as a single element of GF(4). The three parts {A,B,C} encode the values of the polynomial evaluated at the three nonzero elements of the field, {1, ω, ω+1}, and the seed X encodes the value of the polynomial evaluated at 0.

Since we’re only dealing with a degree-1 polynomial, this construction is linear (which is why we were able to implement it exclusively using XOR). Shamir schemes based on higher-degree polynomials are not amenable to this technique, so do not give rise to arbitrary M-of-N analogues of Hamming backups.

The fact that Hamming backups only offer a 2-of-3 scheme is a limitation compared with the arbitrary M-of-N schemes possible with general Shamir secret sharing. It may be possible to implement M-of-N schemes using more sophisticated binary error-correcting codes, but doing so entails further sacrificing the simplicity of the original SeedXOR scheme.


An efficient prime for number-theoretic transforms

My new favourite prime is 18446744069414584321.

It is given by p = \Phi_6(x) = x^2 - x + 1, where x = 2^{32}. This means that, in the finite field \mathbb{F}_p, 2^{32} functions as a primitive 6th root of unity, and therefore 2 is a primitive 192nd root of unity. It turns out that this field possesses multiple properties which make it well-suited to performing number-theoretic transforms (the finite field analogue of the Fast Fourier Transform).

Arithmetic

Firstly, note that arithmetic is especially convenient in this field. An element of the field can be represented as an unsigned 64-bit integer, since p is slightly less than 2^64. We can also efficiently reduce modulo p without involving multiplication or division. In particular, if we have a non-negative integer n less than 2^159, then we can write it in the form:

n = A 2^{96} + B 2^{64} + C

where A is a 63-bit integer, B is a 32-bit integer, and C is a 64-bit integer. Since 2^96 is congruent to −1 modulo p, we can then rewrite this as:

B 2^{64} + (C - A)

If A happened to be larger than C, and therefore the result of the subtraction underflowed, then we can correct for this by adding p to the result. Now we have a 96-bit integer, and wish to reduce it further to a 64-bit integer less than p. To do this, we note that 2^64 is congruent to 2^32 − 1, so we can multiply B by 2^32 − 1 using a binary shift and a subtraction, and then add the result to C − A. We might encounter an overflow, but we can correct for that by subtracting p.

In C/C++, the algorithm is as follows:
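
(The code itself is not reproduced in the text above, so the following is a reconstruction from the preceding description; the multiplication wrapper assumes a compiler supporting unsigned __int128.)

#include <stdint.h>

static const uint64_t P = 0xffffffff00000001ULL;   // p = 2^64 - 2^32 + 1

// Reduce n = A * 2^96 + B * 2^64 + C modulo p, where A < 2^63 and B < 2^32.
uint64_t reduce159(uint64_t A, uint64_t B, uint64_t C) {
    uint64_t d = C - A;
    if (C < A) { d += P; }               // underflow: correct by adding p
    uint64_t e = (B << 32) - B;          // B * (2^32 - 1), congruent to B * 2^64
    uint64_t r = d + e;
    if (r < e) { r += 0xffffffffULL; }   // overflow: subtract p, i.e. add 2^32 - 1 to the wrapped value
    if (r >= P) { r -= P; }              // final conditional subtraction
    return r;
}

// Multiplication modulo p: 64x64 -> 128-bit product, then reduce.
uint64_t mulmod(uint64_t x, uint64_t y) {
    unsigned __int128 t = (unsigned __int128) x * y;
    uint64_t hi = (uint64_t)(t >> 64);
    return reduce159(hi >> 32, hi & 0xffffffffULL, (uint64_t) t);
}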

This involves no multiplication or division instructions. Indeed, we can take a look at the compiled assembly code by using the Godbolt Compiler Explorer:

Observe that, using LLVM as the compiler, the resulting code is branchless (despite the word ‘if’ appearing twice in the source code) and all of the instructions are particularly cheap.

Now, why would we end up with a 159-bit integer in the first place and want to reduce it? There are two occasions where this subroutine is useful:

  • To perform a multiplication modulo p, we use machine instructions to multiply the two 64-bit operands to give a 128-bit result.
  • Left-shifting a 64-bit integer by up to 95 bits gives a 159-bit result.

The latter allows us to cheaply multiply an element of our field by any power of 2. In particular, if we want to multiply by 2^(96a + b), then do the following:

  • if a is odd, then subtract the input from p (in-place), relying on the fact that 2^96 is congruent to −1;
  • shift-left by b bits (to give a 159-bit result) and reduce using the subroutine.

That is to say, multiplying by any 192nd root of unity is ‘fast’, bypassing the need to perform a general multiplication. (This is particularly useful for GPUs, where there isn’t a 64-bit multiplier in hardware, so a 64-bit multiplication expands to many invocations of the 32-bit hardware multiplier.)
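
In code, this might look as follows (a sketch reusing the constant P and the reduce159 routine from the earlier snippet):

// Multiply x (assumed reduced, x < p) by 2^k for 0 <= k < 192.
// Writing k = 96a + b: if a is odd, negate (2^96 = -1 mod p); then shift by b.
uint64_t mul_pow2(uint64_t x, unsigned k) {
    if (k >= 96) {
        x = (x == 0) ? 0 : P - x;       // negate modulo p
        k -= 96;
    }
    unsigned __int128 t = (unsigned __int128) x << k;   // at most 64 + 95 = 159 bits
    return reduce159((uint64_t)(t >> 96),
                     (uint64_t)(t >> 64) & 0xffffffffULL,
                     (uint64_t) t);
}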

Other roots of unity

As well as possessing these 192nd roots of unity, the field also contains roots of unity of any order dividing p − 1. This factorises as follows:

p - 1 = 2^{32} \times 3 \times 5 \times 17 \times 257 \times 65537

where those odd prime factors are the five known Fermat primes. We can perform a Fourier transform of any length that divides p − 1, since all of the requisite roots of unity exist, but this will be especially efficient if we only involve the smaller prime factors.

Most of these other roots of unity are ‘slow’, meaning that there is no obvious way to multiply by them without performing a general multiplication. There is, however, a nice observation by Schönhage: \sqrt{2} = \zeta + \zeta^7, where ζ is a primitive 8th root of unity, so we can efficiently multiply by the square-root of 2 (which is, of course, a primitive 384th root of unity).

A length-N FFT involves the Nth roots of unity, so we would want to use FFTs of length dividing 192 (or 384 using the Schönhage trick) whenever possible. This suggests the following algorithm for convolving two sequences:

  • choose an FFT length N that divides 3 \times 2^{32} and is large enough to accommodate the result of the convolution;
  • write N as a product of (at most 5) numbers dividing 384. For example, in the largest case we could write N = 128 × 128 × 128 × 96 × 64;
  • use this decomposition as the basis of a mixed-radix Cooley-Tukey FFT. That is to say, we apply:
    • N/128 FFTs of length 128;
    • N/128 FFTs of length 128;
    • N/128 FFTs of length 128;
    • N/96 FFTs of length 96;
    • N/64 FFTs of length 64.

Since there are at most 5 rounds of FFTs, we only need to apply arbitrary twiddle-factors (which can be precomputed and stored in a table) 4 times, i.e. between successive rounds of FFTs. Compare this to an FFT over the complex numbers, where there are only 4 ‘fast’ roots of unity (±1 and ±i) and therefore irrational twiddle factors must be applied much more often.

Fürer’s algorithm

Martin Fürer’s fast integer multiplication algorithm uses this idea recursively, expressing the integer multiplication problem as a convolution over a suitably chosen ring with plenty of fast roots of unity. In particular, he writes:

We will achieve the desired speed-up by working over a ring with many “easy” powers of ω. Hence, the new faster integer multiplication algorithm is based on two key ideas:

  • An FFT version is used with the property that most occurring roots of unity are of low order.

  • The computation is done over a ring where multiplications with many low order roots of unity are very simple and can be implemented as a kind of cyclic shifts. At the same time, this ring also contains high order roots of unity.

The field \mathbb{F}_p for p = 2^{64} - 2^{32} + 1 is a fixed-size ring with these desirable properties. It supports multiplication of integers up to 3 \times 2^{35} bits (12 GB) by writing them in ‘balanced base 2^16’ (where each ‘digit’ can be between −32768 and 32767). Each of the two integers will occupy at most 3 \times 2^{31} digits, so the result can be computed by a convolution of length 3 \times 2^{32}. The maximum absolute value of a coefficient that could occur as a result of such a convolution is 3 \times 2^{61} < \frac{1}{2} p, so the coefficient can be recovered exactly if the convolution is performed over \mathbb{F}_p by reducing each coefficient into the range [−p/2, +p/2].

Of course, Fürer’s paper was interested in the asymptotics of integer multiplication, so it needs to work for arbitrarily large integers (not just integers of size at most 12 GB). The remaining idea was therefore to apply this idea recursively: to split a length-N integer into chunks of size O(log N), and use Fürer’s algorithm itself to handle those chunks (by splitting into chunks of size O(log log N), and so forth, until reaching a ‘base case’ where the integers are sufficiently small that FFT-based methods are unhelpful).

The number of layers of recursion is therefore the number of times you need to iteratively take the logarithm of N before log log log … log log N < K, where K is the base-case cutoff point. This is called the iterated logarithm, and denoted \log^{\star} N.

Each layer of recursion in Fürer’s algorithm costs a constant factor F more than the previous layer, so the overall complexity is O(N \log N F^{\log^{\star} N}). It was the asymptotically fastest known multiplication algorithm between its invention in 2007 (taking the crown from Schönhage-Strassen) and 2019 (when Harvey and van der Hoeven discovered an O(N log N) algorithm).

Practical integer multiplication

In practice, large integer multiplication tends to use either Schönhage-Strassen (in the case of the GNU Multiprecision Library), or floating-point FFTs, or a bunch of number-theoretic transforms over different primes with a final application of the Chinese Remainder Theorem (in the case of Alexander Yee’s y-cruncher). The primes used by these number-theoretic transforms don’t support any ‘fast’ roots of unity, though; all of the twiddle factors are applied using multiplications.

This suggests that a number-theoretic transform over the field of 18446744069414584321 elements may indeed be competitive for the sizes of integers it supports (e.g. up to 12 GB). For larger integers, we can use the Schönhage-Strassen algorithm with this prime field as a base case (one layer of Schönhage-Strassen being more than enough to support multiplication of integers that can be stored in practice).


Hamming cube of primes

Given two nonnegative integers, m and n, we say that they are Hamming-adjacent if and only if their binary expansions differ in exactly one digit. For example, the numbers 42 and 58 are Hamming-adjacent because their binary expansions 101010 and 111010 differ in a single position.

If m and n are Hamming-adjacent, then their absolute difference |n − m| is necessarily a power of 2. The converse is not true, though; 24 and 32 have a difference of 8, but their binary expansions 011000 and 100000 differ in three positions.

We can form a countably infinite graph G on the set of all nonnegative integers by connecting two vertices if and only if they’re Hamming-adjacent. G is a bipartite graph: two integers can only be Hamming-adjacent if one is odious and the other is evil.

G is the union of nested hypercube graphs: the first 2^d nonnegative integers form a hypercube graph of dimension d. For example, here’s the subgraph induced by the first 32 naturals:

What about if we take the induced subgraph on only the vertices which are primes? For the primes below 32, we get the following graph:

It’s a connected graph! It remains connected if we repeat for the primes below 64:

What about for the primes below 128? Now it has an isolated vertex, 127, and an isolated edge:

When we go further and look at the primes below 512, the vertex 127 is no longer isolated; it has been connected into the main component via the prime 383:

Similarly, the edge between 89 and 73 becomes assimilated into the main component once we increase our horizon to the primes below 1024.

This raises the question: does every prime eventually become connected to the main component? Or, equivalently, when we form the countably infinite induced subgraph H of G whose vertices are all of the primes, is H connected?

Isolated vertices

It turns out that the answer is no: the prime 2131099 is an isolated vertex in H with no neighbours whatsoever! (It is plausibly the smallest such example.)

How does one even prove that 2131099 is an isolated vertex? In other words, how do we show that every integer Hamming-adjacent to it is composite? Firstly, note that the Hamming-adjacent integers smaller than 2131099 form a finite set, obtained by subtracting one of the powers of 2 in its binary expansion:

33947, 2098331, 2130075, 2130971, 2131083, 2131091, 2131097, 2131098

These can all be verified to be composite. But what about the infinitely many Hamming-adjacent integers that are larger? These are necessarily of the form 2131099 + 2^k for some value of k. It transpires that every element in this set must be divisible by at least one of the primes {3, 5, 7, 13, 17, 241}, called the covering set. In particular, we have:

  • k = 1 (mod 2) implies divisible by 3;
  • k = 0 (mod 4) implies divisible by 5;
  • k = 1 (mod 3) implies divisible by 7;
  • k = 2 (mod 12) implies divisible by 13;
  • k = 2 (mod 8) implies divisible by 17;
  • k = 6 (mod 24) implies divisible by 241;

and together these cover all residue classes modulo 24.
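
This is easy to verify mechanically. Here is a quick C++ check (our own, not from the original post) that every residue class of k modulo 24 is covered:

#include <cstdio>
#include <cstdint>

// 2^e mod m by repeated squaring.
uint64_t modpow2(uint64_t e, uint64_t m) {
    uint64_t r = 1 % m, b = 2 % m;
    for (; e > 0; e >>= 1) { if (e & 1) r = r * b % m; b = b * b % m; }
    return r;
}

int main() {
    const uint64_t n = 2131099;
    const uint64_t cover[] = {3, 5, 7, 13, 17, 241};
    // The order of 2 modulo each covering prime divides 24, so checking
    // k = 0, ..., 23 suffices to cover every exponent k.
    for (uint64_t k = 0; k < 24; ++k) {
        bool covered = false;
        for (uint64_t q : cover)
            if ((n + modpow2(k, q)) % q == 0) { covered = true; break; }
        printf("k = %2llu: %s\n", (unsigned long long)k, covered ? "covered" : "NOT covered");
    }
}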

We can go further and show that there are infinitely many such isolated vertices. In particular, every number of the following form:

n = 1885107369798300628981297352055 h + 3316923598096294713661

has a covering set of primes for all numbers of the form n \pm 2^k, as discovered by Christophe Clavier. Specifically, there are two covering sets of primes: one for the ‘+’ case and one for the ‘−’ case; the union of these two sets is just the set:

{3,5,7,11,13,17,19,31,37,41,73,97,109,151,241,331,673,1321}

of prime divisors of the linear coefficient.

So, we just need to show that there are infinitely many primes of this form that are not Hamming-adjacent to any of the primes in the above set. The latter condition is easy to enforce — for example, by insisting that n is congruent to 4095 mod 4096 (i.e. its binary expansion ends in twelve ‘1’s). Then we are left with an arithmetic progression whose offset and common difference are coprime, and a theorem of Dirichlet states that there are infinitely many primes in such an arithmetic progression.

A difficult conjecture

Given the existence of infinitely many isolated vertices, we postulate the following conjecture:

Conjecture: Other than the infinitely many isolated vertices, there is exactly one connected component in H.

What would a counterexample look like? The smallest such counterexample would be a pair of Hamming-adjacent primes with p > q and no further neighbours. The usual method of covering sets would only work under the following set of conditions:

  1. p has a covering set P preventing any larger Hamming-adjacent primes;
  2. q has a covering set Q preventing any larger Hamming-adjacent primes, with the single exception of p, and this is only possible if p is itself a member of Q;
  3. The finitely many numbers smaller than and Hamming-adjacent to p all happen to be composite, with the single exception of q;
  4. The finitely many numbers smaller than and Hamming-adjacent to q all happen to be composite.

The second of these conditions is incredibly implausible, because the primes appearing in a covering set are usually much smaller than the dual Sierpinski numbers that arise from such a covering set. Here we are asking for the prime in the covering set to be larger!

Note that this conjecture is definitely unsolved, because it’s stronger than the conjecture that there are primes of every Hamming weight.


Determinacy

I’d like to take this opportunity to highly recommend Oscar Cunningham’s blog. One of the posts, entitled A Better Representation of Real Numbers, describes an elegant order-preserving* bijection between the nonnegative reals [0, ∞) and ‘Baire space’, \mathbb{N}^{\mathbb{N}}, the space of infinite sequences of nonnegative integers.

*where the order on the Baire space is defined by lexicographical order.

Ultimately, it is built up recursively from just two ideas:

  1. There is an order-preserving homeomorphism between [0, 1) and [0, ∞), namely x ↦ x/(1 − x).
  2. [0, ∞) can be partitioned into countably many copies of [0, 1), stacked end-to-end, one for each natural number.
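
Iterating these two ideas gives a procedure for computing the sequence: take the integer part, map the fractional part back onto [0, ∞) via x/(1 − x), and repeat. Here is a floating-point sketch (our own rendering of the construction, so only approximate due to rounding):

#include <cmath>
#include <cstdio>

int main() {
    double x = 3.14159265358979;         // any nonnegative real
    for (int i = 0; i < 6; ++i) {
        double a = std::floor(x);        // idea 2: which copy of [0,1) contains x
        printf("%g, ", a);
        double f = x - a;                // fractional part, in [0,1)
        x = f / (1.0 - f);               // idea 1: map [0,1) back onto [0,inf)
    }
    printf("...\n");
}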

Baire space and [0, ∞) both admit natural topologies (the product topology and the usual topology, respectively) and this order-preserving bijection is continuous in one direction: from Baire space to the nonnegative reals.

If we allow the first element of the integer sequence to be negative, then this gives an order-preserving bijection between the entirety of \mathbb{R} and the space \mathbb{Z} \times \mathbb{N}^{\mathbb{N}}. Rather similarly to the better known continued fraction representation, rationals are mapped to eventually-zero sequences and quadratic irrationals to eventually-periodic sequences.

Gale-Stewart games

Given a subset X of Baire space, there is an associated two-player game (the Gale-Stewart game G(X)) where players alternately exclaim nonnegative integers:

  • Player A plays a nonnegative integer z0;
  • Player B plays a nonnegative integer z1;
  • Player A plays a nonnegative integer z2;
  • Player B plays a nonnegative integer z3;

and so on, ad infinitum. Player A wins if the sequence (z0, z1, z2, z3, …) belongs to the set X; Player B wins otherwise.

A game G(X) and the corresponding set X are described as determined if one of the players has a winning strategy. To be precise, a strategy is a function from finite initial segments of play to the nonnegative integers, specifying which move one makes after seeing a particular sequence of moves.

It is somewhat unintuitive that non-determined games can exist: after all, it’s impossible for a game to end in a draw, so one may naively conclude that ‘under perfect play’ the game will be won by A or by B. However, ‘perfect play’ is not well defined for these infinite games, and it’s possible to have an undetermined game with the following properties:

  • for every strategy of player A, there exists a player B strategy that beats it;
  • for every strategy of player B, there exists a player A strategy that beats it;

and these are perfectly consistent statements. In ZFC, it’s possible to build an undetermined game using transfinite induction. The idea is to iterate over the tuples of the form (player, strategy, initial_segment_of_play) and to ‘foil’ that strategy by fixing the game outcome to be a loss for that player if that player follows the strategy and the other player plays some other strategy. This is possible because the other player can force the game to hit any of |\mathbb{R}| different outcomes, and thus far in the transfinite induction we’ve fixed strictly fewer than that many points in the Baire space, so there’s always a game outcome that we haven’t yet fixed and can freely set to be a win for the other player. (At the end of this transfinite induction, we likely only have a partially-specified game, but that’s fine; we just set the remaining outcomes arbitrarily, e.g. to be wins for Player B.)

Surprisingly, it’s consistent with ZF (sans choice) that every Gale-Stewart game is determined. This is called the axiom of determinacy. It’s false in ZFC, for the reasons stated above, but weaker versions of this axiom can be proved in ZFC. For instance:

Every open subset of Baire space is determined

and, more generally:

Every Borel subset of Baire space is determined

Consequences of large cardinals

Assuming certain large cardinal axioms, it is possible to show that various generalisations of Borel sets are also determined.

One example of a large cardinal axiom is the existence of a measurable cardinal, an uncountable cardinal S such that there exists a two-valued measure \mu : \mathbb{P}(S) \rightarrow \{0, 1\} satisfying:

  • μ(A) = 0 if and only if μ(S \ A) = 1;
  • if μ(A) = 0 and B is a subset of A, then μ(B) = 0;
  • singletons are small, i.e. μ({p}) = 0 for all points p in S;
  • the union of a collection of strictly fewer than |S| sets of measure 0 again has measure 0.

Without the ‘uncountable’ constraint, \aleph_0 would be measurable, because a two-valued measure satisfying these axioms on a countable set is precisely a non-principal ultrafilter.

If such a measurable cardinal exists, then it follows that every analytic set (continuous image of a Borel set) is determined, as is every coanalytic set (complement of an analytic set). A proof of this is here; I first encountered it in a 2015 graduate course on Descriptive Set Theory by Adrian Mathias.

Analytic and coanalytic sets together form the 1st level of the projective hierarchy; the 2nd level consists of projections of coanalytic sets and complements thereof; and so forth. (The Borel sets can be viewed as the 0th level of the projective hierarchy.)

If there are n Woodin cardinals with a measurable cardinal above them, then every set in the (n+1)th level of the projective hierarchy is determined. If there are infinitely many Woodin cardinals, then this holds for every n, and therefore every projective set is determined. This was proved in a paper by Martin and Steel.

Finally, if there’s a measurable cardinal above this infinite sequence of Woodin cardinals, then L(\mathbb{R}) (the constructible closure of the reals) satisfies the axiom of determinacy.

Consequences of determinacy

Determinacy has been formulated in terms of infinite games, but it is also of interest in real analysis because it implies various regularity properties about sets of reals. In particular, if a pointclass (family of sets) \mathcal{A} is closed under continuous preimages, such as those we have just considered, then every set in \mathcal{A} being determined also implies that the sets in \mathcal{A}:

  • are Lebesgue measurable;
  • have the property of Baire;
  • have the perfect set property.

For example, if the axiom of determinacy holds in L(\mathbb{R}), then all three of these regularity properties hold for every set in \mathbb{P}(\mathbb{R}) \cap L(\mathbb{R}).

In Strong Axioms of Infinity and the Search for V (definitely worth reading), W. Hugh Woodin makes a strong case in favour of projective determinacy: projective sets are precisely the relations that can be formulated in second-order arithmetic, and assuming projective determinacy allows one to settle most natural questions in second-order arithmetic.


One-way permutations

One-way permutations are fascinating functions that possess a paradoxical pair of properties. They are efficiently computable functions φ from [n] := {0, 1, …, n−1} to itself that are:

  • Mathematically invertible, meaning that φ is a bijection;
  • Cryptographically uninvertible, meaning that it’s computationally infeasible in general to compute x given φ(x).

We’ll discuss a couple of constructions of one-way permutations. The first construction is simpler to understand, but is less secure (for a given size of n) or less efficient (for a given security level).

Construction I: using modular arithmetic

Let p be a large Sophie Germain prime; that is to say, a prime such that q = 2p + 1 is also prime. Then the field \mathbb{F}_q has a cyclic multiplicative group of 2p elements, and we can find a generator g for this multiplicative group.

Letting n = q − 1 = 2p, we can define a one-way permutation from [n] to itself as follows:

φ(x) := g^x − 1

The function g^x is a bijection from [n] to the set of nonzero elements modulo q, and the offset of − 1 maps this back to the interval [n]. It is easy to compute in the forward direction, by repeated squaring, but is intractable to invert when n is sufficiently large. The problem of inverting modular exponentiation is the discrete logarithm problem, and the largest safe prime for which this problem has been broken is 768 bits long.

To achieve a security level of 128 bits, NIST recommends using a 3072-bit value of n. The reason that this is so large is because the discrete logarithm problem is susceptible to index calculus attacks.
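
As a toy illustration of Construction I, the following snippet prints the whole permutation for p = 11, q = 23, and generator g = 5 (parameters chosen by us purely for illustration; they are hopelessly insecure):

#include <cstdint>
#include <cstdio>

// g^e mod q by repeated squaring.
uint64_t modpow(uint64_t g, uint64_t e, uint64_t q) {
    uint64_t r = 1;
    for (g %= q; e > 0; e >>= 1) { if (e & 1) r = r * g % q; g = g * g % q; }
    return r;
}

int main() {
    const uint64_t q = 23, n = q - 1, g = 5;   // 5 generates the multiplicative group mod 23
    for (uint64_t x = 0; x < n; ++x)
        printf("phi(%llu) = %llu\n",
               (unsigned long long)x, (unsigned long long)(modpow(g, x, q) - 1));
}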

Attempting to use elliptic curves

Recall that an elliptic curve is a nonsingular cubic algebraic curve. By using an appropriate change of variables, it can be written in the Weierstrass normal form as the equation y² = x³ + ax + b, where the nonsingularity condition is equivalent to the constraint that the quantity 4a³ + 27b² (which is proportional to the discriminant) is nonzero.

Elliptic curves admit a group operation. Specifically, to add two points A and B, do the following:

  • draw a line through A and B, and let it intersect the curve again at C;
  • draw a line through C and the point at infinity, and let it intersect the curve again at another point (A+B).

This operation is manifestly commutative, has an identity (the point at infinity), and admits inverses. The operation is also associative; as mentioned before, this can be proved using Cayley-Bacharach.

In cryptography, it’s customary to work with elliptic curves over finite fields such as \mathbb{F}_p. A theorem of Hasse states that the number |E| of points on the elliptic curve E is approximately equal to p, differing from p + 1 by at most 2\sqrt{p}. Particularly useful is when the group of points on the curve is cyclic (a sufficient condition is for |E| to be squarefree), as then we can find a generator point G and formulate an analogue of the discrete logarithm problem.
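
In affine coordinates, the chord-and-tangent law translates directly into code. Here is a toy sketch over \mathbb{F}_{97} (the curve parameters and names are our own choices, with modular inversion via Fermat’s little theorem; no cryptographic care taken):

#include <cstdint>
#include <cstdio>

const int64_t P = 97, A = 2, B = 3;       // toy curve y^2 = x^3 + 2x + 3 over F_97

int64_t mod(int64_t v) { return ((v % P) + P) % P; }

int64_t inv(int64_t v) {                  // v^(P-2) mod P, by Fermat's little theorem
    int64_t r = 1, e = P - 2;
    for (v = mod(v); e > 0; e >>= 1) { if (e & 1) r = r * v % P; v = v * v % P; }
    return r;
}

struct Pt { int64_t x, y; bool inf; };    // inf marks the point at infinity

Pt add(Pt p, Pt q) {
    if (p.inf) return q;
    if (q.inf) return p;
    if (p.x == q.x && mod(p.y + q.y) == 0) return {0, 0, true};   // vertical line: P + (-P) = O
    int64_t lam = (p.x == q.x)
        ? mod((3 * p.x % P * p.x + A) * inv(2 * p.y))   // tangent at p (doubling)
        : mod((q.y - p.y) * inv(q.x - p.x));            // chord through p and q
    int64_t x3 = mod(lam * lam - p.x - q.x);            // third intersection, then...
    int64_t y3 = mod(lam * (p.x - x3) - p.y);           // ...reflect through infinity
    return {x3, y3, false};
}

int main() {
    Pt G = {0, 10, false};                // 10^2 = 100 = 3 (mod 97), so G lies on the curve
    Pt H = add(G, G);
    printf("2G = (%lld, %lld)\n", (long long)H.x, (long long)H.y);
}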

For a given group size, the elliptic curve discrete logarithm problem is more difficult than the ordinary discrete logarithm problem, since for general elliptic curves there are no known algorithms faster than the Pollard rho algorithm. Note that some curves are much less secure than others; Daniel J. Bernstein has a website recommending how to choose a good curve.

Instead of the group size of 2^3072 recommended by NIST to provide 128-bit security for the ordinary discrete logarithm problem, it suffices to have an elliptic curve with group size approximately 2^256. The security of the digital signatures in the Bitcoin network is based on the intractability of solving the discrete logarithm problem on a particular elliptic curve (called secp256k1) of approximately this size.

Unfortunately, there is no easy way to bijectively map the points of this elliptic curve to an interval of the correct size. It was possible in Construction I thanks to the fact that the multiplicative group of \mathbb{F}_q is exactly the interval {1, 2, 3, …, q − 1}, and that can be mapped to the standard interval {0, 1, 2, …, q − 2} by subtracting 1. But the same thing cannot be done for elliptic curves because the usual way of representing points (x, y) as the pair of the x-coordinate and the sign of y is not bijective: roughly half of the numbers in {0, 1, …, p − 1, ∞} cannot occur as valid values of x because the corresponding values of x³ + ax + b are quadratic nonresidues (so cannot equal y²).

There is a class of elliptic curves for which we can efficiently biject the points with an interval, namely supersingular elliptic curves, but unfortunately this just ends up as a disguised version of Construction I: the elliptic curve discrete logarithm problem for these curves is reducible to discrete logarithm modulo p.

Construction II: twists to the rescue!

If d is a quadratic nonresidue, then the following pair of elliptic curves:

  • E: y² = x³ + ax + b;
  • E′: y² = d(x³ + ax + b);

are known as quadratic twists of each other. The values of x which lie on E′ precisely fill in the ‘holes’ formed by the values of x which don’t lie on E, because d(x³ + ax + b) is a quadratic residue precisely when x³ + ax + b is not! (Some extra care is needed to handle the points where y² is 0 or ∞.)

Specifically, we have:

|E| + |E′| = 2p + 2

and, importantly, we have an explicit bijection between [2p + 2] and the union of the two elliptic curves. As such, if E and E′ are both cyclic groups and have intractable discrete logarithm problems, then we can manufacture a one-way permutation on [2p + 2] as I describe in this StackExchange answer. This construction appears to have been discovered in 1991 by Burton S. Kaliski Jr.
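
The counting identity is easy to check by brute force for a toy prime (our own snippet; the parameters a = 1, b = 3, d = 5 are arbitrary choices with d a quadratic nonresidue modulo 23):

#include <cstdio>

int main() {
    const int p = 23, a = 1, b = 3, d = 5;   // toy parameters; d must be a nonresidue mod p
    int e = 1, et = 1;                       // each count starts at 1 for the point at infinity
    for (int x = 0; x < p; ++x) {
        int f  = (x * x % p * x + a * x + b) % p;    // x^3 + ax + b
        int fd = f * d % p;                          // d * (x^3 + ax + b)
        for (int y = 0; y < p; ++y) {
            if (y * y % p == f)  ++e;
            if (y * y % p == fd) ++et;
        }
    }
    printf("|E| = %d, |E'| = %d, |E| + |E'| = %d (2p + 2 = %d)\n", e, et, e + et, 2 * p + 2);
}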

Avoiding small cycles

Let’s suppose we want to use this one-way bijection inside a pseudorandom number generator, similar to Blum-Blum-Shub but without the existence of a trapdoor (the factorisation of M = pq). It is entirely plausible that φ contains fixed points or short cycles, which would result in periodic output from the pseudorandom number generator.

In particular, if we apply Construction II to a pair of curves with p = 2^255 − 19, we obtain a one-way permutation on [2^256 − 36]. For convenience, we can pad this to a permutation on [2^256] by extending it with the identity function on the remaining 36 points. This permutation has at least 36 fixed points, and plausibly a few more than that.

We’ll extend this to a one-way permutation on [2^384] as follows:

Split the input into a 256-bit ‘upper part’ and a 128-bit ‘lower part’. Apply the one-way permutation to the upper part, and apply a period-2^128 permutation (such as an appropriate affine function modulo 2^128) to the lower part. Then XOR the lower part into the upper part, in-place, to ‘perturb’ it.

By looking at the lower part in isolation, we know that this permutation does not have any cycles of length shorter than 2^128. Moreover, the interaction from the lower part to the upper part is designed to ensure that every orbit will benefit from the hardness of the elliptic curve discrete logarithm problem; without it, the 36 trivial fixed points of the permutation on [2^256] would each give rise to a weak random number generator (a 128-bit LCG).

To use this as an actual cryptographic random number generator, you’d need to choose a suitable output function — for instance, taking a RIPEMD-160 hash* of the 384-bit internal state — and then be careful** to implement the whole thing using constant-time instructions to avoid timing attacks. (On most CPUs, for example, the multiplication instruction lacks this property.)

*practical cryptographic random number generators such as ChaCha20 use less computationally intensive output functions, but in this case we don’t care because it still pales in comparison to the amount of computation in the state update function.

**harder than it seems, because the compiler may optimise your constant-time code into faster variable-time code. Unless you absolutely need the property that its previous internal states cannot be deduced from its current internal state, then I’d recommend using a well-tested implementation of ChaCha20 instead of this 384-bit PRNG.


Cyclotomic fields

The nth cyclotomic field is the field \mathbb{Q}[\zeta] generated by a primitive nth root of unity, ζ. It is an example of a number field, consisting of algebraic numbers, and its dimension is φ(n) when regarded as a vector space over \mathbb{Q}. If n is odd, then the nth cyclotomic field is identical to the 2nth cyclotomic field, so we’ll henceforth assume without loss of generality that n is even.

The intersection of \mathbb{Q}[\zeta] with the ring of algebraic integers is exactly the ring \mathbb{Z}[\zeta] generated by ζ. The first few examples of rings of this form are:

  • n = 2: \zeta = -1, so \mathbb{Z}[\zeta] is just the ring of ordinary integers \mathbb{Z}.
  • n = 4: \zeta = i, so \mathbb{Z}[\zeta] is the ring of Gaussian integers (red, below-left).
  • n = 6: \mathbb{Z}[\zeta] is the ring of Eisenstein integers (blue, below-right).

These three rings admit unique factorisation, meaning that every nonzero element can be uniquely expressed as a product of primes, up to reordering and unit migration. This turns out to be true for precisely the following thirty values of n:

2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30,
32, 34, 36, 38, 40, 42, 44, 48, 50, 54, 60, 66, 70, 84, 90

The corresponding values of φ(n) are:

1, 2, 2, 4, 4, 4, 6, 8, 6, 8, 10, 8, 12, 12, 8,
16, 16, 12, 18, 16, 12, 20, 16, 20, 18, 16, 20, 24, 24, 24

Note that, in particular, the maximum dimension is φ(n) = 24, attained only by the three largest values of n (70, 84, and 90). These rings, viewed as additive groups, are therefore free Abelian groups of rank 24.

And this is where it gets exciting: David Madore noticed that the existence of order-70 and order-84 elements in the Conway group Co0 give rise to highly rotationally symmetric projections of the Leech lattice into the complex plane. Here’s the order-84 projection applied to the first shell:

Projection of the first shell of the Leech lattice into the plane with order-84 rotational symmetry, by David Madore

And here’s the order-70 counterpart:

Projection of the first shell of the Leech lattice into the plane with order-70 rotational symmetry, by David Madore

By suitably scaling and rotating either of these projections, we can arrange for the image to contain the respective nth roots of unity. If we view the resulting projection as a map from \mathbb{Q}^{24} to \mathbb{Q}[\zeta], we can see that it is surjective (because its image contains the roots of unity, and these span the field) and therefore bijective (because the dimensions match).

In particular, that means that the projection remains injective when restricted to the Leech lattice, and its image must be a rank-24 Abelian subgroup of \mathbb{Q}[\zeta] which is closed under multiplication by roots of unity. Given that it has a finite integral basis, we can further scale/rotate the projection such that the images are all algebraic integers and therefore form an ideal in \mathbb{Z}[\zeta].

Thanks to these rings having class number 1, every ideal is principal, so this ideal must simply be a scaled/rotated copy of \mathbb{Z}[\zeta] itself! That is to say, when n is 70 or 84, we can identify the points in the Leech lattice with the algebraic integer elements of the cyclotomic field!

Working in the opposite direction, we can construct the Leech lattice starting from one of these cyclotomic fields by defining an inner product on its algebraic integer elements:

\langle u, v \rangle = tr(\alpha u v^{\star})

where α is a particular element of the ring (different choices yield different lattices), and tr is the trace operation (sum of all Galois conjugates).

There are similar constructions of the Leech lattice for some other values of n, where the class number is greater than 1 (and therefore unique factorisation does not apply), but the constructions could potentially require using a proper ideal rather than the whole ring of algebraic integer elements. For instance, Craig discovered a construction for n = 78, and Eva Bayer-Fluckiger gave a more thorough exploration.

The cases of n = 70 and n = 84 are arguably the most interesting, because they endow the Leech lattice with a compatible ring structure where unique factorisation holds: the nonzero lattice points form a multiplicative monoid isomorphic to the direct sum of a cyclic group of order n (the group of roots of unity) and the free commutative monoid on countably many generators.

Of these, the n = 84 ring is my favourite: it contains the Gaussian, Eisenstein, and Kleinian integers as subrings, and indeed is the smallest cyclotomic ring to do so.


Urbit for mathematicians

The three months since the last cp4space article have largely been spent learning about an interesting project called Urbit. My attention was drawn to its existence during a talk by Riva-Melissa Tez (formerly of Intel) on the importance of continued improvements to the computational efficiency of hardware.

A decent amount has been written on the Internet about Urbit, but it tends to either be non-technical, explaining the importance of the project, or technical but written in highly unconventional project-specific jargon. Instead, we’ll depart from either of these by examining the novel internals of Urbit that are most likely to be of interest to mathematicians and theoretical computer scientists.

Before we can do so, however, it is worth asking what Urbit is. This is often a source of confusion, because it is an ecosystem of multiple interacting components designed to mesh well together. If you were to metaphorically run 2-means on the set of Urbit components, they would cleanly separate into the operating system (Urbit OS) and identity system (Urbit ID).

The most salient components of Urbit OS are:

  • Arvo: a deterministic operating system kernel implemented in a pure functional programming language (Hoon).
  • Nock: a very low-level functional programming language. It’s roughly halfway between a minimalistic Lisp dialect and the SKI combinator calculus. In the Urbit ecosystem, Nock is the machine code for the virtual machine in which the Urbit OS runs.
  • Vere: a software implementation (written in C) of the Nock virtual machine, capable of running the VM inside a Unix environment.
  • Hoon: a higher-level functional programming language in which the Urbit OS is directly written. This ultimately compiles down to Nock.

The identity system, Urbit ID, is a hierarchical system of integer addresses (analogous to IPv4 addresses) called Azimuth points. Azimuth points are securely associated to public keys (either mutably or immutably), and communication between machines running Urbit is built upon this public key infrastructure.

Azimuth points and their hierarchy

As mentioned, Azimuth points are non-negative integers, divided into five levels:

  • Points in [0, 2^8 – 1] are called galaxies.
  • Points in [2^8, 2^16 – 1] are called stars.
  • Points in [2^16, 2^32 – 1] are called planets.
  • Points in [2^32, 2^64 – 1] are called moons.
  • Points in [2^64, 2^128 – 1] are called comets.

These can cumulatively be represented by the datatypes uint8, uint16, uint32, uint64, and uint128, respectively. These datatypes possess standard ‘restriction maps’ familiar from using a programming language such as C:

uint128 → uint64 → uint32 → uint16 → uint8

where, for instance, a 32-bit integer is coerced into a 16-bit integer by taking the lower half (or, equivalently, reducing modulo 2^16). There are two elegant ways to impose a ring structure on each of these 2^n-bit datatypes:

  • Integers modulo 2^(2^n), which form a ring;
  • Nimbers less than 2^(2^n), which form a finite field.

The first choice is the familiar one (in the C programming language, the operators {+, *, -} coincide with modular arithmetic in Z/(2^2^n)Z). In the second choice, ring addition is bitwise XOR, and ring multiplication is slightly more complicated to define.

For each of these two choices, the aforementioned restriction maps are ring homomorphisms.

Now, how do these restriction maps relate to Urbit? They impose a partial order on the set of Azimuth points, where x ≤ y if x is the image of y under a restriction map. The Hasse diagram of this partial order is a forest of 256 trees, each of which has a galaxy at its root. This Hasse diagram determines, for each Azimuth point that’s not a galaxy or comet, a unique parent of that Azimuth point. Most planets, for instance, have stars as parents.

The parenthood relationship for comets is different: the parent of a comet is its image under the restriction map uint128 → uint16, which must be one of five stars willing to host comets. Comets are direct hashes of public keys (so, unlike planets, can’t be transferred between different owners).

Planets, stars, and galaxies are tradable as non-fungible tokens on the Ethereum blockchain. This gives a modifiable decentralised mapping from each Azimuth point (32 bits and easily memorable) to an Ethereum address (160 bits and cryptographically secure). As such, it’s possible to trustlessly verify the owner of an Azimuth point and therefore set up a secure communication channel with that Azimuth point; all communications between machines running Urbit OS are established in this manner.

Names of Azimuth points

The integer representing an Azimuth point is converted into a human-readable name (called an ‘@p’) through the following process:

  1. For planets and moons (not for stars and galaxies), a Feistel cipher is applied to the lowest 32 bits of the integer. This is a reversible permutation designed to obfuscate the relationship between the name of a planet and its parent star, reflecting the fact that planets are permitted to move freely between stars.
  2. The bytes of the resulting (enciphered) integer are each converted to a three-letter syllable, using a separate lookup table for even-indexed bytes (suffixes) and odd-indexed bytes (prefixes). Each syllable consists of a generalised vowel flanked by two consonants.
  3. The syllables are written in big-endian order, with a hyphen between successive 16-bit words and a tilde at the beginning.

An example of a 32-bit planet name is ~sorreg-namtyv, which belongs to the departed founder of Urbit.

A more comprehensive description of the process, including the two 256-element syllable lookup tables, is given in this StackOverflow answer.

In addition to the pronounceable ‘@p’ identifier, there is a visual encoding called a sigil. This stores each byte of the (enciphered!) 32-bit integer in a separate quadrant of a 2-by-2 arrangement, again using a pair of 256-element lookup tables. The sigil for the planet ~firmer-hatful is shown here:

sigil for ~firmer-hatful

Gavin Atkinson describes the design decisions involved in creating this system of sigils. Planets with circular sigils and/or meaningful @p names (e.g. ~timber-walled) tend to be in greater demand, and therefore more expensive, due to the combination of higher scarcity and aesthetic appeal.

The lookup tables mapping bytes to quadrant tiles were populated manually (by drawing 512 tile designs), so the Urbit sigils can be thought of as partially procedurally generated. [A fully procedurally-generated system may also have worked, such as the system in this Wolfram blog post from 2009, but this is unlikely to change now.] This is, nonetheless, highly consistent with the @p names; the syllable lookup tables were also manually populated.

Nock and its emulator

We proceed to turn our gaze from Urbit ID to Urbit OS.

Nock is a low-level pure functional programming language described here. Code and data are simply binary trees (called nouns) whose leaves are nonnegative integers. The Nock interpreter is a unary function * whose domain and codomain are both the space of nouns; its semantics are defined recursively in terms of itself and other unary functions.

My favourite of these functions is the tree-addressing operator, /, which is defined as follows:

/[1 a]              a
/[2 a b]            a
/[3 a b]            b
/[(a + a) b]        /[2 /[a b]]
/[(a + a + 1) b]    /[3 /[a b]]
/a                  /a

In particular, if we let L(x) and R(x) refer to the left and right subtrees of a tree x, then we have:

  • /[1 x] = x
  • /[2 x] = L(x)
  • /[3 x] = R(x)
  • /[4 x] = L(L(x))
  • /[5 x] = R(L(x))
  • /[6 x] = L(R(x))
  • /[7 x] = R(R(x))

and so forth. In general, /[n x] can be evaluated by:

  • expressing n in binary (for example, 1729 is “11011000001”);
  • removing the leading ‘1’ (in this example, giving “1011000001”);
  • replacing each ‘0’ with ‘L’ and ‘1’ with ‘R’ (in this example, “RLRRLLLLLR”);
  • taking x and sequentially applying each of the characters to it (in this example, giving R(L(L(L(L(L(R(R(L(R(x)))))))))), noting that the outermost function is the least significant bit).
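
This recipe fits in a few lines of code; here is a small sketch using our own noun representation (atoms at the leaves, cells as internal nodes):

#include <cstdint>
#include <cstdio>

struct Noun {
    uint64_t atom;        // payload if this node is a leaf
    Noun *left, *right;   // children if this node is a cell (null for leaves)
};

// /[n x]: recursing on n/2 strips the bits of n from least significant
// upwards, so the least significant bit is applied last, i.e. outermost.
Noun* axis(uint64_t n, Noun* x) {
    if (n <= 1) return x;                          // /[1 a] = a
    Noun* sub = axis(n / 2, x);
    return (n % 2 == 0) ? sub->left : sub->right;  // /[2a b] = L, /[2a+1 b] = R
}

int main() {
    Noun a = {42, nullptr, nullptr}, b = {58, nullptr, nullptr}, c = {7, nullptr, nullptr};
    Noun ab = {0, &a, &b}, root = {0, &ab, &c};    // the noun [[42 58] 7]
    printf("/[4 x] = %llu\n", (unsigned long long)axis(4, &root)->atom);  // L(L(x)) = 42
    printf("/[5 x] = %llu\n", (unsigned long long)axis(5, &root)->atom);  // R(L(x)) = 58
    printf("/[3 x] = %llu\n", (unsigned long long)axis(3, &root)->atom);  // R(x)    = 7
}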

If we enumerate the nodes in an infinite binary tree according to the natural number required to address the subtree rooted at that node using the / operator, this agrees with a breadth-first traversal of the tree. This is the same enumeration (the salmon spiral below) that relates the Calkin-Wilf sequence to the Calkin-Wilf tree, shown in this illustration by David Eppstein:

The operator / is useful enough that Lisp distributions typically implement special cases of this function for all 2 ≤ n ≤ 31, referring to them as {car, cdr, …, cddddr}.

Nock, if evaluated naively, would be incredibly inefficient: decrementing an integer n takes time proportional to n. The emulator therefore uses a clever idea called jets: runtime optimisations where the emulator pattern-matches various patterns (such as the decrementation function applied to an integer) and calls a native C function to decrement the integer directly.

The Urbit project has succeeded in implementing an entire operating system kernel as a function in this language, and it is performant in practice owing to the use of jets. The Hoon compiler (which compiles programs written in the higher-level language Hoon into the low-level Nock language) generates code that will be successfully pattern-matched by these jets.

With jets, it’s possible to go even lower-level than Nock. Urbit doesn’t do this because there’s not really much point in doing so, except to give a formal definition of Nock in terms of an even simpler language:

Aside: implementing Nock in the SK combinator calculus

If we’re opting for absolute simplicity, why not go the whole way? It’s possible to implement Nock in the untyped lambda calculus or in the SK combinator calculus.

It would be tempting to try the following method of encoding nouns:

  • The nonnegative integers at the leaves of noun trees can be represented using Church numerals: n is the higher-order function which maps its argument f to its n-fold functional composition, f^n.
  • The ‘cons cells’ [a b] can be represented as Church pairs.

However, unfortunately there isn’t a way to query whether a noun is an integer or a pair. As such, we’ll need to use a slightly more complicated encoding of Nock nouns where we ‘tag’ each noun with its type:

  • encode_noun([a b]) := pair(0, pair(encode_noun(a), encode_noun(b)))
  • encode_noun(n) := pair(1, church_numeral(n))

The unary functions used to define the Nock interpreter can now be themselves implemented as combinators, up to and including the Nock interpreter * itself.

For example, the question-mark operator can be implemented by taking the first element of the Church pair (which is the Church numeral 0 or 1) and then ‘boxing’ it as a Nock noun instead of a ‘bare’ Church numeral:

  • ?(x) := pair(1, first(x))

where the ‘pair’ and ‘first’ functions are defined in the Wikipedia article.

The incrementation operator needs to unbox the integer, take its successor, and then rebox it, like so:

  • +(x) := pair(1, succ(second(x)))

Technically, we’ve cheated here, because according to the Nock definition, +(x) needs to also be defined on ordered pairs as well as integers. In particular, we need to use first(x) to extract a Church numeral specifying whether x is an ordered pair or an integer and branch on that: if the Church numeral is 1, then we can just return pair(1, succ(second(x))); if the Church numeral is 0, then we need to return +(x). The self-reference can be removed from this recursive definition by means of a fixed-point combinator.

The remaining operators are all recursive, so they’ll similarly require the fixed-point combinator in their implementations.

It would be a fun exercise to attempt to write a jet-propelled emulator for the SK combinator calculus, including optimisations such as internally representing Church numerals as integers (with a library such as GMP to allow arbitrarily large integers) and using jets to call the GMP implementations of arithmetic operations when appropriate.

By using the ‘compress identical subtrees’ idea, familiar from Hashlife and libraries for manipulating binary decision diagrams, we avoid excessive encoding overhead as well as enabling constant-time recognition of (standard implementations of!) functions such as the decrement operator.


A curious construction of the Mathieu group M11

Previously, we discussed which regular polytopes have vertex-sets that occur as proper subsets of the vertex-set of another regular polytope in the same dimension. In particular, when there is a Hadamard matrix of order 4k, the (4k−1)-dimensional simplex can be inscribed in the (4k−1)-dimensional hypercube. Moreover, the converse also holds.

Upon noticing that there exists a unique order-12 Hadamard matrix up to isomorphism, Daniel Sebald asked what the symmetry group is of the 11-dimensional simplex inscribed in the 11-dimensional hypercube. It can be shown that this is exactly the smallest sporadic group, M11, of order 11 \times 10 \times 9 \times 8 = 7920.

Another object with a related automorphism group is the perfect ternary Golay code. This is a linear subspace of \mathbb{F}_3^{11} consisting of 729 codewords with the property that any pair of distinct codewords differ in at least 5 coordinates. (Consequently, if you’re given a codeword with ‘errors’ in at most 2 coordinates, you can uniquely recover the original codeword.)

This turns out to be no coincidence! If we label the elements of the field of three elements as {−1, 0, 1}, then exactly 24 of the 729 codewords have no ‘0’-coordinates. These 24 codewords can be regarded as a subset of the vertices of the hypercube {−1, 1}^11, and geometrically they form a pair of mutually dual 11-dimensional simplices inscribed in the hypercube! Eleven dimensions are hard to visualise, so here is a three-dimensional analogue:

Cardboard stella octangula by Joseph Myers

The automorphism group of each individual simplex is the symmetric group S12 (permuting its 12 vertices), but the subgroup that also preserves the bounding hypercube is the Mathieu group M11. If you include both simplices, the groups are doubled (C2 × S12 and C2 × M11, respectively), where the C2 factor is the centre consisting of the identity matrix and its negation.


Assorted topics

This is a digest of things that have happened this month, but which are individually too small to each warrant a separate cp4space post. Firstly, there have been a couple of exciting results in the field of combinatorics:

  • The Erdős-Faber-Lovász conjecture has been proved for sufficiently large hypergraphs by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus.
  • A special case of another conjecture by Erdős has been established by Maria Chudnovsky, Alex Scott, Paul Seymour, and Sophie Spirkl.

Also, the functional analysis paper I coauthored with Tomasz Kania has been accepted by the journal Studia Mathematica. This is my first ever article in a peer-reviewed mathematics journal, finally giving me a finite Erdős number (specifically 4)! Many thanks go to the anonymous reviewer for his or her feedback.

The article on the Stockfish NNUE architecture reached the top position on Hacker News, resulting in a disproportionate amount of cp4space traffic (indeed, more views in a single hour than is typical in an entire week).

One quadrillion objects

The Catagolue census of objects arising from random 16-by-16 soups in Conway’s Game of Life has surpassed 10^15 objects. The total number of soups explored in this census is 46 trillion. One of these soups, discovered by Dylan Chen, takes a record-breaking 52513 generations to stabilise.

The parallel GPU-assisted soup search has examined nearly four times as many soups, specifically 164 trillion soups, but due to the search methodology not every object is censused. (Specifically, only soups that last sufficiently long or produce high-period oscillators or rare spaceships are rerun on the CPU and censused; ‘boring’ soups are discarded by the prefiltering stage on the GPU.)

Recent highlights of the GPU soup search include two variants of a period-7 oscillator, one by me and the other by Rob Liston. There was also a 50093-generation soup by Liston which held the longevity record for one week before being surpassed by Dylan Chen’s aforementioned 52513-generation soup.

Taken together, the CPU and GPU searches have simulated 210 trillion random 16-by-16 soups; you can view the collective results here.

BN curves where n has low Hamming weight

We previously discussed Barreto-Naehrig curves and the problem of trying to find curves where the (prime) number of points n on the elliptic curve has a low Hamming weight.

If x is the sum of two powers of 2, the Hamming weight of n is guaranteed to be at most 35, and heuristics based on the prime number theorem suggest that there should be infinitely many such values of x for which p and n are both prime. One such value is x = 2^{250} + 2^4.

The situation is different for Hamming weights strictly below 35. Instead of a two-parameter family such as x = 2^a + 2^b, there appears to be only a finite collection of one-parameter families, and the same heuristics suggest that there are only finitely many examples. In particular, the largest such x that I could find was x = 33 \times 2^{267}, for which the Hamming weight of n is 34.

A particularly nice choice of x (in that it gives a reasonable security level without being too large) is x = 47 \times 2^{56}. The resulting values of n and p are both 252-bit primes, and the Hamming weight of n is 35. Here are the values in hexadecimal:

x = 0x2f00000000000000
p = 0xa787d240000000039081c0000000000cf180000000000011a00000000000001
n = 0xa787d240000000039081c00000000009b520000000000011a00000000000001

If you’re willing to have a smaller bit-length, then x = 17 \times 2^{43} provides 194-bit primes p and n, where the Hamming weight of n is merely 29. Also, because x is congruent to 1 (mod 3), it follows that p is congruent to 4 (mod 9), so cube roots can be computed efficiently in \mathbb{F}_p as described in Appendix B of the paper introducing BN curves:

x = 0x880000000000
p = 0x2de124000000565c800000006c60000000003300000000001
n = 0x2de124000000565c800000005148000000003300000000001
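
Both parameter sets can be reproduced from the standard BN parametrisation p(x) = 36x^4 + 36x^3 + 24x^2 + 6x + 1 and n(x) = 36x^4 + 36x^3 + 18x^2 + 6x + 1 (so p − n = 6x^2, which is why the two hexadecimal values above differ only in a few middle digits). A quick sanity check in Python:

def bn_params(x):
    """Field prime p and group order n of the BN curve with parameter x."""
    p = 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1
    n = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1   # n = p + 1 - t, where t = 6x^2 + 1
    return p, n

for x in (47 << 56, 17 << 43):                  # the two examples quoted above
    p, n = bn_params(x)
    print(hex(p), hex(n), 'weight of n:', bin(n).count('1'))

assert bn_params(17 << 43)[0] % 9 == 4          # x = 1 (mod 3) forces p = 4 (mod 9)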

The security level is quite mediocre, though: it only offers 96-bit security against Pollard’s rho algorithm for discrete log.


Meagre sets and null sets

There are two competing notions for describing a subset of the real numbers as being ‘small’:

  • a null set is a subset of the reals with Lebesgue measure zero;
  • a meagre set is a countable union of nowhere-dense sets.

Both of these properties are downward-closed: an arbitrary subset of a null set is itself a null set, and an arbitrary subset of a meagre set is again a meagre set. Moreover, countable unions of meagre sets are meagre, and countable unions of null sets are null.

Despite these similarities, the two notions of a set being ‘small’ are wholly incompatible.

In particular, there exist fat Cantor sets, nowhere-dense closed subsets of the unit interval which have positive Lebesgue measure arbitrarily close to 1. If you take a countable union of these sets (say, with Lebesgue measures of 1/2, 3/4, 7/8, 15/16, and so forth), the result is a meagre subset of [0, 1] with Lebesgue measure 1. If you take a countable union of translates of this set, each one occupying [n, n+1] for each integer n, the result is a meagre subset of the reals whose complement is a null set.
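
To make the ‘arbitrarily close to 1’ claim concrete, here is one standard construction (a sketch; the particular choice of interval lengths is ours): at stage k, delete from each of the 2^{k-1} surviving intervals an open middle interval of length \varepsilon \cdot 4^{-k}, so the total length removed is \sum_{k=1}^{\infty} 2^{k-1} \varepsilon 4^{-k} = \varepsilon / 2, leaving a closed nowhere-dense set of measure 1 − \varepsilon/2. Taking \varepsilon = 1, 1/2, 1/4, … yields the measures 1/2, 3/4, 7/8, … used above.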

Stated more succinctly, there is a meagre set A (the one we’ve just constructed) and a null set B (its complement) such that A and B are disjoint and their union is the whole real line.

Moreover, A is an F_{\sigma} set (countable union of closed sets) and B is a G_{\delta} set (countable intersection of open sets). It’s possible to prove that every meagre set is a subset of a meagre F_{\sigma} set, and likewise every null set is a subset of a null G_{\delta} set. This turns out to be an ingredient of the proof of…

The Erdős-Sierpiński duality theorem

From what has been mentioned so far, there seems to be some abstract ‘duality’ between null and meagre sets. Erdős and Sierpiński proved, conditional on the continuum hypothesis, a beautiful result that makes this precise:

There exists an involution (self-inverse bijection) f on the set of reals such that {f(x) : x in C} is null if and only if C is meagre, and {f(x) : x in D} is meagre if and only if D is null.

This involution f is highly discontinuous, being constructed using transfinite induction up to 2^{\aleph_0} (which, by assuming the continuum hypothesis, is also equal to the first uncountable cardinal \aleph_1). Shingo Saito describes the construction in detail.
