The largest number representable in 64 bits
(tromp.github.io)
67 points by tromp 5 hours ago
Related. Others?
The largest number representable in 64 bits - https://news.ycombinator.com/item?id=38414303 - Nov 2023 (105 comments)
The largest number representable in 64 bits - https://news.ycombinator.com/item?id=35677148 - April 2023 (42 comments)
(I haven't put 2023 in the current title since the article says it's been significantly expanded since then.)
It all goes over my head, but what does the distribution of values look like? E.g. for unsigned integers it's completely flat; for floating point there are far too many zeros and most of the numbers are clustered around 0. What do these systems end up looking like?
Let me go ahead and compute that for all halting lambda terms of length at most 33 bits. The output I got from a modified BB.lhs is (giving the normal form size and the number of terms with that normal form size):
4x208506 6x203638 7x93072 8x202741 9x62039 10x189422 11x101450 12x183896 13x96804 14x167842 15x103631 16x131387 17x100319 18x161560 19x148361 20x180227 21x117866 22x82568 23x90577 24x136315 25x158660 26x207930 27x181334 28x33308 29x33331 30x52430 31x80559 32x140753 33x231169 34x3643 35x1356 36x2817 37x1162 38x2067 39x707 40x1820 41x414 42x1316 43x226 44x1026 45x230 46x663 47x142 48x189 49x150 50x189 51x63 52x102 53x169 54x161 55x24 56x71 57x88 58x48 59x6 60x63 61x11 62x19 63x3 64x18 65x11 66x20 67x10 68x13 69x4 70x6 71x11 72x8 73x12 74x10 75x7 76x9 77x5 78x6 79x5 80x4 81x3 82x9 84x6 85x2 86x3 87x3 88x13 89x3 90x6 92x5 94x3 95x2 96x9 101x1 102x3 103x1 106x2 108x2 109x1 111x3 112x1 113x3 115x1 117x1 118x1 120x2 121x1 122x1 124x1 127x3 128x1 130x2 132x1 133x1 134x3 141x1 142x3 143x2 144x1 146x1 148x1 149x2 158x1 159x1 160x3 161x1 162x7 164x3 166x1 179x1 180x1 187x2 199x1 202x2 203x1 217x1 223x1 225x1 227x4 242x1 247x2 267x1 268x1 269x1 280x1 296x1 298x1 331x1 363x1 394x1 432x1 475x1 484x1 544x1 673x1 708x1 820x1 1364x1 1812x1
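(If you want to poke at this distribution yourself, here is a small self-contained Haskell sketch; `parsePair` and the abbreviated sample list are my own scaffolding, not part of BB.lhs. It parses the `size x count` pairs above and reports the total term count and mean normal form size.)

    -- Summarize a histogram given as "sizeXcount" words, e.g. "4x208506".
    parsePair :: String -> (Integer, Integer)
    parsePair s = let (sz, _:cnt) = break (== 'x') s in (read sz, read cnt)

    main :: IO ()
    main = do
      let pairs = map parsePair (words "4x208506 6x203638 7x93072") -- abbreviated
          total = sum (map snd pairs)
          mean  = fromIntegral (sum [sz * c | (sz, c) <- pairs])
                / fromIntegral total :: Double
      putStrLn $ "terms: " ++ show total ++ ", mean normal form size: " ++ show mean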
What's the biggest up-arrow notation number you can spell with 64 bits?
`9↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑9` seems like a reasonable guess (barring encoding cheats/trickery like @masfuerte commented!)
Edit: I misread the above comment, and my number is 64 bytes (significantly more than 64 bits). The largest 64-bit number through my approach would be `9↑↑↑↑↑↑9`, which is significantly smaller.
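(For reference, up-arrow notation pins down to a few lines of Haskell; this sketch is only feasible for the tiniest arguments, e.g. 2↑↑4 = 65536.)

    -- Knuth's up-arrow: a↑b = a^b, and a↑^(k+1)b = a↑^k (a↑^(k+1) (b-1)).
    upArrow :: Integer -> Int -> Integer -> Integer
    upArrow a 1 b = a ^ b
    upArrow _ _ 0 = 1
    upArrow a k b = upArrow a (k - 1) (upArrow a k (b - 1))

    -- ghci> upArrow 2 2 4   -- 2↑↑4 = 2^2^2^2 = 65536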
Please, no more comments to the effect of "I can define a much larger number in only 1 bit". What makes my blog post (hopefully) interesting is that I consider tiny programs for computing huge numbers in non-cheating languages that are not specifically equipped for doing so.
> that headline doesn't even include "program" (or "compute").
Neither does Scott's article titled "Who Can Name the Bigger Number?" [1]
The title is just a way to invite the reader to find out why the answer isn't simply 2^64-1.
[1] https://www.scottaaronson.com/writings/bignumbers.html
Question: how many different numbers can you actually fit in 64 bits using your encoding? (Sorry, I understand the general approach, but I have no idea how that fast-growing hierarchy works.) I guess it's still at most 2^64 different numbers?
So basically you have a very low density of representable numbers (2^64 / w218). I wonder how quickly it grows as you use more and more 1-bits, and is there even a correlation between the bit pattern and the corresponding number value?
BLC can output any literal 60-bit string x as the 64-bit (delimited) program 0010 x, so in that sense it would be some 61-bit number. But if you ask about just lambda calculus terms without the binary input, then I think it would be some small number of at most 10 bits. BBλ looks at the normal form size, so it cannot even reach the numbers 0, 1, 2, 3, and 5.
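(A sketch of that first construction; `echoProgram` is my own name for it. Since 0010 is the BLC code for the identity λx.x, prefixing it to 60 literal bits gives a 64-bit program that outputs exactly those bits.)

    -- 0010 encodes λx.x in BLC, so "0010" ++ x echoes the 60-bit input x.
    echoProgram :: String -> String
    echoProgram x
      | length x == 60 && all (`elem` "01") x = "0010" ++ x
      | otherwise = error "expected a 60-bit string"

    -- ghci> length (echoProgram (replicate 60 '1'))  -- 64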
"Computable" has a well-known standard definition in this context, meaning a computable function[1]. In a given model of computation, a computable function is one for which an algorithm exists which computes the value of the function for every value of its argument. For example, the successor function adds 1 to an input number, and is computable. The halting problem (determine whether a program given in the argument halts) is not computable.
I'm going to agree with the downvoted people and say that this sort of approach is largely meaningless if you allow arbitrary mappings. IMO the most reasonable mathematical formulation, given the structure of the integers (in the sense of e.g. Peano), is that to truly represent an integer you have to represent zero, and every other representable number must have a representable predecessor; i.e., to say you can represent 5 you need 0, 1, 2, 3, 4, and 5 to be representable. By a straightforward counting argument, 2^64-1 is then the largest representable number; in other words, the obvious thing is right.
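(The Peano view described above in a few lines of Haskell, as a sketch: the term for 3 literally contains the terms for 0, 1 and 2 as subterms, and a k-bit string can encode at most 2^k such terms.)

    -- Unary Peano naturals: S (S (S Z)) contains 0, 1 and 2 as subterms.
    data Nat = Z | S Nat

    toInt :: Nat -> Integer
    toInt Z     = 0
    toInt (S n) = 1 + toInt n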
As I've replied several times before, we don't allow arbitrary mappings. We allow computable mappings, but consider only obviously non-cheating languages like Turing machines, lambda calculus, Linux's bc, or any existing programming language, which are not geared toward outputting insanely large numbers.
It's not "the largest representable number" because you're not representing numbers in any rigorous sense. If I give you 64 bits, you can't tell me what number those bits represent (first, because the rules of the game are ambiguous - what if I give you 8 bytes that are a valid program in two different languages; and second, because even if you made the rules precise, you don't know which bitstrings correspond to programs that halt). And if I give you a number, you can't tell me which 64 bits represent that number or even if the number is representable, and that's true even for small numbers and even if I give you unbounded time.
It seems far more natural to say that you're representing programs rather than numbers. And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less. Which is also fun and interesting!
> If I give you 64 bits, you can't tell me what number those bits represent
You have to tell me the (non-cheating) programming language that the 64 bit program is written in as well.
> And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less.
That's what the post ends up saying, after first discussing conventional representations, and then explicitly widening the representations to programs in (non-cheating) languages.
I would say that all of those seem both arbitrary and geared toward outputting insanely large numbers (in the sense that the output of any Turing-complete language is). Now if you can make these claims in a mathematically rigorous way (i.e. without relying on a particular mapping like Turing machines / lambda calculus, and without silly "up to a constant factor" cheats), then that would be more interesting.
Turing Machines and Lambda Calculus can only output insanely large numbers by building those numbers from scratch using their Turing completeness. So while lambda calculus can output something exceeding Loader's Number, it needs well over a thousand bits to do so. What I mean by "geared toward outputting insanely large numbers" is saying: I define a language in which the 1-bit program "0" outputs Loader's Number. That is obviously cheating.
There is unfortunately no mathematically rigorous way to define what is cheating, so it seems unreasonable to ask me for that.
That is not a number, that is infinity.
The (implicit) rules of the game require the number to be finite. The reason for this is not that infinity is not obviously "the largest" but that the game of "write infinity in the smallest number of {resource}" is trivial and uninteresting. (At least for any even remotely sensible encoding scheme. Malbolge [1] experts may chime in as to how easy it is to write infinity in that language.) So if you like, pretend we played that game already and we've moved on to this one. "Write infinity" is at best a warmup for this game.
[1] https://en.wikipedia.org/wiki/Malbolge
(I'm not going to put up another reply for this, but the several people posting "ah, I will cleverly just declare 'the biggest number someone else encodes + 1'" are just posting infinity too. The argument is somewhat longer, but not that difficult.)
This game assumes the computations run to completion on systems that will never run out of resources. No one in this universe will ever compute Ackermann's number, BB(6), or the final answer given in the post. Computations that never complete are infinite.
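(A sketch of what "computable in principle but never in this universe" looks like: the two-argument Ackermann-Péter function below provably halts on every input, yet already ack 4 2 = 2^65536 - 3, a number of 19729 decimal digits.)

    -- Ackermann-Péter function: total (always halts in principle),
    -- but utterly infeasible beyond tiny arguments.
    ack :: Integer -> Integer -> Integer
    ack 0 n = n + 1
    ack m 0 = ack (m - 1) 1
    ack m n = ack (m - 1) (ack m (n - 1))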
If you are playing this game and can't produce a number that doesn't fit in this universe, you are probably better suited to playing something else. That's just table stakes, if it even qualifies as that. "Inscribe every subatomic particle in the universe with a 9 every Planck instant until the heat death of the universe" doesn't even get off the starting line in games like this.
Another general comment: It feels like a lot of people are really flailing around here, and need to understand this is a game. It has rules. If you change the rules, you are playing a different game. There is nothing wrong with playing a different game. It is just a different game. The game is not immutably written in the structure of the universe, or a mathematical truth; it is a human choice. And there isn't necessarily a "why" to the rules any more than there's a "why" to why the bishop moves as it does in chess. You can, in fact, change that rule. There are thousands of such variants. It's just that you're playing a different game than chess at that point.

If you don't want to play the author's game, then that's fine, but it doesn't change the game itself. And proposing different solutions is equivalent to saying that you can win a chess game by just flipping over the board and yelling "I win". You can do that. Perhaps you've even won some game. But whatever game you just won, it isn't chess.
The post addresses this very issue:
> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.
Following BLC8's bytewise encoding convention of [1], w218's binary encoding 0100 0101 1010 1000 0110 0110 0000 0001 0101 1011 1011 0000 0011 1001 1101 0 gets padded with 3 arbitrary least significant bits, say 000, and becomes 45A8_6601_5BB0_39D0 in hexadecimal.
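(A sketch of that packing in Haskell, assuming bits are most significant first within each byte; `toHex` is my own helper, not from [1].)

    import Data.Char (intToDigit)
    import Data.List (unfoldr)

    -- Pack a bit string into hex nibbles, padding the final byte
    -- with arbitrary low bits (here: zeros).
    toHex :: String -> String
    toHex bits = map nibble (chunks padded)
      where
        padded    = bits ++ replicate (negate (length bits) `mod` 8) '0'
        nibble ds = intToDigit (foldl (\a d -> 2*a + fromEnum (d == '1')) 0 ds)
        chunks    = unfoldr (\s -> if null s then Nothing else Just (splitAt 4 s))

    -- ghci> toHex "11010"  -- "d0": five bits padded to the byte 11010000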
Wow, that's like an entry to the International Obfuscated C Code Contest.
To be pedantic, that is an instance of the Berry paradox [1], and no, you cannot [2], as that would be a violation of Gödel's incompleteness theorems.
edit: To clarify further, you could create a new formal language L+ that axiomatically defines 0 as "largest number according to L", but that would no longer be L; it would be L+. For any given language with rules at this level of power, you cannot make that statement without creating a new language with even more powerful rules. I.e., each specific set of rules is capped; you need to add more rules to increase that cap, but then you have a different language.
[1] https://en.wikipedia.org/wiki/Berry_paradox
[2] https://terrytao.wordpress.com/2010/11/02/the-no-self-defeat...
To be more pedantic, yes you can, but only with a meta-language.
> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.
Sorry; no winning for cheaters:-(
This feels like the computer science version of this article: https://www.scottaaronson.com/writings/bignumbers.html