Comment by btilly 4 days ago

61 replies

This is a Stack Overflow question that I turned into a blog post.

It covers both the limits of what can be proven in the Peano Axioms, and how one would begin bootstrapping Lisp in the Peano Axioms. All of the bad jokes are in the second section.

Corrections and follow-up questions are welcome.

doodlebugging 3 days ago

After putting on my boots and wading through all of that I think that you have one edit to make.

In the "Why Lisp?" section there is a bit of basic logic defined. The first of those functions appears to have unbalanced parentheses.

>(defun not (x)
>  (if x
>    false
>    true)
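
For reference, the intended definition is presumably the same thing with one more closing paren:

    (defun not (x)
      (if x
          false
          true))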

I have a compulsion that I can't control when someone starts using parentheses. I have to scan to see who does what with who.

You later say in the same section

>But it is really easy to program a computer to look for balanced parentheses

Okay. This is pretty funny. Thanks for the laugh. I realize that you weren't doing that but it still is funny that you pointed out being able to do it.
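
And it really is easy: a toy checker (my own sketch, not code from the post) is just a depth counter:

    (defun balanced-parens-p (s)
      ;; Return T when the parens in string S balance and never
      ;; close more than they open.
      (let ((depth 0))
        (dotimes (i (length s) (zerop depth))
          (let ((ch (char s i)))
            (cond ((char= ch #\() (incf depth))
                  ((char= ch #\))
                   (decf depth)
                   (when (minusp depth)
                     (return-from balanced-parens-p nil))))))))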

This later comment in the "Basic Number Theory" section was also funny.

>; After a while, you stop noticing that stack of closing parens.

I really enjoyed reading this post. Great job. Though it has been a long time since I did anything with Lisp, I was able to walk through and get the gist again.

  • btilly 3 days ago

    Thank you for the correction!

    And I'm glad that someone liked some of my attempts at humor. As my wife sometimes says, "I know that you were trying to make a joke, because it wasn't funny."

    • doodlebugging 3 days ago

      It was a great read. I enjoyed how you laid it all out. It reminded me of some of my upper level math coursework. Easy to follow if you take it step by step and stop to consider the implications. Some things become obvious to the most casual observer.

    • whatagreatboy 3 days ago

      I found all the jokes funny as well. Thanks for the blog post. Extremely nice read. I love the approach.

  • kazinator 3 days ago

    > I have to scan to see who does what with who.

    Are you saying that parentheses introduce the problem of having to scan to see what goes with what?

    As in, if we don't have parentheses, but still have recursive/nested structure, we don't have to scan?

    • doodlebugging 3 days ago

      For myself the issue goes back to my college mathematics courses, especially differential equations. I worked those homework problems by hand on a large format tablet, roughly 24" x 36", carefully laying them out step by step so that I could walk through them in the future and make sense of the solution process. Counting and matching parentheses was pretty critical since a missed parenthesis may not pop out at you like it would in a compiler error or by walking through code.

      I automatically count and match even today, 40 years later.

    • Retr0id 3 days ago

      Python block indentation is an example of nested structure that's at least easier to visually scan. You don't need to count opening/closing parens, just look down a column of text - assuming nobody mixed tabs and spaces. (but I wouldn't go as far as saying you don't need to scan it)

      • User23 3 days ago

        Canonically indented Lisp reads an awful lot like Python. You don’t read the parens, you read the indentation.
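
        A purely illustrative snippet (mine, not from the post): the indentation carries the shape, and the run of trailing parens fades into the background.

          (defun fib (n)
            (if (< n 2)
                n
                (+ (fib (- n 1))
                   (fib (- n 2)))))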

edanm 3 days ago

This is fascinating! I haven't read much past the intro yet, but I love the whole premise: within PA you can prove that each specific Goodstein sequence terminates at 0, yet not that all of them terminate (maybe that's a trivial observation once you see it, but it's still interesting).
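
For anyone who hasn't seen one worked out, here is the (very short) Goodstein sequence starting at 3: write the value in hereditary base-b notation, bump every b to b+1, subtract 1, and repeat with the next base.

    base 2:  3 = 2 + 1    bump 2 -> 3:  3 + 1 = 4    minus 1 -> 3
    base 3:  3 = 3        bump 3 -> 4:  4            minus 1 -> 3
    base 4:  3            (no base appears)          minus 1 -> 2
    base 5:  2            (no base appears)          minus 1 -> 1
    base 6:  1            (no base appears)          minus 1 -> 0, done

Starting at 4 instead, the sequence already grows to astronomical heights before it finally comes back down to 0.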

I also find it super weird that the Peano axioms are enough to encode computation. Again, this might be trivial if you think about it, but that's one self-referential layer more than I've thought about before.

One question for you, btilly - oddly enough, I just recently decided to learn more Set Theory, and actually worked through an Intro to Set Theory textbook up to Goodstein sequences just last week. I'm a bit past that now.

Do you have a good recommendation for a second, advanced Set Theory textbook? Also, any recommendation for a textbook that digs into Peano arithmetic? (My mini goal since learning the proof for Goodstein sequences is to work up to understanding the proof that Peano isn't strong enough to prove Goodstein's theorem, though I'll happily take other paths instead if they're strongly recommended.)

  • btilly a day ago

    My apologies for having missed this request.

    I don't have a good suggestion for a set theory textbook. Grad school was 30 years ago, and I didn't specialize in logic.

    The best set theory book that I read was https://www.amazon.com/Naive-Theory-Undergraduate-Texts-Math.... But that one is aimed at people who want to go into math but do not wish to specialize in set theory, and not at people who want to actually learn set theory.

    • edanm a day ago

      Thanks! I read through a bunch of that textbook when I first decided to learn more maths a few years ago, but I relatively quickly started dual-reading a different textbook ("Introduction to Set Theory" by Hrbacek and Jech), which is a more in-depth, non-naive introduction. That's the book I'm continuing to work through now.

      • btilly 17 hours ago

        Then you likely know more set theory than I do!

Cheyana 4 days ago

Thanks for this. In another strange internet coincidence, I was asking ChatGPT to break down the fundamentals of the Peano axioms just yesterday and now I see this. Thumbs up!

  • burnt-resistor 3 days ago

    I suspect that in the near future (if not already) ChatGPT data will be sold to data brokers and bought by Amazon, such that writing a prompt will end up polluting your Alexa product recommendations within a few minutes to hours.

    • random3 2 days ago

      I suspect that in the near future nobody will directly read product recommendations.

    • lmpdev 3 days ago

      Oh for fuck's sake

      Can we not ruin every technology we develop with ads?

      • brookst 3 days ago

        Let’s also not ruin every discussion with speculation about how someone will eventually ruin everything with ads.

      • chii 3 days ago

        When there's a need to make money, but no one wants to pay (even though they want to use the service), then it naturally falls onto being ad-driven.

  • im3w1l 3 days ago

    Well there was a post on mathmemes a day ago about teaching kids set theory as a foundation for math with some discussion of PA. So maybe related ideas are echoing across the intertubes in this moment?

    • kjellsbells 3 days ago

      > teaching kids set theory as a foundation for math

      Very reminiscent of the New Math pedagogy of the 1960s. Built up arithmetic entirely from sets. Crashed and burned for various reasons but I always had a soft spot for it. It also served as my introduction to binary arithmetic and topology.

  • Art9681 3 days ago

    I've noticed this too. I will be researching a topic elsewhere and then it seems to pop up on HN. Am I just looking for patterns where there are none, or is there some trickery happening where HN tracks those activities and mixes in posts more relevant to my interests with the others?

    • nemomarx 3 days ago

      HN has a single front page for everyone, so it's a frequency illusion. You pay more attention to the topics you've been thinking about and skim over titles you haven't.

      • gjm11 3 days ago

        It may also be that there's some common cause for (1) a topic being on HN's front page and (2) someone researching it elsewhere. E.g., maybe a couple of days ago there was an interesting social-media post related to it, and (1) one person saw it and wrote up something interesting enough to get on HN and (2) another person saw it, started digging around to find out more, and then read HN.

        (I am not suggesting that that very specific sequence of events happened in this case. It's just an example of how "I was looking this up for other reasons and then I saw something about it on HN" could genuinely happen more often than by chance, without any need for weird tracking-and-advertising shenanigans.)

  • RcouF1uZ4gsC 3 days ago

    This proves that ChatGPT sells your data to HN which then decides which posts to put on the front page.

    • billyjmc 3 days ago

      Incidentally, this also proves that GP is the main character.

anthk 3 days ago

Boot sector Lisp bootstraps itself.

https://justine.lol/sectorlisp2/

Also, lots of Lisps from https://t3x.org implement numerals (and the rest of the language) from cons cells and apply/eval:

> John McCarthy discovered an elegant self-defining way to compute the above steps, more commonly known as the metacircular evaluator. Alan Kay once described this code as the "Maxwell's equations of software". Here are those equations as implemented by SectorLISP:
>
> ASSOC EVAL EVCON APPLY EVLIS PAIRLIS

Ditto with some Forths.

Back to T3X: the author also has Zenlisp, where the meta-circular evaluation is basically a matter of defining eval/apply and showing how they call each other recursively (a rough sketch follows below the link).

http://t3x.org/zsp/index.html
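
For readers who haven't seen it, here is a rough sketch of that mutually recursive eval/apply core, written as plain Common Lisp for readability. It is not SectorLISP's or Zenlisp's actual source, and it ignores primitives (car, cons, eq, ...) and error handling:

    (defun pairlis. (keys vals env)
      ;; Extend ENV (an alist) with KEYS bound to VALS.
      (if (null keys)
          env
          (cons (cons (car keys) (car vals))
                (pairlis. (cdr keys) (cdr vals) env))))

    (defun assoc. (key env)
      ;; Look a symbol up in the environment.
      (cdr (assoc key env)))

    (defun evlis. (exprs env)
      ;; Evaluate a list of argument expressions.
      (if (null exprs)
          nil
          (cons (eval. (car exprs) env)
                (evlis. (cdr exprs) env))))

    (defun evcon. (clauses env)
      ;; Evaluate COND clauses until one test comes out true.
      (if (eval. (caar clauses) env)
          (eval. (cadar clauses) env)
          (evcon. (cdr clauses) env)))

    (defun apply. (fn args env)
      ;; Apply a lambda expression to already-evaluated arguments.
      (eval. (caddr fn)                      ; the body
             (pairlis. (cadr fn) args env))) ; params bound to args

    (defun eval. (expr env)
      (cond ((atom expr) (assoc. expr env))  ; variables
            ((eq (car expr) 'quote) (cadr expr))
            ((eq (car expr) 'cond) (evcon. (cdr expr) env))
            ((atom (car expr))               ; named function: look it up
             (eval. (cons (assoc. (car expr) env) (cdr expr)) env))
            (t (apply. (car expr) (evlis. (cdr expr) env) env))))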

  • btilly 3 days ago

    I knew that this sort of stuff was possible, but it is fun to see it.

    When it comes to bootstrapping a programming language from nothing, the two best options are Lisp and Forth. Of the two, I find Lisp easier to understand.

Timwi 3 days ago

> Corrections and follow-up questions are welcome.

There are two places where you accidentally wrote “omega” instead of “\omega”.

  • btilly 3 days ago

    Thanks!

    I am away from my computer for the day, but I will fux it later.

    • gjm11 3 days ago

      > I will fux it later

      I think the problem is that it's already fuxed.

ikrima 3 days ago

Hey! This is fantastic and actually ties in some very disparate parts of math. Basically, reorient & reformulate all of math/epistemology around discretely sampling the continuum. Invert our notions of Aleph/Beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments that nucleate discrete samples of continuum of reality (ontology)

Then every modal logic becomes some mapping of 2^(N) to some set of statements. The only thing that matters is how predictive they are with some sort of objective function/metric/measure but you can always induce an "ultra metric" around notions of cognitive complexity classes i.e. your brain is finite and can compute finite thoughts/second. Thus for all cognition models that compute some meta-logic around some objective F, we can motivate that less complex models are "better". There comes the ultra measure to tie disparate logic systems. So I can take your Peano Axioms and induce a ternary logic (True, False, Maybe) or an indefinite-definite logic (True or something else entirely). I can even induce bayesian logics by doing power sets of T/F. So a 2x2 bayesian inference logic: (True Positive, True Negative, False Positive, False Negative)

Fun stuff!

Edit: The technical tldr that I left out is the unification of all math imho: algebraic topology + differential geometry + tropical geometry + algebraic analysis. D-modules and Microlocal Calculus from Kashiwara and the Yoneda lemma encode all of epistemology as relational: either between objects or the interaction between objects, defined as collisionless Planck hypervolumes.

basically encodes the particle-wave duality as discrete-continuum and all of epistemology is Grothendieck topoi + derived categories + functorial spaces between isometry of those dual spaces whether algebras/coalgebra (discrete modality) or homologies/cohomologies (continuous actions)

Edit 2: The thing that ties everything together is Noether's symmetry/conserved quantities, which (my own wild-ass hunch) are best encoded as "modular forms", arithmetic's final mystery. The continuous symmetry I think makes it easy to think about diffeomorphisms from different topoi by extracting homeomorphisms from gauge-invariant symmetries (in the discrete case it's a lattice, but in the continuous case we'd have to formalize some notion of liquid or fluid bases? I think Kashiwara's crystal bases have some utility there, but this is so far beyond my understanding)

  • ricardobeat 3 days ago

    > Invert our notions of Aleph/Beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments that nucleate discrete samples of continuum of reality (ontology)

    There’s probably ten+ years of math education encoded in this single sentence?

    • gjm11 3 days ago

      My apologies to ikrima for being critical, but I think anyone who thinks "aleph/beth/Betti numbers" is a coherent set of things to put together is just very confused.

      Aleph and beth numbers are related things, in the field of set theory. (Two sequences[1] of infinite cardinal numbers. The alephs are all the infinite cardinals, if the axiom of choice holds. The beth numbers are the specific ones you get by repeatedly taking powersets. They're only all the cardinals if the "generalized continuum hypothesis" holds, a much stronger condition.)

      [1] It's not clear that this is quite the right word, but no matter.

      Betti numbers are something totally different. (If you have a topological space, you can compute a sequence[2] of numbers called Betti numbers that describe some of its features. They are the ranks of its homology groups. The usual handwavy thing to say is that they describe how many d-dimensional "holes" the space has, for each d.)

      [2] This time in exactly the usual sense.
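
      To make the contrast concrete (my gloss, nothing from the comment I'm replying to): the beth numbers are generated by iterated powersets,

        \beth_0 = \aleph_0, \qquad \beth_{\alpha+1} = 2^{\beth_\alpha},

      while the Betti numbers of, say, the torus are b_0 = 1, b_1 = 2, b_2 = 1: one connected component, two independent 1-dimensional holes, one enclosed 2-dimensional void. Different subjects entirely.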

      It's not quite true that there is no connection between these things, because there are connections between any two things in pure mathematics and that's one of its delights. But so far as I can see the only connections are very indirect. (Aleph and beth numbers have to do with set theory. Betti numbers have to do with topology. There is a thing called topos theory that connects set theory and topology in interesting ways. But so far as I know this relationship doesn't produce any particular connection between infinite cardinals and the homology groups of topological spaces.)

      I think ikrima's sentence is mathematically-flavoured word salad. (I think "Betti" comes after "beth" mostly because they sound similar.) You could probably take ten years to get familiar with all the individual ideas it alludes to, but having done so you wouldn't understand that sentence because there isn't anything there to understand.

      BUT I am not myself a topos theorist, nor an expert in "our human brain's sensory instruments". Maybe there's more "there" there than it looks like to me and I'm just too stupid to understand. My guess would be not, but you doubtless already worked that out.

      [EDITED to add:] On reflection, "word salad" is a bit much. E.g., it's reasonable to suggest that our senses are doing something like discrete sampling of a continuous world. (Or something like bandwidth-limited sampling, which is kinda only a Fourier transform away from being discrete.) But I continue to think the details look more like buzzword-slinging than like actual insight, and that "aleph/beth/Betti" thing really rings alarm bells.

      • ikrima 2 days ago

        Also, you're onto the actual quantum mechanics paper I'm working on. QM/QFT is modern-day epicycles: arbitrarily complex because it was aliasing the natural, deeper representation, which was Fourier/spectral analysis.

        Reformulating our entire ontology around relational mechanics is the answer imho. So Carlo Rovelli's RQM is right, but I think it doesn't go far enough. Construct a Grothendieck topos with a spacetime cohomology around different scales of both space and time with some sort of indefinite conservation, and you get collisionless Planck hypervolumes that map naturally to particle-wave duality interpretations of QM.

      • ikrima 2 days ago

        lol, it's a sketch of a proof covering a large swath of unexplored math. The other poster wasn't wrong when he said I smashed 10+ years of graduate math into one sentence.

        Aleph numbers: these are the cardinal sizes of infinity; depending on your choice of axioms (ZFC or not), you have the continuum hypothesis, aleph0 = the naturals, aleph1 = 2^N = the continuum

        Beth numbers are transfinite ordinals => they generalize infinitesimals like the 1st, 2nd, 3rd. so you can think of them as a dual or co-algebra (I'm hand waving here, it's been twenty years since real analysis).

        Betti numbers are for persistent cohomology; they track holes similar to genus

        I mean, there's a lot to cover between tropical geometry, differential geometry, and algebraic analysis. So sometimes alarm bells are false alarms and your random internet commenter knows what he's talking about but is admittedly too sloppy. But it's 5 pm on a Saturday and I wrote that in the morning while making breakfast eggs, not for submission to the Annals of Mathematics!

        Thank you for coming to my TED Stand Up Talk.

        More math at the GitHub: http://github.com/ikrima/topos.noether

        Also, if you're really that uptight, most of this is actually to teach algebraic topology to my autistic nonverbal nephew because I'm gonna gamify it as a magic spell system

        So it'll be open source, and that begs the question: if you use it to learn something, does that mean I just zero-proof zero-knowledge something out of you that I didn't even need to know, by making a self-referential statement across both space & time?

        peace out my ninja!

      • ikrima 2 days ago

        I mean, you wouldn't be wrong to assume so, but how can you expect anyone to saliently condense the entirety of a 10-year-long proof about Grothendieck topoi into 3 or 4 sentences, my guy!

    • ikrima 3 days ago

      You know what, I nerd-sniped myself; here's a more fleshed out sketch of the Discrete Continuum Bridge:

      https://github.com/ikrima/topos.noether/blob/master/discrete...

      • gjm11 3 days ago

        It seems to be entirely written by an LLM.

        [EDITED to add:] This is worth noting because today's LLMs really don't seem to understand mathematics very well. (This may be becoming less so with e.g. o3-pro and o4, but I'm pretty sure that document was not written by either of those.) They're not bad at pushing mathematical words around in plausible-looking ways; they can often solve fairly routine mathematical problems, even ones that aren't easy for humans who unlike the LLMs haven't read every bit of mathematical writing produced to date by the human race; but they don't really understand what they're doing, and the nature of the mistakes they make shows that.

        (For the avoidance of doubt, I am not making the tired argument that of course LLMs don't understand anything, they're just pattern-matching, something something stochastic parrots something. So far as I can tell it's perfectly possible that better LLMs, or other near-future AI systems that have a lot in common with LLMs or are mostly built out of LLMs, will be as good at mathematics as the best humans are. I'm just pretty sure that they're still some way off.)

        (In particular, if you want to say "humans also don't really understand mathematics, they just push words and symbols around, and some have got very good at it", I don't think that's 100% wrong. Cf. the quotation attributed to John von Neumann: "Young man, in mathematics you don't understand things, you just get used to them." I don't think it's 100% right either, and some of the ways in which some humans are good at mathematics -- e.g., geometric intuition, visualization -- match up with things LLMs aren't currently good at. Anyway, I know of no reason why AI systems couldn't be much better at mathematics than the likes of Terry Tao, never mind e.g. me, but they aren't close enough to that yet for "hey, ChatGPT, please evaluate my speculation that we should be unifying continuous and discrete mathematics via topoi in a way that links aleph, beth and Betti numbers and shows how our brains nucleate discrete samples of continuum reality" to produce output that has value for anything other than inspiration.)

  • ikrima 3 days ago

    You know what, since you put in all that work, here's my version using p-adic geometry to generalize the concept of time as a local relativistic "motive" (from category theory) notion of ordering (i.e. analogous to Grothendieck's generalization of functions as being point samples along curves of a basis of distributions to generalize notions of derivatives):

    https://github.com/ikrima/topos.noether/blob/aeb55d403213089...