Comment by gjm11
It seems to be entirely written by an LLM.
[EDITED to add:] This is worth noting because today's LLMs really don't seem to understand mathematics very well. (This may be becoming less true with e.g. o3-pro and o4, but I'm pretty sure that document was not written by either of those.) They're not bad at pushing mathematical words around in plausible-looking ways; they can often solve fairly routine mathematical problems, even ones that aren't easy for humans, who (unlike the LLMs) haven't read every bit of mathematical writing produced to date by the human race. But they don't really understand what they're doing, and the nature of the mistakes they make shows that.
(For the avoidance of doubt, I am not making the tired argument that of course LLMs don't understand anything, they're just pattern-matching, something something stochastic parrots something. So far as I can tell it's perfectly possible that better LLMs, or other near-future AI systems that have a lot in common with LLMs or are mostly built out of LLMs, will be as good at mathematics as the best humans are. I'm just pretty sure that they're still some way off.)
(In particular, if you want to say "humans also don't really understand mathematics, they just push words and symbols around, and some have got very good at it", I don't think that's 100% wrong. Cf. the quotation attributed to John von Neumann: "Young man, in mathematics you don't understand things, you just get used to them." I don't think it's 100% right either, and some of the ways in which some humans are good at mathematics -- e.g., geometric intuition, visualization -- match up with things LLMs aren't currently good at. Anyway, I know of no reason why AI systems couldn't be much better at mathematics than the likes of Terry Tao, never mind e.g. me, but they aren't close enough to that yet for "hey, ChatGPT, please evaluate my speculation that we should be unifying continuous and discrete mathematics via topoi in a way that links aleph, beth and Betti numbers and shows how our brains nucleate discrete samples of continuum reality" to produce output that has value for anything other than inspiration.)
Yup, it's 100% generated by an LLM. I thought that was intentionally clear? (I'm recovering from a TBI, so I'm still relearning how to type; I use LLMs as my voice-mediated interface for typing out thoughts.)
I'm not sure there's an argument I'm hearing here, other than that you seem to have triggered some internal heuristic of "this was written by an LLM" x "it contains math words I don't understand" => "this is bullshit",
and you wouldn't be wrong in general, but I am making a specific constructivist modal logic here using infinity-groupoids from category theory. Infinite-dimensional categories are a thing, and that's what these transfinite numbers represent.
You have hyperreal constructions of the reals as well, which follow nonstandard analysis. You can also use Weil cohomology, which IIRC gets us most of calculus without the axiom of choice, but someone check me on that.
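For concreteness on the nonstandard-analysis point: in the hyperreals, the derivative is recovered by taking the standard part of a difference quotient over an infinitesimal. This is just the textbook formulation (nothing specific to my argument above):

```latex
f'(x) = \operatorname{st}\!\left(\frac{f(x + \varepsilon) - f(x)}{\varepsilon}\right),
\qquad \varepsilon \text{ a nonzero infinitesimal},
```

where $\operatorname{st}$ is the standard-part map sending each finite hyperreal to the unique real infinitely close to it.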
So, again: I'm not sure what your specific critique is?