Comment by ikrima
you know what, I nerd-sniped myself, here's a more fleshed-out sketch of the Discrete Continuum Bridge:
https://github.com/ikrima/topos.noether/blob/master/discrete...
It seems to have been written entirely by an LLM.
[EDITED to add:] This is worth noting because today's LLMs really don't seem to understand mathematics very well. (This may be becoming less so with e.g. o3-pro and o4, but I'm pretty sure that document was not written by either of those.) They're not bad at pushing mathematical words around in plausible-looking ways; they can often solve fairly routine mathematical problems, even ones that aren't easy for humans who, unlike the LLMs, haven't read every bit of mathematical writing produced to date by the human race; but they don't really understand what they're doing, and the nature of the mistakes they make shows that.
(For the avoidance of doubt, I am not making the tired argument that of course LLMs don't understand anything, they're just pattern-matching, something something stochastic parrots something. So far as I can tell it's perfectly possible that better LLMs, or other near-future AI systems that have a lot in common with LLMs or are mostly built out of LLMs, will be as good at mathematics as the best humans are. I'm just pretty sure that they're still some way off.)
(In particular, if you want to say "humans also don't really understand mathematics, they just push words and symbols around, and some have got very good at it", I don't think that's 100% wrong. Cf. the quotation attributed to John von Neumann: "Young man, in mathematics you don't understand things, you just get used to them." I don't think it's 100% right either, and some of the ways in which some humans are good at mathematics -- e.g., geometric intuition, visualization -- match up with things LLMs aren't currently good at. Anyway, I know of no reason why AI systems couldn't be much better at mathematics than the likes of Terry Tao, never mind e.g. me, but they aren't close enough to that yet for "hey, ChatGPT, please evaluate my speculation that we should be unifying continuous and discrete mathematics via topoi in a way that links aleph, beth and Betti numbers and shows how our brains nucleate discrete samples of continuum reality" to produce output that has value for anything other than inspiration.)
Yup, it's 100% generated by an LLM. I thought that was intentionally clear? (I'm recovering from a TBI, so I'm still figuring out how to relearn typing; I use LLMs as my voice-mediated interface for typing out thoughts.)
I'm not sure there's an argument I'm hearing here, other than that some internal heuristic of yours seems to have been triggered: "this was written by an LLM" x "it contains math words I don't understand" => "this is bullshit".
In which you wouldn't be wrong, except that I am making a specific constructivist modal-logic argument here using infinity-groupoids from category theory. Infinite-dimensional categories are a thing, and that's what these transfinite numbers represent.
You also have hyperreal constructions of the reals, which follow nonstandard analysis. You can also use Weil cohomology, which IIRC gets us most of calculus without the axiom of choice, but someone check me on that.
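For readers unfamiliar with the reference: the standard hyperreal construction from nonstandard analysis (the ultrapower construction, not anything specific to the linked document) can be sketched as follows.

```latex
% Ultrapower construction of the hyperreals: fix a nonprincipal
% ultrafilter U on the naturals, and take sequences of reals
% modulo agreement on a U-large set of indices:
{}^{*}\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}} / \mathcal{U},
\qquad
(a_n) \sim (b_n) \;\iff\; \{\, n \in \mathbb{N} : a_n = b_n \,\} \in \mathcal{U}.
% The class of (1/n) is a nonzero infinitesimal, and the transfer
% principle recovers calculus with infinitesimals in place of
% epsilon-delta limits.
```

One caveat worth flagging against the "without the axiom of choice" remark: the existence of a nonprincipal ultrafilter itself requires a choice principle (the ultrafilter lemma, weaker than full AC), so this particular construction is not choice-free.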
So, again: not sure what your specific critique is?
No specific critique here other than "it was written by an LLM and this seems worth pointing out given that LLMs are bad at actually understanding difficult mathematics".
(In a different comment I make some actual criticisms of what you wrote. I see you replied to my comment there, and that's a more appropriate place to discuss actual ideas. I don't see much point in criticizing LLM output in a field LLMs are bad at.)
Anyway: (1) no, it wasn't clear. I wouldn't generally take "I nerd-sniped myself. Here's a more fleshed-out sketch of ..." to mean "Here's something written for me by an LLM". I'd take it to imply that the person had done the fleshing-out themself. And (2) no, the problem wasn't that you used words I don't understand. It's certainly possible that your ideas are excellent and I just don't understand them, but I'm a mathematician myself and none of the words scare me.
"no, it wasn't clear. I wouldn't generally take "I nerd-sniped myself. Here's a more fleshed-out sketch of ..." to mean "Here's something written for me by an LLM". I'd take it to imply that the person had done the fleshing-out themself."
aaaaaaaah, I think you finally helped me notice something subtle about how I use LLMs differently than other people. It sounds obvious now that I think about it, but I never considered that people use LLMs like Google, whereas I use them more like a real-time thought transcriber (e.g. Dragon NaturallySpeaking, but not shite :P). Since it's trained on a RAG built from my own polished thoughts, I've set it up as a meta-circular evaluator to do linguistic filtration (basically Fourier kernels on CLIP embedding space that map to various measures of "conceptual clarity").
So the LLM-ness of it is, to me, a clear flag that this was hastily dictated.
There are others out here thinking along similar lines (in my case, with massive help from LLMs). Proof: https://claude.ai/share/a8128fde-ea47-4dd8-a284-16a1fd76240c . Also, I have a GitHub too: https://github.com/bobshafer/PITkit/blob/main/Links.md