Comment by PaulHoule
I can sure talk your ear off about that one as I went way too far into the semantic web rabbit hole.
Training LLMs to use 'tools' of various types is a great idea, as is running them inside frameworks that check that their output satisfies various constraints. Still, hard limits remain: SAT solving is NP-complete (and many intelligent-systems problems, such as the word problems you'd expect an A.I. to solve, boil down to SAT solving; see the toy sketch after this paragraph), and then there are the halting problem, Gödel's incompleteness theorems, and the like. I understand Doug Hofstadter has softened his positions lately, but I think many of the problems set up in this book
https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
(particularly the Achilles and the Tortoise dialogues) still stand today, as cringey as that book seems to me in 2025.
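
To make the SAT point concrete, here's a minimal sketch (my own toy example, nothing from the book): a small word problem encoded as CNF clauses and checked by brute force over all assignments, which is exponential in the number of variables; that blow-up is the point.

    from itertools import product

    # Toy word problem: "If Alice attends, Bob attends. Bob or Carol
    # attends. Alice and Carol can't both attend."  Encoded as CNF over
    # variables 1..3 (A, B, C); +i means variable i true, -i means false.
    clauses = [
        [-1, 2],   # A -> B  (i.e. not A, or B)
        [2, 3],    # B or C
        [-1, -3],  # not (A and C)
    ]

    def satisfiable(clauses, n_vars):
        """Brute-force SAT check: tries all 2**n_vars assignments."""
        for bits in product([False, True], repeat=n_vars):
            assign = {i + 1: bits[i] for i in range(n_vars)}
            if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assign
        return None

    print(satisfiable(clauses, 3))  # {1: False, 2: False, 3: True}

The signed-integer clause lists are the standard DIMACS-style encoding, so the same `clauses` could be fed to a real solver (e.g. via something like pysat) unchanged; the encoding scales, the worst-case search doesn't.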
I am hoping for a 'Turing tape' SLM: a small language model whose output tokens are instructions for a Copycat-style engine.
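
A minimal sketch of what "tokens as tape instructions" might mean (entirely my own speculation about the idea): the model's vocabulary is restricted to a handful of tape operations, and a tiny interpreter executes them, so the model's computation is externalized onto an inspectable tape rather than hidden in its activations.

    # Hypothetical: an SLM whose entire vocabulary is tape instructions.
    # A canned token stream stands in for sampled model output here.
    TAPE_LEN = 16

    def run(tokens):
        tape, head = [0] * TAPE_LEN, 0
        for tok in tokens:
            if tok == "LEFT":
                head = max(0, head - 1)          # move head left
            elif tok == "RIGHT":
                head = min(TAPE_LEN - 1, head + 1)  # move head right
            elif tok == "INC":
                tape[head] += 1                  # write: increment cell
            elif tok == "DEC":
                tape[head] -= 1                  # write: decrement cell
            elif tok == "HALT":
                break
        return tape

    # A real system would constrain decoding to this instruction set.
    print(run(["INC", "RIGHT", "INC", "INC", "HALT"]))  # [1, 2, 0, ...]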