Comment by Terr_
> Query GPT-5 medium thinking on the API on up to (I didn't bother testing higher) 13 digit multiplication of any random numbers you wish. Then watch it get it exactly right.
I'm not sure if "on the API" here means "the LLM and nothing else." This is important because it's easy to overestimate the algorithm when you give it credit for work it didn't actually do.
In general, human developers have taken steps to make the LLM hand off the text you entered to a classically written program, such as a calculator app, Python, or Wolfram Alpha. Without that, the LLM would have to rely on its (admittedly strong) powers of probabilistic fakery [0].
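To make that handoff concrete, here is a minimal sketch of the tool-use pattern described above; every name here is hypothetical, and the "model" is a stand-in that only transcribes the question into a tool call, while ordinary integer arithmetic does the real work:

```python
def fake_model_reply(prompt: str) -> str:
    # Stand-in for an LLM: it does no math, it merely transcribes the
    # question into a tool call, e.g.
    # "What is 1234567890123 * 9876543210987?" -> "CALC(1234567890123 * 9876543210987)"
    expr = prompt.removeprefix("What is ").removesuffix("?")
    return f"CALC({expr})"

def run_with_tool(prompt: str) -> int:
    reply = fake_model_reply(prompt)
    if reply.startswith("CALC(") and reply.endswith(")"):
        a, b = reply[5:-1].split(" * ")
        # Exact integer arithmetic, performed by Python, not by the "model".
        return int(a) * int(b)
    raise ValueError("no tool call in model reply")

print(run_with_tool("What is 1234567890123 * 9876543210987?"))
```

The point of the sketch: the transcription step is trivial, and all the correctness comes from the classical program behind it.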
Why does it matter? Suppose I claimed I had taught a chicken to do square roots. Suspicious, you peer behind the curtain, and find that the chicken was trained to see symbols on a big screen and peck the matching keys on a pocket calculator. Wouldn't you call me a fraud for that?
_____________
Returning to the core argument:
1. "Reasoning" that includes algebra, syllogisms, deduction, etc. involves certain processes for reaching an answer. Getting a "good" answer through another route (like an informed guess) is not equivalent.
2. If an algorithm cannot do the algebra process, it is highly unlikely that it can do the others.
3. If an algorithm has been caught faking the algebra process through other means, any "good" results for other forms of logic should be considered inherently suspect.
4. LLMs are one of the algorithms in points 2 and 3.
_____________
[0] https://www.mindprison.cc/p/why-llms-dont-ask-for-calculator...
>I'm not sure if "on the API" here means "the LLM and nothing else." This is important because it's easy to overestimate the algorithm when you give it credit for work it didn't actually do.
That's what I mean, yes. There is no tool use for what I mentioned.
>1. "Reasoning" that includes algebra, syllogisms, deduction, etc. involves certain processes for reaching an answer. Getting a "good" answer through another route (like an informed guess) is not equivalent.
Again, if you cannot confirm that these 'certain processes' are present when humans do it but absent when LLMs do it, then your 'processes' might as well be made up.
And unless you concede that humans are also not performing 'true algebra' or 'true reasoning', your position is not even logically consistent. You can't eat your cake and have it.