Comment by James_K
It may genuinely be the case that slower humans are not generally intelligent. But that sounds rather snobbish, so it's not an opinion I'd like to express frequently.
I think the complaint made by Apple is quite logical, though, and you mischaracterise it here. The question asked in the Apple study was "if I give you the algorithm that solves a puzzle, can you solve that puzzle?" The answer for most humans should be yes. Indeed, the answer is yes for computers that are not generally intelligent. The models failed to execute the algorithm. This suggests that the models are far inferior to the human mind in terms of their computational ability, which is a precondition for general intelligence if you ask me. It seems to indicate that the models are using more of a "guess and check" approach than actually thinking. (A particularly interesting result was that model performance did not substantially improve between a puzzle with the solution algorithm given and one where no algorithm was given.)
You can sort of imagine the human mind as the head of a Turing machine that operates on language tokens, with the goal of an LLM being to imitate the internal logic of that head. This paper seems to demonstrate that they are not very good at doing that. It makes a lot of sense when you think about it, because the models work by consuming their entire input at once, whereas the human mind operates with only a small working memory. That is a fundamental architectural difference, which I suspect is the cause of the collapse noted in the Apple paper.
I think a human would struggle to solve Hanoi using the recursive algorithm for as few as 6 disks, even given pen and paper.
Does that change if you give them the algorithm description? No. Likewise, the LLMs already know the algorithm, so including it in the prompt makes no difference.
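For concreteness, here is a minimal Python sketch (my own illustration, not code from the Apple paper) of the standard recursive Hanoi algorithm being discussed. Executing it mechanically yields 2^n - 1 moves, so 6 disks already means 63 moves, which is why following it by hand is tedious for a human but trivial for any computer that simply runs the recursion.

```python
def hanoi(n, source, target, spare, moves):
    """Standard recursive Tower of Hanoi: move n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the top n-1 disks on the spare peg
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top of it

moves = []
hanoi(6, "A", "C", "B", moves)
print(len(moves))  # 63, i.e. 2**6 - 1; the move count doubles with every extra disk
```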