Comment by Lerc
I have never understood why the failure to answer the strawberry question has been seen as a compelling argument about the limits of AI. The AIs that suffer from this problem have difficulty counting; that has never been denied. Those AIs also do not see the letters of the words they are processing: they operate on tokens, so failing to count the letters in a word is quite unsurprising. I would say it is more surprising that they can perform spelling tasks at all. More importantly, the models in which such weaknesses became apparent all come from a timeframe in which models had advanced so much that these weaknesses were visible only after many greater weaknesses had been overcome.
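To make the token point concrete, here is a minimal toy sketch (a hypothetical two-entry vocabulary, not any real LLM tokenizer): once a word is split into subword ids, the letters are no longer directly visible to whatever consumes those ids.

```python
# Hypothetical subword vocabulary for illustration only.
vocab = {"straw": 101, "berry": 102}

def tokenize(word):
    """Greedy longest-match split of a word into subword ids."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                ids.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return ids

ids = tokenize("strawberry")
print(ids)                      # [101, 102] -- two opaque ids
print("strawberry".count("r"))  # 3 -- trivial with the raw characters
# A model that only ever sees [101, 102] has no direct access to the
# three 'r's; it would have to have learned the spelling of each token.
```

Counting the 'r's is trivial given the character string, but the model's input is the id sequence, which is why the failure says more about the input representation than about reasoning ability.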
When planes flew so high that pilots couldn't breathe, people didn't take that as exposing a fundamental limitation of flight; their success had simply revealed the next hurdle.
The assertion that an LLM is X and therefore not intelligent is not a useful claim to make without both proof that it is X and proof that X is insufficient for intelligence. You could equally say that brains are interconnected cells that send pulses at intervals dictated by a combination of the pulses they sense, and that there is nothing intelligent about that. The premises must be true, and you must demonstrate that the conclusion follows from them. For the record, I think your premises are false and your conclusion doesn't follow.
Without a proof, you could hypothesise reasons why such a system might not be intelligent and propose a task that no system satisfying the premises could accomplish. As long as that task remains unsolved, the hypothesis stands unrefuted. So what would you suggest as a test: a problem that could not be solved by such a machine? It must be solvable by at least one intelligent entity, to show that it is solvable by intelligence, and it must be undeniable when the problem has been solved.