Comment by Lerc 5 days ago

I have never understood why the failure to answer the strawberry question has been seen as a compelling argument about the limits of AI. The AIs that suffer from this problem have difficulty counting; that has never been denied. Those AIs also do not see the letters of the words they are processing, so it is quite unsurprising that they fail at counting the letters in a word. I would say it is more surprising that they can perform spelling tasks at all. More importantly, the models where such weaknesses became apparent are all from the same timeframe in which models advanced so much that those weaknesses were visible only after so many other, greater weaknesses had been overcome.
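A toy sketch of the point about not seeing letters: the vocabulary and token IDs below are invented for illustration, but real subword tokenizers behave analogously — the model receives opaque token IDs, and the character count of the original word is not recoverable from those IDs alone.

```python
# Hypothetical subword vocabulary; real BPE vocabularies are learned, not hand-written.
vocab = {"str": 101, "aw": 102, "berry": 103}

def tokenize(word):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

ids = tokenize("strawberry")
print(ids)                      # [101, 102, 103]
print("strawberry".count("r"))  # 3 -- but the IDs alone carry no letter counts
```

A model trained on sequences like `[101, 102, 103]` has to infer spelling indirectly; the "r"s are simply not present in its input representation.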

People didn't think that planes flying so high that pilots couldn't breathe exposed a fundamental limitation of flight, just that their success had revealed the next hurdle.

The assertion that an LLM is X and therefore not intelligent is not a useful claim to make without both proof that it is X and proof that X is insufficient for intelligence. You could say that brains are interconnected cells that send pulses at intervals dictated by a combination of the pulses they sense, and that there is nothing intelligent about that. The premises must be true, and you have to demonstrate that the conclusion follows from them. For the record, I think your premises are false and your conclusion doesn't follow.

Without a proof, you could hypothesise reasons why such a system might not be intelligent and come up with an example of a task that no system satisfying the premises could accomplish. While that example remains unsolved, the hypothesis remains unrefuted. What would you suggest as a test posing a problem that could not be solved by such a machine? It must be solvable by at least one intelligent entity, to show that it is solvable by intelligence, and it must be undeniable when the problem is solved.

tecleandor 5 days ago

   The AIs that suffer from this problem have difficulty counting.
Nope, it's not a counting problem. It's a reasoning problem. The thing is, no matter how much hype they get, the AIs have no reasoning capabilities at all, and they can fail in the silliest ways. Same as with Larry Ellison: don't fall into the trap of anthropomorphizing the AI.
  • Lerc 5 days ago

    Ok, give me an example of what you would consider reasoning.

Joel_Mckay 4 days ago

Is that like 80% LLM slop? The allusion to failures to improve productivity in competent developers was cited in the initial response.

The strawberry test exposes one of the many subtle problems inherent in the tokenization approach LLMs use.

The clown car of PhDs may be able to entertain the venture capital folks for a while, but eventually a VR girlfriend chat-bot convinces a kid to kill themselves, like last year.

Again, cognitive development, like ethics development, is currently impossible for LLMs, as they lack any form of intelligence (artificial or otherwise). People have patched directives into the model, but these weights are likely fundamentally statistically insignificant due to cultural sarcasm in the data sets.

Please write your own responses, =3

  • Lerc 4 days ago

    You suspect my words of being AI generated while at the same time arguing that AI cannot possibly reason.

    It seems like you see AI where there is none; this compromises your ability to assess the limitations of AI.

    You say that LLMs cannot have any form of intelligence, but for some definitions of intelligence it is obvious that they do. Existing models are not capable in all areas, but they have some abilities. You are asserting that they cannot be intelligent, which implies that you have a different definition of intelligence and that LLMs will never satisfy it.

    What is that definition for intelligence? How would you prove something does not have it?

    • Joel_Mckay 4 days ago

      "What is that definition for intelligence?"

      That is a very open-ended detractor question, and is philosophically loaded with taboo violations of human neurology. i.e. It could seriously harm people to hear my opinion on the matter... so I will insist I am a USB connected turnip for now ... =)

      "How would you prove something does not have it?"

      A Receiver operating characteristic no better than chance, within a truly randomized data set. i.e. a system incapable of knowing how many Rs in Strawberry at the token level... is also inherently incapable of understanding what a Strawberry means in the context of perception (currently not possible for LLM.)
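A minimal, self-contained sketch of the chance-level ROC criterion described above, using synthetic data and the standard library only: a classifier whose scores carry no information about the labels has an area under the ROC curve (AUC) near 0.5, i.e. no better than chance.

```python
import random

random.seed(0)
n = 1000
labels = [random.randint(0, 1) for _ in range(n)]
scores = [random.random() for _ in range(n)]  # scores carry no signal about labels

# AUC equals the probability that a random positive outranks a random negative.
pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
auc = sum(p > n_ for p in pos for n_ in neg) / (len(pos) * len(neg))
print(f"AUC = {auc:.3f}")  # hovers around 0.5 for uninformative scores
```

An informative classifier would push the AUC toward 1.0; staying pinned at 0.5 on a truly randomized data set is what "a receiver operating characteristic no better than chance" cashes out to.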

      Have a great day =3

      • Lerc 4 days ago

        >A Receiver operating characteristic no better than chance, within a truly randomized data set. i.e. a system incapable of knowing how many Rs in Strawberry at the token level... is also inherently incapable of understanding what a Strawberry means in the context of perception (currently not possible for LLM.)

        This is just your claim, restated. In short, it says they don't think because they fundamentally can't think.

        There is no support given as to why this is the case. Any plain assertion that they don't understand is unprovable, because you can't directly measure understanding.

        Please come up with just one measurable property that you can demonstrate is required for intelligence that LLMs fundamentally lack.