Lerc 4 days ago

>A Receiver operating characteristic no better than chance, within a truly randomized data set. i.e. a system incapable of knowing how many Rs in Strawberry at the token level... is also inherently incapable of understanding what a Strawberry means in the context of perception (currently not possible for LLM.)
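For context, the "Rs in strawberry" failure is usually attributed to subword tokenization rather than to anything about reasoning. A minimal sketch (assuming OpenAI's tiktoken library, installed via `pip install tiktoken`) of why individual letters aren't directly visible at the token level:

```python
# A BPE tokenizer hands the model multi-character token IDs, not letters,
# so "how many Rs" is not directly observable at the input layer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # GPT-4-era vocabulary
ids = enc.encode("strawberry")
pieces = [enc.decode([i]) for i in ids]
print(pieces)                    # e.g. ['str', 'aw', 'berry'] (split depends on the vocabulary)
print("strawberry".count("r"))   # 3 -- trivial once you work at the character level
```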

That quoted passage is just your claim, restated. In short, it says they don't think because they fundamentally can't think.

There is no support for why this should be the case. Any plain assertion that they don't understand is unprovable, because you can't directly measure understanding.

Please come up with just one measurable property that you can demonstrate is required for intelligence that LLMs fundamentally lack.

Joel_Mckay 4 days ago

We are at a logical impasse... i.e. a failure to understand that the noted ROC curve is often a metric that matters in ML development, and that LLMs are trivially broken at the tokenization layer:

https://en.wikipedia.org/wiki/Receiver_operating_characteris...
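For what "no better than chance" means numerically: a minimal sketch (assuming numpy and scikit-learn) where scores carrying no signal about truly randomized labels give an AUC near 0.5, i.e. an ROC curve hugging the diagonal:

```python
# Uninformative scores on random binary labels yield ROC AUC ~ 0.5.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)  # truly randomized binary labels
scores = rng.random(10_000)               # classifier output with no signal
print(roc_auc_score(labels, scores))      # ~0.5: chance-level discrimination
```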

Note, introducing a straw-man argument and/or bot slop into an unrelated topic is silly. My anecdotal opinion does not really matter on the subject of algorithmic performance standards. yawn... super boring, like ML... lol

https://en.wikipedia.org/wiki/File:Yawning_koala_bear_(35893...

Best of luck, =3