Comment by grey-area
LLMs don’t have any sort of intelligence at present, they have a large corpus of data and can produce modified copies of it.
Seems like you’d have to prove the inverse.
The threshold would be “produce anything that isn’t identical or a minor transfiguration of input training data.”
In my experience, the AI assistant in my code editor can’t do a damn thing that isn’t widely documented, and it sometimes botches tasks that are thoroughly documented (such as hallucinating parameter names that don’t exist). I see this whenever I reach the edge of common use cases, where going beyond the documentation requires following an implication.
For example, AI can’t seem to help me in any way with Terraform dynamic credentials, because the documentation is very sparse and the feature barely appears in blog posts or examples online. Setups like mine, where the variable is populated dynamically, aren’t shown anywhere. I get a lot of irrelevant, nonsensical suggestions on how to fix it.
AI is a great “amazing search engine,” and it can string together combinations of logic that already exist in documentation and examples while changing some names here and there, but what looks like true understanding is really just token prediction.
IMO the massive amount of training data is making the man behind the curtain look way better than he is.
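To make the “token prediction” claim concrete, here’s a toy bigram sketch of the idea at its absolute simplest. This is emphatically not how real LLMs work internally (they use neural networks over long contexts, not raw co-occurrence counts), but it illustrates the point being argued: the output is just the statistically likely continuation of the input, derived from the training corpus.

```python
from collections import Counter, defaultdict

# Tiny stand-in "training corpus".
corpus = "the model predicts the next token the model repeats the corpus".split()

# Count bigram frequencies: which token tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(token):
    """Return the most frequent continuation seen in the corpus, or None."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))    # -> "model" (it followed "the" most often above)
print(predict("token"))  # -> "the"
```

Nothing here “understands” anything; it can only reproduce statistical patterns from its input data, which is the skeptical reading of what larger models do at vastly greater scale.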
That's creativity, not intelligence. LLMs can be intelligent while having very little creativity (or even none at all). I don't believe one necessarily requires the other.
That was an extreme example to illustrate the concept. My point is that reduced/little creativity (which is what the current models have) is not indicative of a total lack of intelligence.
While certainly not human-level intelligence, I don't see how you could say they don't have any sort of it. There's clearly generalization there. What would you say is the threshold?