Comment by noosphr 2 days ago
The anthropomorphization of LLMs is getting off the charts.
They don't know they are being evaluated. The underlying distribution is skewed because of training data contamination.
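A minimal sketch of how one might probe that contamination claim, assuming access to both the evaluation prompts and the training corpus. Nothing below is from the thread: the prompt text, document, and 13-gram window are illustrative assumptions, in the spirit of the word n-gram overlap checks commonly reported in LLM papers.

    # Illustrative sketch only: word n-gram overlap between an eval prompt
    # and a training document. All inputs here are made-up examples.

    def ngrams(text: str, n: int = 13) -> set:
        """Lowercased word n-grams; 13 is a window size often used for contamination checks."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_fraction(eval_prompt: str, training_doc: str, n: int = 13) -> float:
        """Fraction of the prompt's n-grams that also occur in a training document."""
        prompt_grams = ngrams(eval_prompt, n)
        if not prompt_grams:
            return 0.0
        return len(prompt_grams & ngrams(training_doc, n)) / len(prompt_grams)

    # Any nonzero overlap at n=13 would typically be flagged for manual review.
    prompt = "A hypothetical alignment evaluation prompt used only as an example ..."
    doc = "A hypothetical scraped web page that happens to quote the same evaluation prompt ..."
    print(f"overlap: {overlap_fraction(prompt, doc):.2%}")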
A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.
It isn't.
Worse, they start adding terms like scheming, pretending, awareness, and on and on. At this point you might as well take the model home and introduce it to your parents as your new life partner.
> A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.
Sounds like a purely academic exercise.
Is there any genuine uncertainty about what the term "knowing" means in this context, in practice?
Can you name 2 distinct plausible definitions of "knowing", such that it would matter for the subject at hand which of those 2 definitions they're using?
That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.
Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?
Not to be snarky, but “as far as I can tell” is the rub, isn’t it?
LLMs are better at matching patterns than we are in some cases. That’s why we made them!
> But one could also argue that humans are pattern-matchers!
No, one could not unless they were being disingenuous.
> The anthropization of llms is getting off the charts.
What's wrong with that? If it quacks like a duck... You might as well say a duck is just a complex pile of organic chemistry, so ducks aren't real because the concept of "a duck" is wrong.
I honestly believe there is a degree of sentience in LLMs. Sure, they're not sentient in the human sense, but if you define sentience as whatever humans have, then of course no other entity can be sentient.
> What's wrong with that? If it quacks like a duck... You might as well say a duck is just a complex pile of organic chemistry, so ducks aren't real because the concept of "a duck" is wrong.
To simulate a biological neuron you need a neural network with roughly a million parameters.
The SOTA models whose sizes we know are ~650M parameters.
That's the equivalent of a roundworm.
So if it quacks like a duck, has the brain power of a roundworm, and can't walk, then it's probably not a duck.
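Taking the comment's figures at face value, the back-of-the-envelope arithmetic looks like this. The per-neuron and model-size numbers are the commenter's premises, not established measurements; only the C. elegans neuron count is a well-known fact.

    # Back-of-the-envelope check using the figures quoted in the comment above.
    PARAMS_PER_NEURON = 1_000_000   # commenter's premise: ~1M params to simulate one neuron
    MODEL_PARAMS = 650_000_000      # commenter's premise: ~650M-parameter models
    C_ELEGANS_NEURONS = 302         # actual neuron count of the roundworm C. elegans

    neuron_equivalents = MODEL_PARAMS / PARAMS_PER_NEURON
    print(f"neuron equivalents: {neuron_equivalents:.0f}")   # 650
    print(f"roundworm neurons:  {C_ELEGANS_NEURONS}")        # 302
    # Same order of magnitude, which is the comparison the comment is making.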
OK, so you're saying the technology to make AI truly sentient is there; we just need a little more computational power or some optimization tricks. Like raytracing wasn't possible in 1970 but is now. Neat.
How would you prefer to describe this result then?