Comment by noosphr

Comment by noosphr 2 days ago

16 replies

The anthropization of llms is getting off the charts.

They don't know they are being evaluated. The underlying distribution is skewed because of training data contamination.

0xDEAFBEAD 2 days ago

How would you prefer to describe this result then?

  • noosphr 2 days ago

    A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.

    It isn't.

    Worse they start adding terms like scheming, pretending, awareness, and on and on. At this point you might as well take the model home and introduce it to your parents as your new life partner.

    • 0xDEAFBEAD 2 days ago

      >A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.

      Sounds like a purely academic exercise.

      Is there any genuine uncertainty about what the term "knowing" means in this context, in practice?

      Can you name 2 distinct plausible definitions of "knowing", such that it would matter for the subject at hand which of those 2 definitions they're using?

      • Msurrow 2 days ago

        > Sounds like a purely academic exercise.

        Well, yes. It’s an academic research paper (I assume since it’s submitted to arXiv) and to be submitted to academic journals/conferences/etc., so it’s a fairly reasonable critique of the authors/the paper.

  • devmor 2 days ago

    One could say, for instance… A pattern matching algorithm detects when patterns match.

    • 0xDEAFBEAD 2 days ago

      That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.

      Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?

      • devmor 2 days ago

        Not to be snarky but “as far as I can tell” is the rub isn’t it?

        LLMs are better at matching patterns than we are in some cases. That’s why we made them!

        > But one could also argue that humans are pattern-matchers!

        No, one could not unless they were being disingenuous.

anal_reactor 2 days ago

> The anthropization of llms is getting off the charts.

What's wrong with that? If it quacks like a duck... it's just a complex pile of organic chemistry, ducks aren't real because the concept of "a duck" is wrong.

I honestly believe there is a degree of sentience in LLMs. Sure, they're not sentient in the human sense, but if you define sentience as whatever humans have, then of course no other entity can be sentient.

  • noosphr 2 days ago

    >What's wrong with that? If it quacks like a duck... it's just a complex pile of organic chemistry, ducks aren't real because the concept of "a duck" is wrong.

    To simulate a biological neuron you need a 1m parameter neural network.

    The sota models that we know the size of are ~650m parameters.

    That's the equivalent of a round worm.

    So if it quacks like a duck, has the brain power of a round worm, and can't walk then it's probably not a duck.

    • ffsm8 2 days ago

      You just convinced me that AGI is a lot closer then I previously thought, considering the bulk of our brains job is controlling our bodies and responding to the stimulus from our senses - not thinking, talking, planning, coding etc

      • noosphr 2 days ago

        A stegosaurus managed to live using a brain the size of a wallnut on top of a body the size of a large boat. The majority of our brains are doing something else.

    • anal_reactor 2 days ago

      Ok so you're saying that the technology to make AI truly sentient is there, we just need a little bit more computational power or some optimization tricks. Like raytracing wasn't possible in 1970 but is now. Neat.

      • noosphr 2 days ago

        Yes, in the same way that a human is an optimization of a round worm.