random3 2 days ago

Just like they "know" English. "Know" is quite an anthropomorphization. As long as an LLM can describe what an evaluation is (why wouldn't it?), there's a reasonable expectation that it can distinguish/recognize/match patterns for evaluations. But to say they "know" is several (unnecessary) steps ahead.

sidewndr46 2 days ago

This was my thought as well when I read this. Using the word 'know' implies an LLM has cognition, which is a pretty huge claim just on its own.

  • gameman144 2 days ago

    Does it though? I feel like there's a whole epistemological debate to be had, but if someone says "My toaster knows when the bread is burning", I don't think it's implying that there's cognition there.

    Or as a more direct comparison, with the VW emissions scandal, saying "Cars know when they're being tested" was part of the discussion, but didn't imply intelligence or anything.

    I think "know" is just a shorthand term here (though admittedly the fact that we're discussing AI does leave a lot more room for reading into it.)

    • lamename 2 days ago

      I agree with your point, except for scientific papers. Let's push ourselves to use precise language rather than shorthand or hand-waving in technical papers and publications, yes? If not there, of all places, then where?

      • fenomas 2 days ago

        "Know" doesn't have any rigorous precisely-defined senses to be used! Asking for it not to be used colloquially is the same as asking for it never to be used at all.

        I mean - people have been saying stuff like "grep knows whether it's writing to stdout" for decades. In the context of talking about computer programs, that usage of "know" is the established/only usage, so it's hard to imagine any typical HN reader seeing TFA's title and interpreting it as an epistemological claim. Rather, it seems to me that the people insisting "know" mustn't be used about LLMs on epistemological grounds are the ones departing from standard usage.
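The grep example is a good anchor for this mechanical sense of "know": the program just checks a property of its environment. A minimal sketch of that check, assuming Python's standard `sys.stdout.isatty()` (grep itself does the C equivalent):

```python
import sys

def output_mode() -> str:
    """Decide output formatting the way grep decides whether to colorize.

    The program "knows" it's writing to a terminal only in the sense
    that isatty() returns True; no cognition is implied.
    """
    return "color" if sys.stdout.isatty() else "plain"

print(output_mode())
```

Run it at an interactive terminal and it prints "color"; pipe it into another command and it prints "plain" - exactly the sense of "knowing" the thread is arguing over.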

    • viccis 2 days ago

      I think you should be more precise and avoid anthropomorphism when talking about gen AI, as anthropomorphism leads to a lot of shaky epistemological assumptions. Your car example didn't imply intelligence, but we're talking about a technology that people misguidedly treat as though it is real intelligence.

      • exe34 2 days ago

        What does "real intelligence" mean? I fear that any discussion that starts with the assumption such a thing exists will only end up as "oh only carbon based humans (or animals if you happen to be generous) have it".

    • bediger4000 2 days ago

      The toaster thing is more an admission that the speaker doesn't know what the toaster does to limit charring the bread. Toasters with timers, thermometers, and light sensors all exist. None of them "know" anything.

      • gameman144 2 days ago

        Yeah, I agree, but I think that's true all the way up the chain -- just like everything's magic until you know how it works, we may say things "know" information until we understand the deterministic machinery they're using behind the scenes.

        • timschmidt 2 days ago

          I'm in the same camp, with the addition that I believe it applies to us as well since we're part of the system too, and to societies and ecologies further up the scale.

bradley13 2 days ago

But do you know what it means to know?

I'm only being slightly sarcastic. Sentience is a scale. A worm has less than a mouse, a mouse has less than a dog, and a dog less than a human.

Sure, we can reset LLMs at will, but give them memory and continuity, and they definitely do not score zero on the sentience scale.

  • ofjcihen 2 days ago

    If I set an LLM in a room by itself what does it do?

    • bradley13 2 days ago

      Is the LLM allowed to do anything without prompting? Or is it effectively disabled? This is more a question of the setup than of sentience.

    • mewpmewp2 2 days ago

      What tools do you give it? E.g. would you put a GPU there that has LLM loaded into it and it is triggering itself in a loop?
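To make the thought experiment concrete, here is a toy sketch of the setup mewpmewp2 describes - a model re-fed its own output with no outside input. `generate` is a hypothetical stand-in for a real local model call; here it's a stub so the loop is runnable:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (e.g. a local LLM);
    # a real one would return sampled text conditioned on the prompt.
    return f"reflection on: {prompt[:40]}"

def self_loop(seed: str, steps: int) -> list[str]:
    """Feed the model its own previous output `steps` times."""
    transcript = [seed]
    for _ in range(steps):
        transcript.append(generate(transcript[-1]))
    return transcript
```

Note that the subthread's point survives the sketch: with no seed prompt and no trigger, the loop body never runs.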

    • abrookewood 2 days ago

      Yes, that's my fallback as well. If it receives zero instructions, will it take any action?

      • nhod 2 days ago

        Helen Keller famously said that before she had language (the first word of which was “water”) she had nothing, a void, and the minute she had language, “the whole world came rushing in.”

        Perhaps we are not so very different?

    • rcxdude 2 days ago

      Does this have anything to do with intelligence or awareness?

  • DougN7 2 days ago

    It probably scores about the same as a calculator, which I’d say is zero.

downboots 2 days ago

Communication is to vibration as knowledge is to resonance (?). From the sound of one hand clapping to the secret name of Ra.

unparagoned 2 days ago

I think people are overpromorphazing humans. What does it mean for a human to "know" they are seeing "Halle Berry"? Well, it's just a single neuron being active.

"Single-Cell Recognition: A Halle Berry Brain Cell" https://www.caltech.edu/about/news/single-cell-recognition-h...

It seems like people are giving attributes and powers to humans that just don't exist.

  • exe34 2 days ago

    "Overpomorphization" sounds slightly better than what I used to say: "anthropomorphizing humans" - the act of ascribing to real humans magical faculties that are reserved for imagined humans.

cluckindan 2 days ago

(sees FSV UI on computer screen)

"It's a UNIX system! I know this!"

scotty79 2 days ago

The app knows your name. Not sure why people who see LLMs as just yet another app suddenly get antsy about a colloquialism.

blackoil 2 days ago

If it talks like a duck and walks like a duck...

golemotron 2 days ago

If you know enough cognitive science, you have a choice. You either say that they "know" or that humans don't.

It's like the critique "it's only matching patterns." Wait until you realize how the brain works.

ninetyninenine 2 days ago

[flagged]

  • random3 2 days ago

    "Knowing" need not exist outside of human invention. In fact, that's the point - it only matters in relation to humans. You can choose whatever definition you want, but the reality is that, once you choose a non-standard definition, the argument becomes meaningless outside the scope of your definition.

    There are two angles, and this context fails both:

    - One about what "knowing" is - the definition.
    - The other about what the instances of "knowing" are.

    First - knowing implies awareness, perception, etc. It's not that this couldn't be modeled with some flexibility around lower-level definitions. However, LLMs, and GPTs in particular, are not it. Pre-training is not it.

    Second - the intended use of the word "knowing". The reality is that "knowing" is used with the actual meaning of awareness, cognition, etc. And once you dilute/extend the meaning to practically nothing - what is knowing, then? The database knows, Wikipedia knows - and the initial argument (of the paper) is diminished: "it knows it's an eval" is useless as a statement.

    So IMO the argument of the paper should stand on its own feet with the minimum of additional implications (Occam's razor). Does the statement that an LLM can detect an evaluation pattern need to depend on it having self-awareness and feeling pain? That wouldn't make much sense. So then don't say "know", which comes with those implications. Like "my car 'knows' I'm in a hurry and will choke and die".

    • ninetyninenine 2 days ago

      >"Knowing" needs not exist outside of human invention. In fact that's the point

      It doesn't need to - I never said it needed to. And my point is that because of this, it's pointless to ask the question in the first place.

      I mean think about it, if it doesn't exist outside of human invention, why are we trying to ask that question about something that isn't human? An LLM?

  • devmor 2 days ago

    Words have definitions for a reason. It is important to define concepts and exclude things from that definition that do not match.

    No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
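For what it's worth, the mechanism the phrase "weighted randomization lookup" gestures at is next-token sampling: scores over a vocabulary are turned into a probability distribution, and one token is drawn. A minimal sketch, with a toy vocabulary and made-up logits (not any real model's):

```python
import math
import random

def sample_token(logits: dict[str, float], rng: random.Random) -> str:
    """Softmax the scores, then draw one token in proportion to them."""
    mx = max(logits.values())  # subtract the max for numerical stability
    weights = {t: math.exp(v - mx) for t, v in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point underflow
```

Whether that mechanism deserves the word "know" is exactly what the rest of the subthread disputes.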

    • timschmidt 2 days ago

      > No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.

      You sound awfully certain that's not functionally equivalent to what neurons are doing. But there's a long history of experimentation, observation, and cross-pollination, as fundamental biological research and ML research have informed each other.

      • devmor 2 days ago

        A long history of researching and understanding photosynthesis went into developing and maximizing the efficiency of solar panels. Both produce energy from sunlight.

        But they are not the same thing and have meaningfully different uses, even if from a casual observer they appear to serve the same function.

        • timschmidt 2 days ago

          > A long history of researching and understanding photosynthesis went into developing and maximizing the efficiency of solar panels.

          I don't think that's accurate. Some of the very first semiconductors were observed to exhibit the photoelectric effect. Nowhere in https://en.wikipedia.org/wiki/Solar_cell#Research_in_solar_c... will you find mention of chloroplasts. Optimizing solar cells has mostly been a materials science problem.

          https://en.wikipedia.org/wiki/Bio-inspired_computing on the other hand "trace[es] back to 1936 and the first description of an abstract computer" and we have literally dissected, probed, and measured countless neurons in the course of attempting to figure out how they work to replicate them within the computer.

    • lostmsu 2 days ago

      > to have information in your mind as a result of experience or because you have learned or been told it

    • hatthew 2 days ago

      What does the word "know" mean, then?

      • ninetyninenine 2 days ago

        Not only can he not give a definition that is universally agreed upon; he doesn't even know how LLMs or human brains work. These are both black boxes, and nobody knows how either works. Anybody who claims they "know" essentially doesn't "know" what they're talking about.