Comment by random3 2 days ago

"Knowing" need not exist outside of human invention. In fact, that's the point: it only matters in relation to humans. You can choose whatever definition you want, but the reality is that once you choose a non-standard definition, the argument becomes meaningless outside the scope of that definition.

There are two angles here, and this argument fails on both:

- One is the definition: what "knowing" is.
- The other is the instances: what actually counts as "knowing".

First, the definition: "knowing" implies awareness, perception, etc. It's not that this couldn't be modeled with some flexibility around lower-level definitions. However, LLMs, and GPTs in particular, are not it. Pre-training is not it.

Second, the intended use of the word "knowing". The reality is that "knowing" is used with its actual meaning of awareness, cognition, etc. Once you dilute/extend the meaning to practically nothing, what is knowing? Then the database knows, Wikipedia knows, and the initial argument (of the paper) is diminished: "it knows it's an eval" becomes useless as a statement.

So IMO the argument of the paper should stand on its own feet with the minimum of additional implications (Occam's razor). Does the statement that an LLM can detect an evaluation pattern need to depend on it having self-awareness and feeling pain? That wouldn't make much sense. So don't say "know", which comes with those implications. It's like saying "my car 'knows' I'm in a hurry and will choke and die."

ninetyninenine 2 days ago

>"Knowing" needs not exist outside of human invention. In fact that's the point

It doesn't need to, and I never said it did. That is my point, and because of it, it's pointless to ask the question in the first place.

I mean, think about it: if it doesn't exist outside of human invention, why are we asking that question about something that isn't human, i.e. an LLM?