Comment by ninetyninenine 2 days ago
>"Knowing" needs not exist outside of human invention. In fact that's the point
It doesn't need to, and I never said it did. That is exactly my point: because of this, it's pointless to ask the question in the first place.
I mean, think about it: if "knowing" doesn't exist outside of human invention, why are we asking that question about something that isn't human, namely an LLM?
Words have definitions for a reason. It is important to define concepts and exclude things from that definition that do not match.
No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
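For concreteness, here is a minimal sketch of the decoding step that "weighted randomization lookup" presumably refers to: the model assigns scores (logits) to candidate tokens, and one token is drawn at random in proportion to those scores. The token scores and temperature below are made up purely for illustration.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Draw the next token from a softmax-weighted distribution over candidates."""
    # Scale scores by temperature, then exponentiate (softmax numerator).
    # Subtracting the max first keeps exp() numerically stable.
    max_logit = max(logits.values())
    weights = {tok: math.exp((v - max_logit) / temperature) for tok, v in logits.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    # The "weighted randomization" step: a random draw in proportion to probability.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Made-up scores standing in for a model's output logits over candidate tokens.
print(sample_next_token({"Paris": 6.2, "London": 3.1, "Rome": 2.8}, temperature=0.7))
```

Run it repeatedly and you will usually, but not always, get "Paris"; that occasional deviation is the randomization being objected to.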
> No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
You sound awfully certain that's not functionally equivalent to what neurons are doing. But there's a long history of experimentation, observation, and cross-pollination in which fundamental biological research and ML research have informed each other.
A long history of researching and understanding photosynthesis went into developing and maximizing the efficiency of solar panels. Both produce energy from sunlight.
But they are not the same thing and have meaningfully different uses, even if to a casual observer they appear to serve the same function.
> A long history of researching and understanding photosynthesis went into developing and maximizing the efficiency of solar panels.
I don't think that's accurate. Some of the very first semiconductors were observed to exhibit the photoelectric effect. Nowhere in https://en.wikipedia.org/wiki/Solar_cell#Research_in_solar_c... will you find mention of chloroplasts. Optimizing solar cells has mostly been a materials science problem.
https://en.wikipedia.org/wiki/Bio-inspired_computing, on the other hand, "trac[es] back to 1936 and the first description of an abstract computer", and we have literally dissected, probed, and measured countless neurons in the course of attempting to figure out how they work so as to replicate them within the computer.
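To make that concrete: the artificial neuron that came out of all that probing is a deliberately crude model of the biological one, a weighted sum of inputs pushed through a nonlinearity. A minimal sketch, with hypothetical input values and weights (real networks learn these from data):

```python
import math

def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """A simplified model of a biological neuron: weighted sum plus nonlinearity."""
    # Rough analogue of dendritic integration: incoming signals are summed,
    # each scaled by a connection strength (the synaptic weight).
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Rough analogue of the firing threshold: a sigmoid squashes the sum to (0, 1).
    return 1.0 / (1.0 + math.exp(-activation))

# Hypothetical values purely for illustration.
print(artificial_neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.05))
```

The sigmoid here stands in for a whole family of activation functions; the simplification itself, not the specific nonlinearity, is the point.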
Not only can he not give a definition that is universally agreed upon, he doesn't even know how LLMs or human brains work. These are both black boxes, and nobody knows how either works. Anybody who claims to "know" essentially doesn't "know" what they're talking about.
"Knowing" needs not exist outside of human invention. In fact that's the point - it only matters in relation to humans. You can choose whatever definition you want, but the reality is that, once you chose a non-standard definition the argument becomes meaningless outside of the scope of your definition.
There are two angles here, and this argument fails on both:
- One is about what "knowing" is: the definition.
- The other is about what counts as an instance of "knowing".
First, the definition: knowing implies awareness, perception, etc. It's not that this couldn't be modeled with some flexibility around lower-level definitions, but LLMs, and GPTs in particular, are not it. Pre-training is not it.
Second, the intended use of the word "knowing": in reality, "knowing" is used with its actual meaning of awareness, cognition, etc. And once you extend the meaning until it means practically nothing, what is knowing? Then the database knows, Wikipedia knows, and the paper's initial argument is diminished: "it knows it's an eval" becomes useless as a statement.
So IMO the argument of the paper should stand on its own feet with the minimum of additional implications (Occam's razor). Does the claim that an LLM can detect an evaluation pattern need to depend on it having self-awareness and feeling pain? That wouldn't make much sense. So don't say "know", which carries those implications. It's like saying "my car 'knows' I'm in a hurry and will choke and die".