CamperBob2 2 days ago

> Why cannot the semantic understanding be implicitly encoded in the model?

That just turns the question into "OK, so what distinguishes the model from a machine capable of genuine understanding and reasoning, then?"

At some point you (and Searle) must explain what the difference is in engineering terms, not through analogy or by appeals to ensoulment or by redecorating the Chinese Room with furnishings it wasn't originally equipped with. Having moved the goalpost back to the far corner of the parking garage already, what's your next move?

It's easy to dismiss a "stochastic parrot" by saying that "The next token is a function of all of the previous ones," but welcome to our deterministic universe, I guess... deterministic, that is, apart from the randomness imparted by SGD or thermal noise or what-have-you. Again, how is this different from what human brains do? Von Neumann himself naturally assumed that stored-program machines would be modeled on networks of neuron-like structures (a factoid I just ran across while reading about McCulloch and Pitts), so it's not that surprising that we're finally catching up to his way of looking at it.
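
To make the mechanism concrete, here's a toy sketch (not any real model's API; `next_token_logits` is a hypothetical stand-in for a trained network) of what "the next token is a function of all of the previous ones" amounts to, with the only randomness entering at the sampling step:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_logits(prefix, vocab_size=5):
    # Hypothetical stand-in for a trained model: a purely deterministic
    # function of the entire prefix.
    rng = random.Random(hash(tuple(prefix)))
    return [rng.uniform(-1, 1) for _ in range(vocab_size)]

def generate(prefix, steps=10, temperature=0.8):
    tokens = list(prefix)
    for _ in range(steps):
        probs = softmax(next_token_logits(tokens), temperature)
        # The only nondeterminism: drawing from the distribution the prefix defines.
        tokens.append(random.choices(range(len(probs)), weights=probs)[0])
    return tokens

print(generate([1, 2, 3]))
```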

At the end of the day we're all just bags of meat trying to minimize our own loss functions. There's nothing special about what we're doing. The magical thinking you're referring to is being done by those who claim "AI isn't doing X" or "AI will never do X" without bothering to define X clearly.

> I don't think it needs to be anything else to achieve the results it achieves.

Exactly, and that's earth-shaking because of the potential it has to illuminate the connection between brains and minds. It's sad that the discussion inevitably devolves into analogies to monkeys and parrots.

danparsonson a day ago

> That just turns the question into "OK, so what distinguishes the model from a machine capable of genuine understanding and reasoning, then?"

And that's a great question, one not far from asking for definitions of intelligence and consciousness, which of course I don't have. However, I can venture some suggestions about what we have that LLMs don't, in no particular order:

- Self-direction: we are goal-oriented creatures that will think and act without any specific outside stimulus

- Intentionality: related to the above - we can set specific goals and then orient our efforts to achieve them, sometimes across decades

- Introspection: without guidance, we can choose to reconsider our thoughts and actions, and update our own 'models' by deliberately learning new facts and skills - we can recognise or be given to understand when we're wrong about something, and can take steps to fix that (or choose to double down on it)

- Long term episodic memory: we can recall specific facts and events with varying levels of precision, and correlate those memories with our current experiences to inform our actions

- Physicality: we are not just brains in skulls but are flooded with all manner of chemicals that we synthesise to drive our biological functions and which affect our decision-making processes; we are also embedded in the real physical world, receiving huge amounts of sensory data almost constantly

> At some point you (and Searle) must explain what the difference is in engineering terms, not through analogy or by appeals to ensoulment or by redecorating the Chinese Room with furnishings it wasn't originally equipped with. Having moved the goalpost back to the far corner of the parking garage already, what's your next move?

While I think that's a fair comment, I have to push back a bit and say that if I could give you a satisfying answer to that, I may well be defining intelligence or consciousness, and as far as I know there are no accepted definitions for those things. One theory I like is Douglas Hofstadter's strange loop - the idea of a mind thinking about thinking about thinking about itself, thus making introspection a primary pillar of 'higher mental functions'. I don't see any evidence of LLMs doing that, nor any need to invoke it.

> It's easy to dismiss a "stochastic parrot" by saying that "The next token is a function of all of the previous ones," but welcome to our deterministic universe, I guess... deterministic, that is, apart from the randomness imparted by SGD or thermal noise or what-have-you. Again, how is this different from what human brains do?

...and now we're on to the existence (or not) of free will... Perhaps it's the difference between automatic actions and conscious choices? My feeling is that LLMs deliberately or accidentally model a key component of our minds, the faculty of pattern matching and recall, and I can well imagine that at some point in the future we will integrate an LLM into a wider framework that includes the other abilities I listed above, such as long term memory, and then we may yet see AGI. Side note: I'm very happy to accept the idea that each of us encodes our own parrot.
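
Purely as a hypothetical sketch of what that "wider framework" could look like (the `llm_complete` call and the word-overlap recall below are stand-ins, not any real system's API), the simplest version is a stateless model wrapped in an episodic memory:

```python
from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, text: str) -> None:
        self.episodes.append(text)

    def recall(self, query: str, k: int = 3) -> list:
        # Crude relevance score: shared words. A real system might use embeddings.
        def score(episode):
            return len(set(episode.lower().split()) & set(query.lower().split()))
        return sorted(self.episodes, key=score, reverse=True)[:k]

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for whatever model call you actually have.
    return f"(model response to: {prompt[:40]}...)"

def answer(query: str, memory: EpisodicMemory) -> str:
    context = "\n".join(memory.recall(query))
    reply = llm_complete(f"Relevant past episodes:\n{context}\n\nUser: {query}")
    memory.store(f"Q: {query} | A: {reply}")  # the episode persists across calls
    return reply

memory = EpisodicMemory()
print(answer("What did we decide about the parrot analogy?", memory))
```

None of which confers understanding by itself, of course; it only shows what bolting one of the missing abilities onto the pattern-matching core might look like.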

> Von Neumann himself naturally assumed that stored-program machines would be modeled on networks of neuron-like structures (a factoid I just ran across while reading about McCulloch and Pitts), so it's not that surprising that we're finally catching up to his way of looking at it.

Well, OK, but very smart people in the past thought all kinds of things that didn't pan out, so I'm not really sure that helps us much.

> At the end of the day we're all just bags of meat trying to minimize our own loss functions. There's nothing special about what we're doing. The magical thinking you're referring to is being done by those who claim "AI isn't doing X" or "AI will never do X" without bothering to define X clearly.

I don't see how that's magical thinking; it's more like... hard-nosed determinism? I'm interested in the bare minimum necessary to explain the phenomena on display, and in expressing those phenomena in straightforward terms to keep the discussion grounded. "AI isn't doing X" is a response to those saying that AI is doing X, so it's as much on those people to define what X is; in any case I rather prefer "AI is only doing Y", where Y is a more boring and easily definable thing that nonetheless explains what we're seeing.

> Exactly, and that's earth-shaking because of the potential it has to illuminate the connection between brains and minds.

Ah! Now there we agree entirely. Actually, I think a far more consequential question than "what do LLMs have that makes them so good?" is "what don't we have that we thought we did?"... but perhaps that's because I'm an introspecting meat bag and therefore selfishly fascinated by how and why meat bags introspect.