danparsonson 3 days ago

Partly because "artificial intelligence" is a loaded phrase that brings implications of AGI along for the ride, partly because "intelligence" is not a well-defined term, so an artificial version of it could be argued to be almost anything, and partly because even if you lean on the colloquial understanding of what "intelligence" is, ChatGPT (and its friends) still isn't it. It's a Chinese Room - or a stochastic parrot.

CamperBob2 3 days ago

> It's a Chinese Room - or a stochastic parrot.

Show me a resident of a Chinese Room who can do this: https://chatgpt.com/share/66e83ff0-76b4-800b-b33b-910d267a75...

The Chinese Room metaphor was always beneath Searle's intellectual level of play, and it hasn't exactly gotten more insightful with age.

  • danparsonson 3 days ago

    I understand and agree that ChatGPT achieves impressive results, but your appeal to incredulity doesn't make it anything more than it is, I'm afraid.

    • CamperBob2 3 days ago

      It's not incredulity, just pointing out the obvious. Searle placed very specific limitations on the operator of the Room. He rests his whole argument on the premise that the operator is illiterate in Chinese, or at least has no access to the semantics of the material stored in the Room. That's plainly not the case with ChatGPT, or it couldn't review its previous answers to find and fix its mistakes.

      And you certainly would not get a different response, much less a better one, from the operator of a Chinese Room simply by adding "Think carefully step by step" to the request you hand him.

      It's just a vacuous argument from square one, and it annoys me to an entirely unreasonable extent every time someone brings it up. Add it to my "Stochastic Parrot" and "Infinite Monkeys" trigger phrases, I guess.

      • danparsonson 2 days ago

        > ... He rests his whole argument on the premise that the operator is illiterate in Chinese, or at least has no access to the semantics of the material stored in the Room.

        ...and yet outputs semantically correct responses.

        > That's plainly not the case with ChatGPT, or it couldn't review its previous answers to find and fix its mistakes.

        Which is another way of saying that ChatGPT couldn't produce semantically correct output without understanding the input. Disputing exactly that claim is the whole point of the Chinese Room argument.

        Why cannot the semantic understanding be implicitly encoded in the model? That is, why cannot the program I (as the Chinese Room automaton) am following be of sufficient complexity that my output appears to be that of an intelligent being with semantic understanding and the ability to review my answers? That, in my understanding, is where the genius of ChatGPT lies - it's a masterpiece of preprocessing and information encoding. I don't think it needs to be anything else to achieve the results it achieves.

        A different example of this is the work of Yusuke Endoh, whom you may know for his famous quines. https://esoteric.codes/blog/the-128-language-quine-relay is to me one of the most astonishing feats of software engineering I've ever seen, and little short of magic - but at its heart it's 'just' very clever encoding. Each program in the chain understands nothing, yet encodes every subsequent program, including (transitively) itself. Another example is DNA: how on Earth does a dumb molecule create a body plan? I'm sure there are lots of examples of systems that exhibit such apparently intelligent and subtly discriminative behaviour entirely automatically. Ant colonies!
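
        To see how far 'dumb' self-encoding can go, here's the classic two-line Python quine - nothing like Endoh's 128-language relay, just a minimal sketch of the same trick. Run it and it prints its own source exactly, with no understanding anywhere in sight (shown without comments, since comments wouldn't survive the round trip):

            s = 's = %r\nprint(s %% s)'
            print(s % s)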

        > And you certainly would not get a different response, much less a better one, from the operator of a Chinese Room simply by adding "Think carefully step by step" to the request you hand him.

        Again, why not? The model has access to everything that has gone before; the next token is f(all the previous tokens). As for asking it to "think carefully", would you feel differently if the magic phrase were "octopus lemon wheat door handle"? It doesn't matter what the words mean to a human - the model is just responding to the symbols it's been fed; typing something meaningful to you merely obscures that, and lends subconscious credence to the idea that it understands you.
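
        To make "the next token is f(all the previous ones)" concrete, here's a minimal sketch of the autoregressive loop in Python - model and sample are hypothetical stand-ins, not any real API. Note that the magic phrase is nothing more than extra symbols prepended to the conditioning context:

            def generate(model, prompt_tokens, max_new=100):
                # model(tokens) and sample(logits) are hypothetical stand-ins.
                tokens = list(prompt_tokens)        # the prompt is just the initial symbol sequence
                for _ in range(max_new):
                    logits = model(tokens)          # next-token distribution = f(all previous tokens)
                    next_token = sample(logits)     # pick one token from that distribution
                    tokens.append(next_token)       # it becomes context for every later step
                return tokens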

        > It's just a vacuous argument from square one, and it annoys me to an entirely unreasonable extent every time someone brings it up. Add it to my "Stochastic Parrot" and "Infinite Monkeys" trigger phrases, I guess.

        With no intent to annoy, I hope you at least understand where I'm coming from, and why I think those labels are not just apt, but useful ways to dispel the magical thinking that some (not you specifically) exhibit when discussing these things. We're engineers and scientists, and although it's fine to dream, I think it's also fine to continue trying to shoot down the balloons that we send up, so we're not blinded by the miracle of flight.

aurareturn 3 days ago

Do people really associate AI with AGI?

Because we were using "AI" to describe things for many years before "AGI" became a mainstream term. Companies used to use "AI" to describe basic ML algorithms.

When I see "AI", I just think it's some sort of NLP or ML. I never think it's AGI. AGI is AGI.