Comment by jfengel 15 hours ago

Oh, bad timing. AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI. It's very close to the Chinese Room, which I had always dismissed as misleading. It's a great opportunity to investigate a former pure thought experiment. He'd have loved to see where it went.

somenameforme 12 hours ago

The Turing Test has not been meaningfully passed. Instead, we redefined the test to make it passable. In Turing's original concept, the competent investigator and the participants were all actively expected to collude against the machine. The entire point was that even with that collusion, the machine would be able to do the same and pass. Instead, modern takes have paired incompetent investigators with participants colluding with the machine, probably in an effort to be part of 'something historic'.

In "both" successes (probably more; referencing the two most high-profile: Eugene and the LLMs), the interrogators consistently asked pointless questions that had no meaningful chance of eliciting compelling information ('How's your day? Do you like psychology?'), and the participants not only made no effort to make their humanity clear, but were often actively adversarial, obviously and intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance, here is dialogue from a human in one of the tests:

----

[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?

[16:32:03] Entity: I don't know. That was a long time ago.

[16:33:32] Judge: so you need to guess if I am male or female

[16:34:21] Entity: you have to be male or female

[16:34:34] Judge: or computer

----

And the tests are typically time-constrained, and further limited by woefully poor typing skills (is this the new normal in the smartphone generation?), to the point that you tend to get anywhere from 1-5 exchanges of just several words each. The above snippet was a complete interaction, so you get two responses from a human trying to trick the judge into deciding he's a computer. And obviously, a judge deciding that the above was probably a computer says absolutely nothing about the quality of the computer's responses - instead it's some weird anti-Turing Test in which humans successfully act like a [bad] computer, ruining the entire point of the test.

The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect in a true run of the Turing Test we're still nowhere even remotely close to passing it.

  • jfengel an hour ago

    I don't doubt that all of the formal Turing tests have been badly run. But I suspect that if you did one properly, at least one run would misjudge an LLM as human. Maybe it's a low percentage, but that's vastly better than zero.

    So I'd say we're at least "remotely close", which is sufficient for me to reconsider Searle.

anigbrowl 13 hours ago

I'm generally against LLM recreations of dead people but AI John Searle could be pretty entertaining.

  • bitwize 13 hours ago

    I'm reminded of how the AIs in Her created a replica of Alan Watts to help them wrestle with some major philosophical problems as they evolved.

lo_zamoyski 13 hours ago

> AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI.

Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence at all. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.

> It's very close to the Chinese Room, which I had always dismissed as misleading.

Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.

  • Zarathruster 10 hours ago

    I came to say this, thank you for sparing me the effort.

    From my experience with him, he'd heard (and had a response to) nearly any objection you could imagine. He might've had fun playing with LLMs, but I doubt he'd have found them philosophically interesting in any way.

  • pwdisswordfishy 9 hours ago

    "At least they don't have true consciousness, but only a simulated one", I tell myself calmly as I watch the nanobots devour the entirety of human civilization.