AbrahamParangi 3 hours ago

This is comical because we used to have something called the Turing test, which we considered our test of human-level intelligence. We never talk about it now because we obviously blew past it years ago.

There are some interesting ways in which AI remains inferior to human intelligence, but it is also obviously already superior in many ways.

It remains remarkable to me how common denial is when it comes to what AI can or cannot actually do.

teiferer 3 hours ago

There are also some interesting ways in which bicycles remain inferior to human locomotion, but they are also obviously already superior in many ways.

Still doesn't mean we should gamble the economies of whole continents on bike factories.

lm28469 3 hours ago

I'm half joking, but people who can't tell which side of a chat is an LLM aren't conscious.

  • Insanity 2 hours ago

    You are absolutely right!

    But common patterns of today's LLMs will be adopted by humans as our language is shaped by these interactions, which will then make LLM output harder to detect.

  • AbrahamParangi an hour ago

    This is an artifact of RLHF and far better human facsimiles are trivial with uncensored / jailbroken models.

nixpulvis 3 hours ago

I think it's that the issues are still so prevalent that people will reach for poor arguments and reasons to be skeptical, because those arguments match their feelings and articulating the actual problem is harder.

  • ants_everywhere 3 hours ago

    It's exactly the same as the literal Luddites, synthesizers, cameras, etc. The actual concern is economic: people don't want to be replaced.

    But the arguments are couched in moral or quality terms for sympathy. Machine-knitted textiles are inferior to hand-made textiles. Synthesizers are inferior to live orchestras. Daguerreotypes are inferior to hand-painted portraits.

    It's a form of intellectual insincerity, but it happens predictably with every major technological advance because people are scared.

    • nixpulvis 2 hours ago

      I don't completely disagree. But it's incorrect to claim that there's nothing but fear of losing jobs at the heart of the AI concern.

      I think a lot of people like myself are concerned with how dependent we are becoming so quickly on something with limited accuracy and accountability.

      • ants_everywhere 2 hours ago

        Would your concerns be lessened or heightened if AI was more accurate? The doomsday scenario was always a highly competent AI like Skynet.

        • nixpulvis 2 hours ago

          I think it would ease some of my concerns, but it wouldn't put me in the camp that believes we should race toward it without thinking about how to control it, and without plans in place to both identify and react to its risks.

          There are two doomsdays. The dramatic one, where AIs control the military and we end up living in the Matrix. And the less dramatic one, where we as humans forget how to do things for ourselves and then slowly watch the AIs become less and less capable of keeping us happy and alive. Maybe the end of both scenarios is similar, but one would take decades while the other could happen overnight.

          Accuracy alone doesn't fix either doomsday scenario. But it would slow some of the issues I see forming already with people replacing research skills and informational reporting with AIs that can lie or be very misleading.

mjdv 3 hours ago

> We never talk about it now because we obviously blew past it years ago.

It's shocking to me that (as far as I know) no one has actually bothered to do a real Turing test with the best and newest LLMs. The Turing test is not whether a casual user can be momentarily confused about whether they are talking to a real person, or if a model can generate real-looking pieces of text. It's about a person seriously trying, for a fair amount of time, to distinguish between a chat they are having with another real person and an AI.

Turing's own sample exchange from the 1950 paper gives the flavor:

> Q: Do you play chess? A: Yes. Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play? A: (After a pause of 15 seconds) R-R8 mate.

debugnik 3 hours ago

Try reading Turing's paper before making that assertion, because the imitation game wasn't meant to measure a tipping point of any kind.

It's just a thought experiment to show that machines achieving human capabilities isn't proof that machines "think". He then argues against multiple interpretations of what it would even mean for machines to "think", concluding that whether machines think or not is not worth discussing and that their capabilities are what matter.

That is, the test has nothing to do with whether machines can reach human capabilities in the first place. Turing took for granted they eventually would.

zahlman 2 hours ago

> This is comical because we used to have something called the turing test

It didn't go anywhere.

> which we considered our test of human-level intelligence.

No, this is a strawman. Turing explicitly posits that the question "can machines think?" is ill-posed in the first place, and proposes the "imitation game" as something that can be studied meaningfully — without ascribing to it the sort of meaning commonly described in these arguments.

More precisely:

> The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

----

> We never talk about it now because we obviously blew past it years ago.

No. We talk about it constantly, because AI proponents keep bringing it up fallaciously. Nothing like "obviously blowing past it years ago" actually happened; cited examples look nothing like the test actually described in Turing's paper. But this is still beside the point.

> There are some interesting ways in which AI remains inferior to human intelligence, but it is also obviously already superior in many ways.

Computers were already obviously superior to humans in, for example, arithmetic, decades ago.

> It remains remarkable to me how common denial is when it comes to what AI can or cannot actually do.

It is not "denial" to point out your factual inaccuracies.

vasco 3 hours ago

> We never talk about it now because we obviously blew past it years ago.

My Turing test has been the same since about when I learned it existed. I told myself I'd always use the same one.

What I do is after saying Hi, I will repeat the same sentence forever.

A human still reacts very differently than any machine to this test. Current AIs could maybe be adversarially prompted to bypass it, but so far it's still obvious it's a machine replying.
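The probe described above can be sketched as a small harness. Everything here is hypothetical scaffolding: `responder` stands in for whatever returns a reply to a message (a human on the other end of a chat, or a wrapper around an LLM API), and the ratio is only a crude proxy for the qualitative difference being described.

```python
def run_repetition_probe(responder, sentence="Nice weather today.", turns=5):
    """Open with "Hi", then send the identical sentence `turns` times.

    `responder` is a hypothetical callable: message in, reply out.
    Returns the full list of replies for inspection.
    """
    replies = [responder("Hi")]
    for _ in range(turns):
        replies.append(responder(sentence))
    return replies


def distinct_reply_ratio(replies):
    """Crude signal of how varied the replies are (1.0 = all distinct).

    The intuition behind the probe: a human soon breaks pattern
    (confusion, annoyance, silence), while an RLHF-tuned assistant
    tends to keep producing polite, slightly varied boilerplate.
    """
    return len(set(replies)) / len(replies)
```

The ratio alone settles nothing, of course; the interesting part of the probe is the qualitative character of the replies, which this sketch only collects.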

  • paradite 3 hours ago

    What would you expect a human to reply?

    And after you have answered that question. Try Claude Sonnet 4.5.

    What is Claude Sonnet 4.5's reply?

    • malfist 2 hours ago

      Is this an ad for Claude Sonnet 4.5?

      • tremon an hour ago

        No, this is Claude Sonnet 4.5 recalibrating its response.

    • throwaway91827 2 hours ago

      I decided to put this to the test.

      What I would expect a human to reply:

      "Um... OK?"

      What Claude Sonnet 4.5 replied:

      "Hi there! I understand you're planning to repeat the same sentence. I'm here whenever you'd like to have a conversation about something else or if you change your mind. Feel free to share whatever's on your mind!"

      I don't think I've ever imagined a human saying "I understand you're planning to repeat the same sentence." If you thought this was some kind of killer rebuke, I don't think it worked out the way you imagined. Do you actually think that's a human-sounding response? To me it's got that same telltale sycophancy of a robot butler that I've come to expect from these consumer-grade LLMs.

      • paradite an hour ago

        That's mostly because of the system prompt asking Claude to be a helpful assistant.

        If you gave a human call-center worker that system prompt as instructions for how to answer calls, you would likely get a similar response.

        But honestly, believe in whatever you wanna believe. I'm so sick of arguing with people online. Not gonna waste my time here anymore.

        • vasco 19 minutes ago

          Maybe don't take such a maximalist interpretation of other people's comments. My point that it doesn't pass that test doesn't mean it isn't extremely useful for many things. It's just that the test is undefined, so I find it funny when people say they truly cannot tell it's not a real person. I could've been more crass and said it also doesn't reply to insults like a real person would. There are so many ways in which it doesn't behave like a human, but it's still pretty useful.

          What I read from your reply is that you tack "and therefore they are useless" onto the above statement, but there's no need to read it like that.

ReptileMan 2 hours ago

>obviously already superior in many ways.

And yet you didn't bother to provide a single obvious example.