Comment by repiret 3 days ago

I'm reminded of the monologue from Terminator 2:

> Watching John with the machine, it was suddenly so clear. The Terminator would never stop, it would never leave him... it would always be there. And it would never hurt him, never shout at him or get drunk and hit him, or say it couldn't spend time with him because it was too busy. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up. In an insane world, it was the sanest choice.

The AI doctor will always have enough time for you, and always be at the top of their game with you. It becomes useful when it works better than an overworked midlevel, not when it competes with the best doctor on their best day. If we're not there already, we're darn close.

D7E0119908C212 2 days ago

What is interesting about the Terminator's decision not to continue in this role as a father-figure for John (aside from the requirement to destroy its embedded Skynet technology) is that it explicitly understood that, while it could and would do all those things to protect him, it lacked the emotional intelligence needed to provide a supportive developmental environment for a child or adolescent.

Specifically:

> I know now why you cry, but it's something I can never do.

While the machine learns what this complex social behavior called 'crying' is, it also learns that it can never actualize it; it can never genuinely care for John, and any relationship would be a simulation of emotions. In the context of a child learning these complex social interactions, having a father-figure who you knew wasn't actually happy to see you succeed, or sad to see you cry ...

magarnicle 3 days ago

But the top of their game includes making things up and getting things wrong. They always give their best, but their best always includes mistakes. It's a different trust proposition from a human.

  • terribleperson 2 days ago

    A real, actual doctor told my brother, who has a chronic headache disorder, to just keep taking OTC painkillers.

    You very specifically should not do that: you'll develop a medication-overuse headache and be worse off than you were before.

    It gets worse, though. I was able to ask them a few questions about their symptoms, compare them to entries in the International Classification of Headache Disorders, and narrow it down to, iirc, two likely possibilities.

    One of them was treatable. The treatment works. They still have pain, but can do stuff.

    An AI that makes stuff up and gets stuff wrong isn't any different from the doctors we already have, except you can afford to get a second opinion, and you have the time available to push back and ask questions.

    Edit: to expound on the quality of the doctor: diagnosis and proposing a treatment was the work of several hours for me, a layman. A doctor should have known the ICHD existed. They should have been able to, in several minutes, ask questions about symptoms, reference the ICHD to narrow down likely diagnoses, and then propose a treatment with a "come back if that doesn't help".

  • TeMPOraL 2 days ago

    All doctors make things up and get things wrong occasionally. The less experienced and more overworked they are, the more often this happens.

    Again, LLMs aren't competing with the best human doctors. They're competing with doctors you actually have access to.