Comment by magarnicle 3 days ago
But the top of their game includes making things up and getting things wrong. They always give their best, but they always include mistakes. It's a different trust proposition to a human.
All doctors make things up and get things wrong occasionally. The less experienced and more overworked they are, the more often this happens.
Again, LLMs aren't competing with the best human doctors. They're competing with doctors you actually have access to.
A real, actual doctor told my brother, who has a chronic headache disorder, to just keep taking OTC painkillers.
You very specifically should not do that; you'll develop a medication overuse headache and be worse off than you were.
It gets worse, though. I was able to ask them a few questions about their symptoms, compare them to entries in the International Classification of Headache Disorders, and narrow it down to, iirc, two likely possibilities.
One of them was treatable. The treatment works. They still have pain, but can do stuff.
An AI that makes stuff up and gets stuff wrong isn't any different from the doctors we already have, except you can afford to get a second opinion, and you have the time available to push back and ask questions.
Edit: to expound on the quality of the doctor - diagnosing and proposing a treatment took me, a layman, several hours. A doctor should have known the ICHD existed. They should have been able to, in several minutes, ask questions about symptoms, reference the ICHD to narrow down likely diagnoses, and then propose a treatment with a "come back if that doesn't help".