Comment by jfdjkfdhjds 3 days ago

y'all are nerd-sniping the example and missing the point of the poster who offered it.

with the elevator example, the poster was giving chatbots the same excuse for mistakes that we give a person.

imagine if elevators could just make mistakes and hurt people because, well, a human would too, never mind that it's trivial to design elevators with sensors in the right places once and then they're accident-free! this is the ridiculous world AI apologists must rely on...

space_fountain 2 days ago

I'm playing a bit of both sides here. I do think it's interesting that we so automatically feel like the two cases are different. I used something old because I think we understand it well, and in the elevator case I do think our instincts are pretty justified. The fact that we can add sensors and get near-100% reliability is a big part of why the excuse isn't reasonable there, but ML is statistical. It's not the kind of thing you fix by adding one more sensor or one more if statement. Some anti-ML people take that to mean it's unworkable, but I'd hate to hold off on replacing drivers, for example, just because the kinds of errors a robotaxi makes feel like they could in theory have been avoided with better training, while we forgive human drivers for letting their minds wander for a second.

  • TeMPOraL 2 days ago

    Everything is statistical. Explicitly defined systems are understandable and understood, but they can also be brittle[0]; they do make it easier to put probabilities on failure scenarios, but those probabilities are never 0. ML systems are more like real people: they're unpredictable and prone to failures, and fixing any one failure often creates a problem elsewhere - but with enough fixing, you can push the probability of failure down to a number low enough that you no longer care.

    Compare: probabilistic methods of primality checking (which is where I first understood this idea). Theoretically, they can give you the wrong result sometimes; in practice, they're constructed in such a way that you can push the probability of error to arbitrarily low levels.
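
    (A minimal sketch of what that knob looks like in practice - a Miller-Rabin test in Python, where `rounds` is the dial: each extra round cuts the worst-case chance of wrongly calling a composite "prime" by at least a factor of 4. The function name and default are just illustrative.)

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test.

    A composite n survives one round with probability at most 1/4, so the
    chance of a wrong "prime" answer is at most 4**-rounds.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: definitely composite
    return True  # "prime", with error probability <= 4**-rounds
```

    Forty rounds already pushes the worst-case error below 2^-80; turning the knob further is just a matter of spending more compute.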

    See also: random UUIDs, hashing algorithms - all are prone to collisions, but have knobs you can turn to push the probability of collision to somewhere around "not before heat death of the universe" or thereabouts.
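
    (To put numbers on those knobs - a quick back-of-the-envelope in Python using the standard birthday-bound approximation; version-4 UUIDs carry 122 random bits, SHA-256 digests 256. The trillion-identifier workload is just illustrative.)

```python
import math

def collision_probability(n_items: int, bits: int) -> float:
    """Birthday-bound approximation of the probability that at least one
    collision occurs among n_items values drawn uniformly from 2**bits."""
    space = 2.0 ** bits
    # 1 - exp(-n(n-1) / (2 * space)); expm1 keeps precision for tiny results.
    return -math.expm1(-n_items * (n_items - 1) / (2.0 * space))

# Even with a trillion identifiers, collisions stay vanishingly unlikely.
for label, bits in [("UUIDv4 (122 random bits)", 122), ("SHA-256 (256 bits)", 256)]:
    print(f"{label}: P(collision) ~ {collision_probability(10**12, bits):.1e}")
```

    The knob here is the number of random bits: going from 122 to 256 turns "already negligible" into "not before the heat death of the universe".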

    This is the kind of approach we'll need with ML methods: accepting they can be randomly wrong, but developing them in ways that allow us to control the probability of error.

    --

    [0] - In theory, you can make your operating envelope large enough to cover pretty much anything that could reasonably go wrong; in practice, having a clear-cut operating envelope also creates pressure to shrink it to save money (it can be a lot of money), which means eroding what "reasonably" means.