Comment by AdieuToLogic 8 hours ago

> So your silly demand going unmet proves nothing.

I made demands of no one.

> Also, "give me an example please" is not a strawman!

I identified a strawman because the reply substituted "find something hard" for what I actually said, which was "be the hero they want," and because what this specific problem domain needs may be more difficult than what such a generalization addresses.

> If you actually want to prove something, you need to show at least one document in the set that a human can do but not a machine, or to really make a good point you need to show that a non-negligible fraction fit that description.

Maybe this is the proof you demand.

LLMs are statistical prediction algorithms. As such, they are nondeterministic and therefore provide no guarantees as to the correctness of their output.
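
To make the nondeterminism claim concrete, here is a minimal sketch of sampling-based decoding. The token distribution is invented purely for illustration and stands in for a real model: the model assigns probabilities to candidate next tokens, the decoder draws from that distribution, and so identical input can yield different output across runs.

```python
import random

# Toy stand-in for an LLM's next-token step: at bottom, the model
# defines a probability distribution over the next token.  This
# distribution is made up for illustration, not from any real model.
NEXT_TOKEN_PROBS = {"1867": 0.6, "1861": 0.3, "1887": 0.1}

def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    """Sampling decoder: draws a token at random, weighted by probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()  # unseeded, so each run of the program differs
transcriptions = [sample_next_token(NEXT_TOKEN_PROBS, rng) for _ in range(5)]
print(transcriptions)
# Possible output: ['1867', '1861', '1867', '1887', '1867']
# Re-running can yield a different list: same input, different output.
```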

The National Archives have specific artifacts requiring precise textual data extraction.

Using nondeterministic tools known to produce provably incorrect results eliminates their applicability in this workflow, because all of their output would require human review. That review is an unnecessary step, and it can be eliminated by having a human read the original text themselves.

Does that satisfy your demand?

Dylan16807 7 hours ago

> I made demands of no one.

Whatever you want to call "If it's that easy, then do it", it reads as a demand.

> LLM's [...] Does that satisfy your demand?

That's a different argument from the one above, where you were trying to contradict tptacek. And that argument is itself flawed: in particular, humans don't offer guarantees either.

> provably incorrect results

This gets back to the actual request from earlier, which was to show an example where the machine performs below some human standard. Merely pointing out that LLMs make mistakes is not proof of incorrectness in this specific use case.