Comment by simonw
> "verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!)"
100% this.
> "verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!)"
100% this.
Totally. And yet rigorous proof is very difficult. Having done some mathematics involving nontrivial proofs, I have all the more respect for how hard rigor really is.
Ah, I absolutely don't verify code in the mathematical sense of the word. It's more that I lean on strong static typing (or type hints and linters in more weakly typed languages) and write a lot of tests, something like the sketch below.
Nothing is truly 100% safe or free of bugs. What I meant by my comment up-thread was that I have enough experience to have a fairly quick and critical eye for code, and that has saved my skin many times.
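To make the workflow described above concrete, here is a minimal sketch in Python: type annotations that a checker such as mypy can verify statically, plus a few pytest-style tests covering the edge cases a quick critical read would flag. The `median` function and its tests are hypothetical, chosen only to illustrate the types-plus-tests approach, not anything from the thread.

```python
# Hypothetical example: checking LLM-generated code with type hints and tests.
from __future__ import annotations

import pytest


def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    if not values:
        raise ValueError("median() requires at least one value")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


# Tests for the normal cases and the edge cases a careful reviewer
# would probe: even length, odd length, and empty input.
def test_odd_length() -> None:
    assert median([3.0, 1.0, 2.0]) == 2.0


def test_even_length() -> None:
    assert median([1.0, 2.0, 3.0, 4.0]) == 2.5


def test_empty_raises() -> None:
    with pytest.raises(ValueError):
        median([])
```

Neither layer proves correctness; the type checker rules out one class of mistakes, and the tests sample behavior at a few points. That is weaker than verification, but in practice it catches most of what a critical read of generated code should catch.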
How did you get there from me agreeing 100% with someone who said that you should be ready to verify everything an LLM does for you, and that if you're not willing to do that, you shouldn't use them at all?
Do you ever read my comments, or do you just imagine what I might have said and reply to that?
There's simply no way to verify everything that comes out of these things; otherwise, why use them at all? You also can't truly know whether you know more about a topic than the model, since by definition the models know more than you do. This is automation bias. Do you not know about the problems with humans verifying or monitoring machines? It's a core part of the discussion around self-driving vehicles. I guess I assumed you knew something about the field of AI!
I like writing code more than reading it, personally.