Comment by Neywiny
I always ask this question about these bots: is the literature the training data, or is the understanding of the literature the training data? Meaning, sure, you trained the bot on the current rules and regulations. But does that mean the model weights contain them, such that its output is really a guess at legal accuracy? Or is it trained to act like a lawyer and reason over the docs, which sit outside the model? Every time I've asked, the answer has been the former, and to me that's the wrong approach. But I'm not an AI scientist, so I don't know how hard my theoretically perfect solution would be.
What I do know is that if it were done my way, it would be pretty easy for it to do what the Google AI does: say it's not responsible, give links for humans to fact-check it. I've noticed a dramatic drop in hallucinations after it had to provide links to its sources. Still not 0, though.
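For what it's worth, a minimal sketch of the "docs sit outside the model" approach described above: retrieve relevant passages at question time, hand them to the model alongside the question, and require a source link for every claim so a human can fact-check it. The toy keyword retriever, the example corpus and URLs, and the `call_model` stub are all hypothetical, not anything the thread confirms any vendor actually does.

```python
from dataclasses import dataclass


@dataclass
class Doc:
    url: str
    text: str


# Hypothetical regulatory corpus kept outside the model weights.
CORPUS = [
    Doc("https://example.gov/reg-101", "Section 101: X must file form A annually."),
    Doc("https://example.gov/reg-202", "Section 202: Y must file form B quarterly."),
]


def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.text.lower().split())))
    return scored[:k]


def build_prompt(question: str, sources: list) -> str:
    """Put the retrieved text in the prompt and demand per-claim citations,
    so a human can check each answer against the linked source."""
    numbered = "\n".join(f"[{i + 1}] {d.url}\n{d.text}" for i, d in enumerate(sources))
    return (
        "Answer using ONLY the sources below. Cite a source number for every claim; "
        "if the sources do not cover the question, say so.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )


def call_model(prompt: str) -> str:
    # Placeholder for whatever LLM API is in use; not specified here.
    raise NotImplementedError


if __name__ == "__main__":
    question = "How often must X file form A?"
    sources = retrieve(question, CORPUS)
    print(build_prompt(question, sources))
```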
> I've noticed a dramatic drop in hallucinations after it had to provide links to its sources. Still not 0, though.
I've noticed that Google does a fair job of linking to relevant sources, but it's still fairly common for it to confabulate something the source doesn't say or even directly contradicts. It seems to run into an underlying inability to reason: if a source covers more than one thing, it's prone to taking an input like "X does A while Y does B" and emitting "Y does A" or "X does A and B". It's a fascinating failure mode that, so far, seems insurmountable.