Comment by creatonez 5 hours ago

If it's googling repeatedly (every 15 minutes!) and then processing the results with LLMs, how does it avoid hallucinations? The problem with these models is that even a very low per-run hallucination rate becomes a near-certainty given enough samples, so I'm skeptical that this wouldn't produce a ton of false positives.
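
To make the compounding concrete, here's a minimal sketch with purely illustrative numbers (the per-pass hallucination rates are assumptions, not measurements of any particular model or of this project): if each search-and-summarize pass hallucinates with probability p, the chance of at least one false positive over n independent passes is 1 - (1 - p)^n, and at 96 passes per day that climbs fast.

```python
# Illustrative sketch only: how a small per-pass hallucination rate compounds
# over repeated runs. The rates below are hypothetical assumptions.

def prob_at_least_one_false_positive(p_per_pass: float, n_passes: int) -> float:
    """P(at least one hallucinated result) across n independent passes."""
    return 1.0 - (1.0 - p_per_pass) ** n_passes

passes_per_day = 24 * 60 // 15  # one pass every 15 minutes -> 96 per day

for p in (0.001, 0.01, 0.05):  # hypothetical per-pass hallucination rates
    daily = prob_at_least_one_false_positive(p, passes_per_day)
    weekly = prob_at_least_one_false_positive(p, passes_per_day * 7)
    print(f"p={p:.3f}: daily={daily:.1%}, weekly={weekly:.1%}")
```

Even at a 1% per-pass rate, that works out to roughly a 62% chance of at least one false positive every single day under these assumptions.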