Comment by creatonez
If it's googling repeatedly (every 15 minutes!) and then processing the results with LLMs, how does it avoid hallucinations? The problem with these models is that even an extremely marginal hallucination rate makes hallucinations inevitable once you take enough samples, so I'm skeptical that this wouldn't produce a ton of false positives.
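
To put rough numbers on that (purely illustrative; the per-run rate and the independence assumption are mine, not anything the project has published): if each LLM pass hallucinates with some small probability p, the chance of at least one hallucination over n runs is 1 - (1 - p)^n, and at one run every 15 minutes n grows fast.

```python
# Back-of-the-envelope sketch with assumed numbers (not from the project).
p = 0.005                      # assumed per-run hallucination rate (0.5%)
runs_per_day = 24 * 60 // 15   # one run every 15 minutes -> 96 runs/day

for days in (1, 7, 30):
    n = runs_per_day * days
    at_least_one = 1 - (1 - p) ** n   # P(at least one hallucinated result)
    print(f"{days:>2} day(s), {n:>4} runs: "
          f"P(>=1 hallucination) = {at_least_one:.1%}, "
          f"expected count = {n * p:.1f}")
```

Even with that optimistic 0.5% rate you'd expect a hallucinated result roughly every couple of days, which is why I'd want to see how (or whether) the results get verified before being surfaced.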