simianwords 4 hours ago

This is completely incorrect. This is exactly what LLMs can do better.

  • sjsjshzhhz 3 hours ago

    Somebody should tell the Claude Code team then. They’ve had some perf issues for a while now.

    More seriously, the concept of trust is extremely lossy. The LLM is gonna lean in one direction that may or may not be correct. At the extreme, it would likely refute a new discovery that went against what we currently know. In a more realistic version, certain AIs are more pro-Zionist than others.

    • simianwords 3 hours ago

      I meant that LLMs can be trusted to do searches and not hallucinate while doing it. You’ve taken that to mean they can be trusted with anything.

      The thing is, LLMs are quite good at search and probably way stronger than whatever RAG setup this company has. What failure mode are you worried about from a search perspective? Will ChatGPT just end up providing random links?