Comment by Retric
I’m not so sure it’s going to even do that much. People are currently happy to use LLMs, but the outputs aren’t accurate and don’t seem to be improving quickly.
A YouTuber I watch regularly includes questions they asked ChatGPT, and every single time there’s a detailed response in the comments showing how the output is wildly wrong due to multiple mistakes.
I suspect the backlash from disgruntled users is going to hit the industry hard, and these models are still extremely expensive to keep updated.
Using function calls to look up correct answers already practically eliminates this. It isn't widespread yet, but it's already practical for many use cases.
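Roughly, it looks like this: a minimal sketch using the OpenAI Python SDK's tool-calling interface, where the model defers to a trusted lookup instead of answering from memory. The `lookup_population` function and its data are hypothetical stand-ins for a real database or API.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def lookup_population(city: str) -> str:
    """Hypothetical authoritative lookup; swap in a real data source."""
    table = {"paris": "2,102,650 (2023 estimate)"}
    return table.get(city.lower(), "unknown")

# Describe the tool so the model can request it instead of guessing.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_population",
        "description": "Return the population of a city from a trusted source.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the population of Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
msg = response.choices[0].message

# If the model asked for the tool, run it and feed the result back
# so the final answer is grounded in the lookup, not in training data.
if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = lookup_population(**args)
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
```

The point is that the factual claim comes out of the lookup table, so the model's job shrinks to parsing the question and phrasing the answer, which even small models do reliably.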
New models aren't being trained specifically on single answers, which will only help.
The expense of the larger models is something to be concerned about. But small models with function calls are already great, especially if you narrow down what they're being used for. Not seeing their utility is just a lack of imagination.