Over2Chars 4 days ago

You have a fair point. Some LLMs are better at some tasks, and prompts can make a difference, no doubt.

Perhaps at some point there will be a triage LLM that slurps up the problem and decides which secondary LLM is best suited to that query, plus tertiary LLMs that execute and evaluate the result in a virtual machine, etc. (roughly the shape sketched below).

Maybe someday
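A minimal sketch of that triage-then-route idea, in Python. Everything in it is a made-up placeholder: the model names, the label rubric, and the complete() helper, which stands in for whatever provider SDK you would actually call:

    # Hypothetical router: a tiny "triage" model classifies the query,
    # then the query is forwarded to whichever model the triage step picked.
    ROUTES = {
        "code": "big-code-model",       # code-specialized model (assumed)
        "math": "big-reasoning-model",  # reasoning-tuned model (assumed)
        "chat": "small-cheap-model",    # easy queries go to the cheap one
    }

    def complete(model: str, prompt: str) -> str:
        # Stand-in for a real LLM API call; it just echoes so the example
        # runs end to end. Swap in your provider's client here.
        return f"[{model}] {prompt[:40]}"

    def route(query: str) -> str:
        # Step 1: the triage model labels the query as code/math/chat.
        label = complete(
            "tiny-triage-model",
            "Classify this query as one of: code, math, chat. "
            f"Answer with one word.\nQuery: {query}",
        ).strip().lower()
        # Step 2: forward to the chosen model; an unrecognized label
        # falls back to the cheap model rather than failing.
        return complete(ROUTES.get(label, "small-cheap-model"), query)

    print(route("Write a binary search in Rust"))

The fallback matters in practice: a triage model will sometimes emit a label outside the rubric, and defaulting to the cheap model keeps the pipeline from erroring out.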

NavinF 4 days ago

Oh, I talked to some guys who started a company that does that. This was at an AI meetup in SF last year. They were mainly focused on making $/token cheaper by directing easy/dumb queries to smaller, dumber models, but it also increases output quality, because some models are just better at certain things. I'm sure all the big companies have implementations of this by now, even if they don't use it everywhere.
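Back-of-the-envelope on why the $/token argument works. The prices and the 80/20 split below are invented purely for illustration:

    # Hypothetical numbers only: route 80% of traffic to a small model.
    PRICE = {"small": 0.15, "large": 5.00}  # $ per million tokens (assumed)
    easy = 0.80                             # fraction sent to the small model

    blended = easy * PRICE["small"] + (1 - easy) * PRICE["large"]
    print(f"${blended:.2f}/Mtok blended vs ${PRICE['large']:.2f}/Mtok all-large")
    # -> $1.12/Mtok blended vs $5.00/Mtok all-large, roughly 4.5x cheaper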