Comment by parsabg 11 hours ago


Good spot. I probably shouldn't have the 2nd most expensive model in the demo!

Some of the cheaper models have very similar performance at a fraction of the cost, or indeed you could use a local model for "free".

The core issue, though, is that there are simply more tokens to process in a web browsing task than in many other tasks we commonly use LLMs for, including coding.
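To give a rough sense of the scale difference, here is a minimal sketch using the common ~4 characters per token heuristic (a stand-in for a real tokenizer) with hypothetical sizes: a source file an assistant edits is often a few KB, while a single page's extracted DOM text can run to hundreds of KB, and a browsing task touches many pages.

```python
def estimate_tokens(text: str) -> int:
    """Approximate token count via the ~4 characters/token rule of thumb."""
    return max(1, len(text) // 4)

# Hypothetical inputs, sized to be representative rather than exact.
code_file = "def handler(request):\n    return ok\n" * 100      # a few KB of code
page_text = "<div class='item'>Example listing text here</div>\n" * 5000  # one page's DOM text

print(estimate_tokens(code_file))  # tokens for one code file
print(estimate_tokens(page_text))  # tokens for one page snapshot
```

Even under this crude estimate, a single page snapshot dwarfs a code file, and an agent re-reads page state on every step of a browsing task.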