Comment by reliablereason 12 hours ago
Looks like the example video is extremely expensive. It racks up almost $2 of usage in about a minute.
> Looks like the example video is extremely expensive. It racks up almost $2 of usage in about a minute.
Good spot. I probably shouldn't have the 2nd most expensive model in the demo!
Some of the cheaper models have very similar performance at a fraction of the cost, or you could use a local model for "free".
The core issue, though, is that there are simply more tokens to process in a web-browsing task than in many of the other tasks we commonly use LLMs for, including coding.
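For a sense of scale, here's a rough back-of-envelope sketch of per-minute cost for a browsing agent. Every number in it is an illustrative assumption (prices, page size, step rate, output size), not actual pricing or measurements for any particular model:

```python
# Back-of-envelope cost estimate for a browsing agent.
# All constants below are illustrative assumptions, not real pricing.

PRICE_PER_MTOK_INPUT = 15.00   # assumed $ per 1M input tokens (an expensive model)
PRICE_PER_MTOK_OUTPUT = 75.00  # assumed $ per 1M output tokens

TOKENS_PER_PAGE = 20_000       # assumed tokens to represent one page (DOM / accessibility tree)
STEPS_PER_MINUTE = 6           # assumed agent actions per minute, each re-reading the page
OUTPUT_TOKENS_PER_STEP = 300   # assumed tokens of reasoning/actions emitted per step

input_cost = STEPS_PER_MINUTE * TOKENS_PER_PAGE * PRICE_PER_MTOK_INPUT / 1_000_000
output_cost = STEPS_PER_MINUTE * OUTPUT_TOKENS_PER_STEP * PRICE_PER_MTOK_OUTPUT / 1_000_000

print(f"~${input_cost + output_cost:.2f} per minute")  # ~$1.94 with these assumed numbers
```

With numbers in that ballpark you land near the ~$2/minute figure above, and the bill is dominated by repeatedly feeding the page back in as input; swapping in a cheaper model mostly just changes the two price constants, which is why the cheaper models cut the cost so dramatically.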