Comment by 0xbadcafebee 15 hours ago
> What about all the input data into LLMs and the conversations we're having? That must be able to produce a better next gen model, no?
Better models largely come from training, tuning, and specific techniques discovered to do things like eliminate loops and hallucinations. Human inputs are a small portion of that; notice that all the models are getting better even though each of these companies has different human inputs. A decent amount of a model's behavior comes from settings like temperature and top-p, which essentially introduce controlled randomness (frontier models now expose these as presets like "low" and "high"). That randomness can cause problems but also adds capability, so the challenge isn't getting better input, it's better controlling the randomness (sort of). Even coding models benefit from a small amount of it. There's a lot more besides, so overall model improvements aren't one thing; they're many things, most of them not novel. In fact, open models often get novel techniques before the frontier does, and it's been that way for a while.
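To make the temperature/top-p point concrete, here's a minimal sketch of what those two knobs actually do when sampling the next token. This is a toy illustration over a plain dict of logits, not any real model's API; the function name and inputs are made up:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=random):
    """Temperature + nucleus (top-p) sampling over a {token: logit} dict."""
    # Temperature rescales logits: <1 sharpens the distribution, >1 flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax (shifted by the max for numerical stability) to get probabilities.
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Nucleus filter: keep the smallest set of top tokens whose mass >= top_p,
    # so low-probability junk never gets sampled no matter the temperature.
    nucleus, cumulative = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        nucleus.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break
    tokens, weights = zip(*nucleus)
    return rng.choices(tokens, weights=weights, k=1)[0]
```

With a low temperature the distribution collapses onto the top token (near-deterministic output); raising it spreads probability mass around, which is the "variable randomness" being traded off above.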
> Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.
If you don't want the improvements, that's up to you; I'm just saying the frontier has no advantage here, and if people want better than frontier, it's there for free.
> Why wouldn't e.g. Gemini just add a triage step? And are you sure it's that much easier to get a better model for math than the big ones?
They already have triage steps, but despite that they still build specific models for specific use-cases. Most people already choose Thinking by default for general queries and a coding model for coding. That will continue, but there will be more providers of more specialized models that outperform frontier models, for the simple reason that there are a million use-cases out there and plenty of opportunity for startups and the community to build a better-tailored model for cheaper. And soon all our computers will be decent at running AI locally, so why pay for frontier at all? I can already AI-code locally on a 4-year-old machine. Two years from now, there likely won't be a need for you to use a cloud service at all, because your local machine and a local model will be equivalent, private, and free.
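For what a "triage step" can look like at its simplest: a keyword router sitting in front of several specialized models. This is purely a hypothetical sketch; the model names and keyword lists are invented for illustration and don't correspond to any real provider's routing:

```python
def triage(query: str) -> str:
    """Route a query to a (hypothetical) specialized model by cheap heuristics."""
    q = query.lower()
    # Code-looking queries go to a coding-tuned model.
    if any(k in q for k in ("def ", "function", "compile", "stack trace")):
        return "code-model"
    # Math-looking queries go to a math-tuned model.
    if any(k in q for k in ("integral", "prove", "equation", "theorem")):
        return "math-model"
    # Long, open-ended queries go to a slower reasoning model.
    if len(q.split()) > 40:
        return "thinking-model"
    # Everything else hits the cheap general model.
    return "general-model"
```

Real routers use a small classifier model rather than keyword matching, but the shape is the same: a cheap decision up front so the expensive specialized model only runs when it's worth it.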
Thank you. You have somewhat shifted my beliefs in a meaningful way.