Comment by an0malous 11 hours ago
> These are not going to be problems tomorrow because the technology will shift. As it happened many times in the span of the last 2 years.
What technology shifts have happened for LLMs in the last 2 years?
One example: there used to be a whole complex apparatus around getting models to do chain-of-thought reasoning (e.g., LangChain). Now that is built in as reasoning, and models are heavily trained to do it.

Same with structured outputs and tool calls: you used to have to do a bunch of work to get models to produce valid JSON in the shape you wanted; now it's built in, and again, models are specifically trained for it.

It also used to be that you had to find all the relevant context up front and hand it to the model. Now agent loops can dynamically figure out what they need and make the tool calls to retrieve it. Etc etc.
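To make the "bunch of stuff" concrete, here is a minimal sketch of the old-style parse-validate-retry loop people wrote to coax valid JSON out of a model before structured outputs were built in. The model call is a hypothetical stub (`fake_model`), and the key names are made up for illustration; real code would call an actual model API.

```python
import json

# Hypothetical stand-in for an LLM call; real code would hit a model API.
def fake_model(prompt: str, attempt: int) -> str:
    # Simulate a model that emits invalid JSON on the first try.
    if attempt == 0:
        return "Sure! Here is the JSON: {'name': 'Ada'}"  # prose + bad quotes
    return '{"name": "Ada", "score": 7}'

REQUIRED_KEYS = {"name", "score"}

def get_structured_output(prompt: str, max_retries: int = 3) -> dict:
    """Old-style loop: prompt, parse, validate, and re-prompt on failure."""
    for attempt in range(max_retries):
        raw = fake_model(prompt, attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            prompt += "\nRespond with valid JSON only."  # nudge and retry
            continue
        if REQUIRED_KEYS <= data.keys():
            return data
        prompt += f"\nInclude keys: {sorted(REQUIRED_KEYS)}."
    raise ValueError("model never produced valid structured output")

result = get_structured_output("Extract the player as JSON.")
print(result)  # → {'name': 'Ada', 'score': 7}
```

Modern APIs fold this whole loop (plus schema enforcement during decoding) into the platform, which is the shift the comment is describing.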