Comment by arnorhs
I agree with you, however your approach results in much longer LLM development runs, increased token usage and a whole lot of repetitive iterations.
I’m definitely interested in techniques for reducing token usage. But with one session per problem I haven’t hit a context limit yet, especially when the problem is small and clearly defined through divide-and-conquer. Also, agentic models are improving at tool use and should require fewer tokens. I’ll take as many iterations as needed to ensure the code is correct.