My tips for using LLM agents to create software
(efitz-thoughts.blogspot.com)
61 points by efitz 7 hours ago
One weird trick is to tell the LLM to ask you questions about anything that's unclear at this point. I tell it, e.g., to ask up to 10 questions. Often I do multiple rounds of this Q&A and I'm always surprised at the quality of the questions (with Opus). I get better results that way, simply because it reduces the degrees of freedom in which the agent can go off in a totally wrong direction.
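A rough example of the kind of instruction I mean (the wording is mine, not a magic formula):
"Before writing any code, ask me up to 10 questions about anything in this task that is unclear or underspecified. After I answer, ask follow-ups if needed, then summarize your plan before implementing."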
Oh great.
LLM -> I've read 1000x Stack Overflow posts on this. The way coding works is: I produce sub-standard code, then show it to others on Stack Overflow! Others chime in with fixes!
You -> Get the LLM to simulate this process by asking it to post its broken code, then asking for "help" on "stackoverflow" (e.g. the questions it asks), and then pasting the "fix" responses back in.
Hands down, you've discovered why LLM code is so junky all the time. Every time it's seen code on SO and other places, it's been "Here's my broken code" and then Q&A followed by final code. Statistically, symbolically, that's how (from an LLM perspective) coding tends to work.
Because of course many code examples it's seen are derived from this process.
So just go through the simulated exchange, and success.
And the best part is, you get to go through this process every time, to get the final fixed code.
> One of the weird things I found out about agents is that they actually give up on fixing test failures and just disable tests. They’ll try once or twice and then give up.
It's important not to think in terms of generalities like this. How they approach it depends on your test framework, and even on the language you use. If disabling tests is easy and common in that language or framework, it's more likely to do it.
For testing a CLI, I currently use run_tests.sh and it has never once tried to disable a test. Though that can be its own problem when it hits one it can't debug.
#!/usr/bin/env bash
# run_tests.sh
# Handle multiple script arguments or default to all .sh files in ./examples/
scripts=("${@/#/./examples/}")   # prefix each argument with ./examples/
[ $# -eq 0 ] && scripts=(./examples/*.sh)
for script in "${scripts[@]}"; do
  [ -n "$LOUD" ] && echo "$script"
  output=$(bash -x "$script" 2>&1) || {
    echo ""
    echo "Error in $script:"
    echo "$output"
    exit 1
  }
done
echo " OK"
----
Another tip: for a specific task, don't bother with "please read file x.md". Claude Code (and others) accept the @file syntax, which puts that file into context right away.
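For example, something along these lines (the file path here is made up):
Implement the changes described in @docs/plan.md and update the affected tests.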
If I paid for my API usage directly instead of the plan it'd be like a second mortgage.
To be fair, allocating some tokens for planning (recursively) helps a lot. It requires more hands-on work, but produces much better results. Clarifying the tasks and breaking them down is very helpful too; you just end up spending a lot of time on it. On the bright side, Qwen3 30B is quite decent, and best of all "free".
It's quite simple.
I prefer building and using software that is robust, heavily tested and thoroughly reviewed by highly experienced software engineers who understand the code, can detect bugs and can explain what each line of code they write does.
Today we are in a phase where embracing mediocre LLM-generated code over heavily tested and scrutinized code is encouraged in this industry, because of the hype around 'vibe coding'.
If you can't even begin to explain the code, point out any bugs the LLM generated, or justify the architectural decisions you off-loaded to it, you're going to have a big problem in a code review or a professional pair-programming scenario.
> I prefer building and using software that is robust, heavily tested and thoroughly reviewed by highly experienced software engineers who understand the code, can detect bugs and can explain what each line of code they write does.
that's amazing. by that logic you probably use like one or two pieces of software max. no windows, macos or gnome for you.
LOL.. I was going to say, after working in the tech industry, half the time it's a rat's nest in there.
There are excellent engineers, but there are also many not-so-great engineers, and once the sausage is made it usually isn't a pretty picture inside.
Usually only small young projects or maybe a beautiful component or two. Almost never an entire system/application.
Unfortunately, all of modern software depends on some random obscure dependency that is not properly reviewed https://xkcd.com/2347/
I've seen this go very successfully using both Codex with GPT-5 and Claude Code with Opus. You develop a solution with one, then validate it with the other. I've fixed many bugs by passing the context between them, saying something like: "my other colleague suggested that…". Bonus thing: I've started using symlinks on CLAUDE.md files pointing at AGENTS.md, so now I don't even have to maintain two different context files.
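A minimal sketch of that symlink setup (run from the repo root, assuming AGENTS.md already exists there):
# make CLAUDE.md a symlink to AGENTS.md so both tools read the same context file
ln -s AGENTS.md CLAUDE.md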