Comment by petekoomen a day ago
> They struggle to express their goals clearly, and AI doesn’t magically fill that gap—it often amplifies the ambiguity.
One surprising thing I've learned is that a fast feedback loop like this:
1. Write a system prompt.
2. Watch the agent do the task and observe what it gets wrong.
3. Update the system prompt to improve the instructions.
is remarkably useful in helping people write effective system prompts. Watching the agent succeed or fail gives you real-time feedback about what's missing from your instructions, in a way that anyone who has ever taught or managed professionally will instantly grok.
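To make the loop concrete, here's a minimal sketch in Python. `run_agent` is a hypothetical placeholder, not any real library's API; it's mocked with a canned reply here so the loop runs as-is, and you'd swap in your actual agent or LLM client call:

```python
# A minimal sketch of the write -> watch -> update loop described above.
# `run_agent` is a hypothetical stand-in for whatever agent API you use
# (an LLM chat call, a coding agent, etc.). The mock returns a canned
# string so the loop itself is runnable without any external service.

def run_agent(system_prompt: str, task: str) -> str:
    """Mock agent: replace with a real call to your agent/LLM client."""
    return f"[agent output for task {task!r} under prompt {system_prompt!r}]"

system_prompt = "You draft concise email replies in my voice."
task = "Reply to: 'Can we move our 3pm call to Thursday?'"

while True:
    # Step 2: watch the agent do the task and observe what it gets wrong.
    print(run_agent(system_prompt, task))
    if input("Good enough? [y/n] ").strip().lower().startswith("y"):
        break
    # Step 3: encode the failure you just observed as a new instruction,
    # then loop back to step 1 with the improved system prompt.
    system_prompt += "\n" + input("Instruction to add: ")
```

The point of the sketch is that the prompt only ever grows in response to an observed failure, which is exactly the feedback signal a teacher or manager would use.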
What I've found with agents is that they stray from the task and flip-flop between implementations, going back and forth on a solution. They never admit they don't know something; instead they brute-force a solution, even when the answer can't be found without trial and error or actually studying the problem. I repeatedly fall back to reading the docs and finishing the job myself, because the agent simply does not know what to do.