Comment by tlogan
At the end of the day, it comes down to one thing: knowing what you want. And AI can’t solve that for you.
We’ve experimented heavily with integrating AI into our UI, testing a variety of models and workflows. One consistent finding emerged: most users don’t actually know what they want to accomplish. They struggle to express their goals clearly, and AI doesn’t magically fill that gap—it often amplifies the ambiguity.
Sure, AI reduces the learning curve for new tools. But paradoxically, it can also short-circuit the path to true mastery. When AI handles everything, users stop thinking deeply about how or why they’re doing something. That might be fine for casual use, but it limits expertise and real problem-solving.
So … AI is great—but the current flood of "let's just add AI here" features, shipped without thinking through how they actually help, might be a sign that a lot of engineers have outsourced their thinking to ChatGPT.
> They struggle to express their goals clearly, and AI doesn’t magically fill that gap—it often amplifies the ambiguity.
One surprising thing I've learned is that a fast feedback loop like this:
1. write a system prompt
2. watch the agent do the task, observe what it gets wrong
3. update the system prompt to improve the instructions
is remarkably useful in helping people write effective system prompts. Being able to watch the agent succeed or fail gives you real-time feedback about what is missing from your instructions, in a way that anyone who has ever taught or managed professionally will instantly grok.
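To make that loop concrete, here's a minimal sketch in Python. The `run_agent` helper, the example prompt, and the task are all hypothetical placeholders for whatever agent framework you actually use; the point is the observe-and-revise cycle, not any particular API.

```python
# Minimal sketch of the write / watch / revise loop described above.

def run_agent(system_prompt: str, task: str) -> str:
    """Hypothetical stand-in: replace with a real call to your agent
    framework or LLM API, returning the agent's output for inspection."""
    return f"[agent ran with {len(system_prompt)} chars of instructions on: {task!r}]"

# Step 1: write a first draft of the system prompt.
system_prompt = "You are a release assistant. Summarize changelogs tersely."
task = "Summarize the changes between v1.2 and v1.3."

for attempt in range(1, 4):  # a handful of manual iterations is usually enough
    # Step 2: watch the agent do the task and observe what it gets wrong.
    output = run_agent(system_prompt, task)
    print(f"--- attempt {attempt} ---\n{output}\n")

    if input("Did the agent succeed? (y/n) ").strip().lower() == "y":
        break

    # Step 3: fold the observed failure back into the instructions.
    gap = input("What instruction was missing? ")
    system_prompt += f"\n- {gap}"

print("Final system prompt:\n" + system_prompt)
```

Each pass through the loop turns a concrete failure you just watched into a concrete new instruction, which is exactly the real-time feedback described above.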