Comment by qsort
This resonates with me a lot:
> As ever, I wish we had better tooling for using LLMs which didn’t look like chat or autocomplete
I think part of the reason I was initially more skeptical than I ought to have been is that chat is such a garbage modality. LLMs started to "click" for me with Claude Code/Codex.
A "continuously running" mode that would ping me would be interesting to try.
On the one hand, I agree with this. The chat UI is very slow and inefficient.
But on the other, given what I know about these tools and how error-prone they are, I simply refuse to give them access to my system, let them run commands, or have them take any action on my behalf. Partly due to security concerns, partly due to privacy, but mostly out of distrust that they will do the right thing. When they screw up in a chat, I can clean up the context and try again; reverting a deleted file or a messed-up Git repo is much harder. This is how you get a dropped database during a code freeze...
The idea of giving any of these corporations such privileges is unthinkable to me. It seems that most people either don't care about this or are willing to accept it as the price of admission.
I experimented with Aider and a self-hosted model a few months ago and wasn't impressed. I imagine the experience with SOTA hosted models is much better, but I'll probably use a sandbox next time I look into this.