Comment by fragmede 7 days ago

I suspect that if you're a vim user those friction points are a bit different. For me, Aider's git auto-commit and /undo command are what sell it at this current juncture of technology. OpenHands looks promising, though rather complex.
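
Concretely, the loop I mean looks something like this (the file name and prompt are invented, but the auto-commit and /undo behavior is how aider actually works):

    $ aider utils.py
    > add retry logic to fetch_page
    (aider edits utils.py and auto-commits the change with a generated message)
    > /undo
    (aider reverts that last auto-commit, so the repo is back where it was)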

movq 7 days ago

The (relative) simplicity is what sells aider for me (it also helps that I use neovim in tmux).

It was easy to figure out exactly what it's sending to the LLM, and I like that it does one thing at a time. I want to babysit my LLMs and those "agentic" tools that go off and do dozens of things in a loop make me feel out of control.

  • charlie0 7 hours ago

    I like to be the human in the loop, and every time it does something I don't like, I add a rule to conventions.md. Over time, I watch it less and less.
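
    For example, mine has accumulated rules along these lines (the specific rules here are made up; yours will be project-specific), loaded as read-only context at startup:

        # conventions.md
        - Never add a new dependency without asking first.
        - Use the existing logger; no print() calls in library code.
        - Write tests for any new public function before committing.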

  • ayewo 7 days ago

    I like your framing about “feeling out of control”.

    For the occasional frontend task, I don’t mind being out of control when using agentic tools. I guess this is the origin of Karpathy’s vibe coding moniker: you surrender to the LLM’s coding decisions.

    For backend tasks, which are my bread and butter, I certainly want to know what's being sent to the LLM, so it's just easier to use the chat interface directly.

    This way I am fully in control. I can cherry-pick the good bits out of whatever the LLM suggests, or redo my prompt to get better suggestions.

    • fragmede 6 days ago

      How do you get the "good bits" out without a diff/patch file? Or do you ask the LLM for one and apply it manually?

      • ayewo 6 days ago

        Basically what antirez described about 4 days ago in this thread https://news.ycombinator.com/item?id=43929525.

        So this part of my workflow is intentionally fairly labor-intensive, because it involves a lot of copy-pasting between my IDE and the chat interface in a browser.

        • fragmede 6 days ago

          From the linked comment: > Mandatory reminder that "agentic coding" works way worse than just using the LLM directly

          just isn't true. If everything else were equal, it might be, but it turns out that system prompts are quite powerful in shaping how an LLM behaves. ChatGPT with a blank user-entered system prompt behaves differently (read: worse at coding) than one with a tuned system prompt. Aider/Copilot/Windsurf/etc. all ship custom system prompts that make them more capable, not less, compared to using a raw web browser, and they also avoid the overhead of copy-pasting.
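
          To make that concrete, here's a minimal sketch of the difference using the OpenAI Python client (the model name and system prompt text are placeholders; real tools ship far longer, carefully tuned prompts):

            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
            ask = "Write a Python function that parses a CSV line into fields."

            # Bare request: no system prompt, like pasting into a blank chat window.
            bare = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": ask}],
            )

            # Same request behind a tuned system prompt, which is roughly what
            # aider/Copilot/Windsurf do on your behalf before your prompt is sent.
            tuned = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": (
                        "You are an expert Python engineer. Return complete, runnable "
                        "code, handle edge cases, and explain nothing unless asked."
                    )},
                    {"role": "user", "content": ask},
                ],
            )

            print(bare.choices[0].message.content)
            print(tuned.choices[0].message.content)

          The user prompt is identical in both calls; only the system message differs, and that's before counting the repo-map and diff-application machinery those tools layer on top.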