Comment by ayewo 7 days ago

I like your framing about “feeling out of control”.

For the occasional frontend task, I don’t mind being out of control when using agentic tools. I guess this is the origin of Karpathy’s vibe coding moniker: you surrender to the LLM’s coding decisions.

For backend tasks, which are my bread and butter, I certainly want to know exactly what the tool is sending to the LLM, so it's just easier to use the chat interface directly.

This way I am fully in control: I can cherry-pick the good bits out of whatever the LLM suggests, or redo my prompt to get better suggestions.

fragmede 6 days ago

How do you get the "good bits" out without a diff/patch file? Or do you ask the LLM for one and apply it manually?

  • ayewo 6 days ago

    Basically what antirez described about 4 days ago in this thread https://news.ycombinator.com/item?id=43929525.

    So this part of my workflow is intentionally fairly labor-intensive: it involves lots of copy-pasting between my IDE and the chat interface in a browser.
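    A minimal sketch of that cherry-picking step, assuming the chat reply uses Markdown code fences (the script and file names here are hypothetical):

        # Hypothetical helper for the manual workflow above: save the chat
        # reply to a file, pull out just the fenced code blocks, and review
        # them before merging anything into the real source by hand.
        import re
        import sys

        FENCE_RE = re.compile(r"```[\w+-]*\n(.*?)```", re.DOTALL)

        def extract_snippets(chat_reply: str) -> list[str]:
            """Return the contents of every Markdown code fence in the reply."""
            return [m.strip() for m in FENCE_RE.findall(chat_reply)]

        if __name__ == "__main__":
            # Usage: python extract_snippets.py llm_reply.txt
            reply_text = open(sys.argv[1], encoding="utf-8").read()
            for i, snippet in enumerate(extract_snippets(reply_text), start=1):
                print(f"--- snippet {i} ---")
                print(snippet)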

    • fragmede 6 days ago

      From the linked comment:

      > Mandatory reminder that "agentic coding" works way worse than just using the LLM directly

      just isn't true. If everything else were equal, it might be, but it turns out that system prompts are quite powerful in influencing how an LLM behaves. ChatGPT with a blank user-entered system prompt behaves differently (read: worse at coding) than one with a tuned system prompt. Aider/Copilot/Windsurf/etc. all have custom system prompts that make them more powerful, not less, compared to using a raw web browser, and they also don't involve the overhead of copy-pasting.
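      A minimal sketch of the same user request sent with and without a tuned system prompt, using the OpenAI Python SDK; the model name and system prompt text are placeholders, not the actual prompts shipped by any of those tools:

          # Placeholder tuned system prompt vs. none at all, same user request.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          USER_PROMPT = "Add retry logic to this HTTP client function: ..."

          def ask(system_prompt: str | None) -> str:
              messages = []
              if system_prompt:
                  messages.append({"role": "system", "content": system_prompt})
              messages.append({"role": "user", "content": USER_PROMPT})
              resp = client.chat.completions.create(model="gpt-4o", messages=messages)
              return resp.choices[0].message.content

          # Compare the two answers: per the claim above, the tuned prompt
          # typically yields tighter, more directly applicable code.
          plain = ask(None)
          tuned = ask("You are a senior backend engineer. Reply with a unified "
                      "diff only, keep changes minimal, and preserve the "
                      "existing code style.")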