shermantanktop 2 days ago

This is a consistent pattern.

Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.

Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?

  • SoftTalker 2 days ago

    I would imagine it has a lot to do with the programming language and other technologies in the project. The LLMs have tons of training data on JS and React. They probably have relatively little on Erlang.

    • Blackarea 2 days ago

      Mass of learning material doesn't equal quality, though. The amount of poor React code out there is not to be underestimated. I feel like LLM-generated Gleam code was way cleaner (after some agentic loops due to syntactic misunderstandings) than TS/React, where it's so biased toward producing overly verbose slop.

    • stickfigure 2 days ago

      Even if you're using JS/React, the level of sophistication of the UI seems to matter a lot.

      "Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.

      But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.

      • nathanlied 2 days ago

        This has been my experience, effectively.

        Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposed a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't exactly endowed with the best UX practices.

        Other times, I very much care for things to be done in a very specific way; sometimes due to regulatory constraints, other times because of visual/code consistency, or for other reasons. In those cases, getting the AI to produce specifically what I need feels like an exercise in herding incredibly stubborn cats. It gets done faster (and better) if I do it myself.

    • mchaver a day ago

      I have had good results with languages like Haskell and ReScript, which have much smaller bodies of public code than JS and Python.

    • neilv 2 days ago

      It's like when your frat house has a filing cabinet full of past years' essays.

      Protestant Reformation? Done, 7 years ago, different professor. Your brothers are pleased to liberate you for Saturday's house party.

      Barter Economy in Soviet Breakaway Republics? Sorry, bro. But we have a Red Square McDonald's feasibility study; you can change the names?

  • DANmode a day ago

    If you’re bad at talking to people, you’ll be bad at using present-day LLMs.

    Sorry to anyone whose feelings this hurts.

  • joquarky 2 days ago

    Semantics are very important.

    Not everyone cares to be precise with their semantics.