Comment by rbren a day ago

I'm one of the creators of OpenHands (fka OpenDevin). I agree with most of what's been said here with regard to software agents in general.

We are not even close to the point where AI can "replace" a software engineer. Their code still needs to be reviewed and tested, at least as much as you'd scrutinize the code of a brand new engineer just out of boot camp. I've talked to companies who went all in on AI engineers, only to realize two months later that their codebase was rotting because no one was reviewing the changes.

But once you develop some intuition for how to use them, software agents can be a _massive_ boost to productivity. ~20% of the commits to the OpenHands codebase are now authored or co-authored by OpenHands itself. I especially love asking it to do simple, tedious things like fixing merge conflicts or failing linters. It's great at getting an existing PR over the line.

It's also important to keep in mind that these agents are literally improving on a _weekly_ basis. A few weeks ago we were at the top of the SWE-bench leaderboard; now there are half a dozen agents that have pulled ahead of us. And we're one launch away from leapfrogging back to the top. Exciting times!

https://github.com/All-Hands-AI/OpenHands

jebarker a day ago

> code still needs to be reviewed and tested, at least as much as you'd scrutinize the code of a brand new engineer just out of boot camp

> ..._massive_ boost to productivity. ~20% of the commits to the OpenHands codebase are now authored or co-authored by OpenHands itself.

I'm having trouble reconciling these statements. Where does the productivity boost come from, since the reviewing burden seems much greater than it would be if you knew the commits were coming from a competent human?

  • lars512 a day ago

    There are often a lot of small fixes that aren't time-efficient to do by hand, but where the solution is not much code and is quick to verify.

    If the cost of setting a coding agent (e.g. aider) on a task, seeing whether it reaches a quick solution, and just aborting if it spins out is small, you can solve a subset of these issues very quickly instead of leaving them in issue tracking to grow stale (see the sketch at the end of this comment). That lets you up the polish on your work.

    That's still quite a different story from having it do the core, most important part of your work. That feels a little further away. One of the challenges is the scout rule: the refactoring alongside a change that leaves the codebase nicer. I feel like today it's easier to get a correct change that slightly degrades codebase quality than one that maintains it.
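
    A minimal sketch of that workflow (my own illustration, not from the thread; it assumes aider is installed and that its --message/--yes flags behave as described, which may vary by version, and the example instruction and file name are made up):

    ```python
    # Fire-and-forget a small fix with a coding agent, giving up if it doesn't converge quickly.
    import subprocess

    def try_quick_fix(repo_dir: str, instruction: str, timeout_s: int = 300) -> bool:
        """Run aider non-interactively on one small task; abort after timeout_s."""
        try:
            result = subprocess.run(
                ["aider", "--yes", "--message", instruction],  # flag names may differ by aider version
                cwd=repo_dir,
                timeout=timeout_s,
            )
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            return False  # it "spun out" -- abort and leave the issue for a human

    # e.g. try_quick_fix(".", "Fix the failing flake8 errors in utils.py")
    ```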

    • jebarker a day ago

      Thanks - this all makes sense - I still don't feel like this would constitute a massive productivity boost in most cases, since it's not fixing time-consuming major issues. But I can see how it's nice to have.

      • rbren a day ago

        The bigger win comes not from saving keystrokes, but from saving you from a context switch.

        Merge conflicts are probably the biggest one for me. I put up a PR and move on to a new task. Someone approves, but now there are conflicts. I could switch off my task, spend 5-10 min remembering the intent of this PR and fixing the issues. Or I could just say "@openhands fix the merge conflicts" and move back to my new task.

        • svieira a day ago

          The issue is that you still need to review the fixed PR (or someone else does), which means you've only deferred the context switch, not eliminated it. If the fix lands as a new commit, reviewing just that commit is possible (whereas if it rebases, you have to remember your old SHA to see what changed).

          Playing the other side, pipelining is real.

  • lolinder a day ago

    I haven't started doing this with agents, but with autocomplete models I know exactly what OP is talking about: you stop trying to use models for things that models are bad at. A lot of people complain that Copilot is more harm than good, but after a couple of months of using it I figured out when to bother and when not to bother and it's been a huge help since then.

    I imagine the same thing applies to agents. You can waste a lot of time by giving them tasks that are beyond them and then having to review complicated work that is more likely to be wrong than right. But once you develop an intuition for what they can and cannot do you can act appropriately.

  • drewbug01 a day ago

    I suspect that many engineers do not expend significant energy on reviewing code, especially if the change is lengthy.

  • linsomniac a day ago

    > burden seems much greater than...

    Because the burden is much lower than if you were authoring the same commit yourself without any automation?

    • jebarker a day ago

      Is that true? I'd like to think my commits are less burdensome to review than those of a fresh-out-of-boot-camp junior dev, especially if all that's being done is fixing linter issues. Perhaps there's a small benefit, but it doesn't seem like a major productivity boost.

      • ErikBjare a day ago

        A junior dev is not a good approximation of the strengths and weaknesses of these models.

sureglymop 10 hours ago

My biggest issue is just how often these agents make subtle, hard-to-spot mistakes.

It can seem great for certain tasks at first. Yesterday I had to add license headers to the top of a lot of source code files. The reason I let the AI try is that the repository contained lots of different programming languages.

It was able to do this, but I then realized it had simply dropped the last sentence of the text it was supposed to add.
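
For what it's worth, that particular omission is cheap to catch mechanically. A rough sketch of the kind of check I mean (the header sentences, extensions, and repo-root path are hypothetical placeholders):

```python
# Verify every source file starts with the complete license header, so a
# silently dropped sentence gets flagged instead of slipping through review.
from pathlib import Path

LICENSE_SENTENCES = [  # hypothetical header text
    "Copyright (c) 2025 Example Corp.",
    "Licensed under the Apache License, Version 2.0.",
]
EXTENSIONS = {".py", ".js", ".go", ".rs", ".java"}

incomplete = []
for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in EXTENSIONS:
        head = path.read_text(errors="ignore")[:2000]
        if not all(sentence in head for sentence in LICENSE_SENTENCES):
            incomplete.append(path)

print(f"{len(incomplete)} files with a missing or incomplete header")
```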

bufferoverflow a day ago

We've seen exponential improvements in LLMs' coding abilities. They went from almost useless to somewhat useful in like two years.

Claude 3.5 is not bad really. I wanted to do a side project that has been on my mind for a few years, and Claude coded it in like 30 seconds.

So to say "we are not even close" seems strange. If LLMs continue to improve, they will be comparable to mid level developers in 2-3 years, senior developers in 4-5 years.

  • Zanfa a day ago

    > So to say "we are not even close" seems strange. If LLMs continue to improve, they will be comparable to mid level developers in 2-3 years, senior developers in 4-5 years.

    These sorts of things can’t be extrapolated. It could be 6 months, it could be a local maximum / dead end that’ll take another breakthrough in 10 years, like transformers were. See self-driving cars.

  • barrell 16 hours ago

    I think the most you could say is that we’ve had improvements - the jump from GPT-4 to whatever the current model is has definitely not been exponential.

    My experience is actually that they’ve become dramatically less helpful over the past two years (the past year in particular). Claude seems not to have backslid much, but its progression also has not been very fast at all (I’ve noticed no difference since the 3.5 launch despite several updates).

    Everything grows sigmoidally, and I feel we’re well past the inflection point, into a diminishing rate of improvement.

veggieroll a day ago

What does the cost look like for running OpenHands yourself? From your docs, it looks like you recommend Sonnet @ $3 / million tokens. But I could imagine this adding up quickly if you're sending large portions of the repository as context each time.
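
As a rough back-of-envelope (my own assumptions: the per-turn token counts are made up, and $3/M input plus $15/M output was Claude 3.5 Sonnet's list pricing at the time, so check current rates):

```python
# Estimate the cost of one agent task given per-turn token usage.
INPUT_PER_M = 3.00    # USD per million input tokens (assumed)
OUTPUT_PER_M = 15.00  # USD per million output tokens (assumed)

def task_cost(turns: int, input_tokens_per_turn: int, output_tokens_per_turn: int) -> float:
    total_in = turns * input_tokens_per_turn
    total_out = turns * output_tokens_per_turn
    return total_in / 1e6 * INPUT_PER_M + total_out / 1e6 * OUTPUT_PER_M

# e.g. 20 turns, ~30k tokens of repo context in and ~1k tokens out per turn:
print(f"${task_cost(20, 30_000, 1_000):.2f}")  # ≈ $2.10 per task
```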