CGamesPlay a day ago

As someone who uses AI coding tools daily and has done a fair amount of experimentation with different approaches (though not Devin), I feel like this tracks pretty well. The problem is that Devin and other "agentic" approaches take on more than they can handle. The best AI coders are positioned as tools for developers, rather than replacements for them.

GitHub Copilot is "a better tab complete". Sure, it's a neat demo that it can produce a fast inverse square root, but the real utility is that it completes repetitive code. It's like having a dynamic snippet library always available that I never have to configure.
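
For a concrete example (purely illustrative, not actual Copilot output): after you type the first field mapping by hand, the remaining near-identical lines are exactly the kind of thing a tab-complete tool fills in.

    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str
        email: str

    def to_row(user: User) -> dict:
        return {
            "id": user.id,        # typed by hand...
            "name": user.name,    # ...then lines like these are the
            "email": user.email,  # repetitive completions it suggests
        }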

Aider is the next step up the abstraction ladder. It can edit in more locations than just the current cursor position, so it can perform some more high-level edit operations. And although it uses a smarter model than Copilot, it still isn't very "smart" at the end of the day, and will hallucinate functions and make pointless changes when you give it a problem to solve.

frereubu a day ago

When I tried Copilot, the "better tab complete" felt quite annoying, in that the constantly changing suggested completion kept dragging my focus away from what I was writing. That clearly doesn't happen for you. Was that something you got used to over time, or did it just never bother you? There were elements of it I found useful, but I just couldn't get over the flickering of my attention between what I was doing and the suggested completions.

Edit: I also really want something that takes the existing codebase in the form of a VSCode project / GitHub repo and uses that as a basis for suggestions. Does Copilot do that now?

  • macNchz a day ago

    I tried to get used to the tab completion tools a few times but always found them distracting, like you describe. Often I'd have a complete thought, start writing the code, get a suggested completion, start reading it, realize it was wrong, and by then I'd have lost my initial thought, or at least have to pause and bring myself back to it.

    I have, however, fully adopted chat-to-patch style workflows like Aider, I find it much less intrusive and distracting than the tab completions, since I can give it my entire thought rather than some code to try to complete.

    I do think there's promise in more autonomous tools, but at present they still very much fall into the compounding-error traps that agents often do.

  • CGamesPlay a day ago

    I have the automatic suggestions turned off. I use a keybind to activate it when I want it.
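
    Concretely, in VS Code that's roughly the following (setting and command names are from memory, so double-check them):

        // settings.json: stop inline suggestions from appearing as you type
        { "editor.inlineSuggest.enabled": false }

        // keybindings.json: summon a suggestion only when you ask for one
        [ { "key": "alt+\\", "command": "editor.action.inlineSuggest.trigger" } ]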

    > existing codebase in the form of a VSCode project / GitHub repo and uses that as a basis for suggestions

    What are you actually looking for? Copilot uses "all of github" via training, and your current project in the context.

    • frereubu a day ago

      > I have the automatic suggestions turned off. I use a keybind to activate it when I want it.

      I didn't realise you could do that. Might give it another go.

      > Copilot uses "all of github" via training, and your current project in the context.

      The current project context is the bit I didn't think it had. Thanks!

  • wrsh07 a day ago

    For Cursor you can chat and ask @codebase and it will do RAG (or equivalent) to answer your question.
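
    Roughly, "RAG (or equivalent)" means: rank the project's files against your question and paste the best matches into the model's context. A toy sketch of that retrieval step (real tools use vector embeddings; plain token overlap keeps this self-contained, and the file extensions are just examples):

        import os, re
        from collections import Counter

        def tokens(text: str) -> Counter:
            # crude tokenizer: identifier-like words, lowercased
            return Counter(re.findall(r"[A-Za-z_]\w+", text.lower()))

        def top_files(root: str, question: str, k: int = 3) -> list[str]:
            q = tokens(question)
            scored = []
            for dirpath, _, names in os.walk(root):
                for name in names:
                    if not name.endswith((".py", ".ts", ".js")):
                        continue
                    path = os.path.join(dirpath, name)
                    try:
                        body = open(path, encoding="utf-8").read()
                    except OSError:
                        continue
                    # score = overlap between question tokens and file tokens
                    overlap = sum(min(n, q[t]) for t, n in tokens(body).items() if t in q)
                    scored.append((overlap, path))
            return [p for s, p in sorted(scored, reverse=True)[:k] if s > 0]

        # The chosen files are then concatenated into the prompt as context.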

  • goosejuice a day ago

    Copilot is also very slow. I'm surprised people use it to be honest. Just use Cursor.

    • pindab0ter a day ago

      Cursor requires you to use their specific IDE though, doesn't it? With Copilot I don't have to switch contexts, as it lives in my JetBrains IDE.

      • goosejuice a day ago

        It's just VS Code. I greatly prefer vim, but the difference between vim + AI tools and Cursor is just a no-brainer in terms of productivity. Cursor isn't without problems, but it's leagues ahead of the competition in my opinion.

  • mattnewton a day ago

    I would try Cursor. It's pretty good at copy-pasting the relevant parts of the codebase in and out of the chat window. I have the tab autocomplete disabled.

  • Aeolun a day ago

    Cursor tab does that. Or at least, it takes other open tabs into account when making suggestions.

  • sincerely a day ago

    I've been very impressed with the Gemini autocomplete suggestions in Google Colab, and it doesn't feel any more or less distracting than any IDE's built-in tab suggestions.

    • verdverm a day ago

      I think a lot of people who are enabling Copilot in VS Code (like I did a few days ago) are experiencing "suggested autocomplete as I type" for the first time, where before there was no grey text appearing below what they were writing.

      It is a huge distraction, especially if it changes as I write more. I turned it off almost immediately.

      I deeply regret turning on Copilot in VS Code. It (M$) immediately weaseled its way into so many places and settings, and I'm still trying to scale it back. Super annoying and distracting. I'd prefer a much more opt-in approach for each feature than what they did.

the_af a day ago

> The best AI coders are positioned as tools for developers, rather than replacements for them.

I agree with this. However, we must not delude ourselves: corporate is pushing for replacement, so there will be a big push to improve on tools like Devin. This is not a conspiracy theory; in many companies (my wife's, for example) they are openly stating it: we are going to reduce (aka "lay off") the engineering staff and use as many AI solutions as possible.

I wonder how many of us, here, understand that many jobs are going away if/when this works out for the companies. And the usual coping mechanism, "it will only be for low hanging fruit", "it will never happen to me because my $SKILL is not replaceable", will eventually not save you. Sure, if you are a unique expert on a unique field, but many of us don't have that luxury. Not everyone can be a cream-of-the-crop specialist. And it'll be used to drive down salaries, too.

  • lolinder a day ago

    I remember when I was first getting started in the industry the big fear of the time was that offshoring was going to take all of our jobs and drive down the salaries of those that remained. In fact the opposite happened: it was in the next 10 years that salaries ballooned and tech had a hiring bubble.

    Companies always want to reduce staff and bad companies always try to do so before the solution has really proven itself. That's what we're seeing now. But having deep experience with these tools over many years, I'm very confident that this will backfire on companies in the medium term and create even more work for human developers who will need to come in and clean up what was left behind.

    (Incidentally, this also happened with offshoring— many companies ended up with large convoluted code bases that they didn't understand and that almost did what they wanted but were wrong in important ways. These companies needed local engineers to untangle the mess and get things back on track.)

    • senordevnyc a day ago

      > But having deep experience with these tools over many years, I'm very confident...

      No one has had deep experience with these tools for any amount of time, let alone many years. They're literally just now hitting the market and are rapidly expanding their capabilities. We're at a fundamentally different place than we were just twelve months ago, and there's no reason to think 2025 will be any different.

      • lolinder a day ago

        I was building things with GPT-2 in 2019. I have as much experience engineering with them as anyone who wasn't an AI researcher before then.

        And no, we're not at a fundamentally different place than we were just 12 months ago. The last 12 months had much slower growth than the 12 months before that, which had slower growth than the 12 months before that. And in the end these tools have the same weaknesses that I saw in GPT-2, just to a lesser degree.

        The only aspect in which we are in a fundamentally different place is that the hype has gone through the roof. The tools themselves are better, but not fundamentally different.

    • the_af a day ago

      I think it's qualitatively different this time.

      Unlike with offshoring, this is a technological solution, which understandably is received more enthusiastically on HN. I get it. It's interesting as tech! And it's achieved remarkable things. But unlike with offshoring (which is a people thing) or magical NOCODE/CASE/etc "solutions", it seems the consensus is that AI coding assistants will eventually get there. At least a portion of even HN seems to think so. And some are cheering!

      The coping mechanism seems to be "it won't happen to me" or "my knowledge is too specialized", but I think this will become increasingly false. And even if your knowledge is too specialized to be replaced by AI, most engineers aren't in that position. "Well, become more specialized" is unrealistic advice, and in any case, the employment pool will shrink.

      PS: I am offshoring (in a way). I'm not based in the US but I work remotely for a US company.

      • lolinder a day ago

        > But unlike with offshoring (which is a people thing) or magical NOCODE/CASE/etc "solutions", it seems the consensus is that AI coding assistants will eventually get there.

        There's no consensus on that point. There are a few loud hype artists, most of whom are employed in AI and so have conflicts of interest, and who are also pre-filtered to be true believers. Their logic is basically "See this trend? Trends continue, so this is inevitable!"

        That's bad logic. Trends do not always continue, they often slow or reverse, and this one is showing all signs of doing so already. OpenAI has come straight out and said that they don't expect to see another jump like GPT-3 to 4, and have resorted to throwing more tokens at the problems, which works with diminishing returns. I do not expect to see a return to the rapid growth we had for a year or two there.

        > PS: I am offshoring (in a way). I'm not based in the US but I work remotely for a US company.

        Yes, and this is a good example: there's a place for offshoring, but it didn't replace US devs. The same thing will happen here.

        • senordevnyc a day ago

          > Trends do not always continue, they often slow or reverse, and this one is showing all signs of doing so already. OpenAI has come straight out and said that they don't expect to see another jump like GPT-3 to 4, and have resorted to throwing more tokens at the problems, which works with diminishing returns. I do not expect to see a return to the rapid growth we had for a year or two there.

          This feels like the declaration of someone who has spent almost no time playing with these models or keeping up with AI over the last two years. Go look at the benchmarks and leaderboards for the last 18 months and tell me we're not progressing far beyond GPT-4. Meanwhile, models are also getting faster, cheaper, gaining multi-modal capabilities, becoming cheaper to train for a given capability, etc.

          And of course there are diminishing returns: the latest public models score in the 90s on many of their benchmarks!

  • nyarlathotep_ a day ago

    > I wonder how many of us, here, understand that many jobs are going away if/when this works out for the companies. And the usual coping mechanism, "it will only be for low hanging fruit", "it will never happen to me because my $SKILL is not replaceable", will eventually not save you. Sure, if you are a unique expert on a unique field, but many of us don't have that luxury. And it'll be used to drive down salaries, too.

    Yeah it's maddening.

    The cope is bizarre too: "writing code is the least important part of the job"

    Ok then why does nearly every company make people write code for interviews or do take home programming projects?

    Why do people list programming languages on their resumes if it's "least important"?

    Also bizarre to see people cheering on their replacements as they use all this stuff.

    • s1mplicissimus a day ago

      > Ok then why does nearly every company make people write code for interviews or do take home programming projects?

      For the same reason they pose leetcode problems to "test" an applicant's skill, or have them write mergesort on a chalkboard by hand. It gives them a warm fuzzy feeling in the tummy because now they can say "we did something to check they are competent". Why, you ask? Well, it's mostly impossible to come up with a test to verify a competency you don't have yourself. Imagine you can't distinguish red and green, are not aware of it, but want to hire people who can. That's their situation, but they cannot admit it - because it would be clear evidence that they are a poor fit for their current role. Use this information responsibly ;)

      > Why do people list programming languages on their resumes if it's "least important"?

      You put the programming languages in there alongside the HR-soothing stuff because you hope that an actual software person gets to see your resume and gives you an extra vote for being a good match. Notice that most guides recommend a relatively small amount of technical content vs. lots of "using my awesomeness I managed to blafoo the dingleberries in a more efficient manner to earn the company a higher bottom line".

      If you don't want to be a software developer, that's fine. But your questions point me towards the conclusion that you don't know a lot about software development in the first place, which doesn't speak well for your ability to estimate how easy it will be to automate it using LLMs.

      • the_af a day ago

        Arguing about programming is not the point, in my opinion.

        When AI becomes able to do most non-programming tasks too, say design or solving open-ended problems (yeah, except in trivial cases it cannot -- for now), we can have this conversation again...

        I think saying "well, programming is not important, what matters is $THING" is a coping mechanism. Eventually AI will do $THING acceptably enough for the bean counters to push for more layoffs.

    • dimitri-vs 19 hours ago

      Is spending 4 years of your life on education that will likely only be 10-20% applicable to your job any less bizarre? It's just another hoop employers want to see you capable of jumping through.

      If you ignore the syntax, programming is just writing detailed instructions. Just because AI is able to translate English to code doesn't mean the 100s of decisions that need to be made go away. Someone still needs to write very detailed instructions even if they are in English and it sure isn't going to be the people sitting in meetings all day.

      And let's pretend that I can now be 10x more productive with AI. Great, now I can ship 10x more features in the same timeframe and nothing changes - the development backlog is literally infinite. There are always more features or bugs to work on.

      • hatefulmoron 18 hours ago

        > Just because AI is able to translate English to code doesn't mean the 100s of decisions that need to be made go away. Someone still needs to write very detailed instructions even if they are in English and it sure isn't going to be the people sitting in meetings all day.

        What makes you think it will be you? The machines seem increasingly capable of converting English into different English, and if we take it as a given that they can convert English into code... what are you there for? The people sitting in meetings might as well talk to the machine, to the extent they're willing to talk to you.

        To be clear, the professional "meeting participants" are as much on the chopping block as we are, although that's not commonly pointed out.

qup a day ago

It's weird to talk about Aider hallucinating.

That's down to whatever model you choose to use with it; Aider can use any model you like.