Comment by welder 7 days ago

87 replies

Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.

imiric 7 days ago

This is the way.

All this IDE churn makes me glad to have settled on Emacs a decade ago. I have adopted LLMs into my workflow via the excellent gptel, which stays out of my way but is there when I need it. I couldn't imagine switching to another editor because of some fancy LLM integration I have no control over. I have tried Cursor and VS Codium with extensions, and wasn't impressed. I'd rather use an "inferior" editor that's going to continue to work exactly how I want 50 years from now.

Emacs and Vim are editors for a lifetime. Very few software projects have that longevity and reliability. If a tool is instrumental to the work that you do, those qualities should be your highest priority - not whether it works well with the latest tech trends.

  • zkry 7 days ago

    Ironically, LLMs have made Emacs even more relevant. The model LLMs use (text) happens to match up with how Emacs represents everything (text in buffers). This opens up Emacs to becoming the agentic editor par excellence. Just imagine: some macro magic around a defcommand and voilà, the agent can do exactly what a user can. If only such a project could have funding like Cursor does...

    • throwanem 7 days ago

      Nothing could be worse for the modern Emacs ecosystem than for the tech industry finance vampires ("VCs," "LPs") to decide there's blood enough there to suck.

      Fortunately, alien space magic seems immune, so far at least. I assume they do not like the taste, and no wonder.

      • imiric 6 days ago

        Why should the Emacs community care whether someone decides to build a custom editor with AI features? If anything this would bring more interest and development into the ecosystem, which everyone would benefit from. Anyone not interested can simply ignore it, as we do for any other feature someone implements into their workflow.

    • imiric 6 days ago

      I'm not sure why you were downvoted. You're right that buffers and everything being programmable makes Emacs an ideal choice for building an AI-first editor. Whether that's something that a typical Emacs user wants is a separate issue, but someone could certainly build a polished experience if they had the resources and motivation. Essentially every Emacs setup is someone's custom editor, and AI features are not different from any other customization.

  • bandoti 6 days ago

    Emacs's diff tools alone are a reason to use the editor. I switch between macOS, Linux, and Windows frequently, so I settled on Emacs and am happy with that choice as well.

  • drob518 6 days ago

    I’ve been using Aidermacs to access Aider in Emacs and it works quite well and makes lots of LLMs available. Claude Sonnet 3.7 has been reasonable for code generation, though there are certainly tasks that it seems to struggle on.

elAhmo 7 days ago

Cursor/Windsurf and similar IDEs and plugins are more than autocomplete on steroids.

Sure, you might not like it and think you as a human should write all the code, but the common experience across the industry in recent months is that productivity in teams using tools like this has greatly increased.

It is not unreasonable to think that someone who decides not to use tools like this will not be competitive in the market in the near future.

  • hn_throw2025 6 days ago

    I think you’re right, and perhaps it’s time for the “autocomplete on steroids” tag to be retired, even if something approximating that is happening behind the scenes.

    I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.

    Out of curiosity, I threw the whole task over to Gemini 2.5 Pro in agentic mode, and it was able to refine its way to a working solution. The point I'm trying to make here is that it uses MCP to interact with the TS compiler and linters in order to automatically iterate until it has eliminated all errors and warnings. The MCP integrations go further: I'm able to use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime too. The combination of these makes me think that TypeScript and its tooling are particularly suitable for agentic LLM-assisted development.

    Quite unsettling times, and I suppose it’s natural to feel disconcerted about how our roles will become different, and how we will participate in the development process. The only thing I’m absolutely sure about is that these things won’t be uninvented with the genie going back in the bottle.

    • kaycey2022 6 days ago

      How much did that cost you? How long did you spend reading and testing the results?

      • hn_throw2025 6 days ago

        That wasn’t really the point I was getting at, but as you asked… The reading doesn’t involve much more than a cursory (no pun intended) glance, and I didn’t test more than I would have tested something I had written manually.

        • kaycey2022 6 days ago

          Maybe it wasn't your point. But cost of development is a very important factor, considering some of the thinking models burn tokens like no tomorrow. Accuracy is another. Maybe your script is kind of trivial/inconsequential so it doesn't matter if the output has some bugs as long as it seems to work. There are a lot of throwaway scripts we write, for which LLMs are an excellent tool to use.

  • LandR 6 days ago

    I use Rider with some built in AI auto-complete. I'd say its hit rate is pretty low!

    Sometimes it auto-completes nonsense, and sometimes I think I'm about to tab-complete a method like FooABC but it actually completes to FoodACD; both return the same type, but it's entirely the wrong call.

    I have to really be paying attention to catch it selecting the wrong one. I really, really hate this. When it works it's great, but every day I get closer to just turning it off out of frustration.

  • alexandreblin 7 days ago
    • elAhmo 6 days ago

      Arguing that ActiveX or Silverlight are comparable to AI, given the changes it has already brought and continues to bring, is definitely a weak argument.

      A lot of people are against change because it endangers their routine, way of working, livelihood, which might be a normal reaction. But as accountants switched to using calculators and Excel sheets, we will also switch to new tools.

  • gregoryl 6 days ago

    Ahh yes, software development, the discipline that famously has difficult to measure metrics and difficulty with long term maintenance. Months indeed.

  • Draiken 6 days ago

    Where are these amazing productivity increases?

    Where is this 2x, 10x or even 1.5x increase in output? I don't see more products, more features, fewer bugs or anything along those lines since this "AI revolution".

    I keep seeing this being repeated ad nauseam without any real backing of hard evidence. It's all copium.

    Surely if everyone is so much more productive, a single-person startup is now equivalent to 1 + X, right?

    Please enlighten me as I'm very eager to see this impact in the real world.

    • chipsrafferty 4 days ago

      There's a bottleneck from all the other roles. Project managers, designers, etc.

      The impact in the real world isn't more product output, it's fewer developers needed for the same output.

  • jillyboel 6 days ago

    > is that productivity in the teams using tools like this has greatly increased

    In the short term. Have fun debugging that mess in a year while your customers are yelling at you! I'll be available for hire to fix the mess you made, which you clearly don't have the capability to understand :-)

    • elAhmo 6 days ago

      Debugging any system is not easy; it's not like technical debt didn't exist before AI, and people will keep writing shitcode in the future as they did in the past. Probably more of it, but there are also more tools that help with debugging.

      Additionally, what you are failing to realise is that not everyone is just vibe coding and blindly accepting what the LLM suggests and deploying it to prod. There are people with a decade-plus of experience who use these tools and have found them to be an accelerator in many areas, from writing boilerplate code to assisting with styling changes.

      In any case, thanks for the heads up, definitely will not be hiring you with that snarky attitude. Your assumption that I have no capability to understand something without any context tells more about you than me, and unfortunately there is no AI to assist you with that.

  • wrasee 7 days ago

    I think you’re arguing against a straw man.

    I don’t think the point was “don’t use LLM tools”. I read the argument here as about the best way to integrate these tools into your workflow.

    Similar to the parent, I find interfacing with a chat window sufficiently productive and prefer that to autocomplete, which is just too noisy for me.

alentred 6 days ago

To be fair, I think most of the value is added by agent modes, not autocomplete. And I agree that AI autocomplete is really quite annoying; personally, I disable it too.

But coding agents can indeed save some time writing well-defined code and be of great help when debugging. Then again, when they don't work on the first prompt, I would likely just write the thing in Vim myself instead of trying to convince the agent.

My point being: I find agent coding quite helpful, really, as long as you don't get overzealous with it.

  • Draiken 6 days ago

    Are you using these in your day job to complete real world tasks or in greenfield projects?

    I simply cannot see how I can tell an agent to implement anything I have to do in a real day job unless it's a feature so simple I could do it in a few minutes. Even then the AI will likely screw it up, since it sucks at dealing with existing code, best practices, library versions, etc.

    • ativzzz 6 days ago

      I've found it useful for doing simple things in parallel. For instance, I'm working on a large typescript project and one file doesn't have types yet. So I tell the AI to add typing to it with a description while I go work on other things. I check back in 5-10 mins later and either commit the changes or correct it.

      Or if I'm working on a full-stack feature and I need some boilerplate to process a new endpoint or a new resource type on the frontend, I have the AI build the API call (similar to the existing calls) and process the data while I work on business logic in the backend. Then when I'm done, the frontend API call is mostly set up already.
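
      To give a sense of what I mean, the kind of frontend boilerplate I hand off looks roughly like this (a hypothetical sketch - the endpoint and type names are made up):

        // Fetch a new resource type from a new endpoint, mirroring the existing API calls.
        export interface Widget {
          id: string
          name: string
        }

        export async function fetchWidgets(): Promise<Widget[]> {
          const res = await fetch("/api/widgets")
          if (!res.ok) {
            throw new Error(`Failed to fetch widgets: ${res.status}`)
          }
          return (await res.json()) as Widget[]
        }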

      I found this works rather well, because it's a list of things in my head that are "to do, in progress" but parallelizable, so I can easily verify what it's doing.

    • int_19h 6 days ago

      SOTA LLMs are broadly much better at autonomous coding than they were even a few months ago. But also, it really depends on what it is exactly you're working on, and what tech is involved. Things are great if you're writing Python or TypeScript, less so with C++, and even less so with Rust and other emerging technologies.

    • klinquist 6 days ago

      I am. I've spent some time developing cursor rules where I describe best practices, etc.

  • ActionHank 6 days ago

    The few times I've tried to use an agent for anything slightly complex, or on a moderately large code base, it just proceeds to smear poop all over the floor, eventually backing itself into a corner.

blitzar 7 days ago

I bound a shortcut to "cursor tab" and enable or disable it as needed. If only the AI were smart enough to learn when I do and don't want it (like Clippy in the MS days) - when you manually toggle it on and off, clear patterns emerge (to me at least) as to when I do and don't want it.

  • jonwinstanley 7 days ago

    How do you do that? Sorry if it's obvious - I've looked for this functionality before and didn't spot it

    • blitzar 7 days ago

      The bottom right says "cursor tab"; you can manually toggle it there (and snooze it for X minutes - an interesting feature). For binding shortcuts: Command/Ctrl + Shift + P, then look for "Enable|Disable|Whatever Cursor Tab" and set shortcuts there.

      Old fashioned variable name / function name auto complete is not affected.

      I considered a small macropad to enable / disable with a status light - but honestly don't do enough work to justify avoiding work by finding / building / configuring / rebuilding such a solution. If the future is this sort of extreme autocomplete in everything I do on a computer, I would probably go to the effort.

      • jonwinstanley 7 days ago

        Thanks!

        The thing that bugs me is when I'm trying to use tab to indent with spaces but I get a suggestion instead.

        I tried to disable caps lock, then remap tab to caps lock, but no joy

nsteel 7 days ago

I can't even get simple code generation to work for VHDL. It just gives me garbage that does not compile. I have to assume this is not the case for the majority of people using more popular languages? Is this because the training data for VHDL is far more limited? Are these "AIs" not able to consume the VHDL language spec and give me actual legal syntax at least?! Or is this because I'm being cheap and lazy by only trying free chatGPT and I should be using something else?

  • kaycey2022 6 days ago

    It's all of that, to some extent or another. LLMs don't update overnight, and as such they lag behind innovations in major frameworks, even in web development. No matter what is said about augmenting their capabilities, their performance using techniques like RAG seems to be lacking. They don't work well with new frameworks either.

    Any library that breaks backwards compatibility in major version releases will likely befuddle these models. That's why I have seen them pin dependencies to older versions, and more egregiously, default to using the same stack to generate any basic frontend code. This ignores innovations and improvements made in other frameworks.

    For example, in TypeScript there is now a new(ish) validation library called arktype. Gemini 2.5 Pro straight up produces garbage code for it. The type-generation function accepts an object/value, but Gemini keeps insisting that it consumes a type.

    So Gemini defines an optional property as `a?: string`, which is what you would see in plain TypeScript. But this will fail in arktype, because it needs its input as `'a?': 'string'`. Asking Gemini to check again is a waste of time, and you will need enough familiarity with JS/TS to understand the error and move ahead.
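
    To make that concrete, the working pattern looks roughly like this (a sketch from memory of arktype's docs, so details may be slightly off):

      import { type } from "arktype"

      // Optional keys are spelled inside the string key, not with TS's `?` modifier.
      const User = type({
        name: "string",
        "age?": "number"
      })

      const out = User({ name: "Alice" })
      if (out instanceof type.errors) {
        // Validation failed; summary describes what went wrong.
        console.error(out.summary)
      } else {
        console.log(out.name)
      }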

    Forcing development into an AI friendly paradigm seems to me a regressive move that will curb innovation in return for boosts in junior/1x engineer productivity.

    • drob518 6 days ago

      Yep, management dreams of being able to make every programmer a 10x programmer by handing them an LLM, but the 10x programmers are laughing because they know how far off the rails the LLM will go. Debugging skills are the next frontier.

    • cube00 6 days ago

    It's fun watching the AI bros try to spin justifications for building (sorry, vibing) new apps in Ruby for no reason other than that the model has so much content going back to 2004 to train on.

  • WD-42 6 days ago

    They are probably really good at React. And because that ecosystem has been in a constant cycle of reinventing the wheel, they can easily pump out boilerplate code because there is just so much of it to train from.

  • drob518 6 days ago

    The amount of training data available certainly is a big factor. If you’re programming in Python or JavaScript, I think the AIs do a lot better. I write in Clojure, so I have the same problem as you do. There is a lot less HDL code publicly available, so it doesn’t surprise me that it would struggle with VHDL. That said, from everything I’ve read, free ChatGPT doesn’t do as well on coding. OpenAI’s paid models are better. I’ve been using Anthropic’s Claude Sonnet 3.7. It’s paid but it’s very cost effective. I’m also playing around with the Gemini Pro preview.

  • TingPing 6 days ago

    It completely fails to be helpful for C/C++. I don’t understand the positivity around it, but it must be trained on a lot of web frameworks.

    • y-curious 6 days ago

      It's very helpful for low-level chores. The bane of my existence is frontend, and generating UI elements on the fly for testing backend work rocks. I like the analogy of it being a junior dev, perhaps even an intern: you should check their work constantly and give them extremely pedantic instructions.

InsideOutSanta 7 days ago

Yeah, I use IntelliJ with the chat sidebar. I don't use autocomplete, except in trivial cases where I need to write boilerplate code. Other than that, when I need help, I ask the LLM and then write the code based on its response.

I'm sure it's initially slower than vibe-coding the whole thing, but at least I end up with a maintainable code base, and I know how it works and how to extend it in the future.

medhir 7 days ago

+100. I’ve found the “chat” interface most productive as I can scope a problem appropriately.

Cursor, Windsurf, etc tend to feel like code vomit that takes more time to sift through than working through code by myself.

Draiken 6 days ago

Same here. It's extremely distracting to see the random garbage that the autocomplete keeps trying to do.

I said this in another comment but I'll repeat the question: where are these 2x, 10x or even 1.5x increases in output? I don't see more products, more features, fewer bugs or anything along those lines since this "AI revolution".

I keep seeing this being repeated ad nauseam without any real backing of hard evidence.

If this was true and every developer had even a measly 30% increase in productivity, it would be like a team of 10 is now 13. The amount of code being produced would be substantially more and as a result we should see an absolute boom in new... everything.

New startups, new products, new features, bugs fixed and so much more. But I see absolutely nothing but more bullshit startups that use APIs to talk to these models with a few instructions.

Please someone show me how I'm wrong because I'd absolutely love to magically become way more productive.

  • nlh 6 days ago

    I am but a small humble minority voice here but perhaps I represent a larger non-HN group:

    I am not a professional SWE; I am not fluent in C or Rust or bash (or even Typescript) and I don't use Emacs as my editor or tmux in the terminal;

    I am just a nerdy product guy who knows enough to code dangerously. I run my own small business and the software that I've written powers the entire business (and our website).

    I have probably gotten AT LEAST a 500-1000% speedup in my personal software productivity over the past year that I've really leaned into using Claude/Gemini (amazing that GPT isn't on that list anymore, but that's another topic...). I am able to spec out new features and get them live in production in hours vs. days and, for bigger stuff, days vs. weeks (or even months). It has changed the pace and the way in which I'm able to build things. I literally wrote an entire image-editing workflow to go from RAW camera shot to fully processed product image on our ecommerce store, which has cut out real, actual dozens of hours of time spent previously.

    Is the code I'm producing perfect? Absolutely not. Do I have 100% test coverage? Nope. Would it pass muster if I were a software engineer at Google? Probably not.

    Is it working, getting to production faster, and helping my business perform better and insanely more efficiently? Absolutely.

    • Draiken 6 days ago

      I think that tracks with what I see: LLMs enable non-experts to do something really fast.

      If I want to, let's say, write some code in a language I've never worked in, an LLM will definitely make me more "productive" by spewing out code way faster than I could write it. Same if I try to quickly learn about a topic I'm not familiar with. Especially if you don't care too much about quality, maintainability, etc.

      But if I'm already a software developer with 15 years of experience dealing with technology I use every day, it's not going to increase my productivity in any meaningful way.

      This is the dissonance I see with AI talk here. If you're not a software developer the things LLMs enable you to do are game-changers. But if you are a good software developer, in its best days it's a smarter autocomplete, a rubber-duck substitute (when you can't talk to a smart person) or a mildly faster google search that can be very inaccurate.

      If you go from 0 to 1, that's literally infinitely better, but if you go from 100 to 105, it's barely noticeable. Maybe everyone reporting these absurd productivity gains is coming from zero or very little knowledge, but for someone who's past that point, I can't believe these claims.

  • [removed] 6 days ago
    [deleted]
nsonha 6 days ago

Your comment is about 2 years late. Autocomplete is not the focus of AI IDEs anymore, even though it has gotten really good with "next edit prediction". People who use AI these days use it for the agentic mode.

admiralrohan 7 days ago

That is interesting. Which tech are you using?

Are you getting irrelevant suggestions? Those autocompletes are meant to predict the things you are about to type.

chironjit 6 days ago

I absolutely hate the agent mode, but I find autocomplete plus asking questions to be the best fit for me. I like to at least know what I'm putting into my codebase, and it genuinely makes me faster because:

1) It stops me overthinking the solution.
2) I can ask it the pros and cons of different solutions.
3) The multi-x speedup means less worry about throwing away a solution or code I don't like and rewriting/refactoring.
4) It's really good at completing certain kinds of "boilerplate-y" code.
5) It removes the need to know the specific language implementation, as long as I know the principle (for example pointers, structs, types, mutexes, generics, etc.). My go-to rule now is that I won't use it if I'm unfamiliar with the principle itself, only if I'm unfamiliar with that language's implementation of it.
6) It's an absolute beast when it comes to debugging simple- to medium-complexity bugs.

xnorswap 7 days ago

AI autocomplete can be infuriating if, like me, you like to browse the public methods and properties by dotting the type. The AI autocomplete sometimes kicks in and starts writing broken code using suggestions that don't exist, which prevents quickly exploring the actual methods available.

I have largely disabled it now, which is a shame, because there are also times it feels like magic, and I can see how it could be a massive productivity lever if it had a tighter confidence threshold before kicking in.

  • prisenco 7 days ago

    If I can, I map it to ctrl-; so I can bring it up when I need it.

    But I found once it was optional I hardly ever used it.

    I use Deepseek or others as a conversation partner or rubber duck, but I'm perfectly happy writing all my code myself.

    Maybe this approach needs a trendy name to counter the "vibe coding" hype.

rco8786 6 days ago

This is where I landed too. Used Cursor for a while before realizing that it was actually slowing me down because the PR cycle took so much longer, due to all the subtle bugs in generated code.

Went back to VSCode with a tuned down Copilot and use the chat or inline prompt for generating specific bits of code.

  • davidmurdoch 6 days ago

    What is a "PR cycle"?

    • klinquist 6 days ago

      open a pull request, reviewer finds a bug and asks for changes, you make changes and re-request a review...

      • davidmurdoch 6 days ago

        That's what I was afraid of. I'd never have thought anyone submitting AI-generated code wouldn't first read it themselves before asking others to review it!

nyarlathotep_ 6 days ago

I'm past the honeymoon stage for LLM autocomplete.

I just noticed CLion moved to a community license, so I re-installed it and set up Copilot integration.

It's really noisy, and somehow the same binding (tab complete) for the built-in autocomplete "collides" with LLM suggestions (with varying latency). It's totally unusable in this state; you'll attempt to populate a single local variable or something and end up with 12 lines of unrelated code.

I've had much better success with VSCode in this area, but the LLM completion suggestions in either are usually pretty poor. I'm not sure if it's because the model used for autocomplete differs or what, but it's not very useful and often distracting, although it looks cool.

kristopolous 7 days ago

Agreed. You may like the arms-length stuff here: https://github.com/day50-dev/llmehelp . shell-hook.zsh and screen-query have been life-changing

I always forget the syntax for things like ssh port forwarding. Now I just describe it at the shell:

$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy

or maybe:

$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 (speed it up 2x) out.webm

I press ctrl+x x and it will replace the English with a suggested command. It's been a total game changer for git, jq, rsync, ffmpeg, regex...
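
For reference, the first one gets replaced with something like this (the exact command varies by model, but this is the shape of it):

$ ssh -L 80:localhost:8080 user@betsy

and the ffmpeg one with something like:

$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 -vf "setpts=0.5*PTS" -af "atempo=2.0" out.webm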

For more involved stuff there's screen-query: confusing crashes, strange terminal errors, weird config scripts - it allows a joint investigation, whereas aider and friends just feel like I'm asking the AI to fuck around.

  • nicce 7 days ago

    This never accesses any extra data and works only when explicitly asked? I consider the terminal the most important part from a privacy perspective, and I haven’t tried any LLM integration yet…

    • kristopolous 7 days ago

      It is intentionally non-agentic and only runs when invoked.

      As for extra data, it sends uname and the name of the process it captures from, such as "nvim" or "ipython", and that's it.

      • kristopolous 6 days ago

        I also realized this morning that shell-hook is good enough to typo-correct. I have that turned on at the shell level (setopt correct), but sometimes it doesn't work, like here:

        git cloen blahalalhah

        I did a ctrl+x x and it fixed it. I'm using openrouter/google/gemma-3-27b-it:free via chutes. Not a frontier model in the slightest.

aqme28 7 days ago

I thought Cursor was dumb and useless too when I was just using autocomplete. It's the "agent chat" in the sidebar where it really shines.

wutwutwat 6 days ago

What folks don't understand, or maybe don't keep in mind, is that for that autocomplete to work, all your code goes up to a third party as you write it or open files. This is one of the reasons I disable it: I want to control what I send by explicitly giving context via the chat side panel. It's also pretty useless most of the time, generating nonsense, and not even consistently at that.

et1337 6 days ago

I was 100% in agreement with you when I tried out Copilot. So annoying and distracting. But Cursor’s autocomplete is nothing like that. It’s much less intrusive and mostly limits itself to suggesting edits similar to ones you’ve already made. It’s a game changer for repetitive refactors where you need to make 50 nearly identical but slightly different changes.

raverbashing 7 days ago

Yeah

AI autocomplete is a feature, not a product (to paraphrase SJ)

I can understand Windsurf getting the valuation as they had their own Codeium model

$B for a VSCode fork? Lol

  • nicce 7 days ago

    Microsoft always seems to be the winner - maybe they predicted all this, and that's why they made the core extensions closed source.

[removed] 7 days ago
[deleted]
vitro 7 days ago

I had turned autocomplete off as well. Way too many times it was just plain wrong and distracting. I'd like it to be turned on for method documentation only, though, where it worked well once the method was completed, but so far I haven't been able to customize it that way.

  • cube00 6 days ago

    I'd be very surprised if the LLM correctly identifies the "why" that method documentation should capture.

whywhywhywhy 7 days ago

Having it on tab was a mistake. Tab completion for snippets is fine because it happens at the end of a line; tab completion in empty text space means you always have to be aware of whether you're in an autocomplete context or not before setting an indent.

aldanor 7 days ago

We have an internal ban on Copilot for IP reasons, and while I was... missing it initially, now just using Neovim without any AI feels fine. Maybe I'll add avante.nvim for a built-in chat box, though.

owendarko 6 days ago

You could also use these AI coding features on a plug-and-play basis with an IDE extension.

For example, VS Code has Cline & Kilo Code (disclaimer: I help maintain Kilo).

JetBrains has Junie, Zencoder, etc.

herdrick 6 days ago

The chat in what tool? Not Cursor nor Windsurf, it sounds like?

anshumankmr 7 days ago

It sometimes works really well, but I have at times been hampered by its autocomplete.

aaomidi 6 days ago

Honestly, the only files I like having this turned on for are unit tests.