Comment by hn_throw2025 7 days ago

I think you’re right, and perhaps it’s time for the “autocomplete on steroids” tag to be retired, even if something approximating that is happening behind the scenes.

I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.

Out of curiosity, I threw the whole task over to Gemini 2.5 Pro in agentic mode, and it was able to iterate its way to a working solution. The point I'm trying to make is that it uses MCP to interact with the TS compiler and linters, automatically iterating until it has eliminated all errors and warnings. The MCP integrations go further: I can use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime too. The combination makes me think that TypeScript and its tooling are particularly well suited to agentic LLM-assisted development.
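To make "iterate until it has eliminated all errors and warnings" concrete, here's a minimal sketch of that feedback loop in TypeScript. It shells out to tsc and eslint directly rather than going through MCP, and `callModel` is a hypothetical stand-in for the actual LLM call, but the loop structure is the same idea:

```ts
// Rough sketch of the compile-and-fix loop; not the actual MCP protocol.
// `callModel` is a hypothetical stand-in for whatever sends the prompt to the LLM.
import { spawnSync } from "node:child_process";
import { readFileSync, writeFileSync } from "node:fs";

type ModelCall = (prompt: string) => Promise<string>;

async function fixUntilClean(file: string, callModel: ModelCall, maxRounds = 5): Promise<boolean> {
  for (let round = 0; round < maxRounds; round++) {
    // Gather compiler and linter diagnostics, as the agent does via MCP.
    const tsc = spawnSync("bunx", ["tsc", "--noEmit", file], { encoding: "utf8" });
    const lint = spawnSync("bunx", ["eslint", file], { encoding: "utf8" });

    if (tsc.status === 0 && lint.status === 0) return true; // clean build: done

    // Otherwise feed the source plus diagnostics back to the model and apply its rewrite.
    const diagnostics = [tsc.stdout, tsc.stderr, lint.stdout].join("\n").trim();
    const source = readFileSync(file, "utf8");
    const patched = await callModel(
      `Fix every error and warning.\n--- diagnostics ---\n${diagnostics}\n--- source ---\n${source}`,
    );
    writeFileSync(file, patched);
  }
  return false; // still failing after maxRounds
}
```

The real agentic setup is richer (the model sees structured diagnostics and runtime values rather than raw stdout), but the terminate-when-clean loop is the core of it.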

Quite unsettling times, and I suppose it's natural to feel disconcerted about how our roles will change and how we will participate in the development process. The only thing I'm absolutely sure about is that these things won't be uninvented; the genie isn't going back in the bottle.

kaycey2022 6 days ago

How much did that cost you? How long did you spend reading and testing the results?

  • hn_throw2025 6 days ago

    That wasn’t really the point I was getting at, but as you asked… The reading doesn’t involve much more than a cursory (no pun intended) glance, and I didn’t test more than I would have tested something I had written manually.

    • kaycey2022 6 days ago

      Maybe it wasn't your point. But cost of development is a very important factor, considering some of the thinking models burn through tokens like there's no tomorrow. Accuracy is another. Maybe your script is trivial or inconsequential, so it doesn't matter if the output has some bugs as long as it seems to work. There are a lot of throwaway scripts we write, for which LLMs are an excellent tool.