Comment by sfn42
As someone who was fairly negative towards AI until recently, I'd say the problem is how you use it.
If you just tell it to build some vague feature, it's gonna do whatever it's gonna do. Maybe it will be good, but it probably won't. The more specific you are, the better it will do.
Instead of trying to 100x or 1000x your effort, try to just 2x or 3x it. Give it small, specific tasks and check the work thoroughly; use it as an extension of yourself rather than as a separate "agent".
I can tell it to write a function and it'll do pretty well. I can ask it to fix things if it doesn't do them the way I want. This is all easy. Maybe I can even get it to write a whole class at once, or at least in a few iterations.
The key here is that I'm in control: I'm doing the design, I'm making the decisions. I can ask it how I should approach a problem, and often it'll have great suggestions. I can ask it to improve a function I've written and it'll do pretty well. Sometimes really well.
The point is I'm using it as a tool; I'm not using it to do my job for me. I use it to help me think, not to think for me. I don't let it run away from me and edit a whole bunch of files; I keep it on a tight leash.
I'm sold now. I am, indisputably, a better software developer with LLMs in my toolbelt. They help me write better code, faster, and they help me learn things faster and more easily. It's really good. Reliability isn't a problem when I keep a close eye on it; it's only a problem if you try to get it to do a whole big task on its own.