Comment by copypaper 14 hours ago

Yeah, I don't understand how people are "leaving it running overnight" to successfully implement features. There just seems to be a large disconnect between people who are all in on AI development and those who aren't. I have a suspicion that the former are using Python/JS and the features they are implementing are simple CRUD APIs, while the latter are working with more complex systems and languages.

I think the problem is that despite feeding it all the context and having all the right MCP agents hooked up, there isn't a human in the loop. So it will just reason against itself, making these laughably stupid decisions. For simple boilerplate tasks this isn't a problem, but as soon as the scope goes beyond a CRUD/boilerplate problem, the whole thing crumbles.

physix 9 hours ago

I'd really like to know which use cases work and which don't. When folks say they use agentic AI to churn through tokens and automate virtually the entire SDLC, are they just cherry-picking the situations that turned out well, or do they really have prompting and workflow approaches that increase their productivity 10-fold? Or, as you mention, is it possibly a niche area that works well?

My personal experience over the past five months has been very mixed. If I "let 'er rip," it's mostly junk I need to refactor or redo by micromanaging the AI. At the moment, at least for what I do, AI is like a fantastic calculator that speeds up your work, but you should still be the one pushing the buttons.

  • orderone_ai 6 hours ago

    Or - crazy idea here - they're just full of it.

    I haven't seen an LLM stay on task anywhere near that long, like... ever. The only ML-related thing that actually works better left running overnight, in my experience, is training.