Comment by davnicwil
I share your skepticism and think it's the classic pattern playing out, where people map practices of the previous paradigm to the new one and expect it to work.
Aspects of it will be similar, but it tends toward disruption as it becomes clear the new paradigm just works differently (for both better and worse) and practices need to be rethought accordingly.
I actually suspect the same is true of the entire 'agent' concept. It seems like a regression in the mental model of what is really going on.
We started out with what I think is a more correct one which is simply 'feed tasks to the singular amorphous engine'.
I believe the thrust of agents is anthropomorphism: trying to map the way we think about AI doing tasks to existing structures we comprehend like 'manager' and 'team' and 'specialisation' etc.
Not that it isn't effective in some cases, but it's probably not the right way to think about what is going on, and probably counterproductive overall. Just a limiting abstraction.
When I see, for example, large consultancies describing what they are doing in terms of X thousands of agents, I really question what meaning that has in reality, and whether it's just a mechanism to make the idea digestible and attractive to consulting-service buyers: billable hours mapped to concrete entities, etc.
On the other hand, LLMs are trained on enormous collections of human-authored documents, many of which look like "how to" documents. Perhaps the current generation of LLMs is naturally wired for skill-like human-language instructions.