Comment by victorbjorklund 13 hours ago

"It’s becoming clear that real-world agentic systems work best when multiple agents collaborate, rather than having one agent attempt to do everything."

In a recent episode of Practical AI with the people behind All Hands:

"...when the Open Hands project started out, we were kind of on this bandwagon of trying to create a big agentic framework that you could use to define lots of different agents. You could have your debugging agent, you could have your software architect agent, you could have your browsing agent, and all of these things like this. And we actually implemented a framework where you could have one agent delegate to another agent, and then that agent would go off and do its task, and things like this.

One somewhat surprising thing is how ineffective this paradigm ended up being, from two perspectives. The first perspective, and this is specifically for the case of software engineering, there might be other cases where this would be useful, is effectiveness: we found that having a single agent that just has all of the necessary context, the ability to write code, use a web browser to gather information, and execute code, ends up being able to do a pretty large swath of tasks without a lot of specific tooling and structuring around the problems."

https://practicalai.fm/310

Not saying it is wrong. But I don't think it is something that is "clear" and can be taken for granted, so some benchmarks or reasoning for why would have been great.

segmenta 12 hours ago

Thanks for the pointer. We do agree that not all agentic systems should be multi-agent.

Having said that, from our experience we see that for complex workflows, e.g. customer support for enterprises, both quality and maintainability stand to gain when the system is decomposed into smaller, scoped agents. We see a parallel to this in humans as well: when we call into customer support, we get routed to different human agents based on our query.

OpenAI says something similar in their recent guide on building agents [0]: "For many complex workflows, splitting up prompts and tools across multiple agents allows for improved performance and scalability. When your agents fail to follow complicated instructions or consistently select incorrect tools, you may need to further divide your system and introduce more distinct agents."

A relevant benchmark here might be the Instruction Following benchmark: https://scale.com/leaderboard/multichallenge. Even the best reasoning models top out at ~60% accuracy on this.

The options to improve accuracy, then, are to (a) fine-tune a model on a task-specific dataset, or (b) decompose the problem into smaller sub-problems (divide-and-conquer) - the latter is more practical and maintainable.
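As a rough illustration of option (b), here is a minimal sketch of a router agent dispatching to smaller scoped agents, mirroring the call-center analogy above. All names (`Agent`, `route`, the keyword table) are hypothetical and not any specific framework's API; a real system would route with an LLM classifier rather than keyword matching.

```python
# Sketch: decompose one big prompt into small, scoped agents plus a router.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    instructions: str            # a small scoped prompt instead of one giant one
    handle: Callable[[str], str]

billing = Agent("billing", "Handle invoices and refunds.",
                lambda q: f"[billing] resolved: {q}")
shipping = Agent("shipping", "Handle delivery status and returns.",
                 lambda q: f"[shipping] resolved: {q}")

# Router: its only job is picking the right scoped agent, like a human
# call-center operator routing callers. (Keyword matching stands in for
# an LLM-based classifier here.)
ROUTES = {"refund": billing, "invoice": billing,
          "delivery": shipping, "return": shipping}

def route(query: str) -> str:
    for keyword, agent in ROUTES.items():
        if keyword in query.lower():
            return agent.handle(query)
    return "[router] escalate to human"

print(route("Where is my delivery?"))
```

Each sub-agent's instruction set stays small enough to follow reliably, which is the maintainability win being claimed.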

[0] https://cdn.openai.com/business-guides-and-resources/a-pract...
