When designing an LLM-based agentic system, one of the most important choices you need to make is how much decision-making to encapsulate in the LLM versus in explicit software.
To help frame this, we can think of the choice as a spectrum between two approaches: router-based architectures and orchestrator architectures.
For high-risk, production workflows, we typically recommend router-based designs, reserving orchestrators for applications that demand flexible, general-purpose conversation.
Router-Based Architectures
In router-based agentic systems, the LLM is used only for narrow, well-defined tasks such as intent classification, while explicit software decides how to act on the result and what response to return.
The following is a toy example of an Airline Chatbot Booking agent that uses the ‘Router approach’. We can see that while the LLM helps us to classify the intent of the question from three possible choices, it is ultimately our software that maps this intention to a template text response. Because the LLM is highly constrained, it means the user will experience more consistent behaviour.
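Below is a minimal sketch of what such a router might look like in Python. The intent labels, templates, and model name are illustrative assumptions rather than the article's original code:

```python
# Router approach: the LLM is only a constrained classifier; software owns the response.
from openai import OpenAI

client = OpenAI()

INTENTS = ["book_flight", "change_flight", "cancel_flight"]

# Fixed, software-owned templates: the LLM never writes the final reply.
TEMPLATES = {
    "book_flight": "I can help you book a flight. Which route and date would you like?",
    "change_flight": "I can help you change your booking. Please share your booking reference.",
    "cancel_flight": "I can help you cancel. Please confirm your booking reference.",
}

def classify_intent(user_message: str) -> str:
    """Use the LLM only to pick one of the three known intents."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": "Classify the user's request as exactly one of: "
                           + ", ".join(INTENTS) + ". Reply with the label only.",
            },
            {"role": "user", "content": user_message},
        ],
    )
    label = response.choices[0].message.content.strip()
    return label if label in INTENTS else "book_flight"  # deterministic fallback

def handle(user_message: str) -> str:
    # Software, not the LLM, maps the classified intent to the reply.
    return TEMPLATES[classify_intent(user_message)]

print(handle("I need to move my flight to next Tuesday"))
```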
Orchestrator Architectures
In contrast to the router system, orchestrator agentic systems delegate the decision-making itself to the LLM layer: the model decides how to interpret the request, which agents or tools to invoke, and what response to return.
The following example applies an orchestrator approach to the same toy airline problem. Instead of letting software decide what response is appropriate, decision-making is delegated to the LLM layer. Here we have a multi-agent system where a ‘head’ orchestrator agent triages the user query and hands off to an agent specifically designed for changing flights and it ultimately provides the response to the user.
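A minimal sketch of this multi-agent hand-off, assuming OpenAI's Agents SDK (the agent names and instructions are illustrative, and the SDK interface may vary between versions):

```python
# Orchestrator approach: a 'head' agent triages and hands off to a specialist agent.
from agents import Agent, Runner

flight_change_agent = Agent(
    name="Flight change agent",
    instructions=(
        "You handle requests to change existing flight bookings. "
        "Ask for the booking reference and the new travel date, then confirm the change."
    ),
)

orchestrator = Agent(
    name="Airline triage agent",
    instructions="Triage the customer's request and hand off to the appropriate specialist agent.",
    handoffs=[flight_change_agent],
)

# The LLM layer now classifies, routes, and writes the final response.
result = Runner.run_sync(orchestrator, "I need to move my flight to next Tuesday")
print(result.final_output)
```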
Under this example, the LLM layer is playing the role of classifier, router and response writer. In the Router example, it was just playing the role of classifier (with software handling the rest).
Where possible, we recommend router-based designs because they offer reliability, predictability, efficiency, and ease of testing.
The downsides of router approaches are that they can be rigid and inflexible, and they can struggle with more open-ended problems. A chatbot that always replies with exactly the same responses might be considered boring or stagnant by its users.
Orchestrator designs have powerful capabilities: they can plan multi-step work, iteratively combine the outputs of those steps, and respond flexibly to open-ended, conversational requests.
Using frameworks like Pydantic-AI or OpenAI's Agents SDK makes orchestration straightforward and fast to implement, which makes it great for demos or proofs of concept.
The downsides of this approach are the mirror image of the router advantages: behaviour is less predictable and consistent, the system is harder to test, and delegating more work to the LLM typically costs more in latency and tokens.
Reader note: while model capability is changing quickly, it is unlikely that the guidance below will change in the near future.
Determine the scope of your problem: can the set of user intents be enumerated in advance, and can the desired behaviour for each be expressed in code? Answering 'yes' to either question suggests a router-based approach would be better.
Where the problem allows, we recommend using router approaches, and as a general principle, if a part of your system can be expressed in code, express it in code (i.e. don't overuse LLMs when they are not required).
Where the limits of these are reached, some of the open-ended orchestrator advantages can be replicated in a constrained manner: for example, the LLM can vary the wording of a response within software-defined guardrails, or choose a tool from a fixed, software-validated menu, as sketched below.
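A minimal sketch of the second idea, assuming a single allow-listed tool and a JSON-only prompt (the tool, model name, and schema are illustrative assumptions):

```python
# Constrained tool choice: the LLM may pick a tool, but only from a fixed allow-list,
# and software validates and executes the call deterministically.
import json
from openai import OpenAI

client = OpenAI()

def lookup_booking(reference: str) -> str:
    return f"Booking {reference}: LHR -> JFK, 21 June"  # stubbed data for the sketch

ALLOWED_TOOLS = {"lookup_booking": lookup_booking}

def choose_tool(user_message: str) -> dict:
    """Ask the LLM to pick one allowed tool and its arguments, as JSON only."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {
                "role": "system",
                "content": 'Reply with JSON only, e.g. {"tool": "lookup_booking", "reference": "<ref>"}',
            },
            {"role": "user", "content": user_message},
        ],
    )
    return json.loads(response.choices[0].message.content)

def handle(user_message: str) -> str:
    call = choose_tool(user_message)
    tool = ALLOWED_TOOLS.get(call.get("tool"))
    if tool is None:
        return "Sorry, I can't help with that yet."  # software-owned fallback
    return tool(call.get("reference", ""))

print(handle("What's on booking ABC123?"))
```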
The 'Planning' and 'Iterative Combination of Outputs' capabilities are, however, undoubtedly much harder to achieve in a rigid router system. So when a task requires them (as determined by an LLM classifier or some other logic), we suggest creating a more unconstrained orchestrator branch in your system.
Your choice between router and orchestrator architectures should reflect your application's clarity, complexity, and interaction style. Router-based approaches currently provide reliability, efficiency, and ease of testing for clearly-defined tasks. Orchestrators offer better flexibility for broader, conversational interactions.
As LLMs continue to advance, the balance between these approaches may evolve. At Tomoro.ai, we lean towards router-based or hybrid architectures for production workloads and reserve orchestrators for open-ended problems demanding dynamic, human-like engagement.