Exploring Emerging LLM Agent Types and Architectures

The Evolution Beyond ReAct Agents
The shortcomings of first-generation ReAct agents have paved the way for a new era of LLM agents, bringing innovative architectures and possibilities.

In 2024, agents have taken center stage in the AI landscape. Companies globally are developing chatbot agents, tools like MultiOn are bridging agents to external websites, and frameworks like LangGraph and LlamaIndex Workflows are helping developers build more structured, capable agents.

However, despite their rising popularity within the AI community, agents are yet to see widespread adoption among consumers or enterprises. This leaves businesses wondering: How do we navigate these emerging frameworks and architectures? Which tools should we leverage for our next application?

Having recently developed a sophisticated agent as a product copilot, we share key insights to guide you through the evolving agent ecosystem.


What Are LLM-Based Agents?

At their core, LLM-based agents are software systems that execute complex tasks by chaining together multiple processing steps, including LLM calls. These agents (see the sketch after this list):

  • Use conditional logic or decision-making capabilities.
  • Access working memory to retain context across steps.
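
As a rough illustration, the sketch below wires those two ingredients together in plain Python. Everything here is invented for the example: call_llm is a stand-in for whatever model client you use, and the "search" action is a hypothetical tool.

    _calls = {"n": 0}

    def call_llm(prompt: str) -> str:
        # Placeholder model client so the sketch runs offline: the "model" asks to
        # search on the first call and produces a final answer on the next one.
        _calls["n"] += 1
        return "search" if _calls["n"] == 1 else "final answer (stub)"

    def run_agent(task: str, max_steps: int = 5) -> dict:
        memory = {"task": task, "history": []}       # working memory shared across steps
        for _ in range(max_steps):
            decision = call_llm(f"Given {memory}, choose the next step: search or answer")
            memory["history"].append(decision)
            if decision == "search":                 # conditional logic on the model output
                memory["evidence"] = "snippets from a (hypothetical) search tool"
            else:                                    # anything else is treated as the answer
                memory["answer"] = decision
                break
        return memory

    print(run_agent("summarize last week's support tickets"))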

The Rise and Fall of ReAct Agents

ReAct (reason, act) agents marked the first wave of LLM-powered tools. They promised broad functionality through abstraction, but their overgeneralized design delivered limited utility in practice. These shortcomings spurred the emergence of second-generation agents that emphasize structure and specificity.


The Second Generation: Structured, Scalable Agents

Modern agents are defined by smaller solution spaces, offering narrower but more reliable capabilities. Instead of open-ended design, these agents map out defined paths for actions, improving precision and performance.

Key characteristics of second-generation agents include (a minimal code sketch follows the list):

  • LLM Router Nodes: Decision points that direct the agent’s next steps based on input and context.
  • Components and Skills: Modular blocks (nodes or steps) designed for specific tasks, such as API calls or LLM queries.
  • Shared Memory: A state mechanism for retaining context, enabling seamless transitions between steps.
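
One way these pieces might fit together in plain Python is shown below. The State fields, the skill names, and the call_llm / fetch_orders helpers are all invented for illustration; the point is the shape, not the specifics.

    from typing import Callable, TypedDict

    class State(TypedDict, total=False):
        user_input: str     # shared memory: every component reads and writes this dict
        orders: list
        reply: str

    def call_llm(prompt: str) -> str:
        # Placeholder model client so the sketch runs offline.
        return "lookup_orders" if prompt.startswith("Pick") else f"(stub reply to: {prompt[:40]})"

    def fetch_orders(user_input: str) -> list:
        return [{"id": 42, "status": "shipped"}]      # placeholder API component

    # A skill: a small chain of components (an API call followed by an LLM query).
    def lookup_orders(state: State) -> State:
        state["orders"] = fetch_orders(state["user_input"])
        state["reply"] = call_llm(f"Summarize these orders: {state['orders']}")
        return state

    def small_talk(state: State) -> State:
        state["reply"] = call_llm(state["user_input"])
        return state

    SKILLS: dict[str, Callable[[State], State]] = {
        "lookup_orders": lookup_orders,
        "small_talk": small_talk,
    }

    # LLM router node: decides which skill handles the request.
    def router(state: State) -> str:
        choice = call_llm(f"Pick one of {list(SKILLS)} for: {state['user_input']}")
        return choice if choice in SKILLS else "small_talk"

    state: State = {"user_input": "Where is my order?"}
    print(SKILLS[router(state)](state)["reply"])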

Common Agent Architectures

  1. Single Router with Functions
    • A basic setup where a router (an LLM, classifier, or code) directs a single function call based on input.
  2. Single Router with Skills
    • The router manages workflows or skill sets, which consist of chained components executing more complex actions.
  3. Router with Branches and State
    • A more advanced system where the router navigates among skills, updates shared memory, and handles iterative LLM calls for dynamic interactions (a rough sketch of this pattern follows the list).
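
The third pattern is roughly what the building blocks above look like when assembled into a loop: the router inspects shared state, picks a branch, and the chosen skill writes its results back until a stop condition is met. The sketch below is a plain-Python approximation; the skill names, the stop flag, and the stubbed call_llm router are assumptions for illustration.

    def call_llm(prompt: str) -> str:
        # Placeholder router model: gather evidence first, then respond.
        return "gather" if "evidence: None" in prompt else "respond"

    def gather(state: dict) -> dict:
        state["evidence"] = "rows from a (hypothetical) analytics API"
        return state

    def respond(state: dict) -> dict:
        state["answer"] = f"Summary based on {state['evidence']}"
        state["done"] = True
        return state

    SKILLS = {"gather": gather, "respond": respond}

    def run(task: str, max_steps: int = 8) -> dict:
        state = {"task": task, "evidence": None, "done": False}   # shared memory
        while not state["done"] and max_steps > 0:
            max_steps -= 1
            # Router node: picks the next branch based on the current state.
            next_skill = call_llm(f"Given evidence: {state['evidence']}, choose one of {list(SKILLS)}")
            state = SKILLS.get(next_skill, respond)(state)        # branch and update state
        return state

    print(run("weekly sales summary"))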

Agent Development Frameworks

Several frameworks are now available to simplify and streamline agent development:

  • LangGraph: Uses a graph-based architecture, allowing developers to define nodes and conditional edges for routing logic (see the sketch after this list).
  • LlamaIndex Workflows: Uses event-driven logic, enabling agents to move seamlessly between steps based on incoming and outgoing events.
  • Others: Tools like CrewAI, AutoGen, and Swarm cater to multi-agent coordination and other specialized use cases.
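
To show what the conditional-edge style looks like in practice, here is a minimal LangGraph-flavored sketch. The node logic is stubbed, the routing rule is invented, and the API names reflect LangGraph's StateGraph interface as we understand it at the time of writing, so treat this as a shape to check against the current documentation rather than a reference implementation.

    from typing import TypedDict
    from langgraph.graph import END, StateGraph

    class State(TypedDict, total=False):
        question: str
        context: str
        answer: str

    def classify(state: State) -> State:
        # In a real agent this would be an LLM call; here it just passes the question through.
        return {"question": state["question"].strip()}

    def needs_search(state: State) -> str:
        # Routing function used by the conditional edge (a stand-in for an LLM or classifier).
        return "search" if "order" in state["question"].lower() else "respond"

    def search(state: State) -> State:
        return {"context": "rows from a (hypothetical) order database"}

    def respond(state: State) -> State:
        return {"answer": f"Reply using: {state.get('context', 'no context')}"}

    graph = StateGraph(State)
    graph.add_node("classify", classify)
    graph.add_node("search", search)
    graph.add_node("respond", respond)
    graph.set_entry_point("classify")
    graph.add_conditional_edges("classify", needs_search, {"search": "search", "respond": "respond"})
    graph.add_edge("search", "respond")
    graph.add_edge("respond", END)

    app = graph.compile()
    print(app.invoke({"question": "Where is my order?"}))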

While frameworks encode best practices and provide tooling, they can introduce limitations for highly complex applications. Many developers still prefer code-driven solutions for greater control.


Should You Build an Agent?

Before investing in agent development, consider these criteria:

  1. Does your application require iterative processing based on incoming data?
  2. Must the application adapt dynamically based on previous actions or feedback?
  3. Does it involve a non-linear state space with multiple potential pathways?

If you answered “yes,” an agent may be a suitable choice.


Challenges and Solutions in Agent Development

Common Issues:

  1. Long-term Planning: Agents often struggle to break down complex tasks into actionable steps or get stuck in execution loops.
  2. Malformed Tool Calls: Improperly formed LLM tool calls or API calls often require human intervention.
  3. Inconsistent Performance: Expansive solution spaces make it hard to achieve reliability and inflate costs.

Strategies to Address Challenges:

  • Constrain the Solution Space: Narrowing possible actions improves reliability.
  • Incorporate Domain Heuristics: Context-specific rules help guide decision-making.
  • Optimize Orchestration: Use code-based routers and workflows to reduce reliance on unpredictable LLM-based planning (all three strategies are sketched below).
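
A hedged sketch of what the three strategies can look like together: a whitelist constrains the solution space, a domain rule handles a common case without any model call, and plain code does the routing. The action names and the refund threshold are invented for illustration.

    ALLOWED_ACTIONS = {"check_status", "issue_refund", "escalate"}   # constrained solution space

    def route(ticket: dict) -> str:
        text = ticket["text"].lower()
        # Domain heuristic: small refunds never need an LLM planner.
        if "refund" in text and ticket.get("amount", 0) <= 50:
            return "issue_refund"
        if "status" in text or "where is my order" in text:
            return "check_status"
        # Anything the code-based router cannot classify goes to a human.
        return "escalate"

    def run(ticket: dict) -> str:
        action = route(ticket)
        assert action in ALLOWED_ACTIONS        # malformed routes fail fast instead of silently
        return action

    print(run({"text": "I want a refund for my $20 order", "amount": 20}))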

Conclusion

The generative AI landscape is brimming with new frameworks and fervent innovation. Before diving into development, evaluate your application needs and consider whether agent frameworks align with your objectives. By thoughtfully assessing the tools and architectures available, you can create agents that deliver measurable value while avoiding unnecessary complexity.
