The Power Combo of AI Agents and the Modular Data Stack: AI that Reasons

13 min | Oct 24, 2024 | Data Strategy
What are AI Agents, Types, Workflows, Leverage of Consistent Design Across Data & AI Stacks, Solutions for LLM Limitations, and much more!

Originally published on the Modern Data 101 Newsletter; the following is a revised edition.

Today, more than 60% of CIOs are integrating AI into their innovation strategies, making AI not just a good-to-have stack but a compulsory strategic confidant. The last few years have seen such sudden surges in these advances that, in no time, each new development makes its way into organisations' workflows or, at the very least, becomes the focus of countless boardroom pitches.

While we are all discussing LLMs, there's now a simultaneous thrust of agentic AI across industries. The first wave of LLMs cast a magic spell on ambitious enterprise leaders, but it also broke that spell at implementation time: the AI layer started demanding more than the usual effort to set up the right prompts, validate the outcomes (which were mostly subpar or too generic for any strategic use), figure out why and how the outcomes went wrong, and then chase down the right results.

The next wave naturally resulted in a wider adoption of AI agents, which are more “reasonable” and, therefore, closer to the human intellect and empathy desirable for logical operations as well as for strategic aid.

By the end of this article, you'll have a working knowledge of what AI agents are, why they offer additional leverage over generic LLMs when measuring ROI on AI initiatives, and how to set AI agents up for success.

What are AI Agents?

Answer First

AI Agents are a collection of AI solutions (e.g., LLMs), each tasked with solving a small problem. Each of these solutions is connected through a workflow within the bounds of the AI Agent to serve a larger, more complex purpose.

This ability to break down a complex problem into smaller subsystems gives AI Agents the apparent ability of “reasoning” and more advanced problem-solving skills compared to standalone LLMs.
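To make the idea concrete, here is a minimal sketch in Python of what such a workflow could look like. Everything here is illustrative: `call_llm` is a stand-in for whichever model API you actually use, and the sub-task breakdown is one arbitrary decomposition among many.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubTask:
    """One small, well-scoped problem handled by a single AI solution."""
    name: str
    run: Callable[[str], str]


def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (OpenAI, Anthropic, a local model, etc.).
    return f"[model output for: {prompt}]"


class Agent:
    """A workflow that chains small solutions to tackle a larger problem."""

    def __init__(self, subtasks: List[SubTask]):
        self.subtasks = subtasks

    def solve(self, problem: str) -> str:
        context = problem
        for task in self.subtasks:
            # Each step consumes the previous step's output as its context.
            context = task.run(context)
        return context


# Decompose "answer a supply question" into plan -> retrieve -> summarise.
agent = Agent([
    SubTask("plan", lambda p: call_llm(f"Break this problem into steps: {p}")),
    SubTask("retrieve", lambda p: call_llm(f"Gather the data needed for: {p}")),
    SubTask("summarise", lambda p: call_llm(f"Summarise a recommendation from: {p}")),
])

print(agent.solve("Why are warehouse stock-outs increasing this quarter?"))
```

The "reasoning" the reader perceives comes from the decomposition itself: each small step is easy for a model to get right, and the chain carries the intermediate results forward.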

Here are some of the definitions in the market from leaders in the tech and research fields.

An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.
~ as defined by IBM.
Intelligent agents in AI are goal-driven software entities that use AI techniques to complete tasks and achieve goals. They don’t require explicit inputs and don’t produce predetermined outputs. Instead, they can receive instructions, create a plan and use tooling to complete tasks, and produce dynamic outputs. Examples include AI agents, machine customers and multiagent systems.
~ as defined by Gartner.
An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.
~ as defined by Amazon.

The Rationale Behind Why Agents Work

Ultimately, what are we aiming to do with AI solutions? Solving big problems or automating most of the solutioning with quicker and more accurate results.

But the fundamental approach or framework to efficient problem-solving remains the same.
Be it an AI or a human tackling the big problem.

AI is just enabling cost-effective delegation of most of the cognitive overload involved in problem-solving.

What is this fundamental approach to tackling big or complex problems?

Breaking down complex problems into the smallest possible independent units and solving each to tackle a higher-order complex solution. This approach was standardised and formalised by early philosophers, most famously Descartes with his method of deduction, and came to be known as deductive reasoning.

Fundamental Approach to Solving Large or Complex Problems | Source: Problem Solving Techniques by Gediminas Gerulaitis (Zedge Engineering, Medium)

The hierarchical model is as good as a law. Great scholars, the likes of Descartes, identified this pattern as a fundamental method for solving complex problems.

This pattern does not just appear in the numeric system but has appeared time and again in successful designs. Exhibit:
🎯 Amazon’s host of building-block-like solutions for autoscaling
🎯 Descartes’ method of Deduction for tackling mammoth problems
🎯 The Art of Watchmaking from Donella Meadows' Thinking in Systems
🎯 Data Developer Platforms (DDP) with ready-to-use resources as building blocks
🎯 The Kubernetes architecture
and many more!

Design Application: Consistency Across Both Data & AI Stack

Now, a quick jump to one of the top design minds in the business world: Apple.

Let’s take one of their design verticals: minimalism. For minimalism to truly take effect at scale and be recognised as a brand value, they have applied it across the entire cross-section of Apple products, as well as across the breadth of their organisation, be it their building designs or their minimal team structures (quoting Tim Cook on Steve Jobs: “He believed small teams could do amazing work and I loved that vision”).

Another example for reference would be the naturally occurring Fibonacci pattern or even the simple hierarchical pattern that takes effect across natural sub-systems.

For a fundamental design pattern to take effect, it must be reflected in the last atom of the ecosystem. This implies a consistent design across the data stack as well as the AI Stack.

We’ve thoroughly talked about our approach to the data stack before. We go by the Data Developer Platform’s ideology, which places a consistent emphasis on the building-block-like approach to problem-solving (tackling big data problems with grace).

The AI Layer populated with purpose-driven AI Agents fits in perfectly with this underlying data layer, enabling flexible permutations and combinations of the AI Agents with lower-order subsystems or resources in the data stack, such as compute-as-a-building-block or contract-as-a-building-block.

In such a stack, all developers need to do is create instances of existing resources (which exist as templates) and bundle them together, ensuring transparency in resource usage, cost, effort, and debugging.
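As a purely illustrative sketch (the resource kinds, field names, and `Bundle` class below are hypothetical, not the API of any particular data developer platform), the template-and-bundle idea might look like this:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ResourceTemplate:
    """A reusable building block exposed by the platform (compute, contract, ...)."""
    kind: str
    defaults: Dict[str, str] = field(default_factory=dict)

    def instantiate(self, name: str, **overrides: str) -> "ResourceInstance":
        config = {**self.defaults, **overrides}
        return ResourceInstance(name=name, kind=self.kind, config=config)


@dataclass
class ResourceInstance:
    name: str
    kind: str
    config: Dict[str, str]


@dataclass
class Bundle:
    """A named group of instances, so usage, cost, and debugging stay traceable."""
    name: str
    resources: List[ResourceInstance]


compute = ResourceTemplate("compute", {"size": "small"})
contract = ResourceTemplate("data-contract", {"quality_checks": "enabled"})

inventory_agent_stack = Bundle(
    name="inventory-agent",
    resources=[
        compute.instantiate("agent-runtime", size="medium"),
        contract.instantiate("inventory-feed"),
    ],
)
print([r.name for r in inventory_agent_stack.resources])
```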

Representative Image of Consistent Design Application | Source: Authors

Limitations of LLMs—Why do we need AI Agents?

Agency: A Desired and Missing Quality in the AI Workforce

Agency = the ability to take action or to choose what action to take.
~ as defined by Cambridge

Non-agentic AI, such as standalone LLMs, cannot exercise agency the way qualified humans can, which creates the “Agency Gap”. Generic responses, vague advisory, and ineffective strategic abilities are common in standalone AI.

AI Agents, on the other hand, break the complex problem into smaller pieces and create logically viable responses for the other AI sub-systems in the agent's workflow, enabling consecutive LLMs to take qualified decisions and “act”, all while taking real-time data and user inputs into consideration (thanks to the agent's multiple working limbs).

By 2028, 33% of enterprise software applications are believed to include agentic AI, enabling 15% of day-to-day work decisions to be made autonomously.

Current adoption of agentic AI remains low, illustrating the AI Agency Gap across different environments.
“AI agents are like magic. They’re these magical entities that go beyond typical chatbots” — said Patrick Hamelin, software engineer lead at Botpress.

These AI agents are marked by four salient characteristics:

  1. Autonomy: They function independently, without the need for continuous human oversight.
  2. Reactivity: They sense their surroundings and quickly adapt to changes.
  3. Proactivity: They act independently, pursuing tasks that align with their goals.
  4. Social Ability: They interact with other agents or humans as needed.

A Real-World Example of an Agent’s Superiority over Standalone LLMs

Let’s take a real-world example of supply chain optimisation.

Imagine a supply chain manager is facing an issue with product shortages at certain warehouses. The manager consults an LLM to get advice on how to address this, asking something like, “How can I optimize my supply chain to avoid inventory shortages?”

The LLM provides a well-written response, explaining general strategies such as improving demand forecasting, increasing buffer stock, or enhancing supplier communication. It might also suggest investing in predictive analytics or using more efficient shipping methods.

While some of these strategies can be helpful, the LLM does not know the real-time status of inventory, pending shipments, or the current disruptions affecting the supply chain.

The manager now needs to take the LLM’s advice and manually check data systems, review supplier information, and coordinate with teams to implement the suggestions. The decision-making and execution rely entirely on the manager, and the LLM plays a passive advisory role.

The key limitations we observe here are:

  • Limited knowledge
  • Limited data scope
  • No interface with the real world
  • Lack of context leading to poor decision-making and low accuracy

The LLM provides static, general advice but falls short of analysing dynamic factors like current stock levels, shipping delays, or upcoming supply chain disruptions. It also cannot take action or adapt its suggestions based on real-time changes in the supply chain environment.

Beyond these key challenges in the supply chain manager's story, LLMs also often hallucinate and offer poor accuracy.

How does an AI Agent Overcome these Limitations?

In this scenario, an AI Agent can offer the following capabilities to close the gaps between the user and the LLM.

  • Enhanced user interaction with better context: The manager doesn't need to query the AI agent for advice. Instead, the AI agent is already integrated into the supply chain management system, constantly monitoring key metrics like inventory levels, supplier delivery times, and shipping schedules.
  • Leveraging real-time data: The AI agent continuously gathers real-time data from various sources, such as warehouse sensors, ERP systems, and even external sources like weather forecasts and traffic reports (to anticipate shipping delays).
  • Allowing dynamic decision-making, as illustrated in the figure below.

Dynamic decision-making by the AI agent | Source: Authors

The key advantages agentic AI brings in here:

  • Acts in real-time, making decisions based on live data and continuously optimizing supply chain operations.
  • Dynamically adjusts to changing conditions, whether it’s inventory fluctuations or supplier delays.
  • Autonomously handles the situation without waiting for human intervention.

These decision-making capabilities humanise the entire process of leveraging an AI system, delivering high-quality output and consumer experiences, as in the supply chain example.
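To make the contrast tangible, here is a minimal, hypothetical sketch of such an agent loop. The threshold, data sources, and function names are all assumptions for illustration; a real agent would call live warehouse, ERP, and logistics feeds instead of the random stubs used here.

```python
import random
import time

REORDER_THRESHOLD = 100  # units; an assumed business rule, not a recommendation


def read_inventory(sku: str) -> int:
    # Stand-in for a live feed from warehouse sensors / the ERP system.
    return random.randint(40, 200)


def expected_shipping_delay_days(sku: str) -> int:
    # Stand-in for an external signal (weather, traffic, carrier status).
    return random.choice([0, 0, 1, 3])


def place_replenishment_order(sku: str, units: int) -> None:
    print(f"ordering {units} units of {sku}")


def reroute_from_nearest_warehouse(sku: str) -> None:
    print(f"rerouting {sku} from the nearest warehouse with surplus stock")


def monitor(skus: list[str], cycles: int = 3) -> None:
    """Perceive -> decide -> act, repeated continuously instead of on request."""
    for _ in range(cycles):
        for sku in skus:
            stock = read_inventory(sku)
            delay = expected_shipping_delay_days(sku)
            if stock < REORDER_THRESHOLD and delay >= 2:
                reroute_from_nearest_warehouse(sku)   # shipment won't arrive in time
            elif stock < REORDER_THRESHOLD:
                place_replenishment_order(sku, REORDER_THRESHOLD - stock)
        time.sleep(0.1)  # in production this would be an event loop or scheduler


monitor(["SKU-1042", "SKU-2210"])
```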

Organisations, hence, require a complete package of LLMs and AI agents where you can utilise the benefits of LLMs while adding the reasoning factor of AI agents.

AI Agents = LLMs + Reasoning Ability = Improved Quality of Outcomes / Enhanced User Experience

How do AI Agents work?

What we illustrated previously is just a narrow slice of what AI agents can do and how they do it.

The Scope of AI Agents | Source: AWS

At a high level, the working of these AI agents falls into four key steps.

Perception & Data Collection

AI agents gather data (including real-time data) from multiple sources like customer interactions, transaction histories, and even social media. This provides the context needed to understand customer queries. Advanced agents can process this data in real time, ensuring they have the most current information to respond effectively.

Decision-making

By leveraging deep learning models, AI Agents analyze the data, spot patterns, and make informed decisions. For instance, the agent might predict a potential shipment delay in a supply chain scenario based on historical logistics data and real-time weather updates. It can then reroute deliveries or adjust inventory orders accordingly. With every scenario, the agent refines its decision-making process, becoming more accurate and proactive over time.

Action-execution

Once a decision is made, the agent executes actions such as answering a query, processing a request, or escalating complex issues to a human. The focus is on efficiency and accuracy, ensuring smooth customer experiences.

Learning and Adaptation

AI agents continuously evolve by learning from each interaction. They adjust their algorithms and update their knowledge base to improve accuracy and effectiveness, staying aligned with changing customer expectations and business needs.

The working of AI Agents | Source: Authors
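A schematic way to read these four steps is as a single loop. The sketch below is a simplification under assumed names and signals (a delay-risk score and a learned threshold); it only illustrates how perception, decision, action, and learning feed into each other.

```python
class AgentLoop:
    """Schematic perceive -> decide -> act -> learn cycle."""

    def __init__(self):
        self.knowledge: dict[str, float] = {"delay_risk_threshold": 0.5}

    def perceive(self) -> dict:
        # Gather signals: transactions, customer interactions, external feeds.
        return {"delay_risk": 0.7, "order_backlog": 12}

    def decide(self, observation: dict) -> str:
        # Compare observations against what the agent has learned so far.
        if observation["delay_risk"] > self.knowledge["delay_risk_threshold"]:
            return "reroute"
        return "wait"

    def act(self, decision: str) -> bool:
        print(f"executing: {decision}")
        return True  # whether the action resolved the issue

    def learn(self, decision: str, succeeded: bool) -> None:
        # Nudge the threshold so future decisions reflect the observed outcome.
        step = -0.05 if succeeded else 0.05
        if decision == "reroute":
            self.knowledge["delay_risk_threshold"] += step

    def run_once(self) -> None:
        obs = self.perceive()
        decision = self.decide(obs)
        self.learn(decision, self.act(decision))


AgentLoop().run_once()
```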

Types of AI Agents

When diving into the world of AI, one quickly discovers that not all AI agents are created equal. Understanding the types of AI agents is crucial, especially as businesses look to integrate more advanced AI into their operations.

Types of AI Agents | Source: Authors

Simple Reflex Agents

Operating on a ‘condition-action’ rule, these agents make decisions purely based on their current perceptions without any memory or deeper understanding of the environment. They respond to specific stimuli but lack the ability to learn or adapt. This makes them suitable for straightforward tasks like basic customer service chatbots but limits their effectiveness in dynamic or complex industry scenarios requiring deeper reasoning.

Model-based Reflex Agents

These agents are more advanced and equipped with an internal model of the world that enables them to go beyond immediate perceptions. They can infer unseen aspects of their environment and make decisions even when some information is missing. By leveraging context and filling in knowledge gaps, model-based reflex agents demonstrate greater adaptability and autonomy, thereby becoming better suited for complex and ever-changing environments.

Utility-based Agents

Utility-based agents are designed to make decisions that maximize outcomes according to specific goals or preferences. By leveraging complex reasoning algorithms, these agents compare various scenarios and their associated utility values, essentially weighing the pros and cons of different actions. Unlike simple rule-based or reactive agents, utility-based agents don’t just follow predefined paths—they calculate the best course of action to achieve the highest possible benefit for the user.

For instance, a utility-based agent can help users find the best flight by prioritizing factors like travel time over cost. If a user values shorter flights more than saving money, the agent will recommend the quickest option, even if it's more expensive. This adaptability allows utility-based agents to align with user preferences or shifting business needs.
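A toy illustration of the same idea, with assumed weights and an ad-hoc scaling of time versus cost (the numbers are not from any real system):

```python
from dataclasses import dataclass


@dataclass
class Flight:
    airline: str
    price_usd: float
    duration_hours: float


def utility(flight: Flight, weight_time: float = 0.7, weight_cost: float = 0.3) -> float:
    """Higher is better; the weights encode how much the user values time over money."""
    # Rough scaling so the two terms are comparable (illustrative only).
    return -(weight_time * flight.duration_hours * 100 + weight_cost * flight.price_usd)


flights = [
    Flight("A", price_usd=250, duration_hours=9.5),
    Flight("B", price_usd=410, duration_hours=6.0),
]

best = max(flights, key=utility)
print(best.airline)  # picks the faster flight "B" despite the higher price
```

Changing the weights flips the preference, which is the point: the agent maximises a utility the user defines rather than following a fixed rule.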

Goal-based Agents

Goal-based agents possess advanced reasoning abilities. In addition to analyzing environmental data, these agents evaluate multiple strategies to achieve their desired objectives, always selecting the most efficient path. They are particularly well-suited for handling complex tasks like natural language processing (NLP) and robotics applications.

Learning agents

Learning agents evolve and improve through reinforcement learning, making them valuable in fast-paced industries where staying ahead of trends is crucial. For example, a virtual assistant can continuously enhance its service by learning more about customer preferences and adapting accordingly.

Hierarchical agents

Hierarchical agents operate within a layered structure, where higher-level agents oversee and direct lower-level agents towards a shared objective. This approach helps businesses break down complex, multi-step processes into manageable tasks, with each AI agent focusing on specific responsibilities.
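A bare-bones sketch of that layering, with hypothetical worker functions and a hard-coded routing decision where a real supervisor might itself consult an LLM:

```python
from typing import Callable, Dict


def forecast_demand(region: str) -> str:
    return f"demand forecast for {region}"


def plan_replenishment(region: str) -> str:
    return f"replenishment plan for {region}"


class Supervisor:
    """Higher-level agent that routes sub-goals to specialised lower-level agents."""

    def __init__(self, workers: Dict[str, Callable[[str], str]]):
        self.workers = workers

    def achieve(self, goal: str, region: str) -> list[str]:
        # The routing here is fixed for illustration; a real supervisor would plan it.
        steps = ["forecast", "replenish"] if goal == "avoid stock-outs" else []
        return [self.workers[step](region) for step in steps]


supervisor = Supervisor({"forecast": forecast_demand, "replenish": plan_replenishment})
print(supervisor.achieve("avoid stock-outs", region="EU-West"))
```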

Challenges Organisations Face While Leveraging AI Agents

We won't go into the details of the problems, given the scope of this article is focused on the solutions. But here's a summary/overview of some of the risks and challenges.

  • Using agents without effective governance (risking reputation and customer relations)
  • Relying on low-quality data (risking untrustworthy AI that users cannot use with confidence)
  • Lower accuracy of outcome (an unworthy tradeoff for automation and speed)

How a Data Product Platform Addresses the Risks and Enables AI Agents (the AI Layer)

While we acknowledge the importance of AI Agents, it is also essential that we streamline the process of deploying these AI Agents seamlessly while addressing their risks and reaping the optimal benefits.

A data product platform (such as a DDP) allows developers to reduce the complexities and manual efforts of streamlining the APIs for AI agents to interact with databases and warehouses. Imagine having an API builder stack in a data developer platform functioning within its consumption layer.

These specific stacks within a DDP enable AI Agents more efficiently by:

Supplying AI-Ready Data

There has been a lot of discussion about what AI-ready data needs to embody. For instance, it needs to be quality-approved and well-governed, as well as fit for purpose. The quality requirements vary based on the specific purpose. This is one of the clearest use cases for Data Products, which enable an organisation's data to be purpose-driven, with the ability to define purpose-specific quality requirements and govern the data within the bounds of the use case or domain.

Context Cushioning

AI can process tons of data, even tons of good-quality data, but only when it is also fed context does it reach the higher levels of reasoning that most LLMs lack.
The deficiencies in reliable reasoning stem not just from the limits of LLMs' deductive abilities but also from a lack of good context around the data.

This is where a reliable semantic layer comes into the picture and adds that sprinkle of contextual cushioning for AI. How do we know it's a reliable layer? When it's fed by a host of Data Products from the underlying data layers, alongside the siloed semantics that cannot be avoided in realistic data ecosystems.

Learn more about it in this quick LinkedIn post.
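A simple sketch of what that cushioning could look like: the definitions below are invented, and a real semantic layer would expose far richer metadata, but the mechanic of injecting business context ahead of the question is the same.

```python
# Hypothetical semantic-layer lookup; the definitions are illustrative only.
SEMANTIC_LAYER = {
    "inventory turnover": "cost of goods sold divided by average inventory, per quarter",
    "stock out": "an SKU whose on-hand units reach zero while open orders exist",
}


def with_context(question: str) -> str:
    """Prepend the business definitions the model needs to reason about the question."""
    relevant = [f"- {term}: {definition}"
                for term, definition in SEMANTIC_LAYER.items()
                if term in question.lower()]
    context = "\n".join(relevant) or "(no matching definitions)"
    return f"Business definitions:\n{context}\n\nQuestion: {question}"


print(with_context("Why did inventory turnover drop while stock out incidents rose?"))
```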

Abstracting Complexity for Speed

The platform simplifies the process of writing and managing database queries, enabling rapid development and integration of these APIs into various AI agents. By handling database interactions and streamlining API creation, it allows developers to focus on building AI functionality, reducing development cycles and enabling rapid integration of AI into applications.

Enhancing Scalability

Template-driven APIs can allow the platform to scale with changing data schemas or business logic, ensuring that AI agents can continuously access relevant, up-to-date information without manual adjustments.

Improved Accessibility for AI agents

With clear, well-documented APIs, AI agents can access real-time data faster, leveraging machine learning and analytics for enhanced decision-making and automating operational tasks.

Standardization and Interoperability

Using OpenAPI standards can ensure seamless interaction between AI agents and various data sources, promoting compatibility across diverse systems. This makes AI agents more adaptable in complex ecosystems.
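For illustration only, here is what such an endpoint could look like with FastAPI (one framework among many that emits an OpenAPI schema automatically); the route, table, and data are invented for this sketch.

```python
from fastapi import FastAPI

app = FastAPI(title="consumption-layer-api")

# In a real platform this would be generated from a template over the warehouse schema.
FAKE_INVENTORY = {"SKU-1042": 87, "SKU-2210": 145}


@app.get("/inventory/{sku}")
def get_inventory(sku: str) -> dict:
    """Governed, documented endpoint an AI agent can call instead of writing raw SQL."""
    return {"sku": sku, "units_on_hand": FAKE_INVENTORY.get(sku, 0)}
```

Served with something like `uvicorn module:app`, the agent gets a documented HTTP surface (plus a machine-readable OpenAPI description at `/openapi.json`) instead of raw database credentials.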

Improving Caching and Performance

Dataset caching reduces query loads, enabling faster API responses. This ensures AI agents can act on time-sensitive data efficiently without causing delays due to system bottlenecks.
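A minimal sketch of the caching idea, assuming a 60-second freshness window; real platforms would cache at the dataset or API-gateway level rather than in process, but the effect on repeated agent queries is similar.

```python
import time
from functools import lru_cache


@lru_cache(maxsize=256)
def _cached_query(sql: str, ttl_bucket: int) -> list:
    # Placeholder for the real warehouse call; the ttl_bucket argument exists only
    # so that identical queries re-run once the bucket (here 60 s) rolls over.
    print(f"hitting the warehouse for: {sql}")
    return [("SKU-1042", 87)]


def query(sql: str, ttl_seconds: int = 60) -> list:
    """Serve repeated agent queries from cache within the TTL window."""
    return _cached_query(sql, int(time.time() // ttl_seconds))


query("SELECT sku, units FROM inventory")   # hits the warehouse
query("SELECT sku, units FROM inventory")   # served from cache
```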

Expanding on the Idea of Leveraging Agentic AI over the Semantic Layer

While we plan to share dedicated research on this combination, here’s a brief teaser.

Deploying Agentic AI Apps | Source: Authors

What should be the right data stack to augment the AI stack, and how does the sandwiched layer of a robust semantic layer further accentuate the capabilities of agentic AI? How does this combination add the required context to bridge the gap between users and data and improve the accuracy of the AI outcome?

High-Precision Context-Matching for High LLM Success, with ample context around data | Image by Authors

We've talked about enhancing LLM efficiencies with an E2E data stack before, and plan to do a much more detailed piece on the Agentic AI Stack soon, which would cover the power combo of Agents (instead of standalone LLMs) with underlying Data Products and Self-Serve Infrastructures. We'll go on to understand how one AI Layer spanning cross-functional teams could become the single source of truth for consuming data. Stay tuned, and thanks for reading Modern Data 101!

Upcoming: Building the End-to-End Agentic AI Stack. Sign up to be the first to know!

MD101 Support 📞

If you have any queries about the piece, feel free to connect with any of the authors (details in Author Connect below). Or feel free to connect with the MD101 team directly at community@moderndata101.com 🧡

Author Connect 🖋️

Find me on LinkedIn 🙌🏻

Find me on LinkedIn  🙌🏻


From The MD101 Team

Bonus for Sticking With Us to the End!

🧡 The Data Product Playbook

Here’s your own copy of the Actionable Data Product Playbook. With 800+ downloads so far and quality feedback, we are thrilled with the response to this 6-week guide we’ve built with industry experts and practitioners. Stay tuned on moderndata101.com for more actionable resources from us!

DOWNLOAD!