Have you heard of agentic AI? The concept has been around for some time, but now we have the tools to make it real. Let’s look at the past, present, and future of agentic AI, and what it means for you.
The Limitations of Current AI Systems
Large Language Models (LLMs) and Generative AI (GenAI) have gained prominence recently for their capacity to process and generate human-like text. These systems excel at understanding and responding to natural language. LLMs serve as the human-machine interface. Their primary function is to:
- Comprehend human language
- Identify requests
- Articulate responses effectively
LLMs act as a bridge between human intent and computational action. But while LLMs are adept at understanding language, traditional AI systems, often based solely on standard machine learning, can struggle with complex requests and may reach their limits when faced with nuanced or multifaceted problems. That’s where Agentic AI comes in: it integrates multiple AI frameworks to create more sophisticated and responsive systems.
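To make that bridge role concrete, here is a minimal sketch of how an LLM call might turn a free-form message into a structured intent that downstream systems can act on. The call_llm helper and the JSON shape are assumptions for illustration, not any particular vendor’s API.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for whichever LLM API is in use (a hosted model, a local one, ...).
    It takes a prompt and returns the model's text response."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

def parse_request(user_message: str) -> dict:
    """Ask the LLM to turn a free-form message into a structured, machine-actionable intent."""
    prompt = (
        "Extract the user's intent from the message below and reply with JSON "
        'containing the keys "action" and "parameters".\n\n'
        f"Message: {user_message}"
    )
    return json.loads(call_llm(prompt))

# The LLM comprehends the language, identifies the request, and hands a
# structured result to whatever system executes it, e.g.:
# parse_request("I forgot my password for the reporting tool, please reset it")
# -> {"action": "reset_password", "parameters": {"system": "reporting tool"}}
```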
The Rise of Agentic AI
Agentic AI addresses this gap by integrating various AI methodologies, allowing it to handle complex tasks more effectively. By combining the linguistic capabilities of LLMs with the analytical strengths of other machine learning frameworks, it reaches a higher level of AI capability: it can tackle intricate, multifaceted problems and respond to and solve complex human requests.
Understanding How Agentic AI Is Developed and Deployed
Agentic AI is based on a distributed system, also called a multi-agent system. Routines are outsourced to specialized sub-AIs, which are trained for particular tasks. The example of self-driving cars helps us understand the real challenge: breaking a problem into smaller pieces, sorting those pieces into resolver groups, and putting the results together meaningfully once they are in. The whole construct has to decide which mathematical model to apply when there is no “sandbox” (a controlled testing environment where models can be experimented with safely). While the AI of a self-driving car has received extensive training, that training cannot cover every situation that occurs during a road trip. Think of the infamous trolley problem.
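Stripped of the safety-critical details of driving, the underlying pattern is simple to sketch: a coordinator routes each piece of a problem to the sub-agent trained for that kind of task and collects the partial results. The agent names and task types below are invented purely for illustration.

```python
from typing import Callable, Dict

# Hypothetical specialized sub-AIs, each trained for one particular task.
def perception_agent(piece: str) -> str:
    return f"objects detected in '{piece}'"       # placeholder result

def planning_agent(piece: str) -> str:
    return f"maneuver planned for '{piece}'"      # placeholder result

RESOLVER_GROUPS: Dict[str, Callable[[str], str]] = {
    "perception": perception_agent,
    "planning": planning_agent,
}

def solve(problem: Dict[str, str]) -> Dict[str, str]:
    """Route each piece of the problem to its resolver group and collect the
    partial results so they can be combined into a single decision."""
    return {
        task_type: RESOLVER_GROUPS[task_type](piece)
        for task_type, piece in problem.items()
    }

# solve({"perception": "camera frame 0815", "planning": "approach to intersection"})
```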
A less critical use case exists in logistics. If a shipment travels from A to B, there are many moving parts in between, and many things can go wrong: a delay here, a missed connection there, some trouble with documentation at the customs office. These are multiple little roadblocks, but it is still a problem a machine can solve once it is able to make decisions such as altering the route or changing the means of transportation. That decision is based on multiple agents working on individual moving parts and providing a range of possible resolutions, while the greater construct keeps its eye on the big picture. Only exceptional cases demand human intervention.
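A rough sketch of that orchestration, with invented agents and shipment fields, might look like the following: each agent proposes a resolution for the moving part it watches, the orchestrator keeps the big picture (reduced here to minimal extra cost), and anything no agent can handle is escalated to a human.

```python
from typing import Callable, List, Optional

# Hypothetical sub-agents, each watching one moving part of the shipment.
def reroute_agent(shipment: dict) -> Optional[dict]:
    """Propose an alternative route when the current one is badly delayed."""
    if shipment.get("delay_hours", 0) > 12:
        return {"action": "reroute", "via": "alternate hub", "extra_cost": 150}
    return None

def transport_agent(shipment: dict) -> Optional[dict]:
    """Propose changing the means of transportation after a missed connection."""
    if shipment.get("missed_connection"):
        return {"action": "switch_to_air", "extra_cost": 900}
    return None

AGENTS: List[Callable[[dict], Optional[dict]]] = [reroute_agent, transport_agent]

def resolve(shipment: dict) -> dict:
    """Collect candidate resolutions from every agent, pick the one that best
    serves the big picture (cheapest here), and escalate if no agent can help."""
    candidates = [p for agent in AGENTS if (p := agent(shipment)) is not None]
    if not candidates:
        return {"action": "escalate_to_human"}
    return min(candidates, key=lambda c: c["extra_cost"])

# resolve({"delay_hours": 18, "missed_connection": True})
# -> {"action": "reroute", "via": "alternate hub", "extra_cost": 150}
```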
The Risks of Agentic AI
Agentic AI has a promising future, but it’s not all bright and shiny. There are several concerns. Most of us have heard the saying, “With great power comes great responsibility,” and while that applies to you and me, an AI might not understand the concept of accountability. As soon as a machine makes decisions that directly impact human life, we need to ask a few questions:
- Who trained the AI, and what data has been used?
- Does the routine apply any form of self-control?
- Can the decision-making process be reproduced?
The essential question—do we want a machine to make that decision for us?
You and I might answer that last one differently. We already see rules and laws being put in place, and while keeping up with them can be a balancing act, it’s good to be prepared. Agentic AI may prove risky in extreme cases, but thankfully, there are plenty of straightforward scenarios where these systems can improve our lives.
Future Use Cases for Agentic AI
I looked at my IT requests from last year and tried to imagine which of them really needed human interaction. I invite you to try that experiment; you might catch a glimpse of the future. In my case, there was one “classic” amongst them: I forgot the password to a resource I rarely use, so someone had to reset it for me. That could have been a machine. In another case, I asked for new equipment, which was a little more complicated. Someone had to approve it, someone else had to source it and initiate the purchasing process, and finally there was shipping. Could different AI agents have taken care of the whole process? Most likely, yes. The approval part is tricky; perhaps we could develop a machine learning model that analyzes past approvals and user profiles to streamline this step. In the world of AI, solutions to this and many other complexities are on the way.
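To give a flavor of that approval idea, here is a minimal sketch using scikit-learn and invented historical data; the feature names, the example records, and the auto-approval threshold are all assumptions for illustration, not a recommendation.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented example data: past equipment requests and whether they were approved.
history = pd.DataFrame({
    "role": ["engineer", "engineer", "sales", "intern", "manager"],
    "item": ["laptop", "monitor", "laptop", "laptop", "phone"],
    "cost": [1800, 300, 1800, 1800, 900],
    "approved": [1, 1, 1, 0, 1],
})

# Encode the categorical features, keep the numeric cost, fit a simple classifier.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["role", "item"])],
        remainder="passthrough",
    )),
    ("clf", LogisticRegression()),
])
model.fit(history[["role", "item", "cost"]], history["approved"])

# An approval agent could auto-approve confident cases and route the rest to a human.
new_request = pd.DataFrame([{"role": "engineer", "item": "laptop", "cost": 1900}])
probability = model.predict_proba(new_request)[0, 1]
print("auto-approve" if probability > 0.9 else "send to a human approver")
```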
Watch this space.