
Introduction
Large Language Models (LLMs) have changed the way we create content. Whether it is generating code, writing a blog post, or brainstorming new ideas, LLMs can help overcome creator’s block.

However, LLMs operate within fixed boundaries: they respond to requests but do not act on them. They can process information, but they cannot decide what to do next.

That’s why, to overcome this, we have Agentic AI. Unlike static LLMs, agentic systems can plan, execute, and adapt dynamically to achieve specific goals. They don’t just produce outputs; they take actions grounded in reasoning, context, and environmental feedback.

Limitations of LLMs
Before we move to Agentic AI, let’s understand the shortcomings of LLMs, which paved the way for Agentic AI.

Rely on the existing data; no access to the latest data
LLMs are trained on vast but static datasets. Once trained, they can’t access or learn from new information unless updated or fine-tuned. This means their responses may not reflect recent developments, research, or real-time changes in data.

Access to non-public data
LLMs cannot directly access proprietary databases, internal systems, or confidential documents. Their output is limited to the information available during training or what’s explicitly provided in the prompt. This makes publicly available LLMs, such as GPT, LaMDA, PaLM 2, and BERT, unsuitable for tasks that require organization-specific insights or secure data retrieval, and building an in-house LLM can be time-consuming and resource-intensive.

Lack of long-term memory
Traditional LLMs operate on a session-by-session basis; they don’t retain information or context across interactions. Once a conversation ends, the model “forgets” past inputs. As a result, LLMs cannot build a persistent understanding or improve based on previous exchanges, unless a feature explicitly saves new instructions to the LLM’s memory.
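One common workaround for this session-by-session amnesia is to persist the conversation outside the model and replay it in the prompt on the next run. The sketch below illustrates the idea; the file name and the helper functions are assumptions for illustration, not any particular product’s memory feature.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("chat_memory.json")  # hypothetical on-disk store

def load_history():
    """Return prior turns, or an empty list on the first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(role, content):
    """Append a turn and persist it so a later session can reload it."""
    history = load_history()
    history.append({"role": role, "content": content})
    MEMORY_FILE.write_text(json.dumps(history))

def build_prompt(user_message):
    """Prepend saved history so the stateless LLM 'remembers' past sessions."""
    context = "\n".join(f"{t['role']}: {t['content']}" for t in load_history())
    return f"{context}\nuser: {user_message}"
```

The model itself stays stateless; the application carries the memory and re-injects it as context, which is also roughly how chat products keep long-running conversations coherent.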

Prone to hallucinations and inaccuracies
Because LLMs predict the most probable next word rather than verifying factual accuracy, they can sometimes generate incorrect or fabricated information. These hallucinations make them unreliable for scenarios that demand factual precision, such as legal, medical, or research-based content.

Limited reasoning and autonomy
LLMs lack the ability to plan or make decisions beyond the immediate prompt. They cannot execute multi-step tasks, adapt strategies based on feedback, or act independently. Each task requires explicit human instruction, restricting their usefulness in dynamic or goal-driven applications.

How Do LLM and Agentic AI Approaches to a Task Differ?
One important thing to understand is that LLMs only produce the output we request from them. For instance, suppose you want to set up a new development environment for a Python web app using FastAPI, Docker, and PostgreSQL.

An LLM like GPT-4 can generate step-by-step instructions or even write the required Dockerfile, docker-compose.yml, and setup scripts. Once you have these, you may still need to debug and adjust them manually for your environment.

On the other hand, Agentic AI can actively plan, test, and iterate using tools and reasoning loops. Here’s how Agentic AI could break down and finish the task when you ask it to set up the environment:

1. Reads your goal and creates a plan: generate the Docker setup, install dependencies, start containers, and run a health check.
2. Uses connected tools, like a shell executor, file system access, and an environment monitor, to run commands.
3. Diagnoses any error by reading the error log, searches for the fix, updates the script, and retries, all autonomously.
4. Documents the environment for future use.
So, an Agentic AI system can handle the task end to end, requiring little or no manual intervention.
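The plan-execute-diagnose-retry loop described above can be sketched as a small driver around a shell executor. This is a minimal illustration, not a real agent framework: propose_fix stands in for the LLM call that would read the error log and rewrite the failing command, and its hard-coded repair is a placeholder.

```python
import subprocess

def run_step(command):
    """Execute one shell command and capture its result (the 'shell executor' tool)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.returncode, result.stderr

def propose_fix(command, error_log):
    """Placeholder for the LLM call that reads the error and rewrites the command."""
    if "command not found" in error_log:
        return f"echo 'would install the missing tool, then rerun: {command}'"
    return command

def run_with_retries(plan, max_attempts=3):
    """Execute each planned step, diagnosing failures and retrying autonomously."""
    for step in plan:
        command = step
        for _ in range(max_attempts):
            code, err = run_step(command)
            if code == 0:
                break  # step succeeded, move on to the next one
            command = propose_fix(command, err)  # adapt the command and retry
        else:
            raise RuntimeError(f"step failed after {max_attempts} attempts: {step}")
```

In a real agentic system, the plan itself would also come from the model, and the diagnose step would involve searching documentation or reading files rather than a single string check.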

What is an AI Agent and Agentic AI?
An agent is a decision-making component that uses an LLM to determine which actions to take and in what order, based on user input and the tools available.
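That decide-then-act cycle can be sketched in a few lines. In this toy example the tool registry and the choose_action heuristic are assumptions; in a real agent, choose_action would be an LLM call that returns the tool name and its argument.

```python
# Toy tool registry: each tool maps an argument string to a result string.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool: evaluate arithmetic
    "echo": lambda text: text,                   # toy tool: pass text through
}

def choose_action(user_input):
    """Stand-in for the LLM's decision: pick a tool and its argument."""
    if any(ch.isdigit() for ch in user_input):
        return "calculator", user_input
    return "echo", user_input

def agent(user_input):
    """Decide which tool to use, run it, and return the observation."""
    tool_name, argument = choose_action(user_input)
    return TOOLS[tool_name](argument)
```

The key point is the separation of concerns: the LLM reasons about *which* action fits the request, while the tools actually perform it.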