
What Are AI Agents?


AI agents are software systems that use large language models (LLMs) to autonomously perceive their environment, reason about goals, and take actions — going far beyond simple question-and-answer chatbots. This lesson defines what agents are, where they sit on the autonomy spectrum, how the core agent loop works, and where agents are being used in the real world.


Agents vs Chatbots

A common source of confusion is the difference between a chatbot and an agent. They both use LLMs, but they differ fundamentally in what they can do.

Feature          Chatbot                        Agent
---------------  -----------------------------  -----------------------------------------
Interaction      Single-turn or multi-turn Q&A  Autonomous multi-step task execution
Tool access      None or minimal                Rich tool library (APIs, databases, code)
Decision-making  Responds to user input         Plans, decides, and acts independently
Loop             User → Model → User            Perceive → Think → Act (repeat)
State            Stateless or simple history    Working memory, long-term memory
Error handling   Returns "I don't know"         Retries, re-plans, uses fallback tools

Key insight: A chatbot waits for you to ask a question. An agent pursues a goal.


The Autonomy Spectrum

Not every AI system needs full autonomy. Think of it as a spectrum:

 Low Autonomy                                         High Autonomy
 ──────────────────────────────────────────────────────────────────
 │  Autocomplete  │  Chatbot  │  Copilot  │  Agent  │  Multi-Agent │
 │  (suggestions) │  (Q&A)    │  (assist) │  (solo) │  (team)      │
 ──────────────────────────────────────────────────────────────────

Level         Description                                         Example
------------  --------------------------------------------------  ---------------------------------
Autocomplete  Predicts the next word or code token                GitHub Copilot inline suggestions
Chatbot       Responds to user queries in conversation            ChatGPT, Claude chat
Copilot       Assists a human; suggests actions, human approves   Coding assistants, email drafts
Agent         Autonomously executes multi-step tasks with tools   Research agent, coding agent
Multi-Agent   Multiple agents collaborate on complex tasks        Software dev team simulation

Choosing the right autonomy level depends on risk tolerance, task complexity, and how much human oversight you need.


The Core Agent Loop

Every agent, regardless of complexity, follows a fundamental loop:

          ┌──────────────┐
          │   PERCEIVE   │  ← Observe the environment
          └──────┬───────┘    (user input, tool results, memory)
                 │
                 ▼
          ┌──────────────┐
          │    THINK     │  ← Reason about what to do next
          └──────┬───────┘    (LLM generates plan / next action)
                 │
                 ▼
          ┌──────────────┐
          │     ACT      │  ← Execute an action
          └──────┬───────┘    (call a tool, return a response)
                 │
                 ▼
          ┌──────────────┐
          │   OBSERVE    │  ← Check the result
          └──────┬───────┘    (did it work? do we need more steps?)
                 │
                 └───────────── Loop back to PERCEIVE

Minimal Agent Loop in Python

from openai import OpenAI

client = OpenAI()

def simple_agent(goal: str, max_steps: int = 5) -> str:
    """A minimal agent loop."""
    messages = [
        {"role": "system", "content": (
            "You are an agent. To complete your goal, you can either:\n"
            "1. Respond with TOOL: <tool_name>(<args>) to use a tool\n"
            "2. Respond with DONE: <final_answer> when finished"
        )},
        {"role": "user", "content": f"Goal: {goal}"},
    ]

    for step in range(max_steps):
        # THINK — ask the LLM what to do
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
        )
        reply = response.choices[0].message.content or ""  # content can be None

        # ACT — check if the agent wants to use a tool or is done
        if reply.startswith("DONE:"):
            return reply[5:].strip()

        if reply.startswith("TOOL:"):
            # execute_tool (not shown) parses and runs the requested tool call
            tool_result = execute_tool(reply[5:].strip())
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Tool result: {tool_result}"})
        else:
            messages.append({"role": "assistant", "content": reply})

    return "Agent reached max steps without completing the goal."

Tip: The loop continues until the agent signals completion or hits a step limit. Always include a maximum step count to prevent runaway loops.
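The loop above leaves execute_tool undefined. One minimal way to implement it is to parse the "name(args)" string and dispatch against a registry of tool functions. This is only a sketch: the tool names (echo, get_time) and the flat string-argument convention are illustrative, not part of any real API.

```python
import re

# Illustrative tools — a real agent would register search, file, or API calls.
def get_time(_args: str) -> str:
    return "12:00"

def echo(args: str) -> str:
    return args

TOOLS = {"get_time": get_time, "echo": echo}

def execute_tool(call: str) -> str:
    """Parse a 'tool_name(args)' string and dispatch to the registry."""
    match = re.fullmatch(r"(\w+)\((.*)\)", call.strip(), re.DOTALL)
    if match is None:
        return f"Error: could not parse tool call: {call!r}"
    name, args = match.group(1), match.group(2)
    if name not in TOOLS:
        return f"Error: unknown tool {name!r}"
    try:
        return TOOLS[name](args)
    except Exception as exc:
        # Return errors as text so the model can observe and recover
        return f"Error: {exc}"
```

Note that errors are returned as strings rather than raised: feeding failures back into the conversation is what lets the agent retry or re-plan on the next iteration.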


What Makes a Good Agent?

An effective agent has several key properties:

Property         Description
---------------  ----------------------------------------------------------
Goal-directed    Pursues a clear objective, not just responding to prompts
Tool-equipped    Has access to external tools (search, code, APIs)
Memory-aware     Maintains context across steps and sessions
Self-correcting  Detects errors and retries with a different approach
Bounded          Operates within safety limits (max steps, allowed actions)
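The "Bounded" property can be enforced mechanically rather than left to the model. A minimal sketch, assuming a hypothetical per-run guard that tracks a step budget and a tool allow-list (the class name and limits here are illustrative):

```python
class AgentGuard:
    """Enforce a step budget and a tool allow-list for one agent run."""

    def __init__(self, max_steps: int, allowed_tools: set[str]):
        self.max_steps = max_steps
        self.allowed_tools = allowed_tools
        self.steps_taken = 0

    def authorize(self, tool_name: str) -> None:
        """Raise if the next action would exceed the agent's bounds."""
        if self.steps_taken >= self.max_steps:
            raise RuntimeError(f"step budget of {self.max_steps} exhausted")
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"tool {tool_name!r} is not on the allow-list")
        self.steps_taken += 1
```

Calling guard.authorize(name) before every tool execution inside the loop turns the safety limits from a convention into a hard guarantee.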

Real-World Agent Examples

Agents are already deployed across many domains:

Coding Agents

User: "Fix the failing test in tests/test_auth.py"

Agent steps:
  1. Read the test file                    → TOOL: read_file("tests/test_auth.py")
  2. Run the test to see the error         → TOOL: run_command("pytest tests/test_auth.py")
  3. Read the source file under test       → TOOL: read_file("src/auth.py")
  4. Identify the bug and write a fix      → TOOL: edit_file("src/auth.py", ...)
  5. Re-run the test to confirm            → TOOL: run_command("pytest tests/test_auth.py")
  6. DONE: "Fixed the assertion error in the login function."

Research Agents

User: "Write a market analysis of electric vehicles in Europe."

Agent steps:
  1. Search for recent EV market data      → TOOL: web_search("EV market Europe 2025")
  2. Search for regulatory information     → TOOL: web_search("EU EV regulations 2025")
  3. Search for competitor analysis        → TOOL: web_search("top EV manufacturers Europe")
  4. Synthesise findings into a report     → THINK: combine all search results
  5. DONE: "## EV Market Analysis..."

Customer Service Agents

Step  Action                             Tool
----  ---------------------------------  --------------------------
1     Greet customer and identify issue  —
2     Look up customer's order           query_database(order_id)
3     Check return policy                search_knowledge_base()
4     Process refund if eligible         process_refund(order_id)
5     Confirm resolution with customer   —

Agent Frameworks Overview

Several frameworks simplify agent development:

Framework        Strengths                                Language
---------------  ---------------------------------------  ----------
LangGraph        Graph-based workflows, state management  Python
CrewAI           Multi-agent teams with role definitions  Python
AutoGen          Multi-agent conversation patterns        Python
Semantic Kernel  Enterprise integration, .NET support     Python/C#
Custom           Full control, no framework overhead      Any

Tip: Start with a custom agent loop (as shown above) before adopting a framework. Understanding the fundamentals makes frameworks much easier to use effectively.


When to Use an Agent (and When Not To)

Use an Agent When                            Avoid Agents When
-------------------------------------------  -----------------------------------------------
The task requires multiple steps             A single LLM call suffices
Tools are needed (search, code, APIs)        The answer is in the LLM's knowledge
The path to the answer is not predetermined  The workflow is fixed and predictable
Error recovery and retries matter            Low latency is critical
Tasks are open-ended or exploratory          The task is simple classification or extraction

Summary

  • An AI agent is a system that autonomously perceives, thinks, and acts to pursue a goal.
  • Agents differ from chatbots in their ability to use tools, plan multi-step solutions, and self-correct.
  • The agent loop (perceive → think → act → observe) is the fundamental pattern underlying all agent architectures.
  • Agents sit on an autonomy spectrum from autocomplete to multi-agent teams.
  • Real-world agents are used in coding, research, customer service, and many other domains.
  • Always include safety bounds (max steps, allowed tools) to prevent runaway execution.
  • Start with a simple custom loop before reaching for a framework.