
LangChain for AI Agents: Complete Tutorial

📖 12 min read · 2,218 words · Updated Mar 26, 2026


AI agents are autonomous software entities that can perceive their environment, make decisions, and take actions to achieve specific goals. They represent a significant advancement in how we interact with and build intelligent systems. If you’re looking to understand the core components and practical implementation of AI agents, start with The Complete Guide to AI Agents in 2026. LangChain stands out as a powerful framework for constructing these agents, providing the necessary abstractions and tools to integrate large language models (LLMs) with external data sources and computational capabilities. This tutorial will guide you through building AI agents using LangChain, focusing on the practical aspects and underlying mechanisms.

Understanding LangChain Agent Fundamentals

At its core, a LangChain agent orchestrates the interaction between an LLM and a set of tools. The LLM acts as the agent’s “brain,” deciding which tool to use and with what input, based on the current objective and observed state. Tools are functions that perform specific tasks, such as searching the web, querying a database, or executing code.

The main components of a LangChain agent are:

  • LLM: The language model responsible for reasoning and decision-making.
  • Tools: Functions the agent can call to interact with the external world.
  • Prompt: Instructions given to the LLM, guiding its behavior and tool usage.
  • Agent Executor: The runtime that manages the agent’s loop, passing observations and actions between the LLM and the tools.

Consider an agent designed to answer questions about current events. It might have a tool to perform web searches and another to summarize articles. The LLM, given a query, would decide whether to search the web, execute the search, observe the results, and then potentially summarize them before formulating a final answer.
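To make this loop concrete, here is a minimal hand-rolled sketch of the act-observe cycle in plain Python. It uses no LangChain at all; `stub_llm`, `web_search`, and `run_agent` are invented names, and the "LLM" is a hard-coded stub standing in for a real model call.

```python
# Illustrative agent loop, no LangChain. All names here are hypothetical.
def web_search(query: str) -> str:
    # Stand-in for a real search tool.
    return f"Top result for {query!r}: Paris is the capital of France."

TOOLS = {"web_search": web_search}

def stub_llm(objective: str, observations: list) -> dict:
    # A real agent would call an LLM here; this stub hard-codes one decision:
    # search first, then finish with the last observation.
    if not observations:
        return {"action": "web_search", "input": objective}
    return {"action": "finish", "answer": observations[-1]}

def run_agent(objective: str) -> str:
    observations = []
    while True:  # the executor loop: decide, act, observe, repeat
        decision = stub_llm(objective, observations)
        if decision["action"] == "finish":
            return decision["answer"]
        tool = TOOLS[decision["action"]]
        observations.append(tool(decision["input"]))

print(run_agent("capital of France"))
```

The real `AgentExecutor` plays the role of `run_agent` here: it feeds tool outputs back to the LLM until the model signals it has a final answer.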

Setting Up Your LangChain Environment

Before building an agent, ensure you have LangChain installed and access to an LLM. We’ll use OpenAI’s models for this tutorial, but LangChain supports many others.


pip install langchain langchain-openai

You’ll need to set your OpenAI API key as an environment variable:


import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

For simpler examples, you can instantiate the LLM directly:


from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

The `temperature` parameter controls the randomness of the LLM’s output. A value of 0 makes the output more deterministic, which suits agent decision-making, where consistent tool selection matters more than creative variety.

Building Your First LangChain Agent: A Simple Search Agent

Let’s construct an agent that can answer questions using web search. This requires a web search tool. LangChain provides integrations for various search providers. We’ll use the `TavilySearchResults` tool, which is often a good default.

First, install the necessary package and set your Tavily API key:


pip install langchain-community tavily-python

Then, in Python, set the key:

import os
os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_API_KEY"

Now, let’s define the tools and instantiate the agent.


from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain import hub
from langchain_community.tools.tavily_search import TavilySearchResults

# 1. Initialize LLM
llm = ChatOpenAI(temperature=0, model="gpt-4o")

# 2. Define Tools
search_tool = TavilySearchResults(max_results=3) # Limit to 3 search results
tools = [search_tool]

# 3. Pull the ReAct Agent Prompt from LangChain Hub
# The ReAct pattern (Reasoning and Acting) is a common approach for agents.
# It prompts the LLM to generate a thought process before taking an action.
prompt = hub.pull("hwchase17/react")

# 4. Create the Agent
# create_react_agent constructs an agent that uses the ReAct prompt.
agent = create_react_agent(llm, tools, prompt)

# 5. Create the Agent Executor
# The AgentExecutor is responsible for running the agent's decision loop.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 6. Run the Agent
response = agent_executor.invoke({"input": "What is the capital of France and what is its current population?"})
print(response["output"])

In this example:

  • We initialize `ChatOpenAI` as our LLM.
  • We create a `TavilySearchResults` tool.
  • We pull a standard ReAct prompt from LangChain Hub via `hub.pull("hwchase17/react")`. This prompt guides the LLM to think step-by-step (Thought) and then decide on an Action and Action Input.
  • `create_react_agent` combines the LLM, tools, and prompt into an agent.
  • `AgentExecutor` runs the agent, handling the sequence of observations and actions. The `verbose=True` flag is crucial for debugging, as it shows the agent’s internal thought process.

The output with `verbose=True` will show the agent’s “Thought,” “Action,” “Action Input,” and “Observation” at each step, demonstrating its reasoning.
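For the query above, a verbose trace looks roughly like this (illustrative only; the exact wording, population figures, and step count vary by model and LangChain version):

```text
> Entering new AgentExecutor chain...
Thought: I need to find the capital of France and its current population.
Action: tavily_search_results_json
Action Input: capital of France current population
Observation: [{"content": "Paris is the capital of France ..."}]
Thought: I now know the final answer.
Final Answer: The capital of France is Paris, with a population of roughly 2.1 million.
> Finished chain.
```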

Adding More Complex Tools and Capabilities

Agents become truly powerful when they can interact with various systems. Let’s extend our agent with a tool to perform calculations.


from langchain.agents import AgentExecutor, create_react_agent
from langchain import hub
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.tools import tool # Decorator for simple tool creation

# 1. Initialize LLM
llm = ChatOpenAI(temperature=0, model="gpt-4o")

# 2. Define Tools
search_tool = TavilySearchResults(max_results=3)

@tool
def calculator(expression: str) -> str:
    """Evaluates a mathematical expression and returns the result."""
    # Note: eval() executes arbitrary Python and is unsafe on untrusted input;
    # restrict it or use a dedicated parser in production.
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error evaluating expression: {e}"

tools = [search_tool, calculator]

# 3. Pull the ReAct Agent Prompt
prompt = hub.pull("hwchase17/react")

# 4. Create the Agent
agent = create_react_agent(llm, tools, prompt)

# 5. Create the Agent Executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 6. Run the Agent with a new query
response = agent_executor.invoke({"input": "What is 12345 times 67890? Also, find out who won the last World Cup."})
print(response["output"])

Here, we used the `@tool` decorator to quickly define a `calculator` tool. The decorator infers the tool’s description from the docstring and its argument schema from the type hints, which the LLM uses to understand how to call it. The agent will now decide whether to use `calculator` or `TavilySearchResults` based on the input query. This modularity is key to building sophisticated AI agents. For more advanced multi-agent coordination, frameworks like CrewAI Multi-Agent Systems Guide offer powerful abstractions.
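Because `eval()` executes arbitrary Python, a production calculator tool should restrict what it accepts. One sketch of a safer alternative, using only the standard library’s `ast` module (the `safe_eval` name is ours, not part of LangChain):

```python
import ast
import operator

# Map supported AST operator nodes to the corresponding arithmetic functions.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str):
    """Evaluate arithmetic only; reject names, calls, and everything else."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_eval("12345 * 67890"))  # 838102050
```

Dropping this function into the `calculator` tool body in place of `eval` keeps the tool’s interface identical while closing the code-execution hole.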

Agent Types and Their Use Cases

LangChain offers several agent types, each with different strengths and underlying decision-making mechanisms:

  • `create_react_agent` (ReAct Agent): Uses the ReAct (Reasoning and Acting) pattern. The LLM generates a “Thought” (internal monologue) and then an “Action” (tool call) and “Action Input.” This is a highly effective and widely used approach for general-purpose agents.
  • `create_json_agent`: Designed for agents that interact with APIs expecting JSON inputs and outputs. The LLM is prompted to generate JSON-formatted tool calls.
  • `create_openai_functions_agent`: Uses OpenAI’s function calling capabilities. The LLM directly outputs a structured object indicating the tool to call and its arguments, which can be more reliable than parsing text. This is often the preferred choice when using OpenAI models.

Choosing the right agent type depends on your specific use case and the LLM you’re employing. For most general tasks with OpenAI models, `create_openai_functions_agent` is an excellent starting point due to its reliability. Let’s look at an example using it.


from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults

# 1. Initialize LLM
llm = ChatOpenAI(temperature=0, model="gpt-4o")

# 2. Define Tools
search_tool = TavilySearchResults(max_results=3)

@tool
def current_time(format_str: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Returns the current date and time in the specified format."""
    import datetime
    return datetime.datetime.now().strftime(format_str)

tools = [search_tool, current_time]

# 3. Define the Agent Prompt as a ChatPromptTemplate
# The agent_scratchpad placeholder holds intermediate tool calls and observations.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant. Use the available tools to answer questions."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# 4. Create the Agent using create_openai_functions_agent
agent = create_openai_functions_agent(llm, tools, prompt)

# 5. Create the Agent Executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 6. Run the Agent
response = agent_executor.invoke({"input": "What is the current time and what were the main news headlines yesterday?"})
print(response["output"])

Notice the `MessagesPlaceholder(variable_name="agent_scratchpad")` in the prompt. This is where the agent’s intermediate thoughts, actions, and observations are injected into the conversation history, allowing the LLM to maintain context.

Managing Agent State and Memory

For agents to perform complex, multi-turn interactions, they need memory. LangChain provides various memory components to store and retrieve conversational history. The `AgentExecutor` can be configured with memory.


from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.memory import ConversationBufferWindowMemory # Import memory

# 1. Initialize LLM
llm = ChatOpenAI(temperature=0, model="gpt-4o")

# 2. Define Tools
search_tool = TavilySearchResults(max_results=3)
tools = [search_tool]

# 3. Define the Agent Prompt with a chat_history placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant. Use the available tools to answer questions. Keep your responses concise."),
    MessagesPlaceholder(variable_name="chat_history"), # Placeholder for chat history
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# 4. Create Memory
# ConversationBufferWindowMemory keeps a window of the last 'k' interactions.
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=3)

# 5. Create the Agent
agent = create_openai_functions_agent(llm, tools, prompt)

# 6. Create the Agent Executor with memory
agent_executor = AgentExecutor(
 agent=agent,
 tools=tools,
 verbose=True,
 memory=memory # Add memory here
)

# 7. Run Multi-turn Conversation
print("--- First Turn ---")
response1 = agent_executor.invoke({"input": "What is the capital of Japan?"})
print(response1["output"])

print("\n--- Second Turn ---")
response2 = agent_executor.invoke({"input": "What is its population?"}) # Referring to "its" (Japan's capital)
print(response2["output"])

print("\n--- Third Turn ---")
response3 = agent_executor.invoke({"input": "And what about Brazil's capital?"})
print(response3["output"])

By introducing `ConversationBufferWindowMemory` and including `MessagesPlaceholder(variable_name="chat_history")` in the prompt, the agent can now maintain context across multiple turns. The LLM sees the previous messages, allowing it to understand references like “its population” in the second turn. This is critical for building engaging and functional AI agents, especially for use cases like Building a Customer Service AI Agent.
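The windowing behavior itself can be sketched in a few lines of plain Python. `WindowMemory` below is an illustrative class of ours, not the LangChain implementation; it just shows what “keep the last k exchanges” means:

```python
from collections import deque

class WindowMemory:
    """Illustrative sketch of windowed chat memory: keep the last k exchanges."""
    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)  # each item is a (human, ai) pair

    def save(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))  # oldest exchange falls off when full

    def history(self) -> list:
        # Flatten exchanges into the message list injected into the prompt.
        msgs = []
        for human, ai in self.turns:
            msgs += [("human", human), ("ai", ai)]
        return msgs

mem = WindowMemory(k=2)
mem.save("What is the capital of Japan?", "Tokyo.")
mem.save("What is its population?", "About 14 million.")
mem.save("And Brazil's capital?", "Brasília.")
# With k=2, only the last two exchanges survive; the Japan question is gone.
print(mem.history())
```

This also illustrates the trade-off of window memory: references to anything older than `k` exchanges are silently lost, which is why LangChain offers other memory types (e.g., summary memory) for longer conversations.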

Advanced Agent Customization and Tool Development

While LangChain provides many built-in tools, you’ll often need to create custom tools to interact with your specific applications, databases, or internal APIs.

Custom tools can be simple functions decorated with `@tool`, as shown with `calculator` and `current_time`. For more complex scenarios, you might define a class that inherits from `BaseTool` or use `StructuredTool` for precise argument definition.


from langchain.tools import BaseTool
from pydantic import BaseModel, Field
from typing import Type

# Define input schema for the custom tool
class GetDatabaseRecordInput(BaseModel):
    table_name: str = Field(description="Name of the database table")
    record_id: int = Field(description="ID of the record to retrieve")

class GetDatabaseRecordTool(BaseTool):
    name: str = "get_database_record"
    description: str = "Useful for retrieving a specific record from a database table by its ID."
    args_schema: Type[BaseModel] = GetDatabaseRecordInput

    def _run(self, table_name: str, record_id: int) -> str:
        """Simulates fetching a record from a database."""
        print(f"DEBUG: Fetching record {record_id} from table {table_name}")
        if table_name == "users" and record_id == 1:
            return "User record: {'id': 1, 'name': 'Alice', 'email': 'alice@example.com'}"
        elif table_name == "products" and record_id == 101:
            return "Product record: {'id': 101, 'name': 'Laptop', 'price': 1200}"
        return f"Record not found for table '{table_name}' with ID '{record_id}'"

    async def _arun(self, table_name: str, record_id: int) -> str:
        """Asynchronous version of the tool (optional)."""
        # Implement asynchronous logic here
        raise NotImplementedError("Asynchronous call not implemented for this tool.")

# Add this tool to your agent's tool list
# tools.append(GetDatabaseRecordTool())

This `GetDatabaseRecordTool` demonstrates how to define a tool with a specific input schema using Pydantic, providing the LLM with clear instructions on how to use it. This level of control is essential for integrating agents into enterprise systems. When comparing frameworks, consider how each handles these custom tool integrations; see Comparing Top 5 AI Agent Frameworks 2026 for more context.
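The value of the Pydantic `args_schema` is that malformed tool calls are caught before your `_run` method ever executes. A minimal sketch of that validation behavior using Pydantic directly (assuming `pydantic` is installed; the schema mirrors the one above):

```python
from pydantic import BaseModel, Field, ValidationError

class GetDatabaseRecordInput(BaseModel):
    table_name: str = Field(description="Name of the database table")
    record_id: int = Field(description="ID of the record to retrieve")

# Well-formed arguments validate, and the string "1" is coerced to int 1.
ok = GetDatabaseRecordInput(table_name="users", record_id="1")
print(ok.record_id)

# Malformed arguments raise ValidationError; the agent surfaces the error
# message back to the LLM so it can retry with corrected arguments.
try:
    GetDatabaseRecordInput(table_name="users", record_id="not-a-number")
except ValidationError as e:
    print("rejected:", len(e.errors()), "error(s)")
```

This round trip, validate, reject with a readable error, let the LLM retry, is a large part of why schema-backed tools are more robust than free-form text parsing.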

Key Takeaways

  • LangChain agents use an LLM to orchestrate tool usage, enabling them to interact with the external world beyond their training data.
  • The core components are the LLM, Tools, a Prompt, and the Agent Executor.
  • The ReAct pattern (Reasoning and Acting) is a solid approach, often facilitated by prompts from LangChain Hub.
  • `create_openai_functions_agent` is generally recommended for OpenAI models due to its reliable structured output.
  • Memory components (e.g., `ConversationBufferWindowMemory`) are crucial for multi-turn conversations and contextual awareness.
  • Custom tools are easy to create using the `@tool` decorator or by inheriting from `BaseTool` for more complex interactions, allowing agents to integrate with proprietary systems.
  • Always use `verbose=True` during development to inspect the agent’s internal thought process and debug effectively.

Conclusion

LangChain provides a thorough and flexible framework for building AI agents. By understanding its core components—LLMs, tools, prompts, and executors—you can construct agents capable of complex reasoning and interaction. From simple search agents to sophisticated systems with memory and custom integrations, LangChain offers the building blocks to bring your AI agent ideas to fruition. As the field of AI agents continues to evolve, mastering frameworks like LangChain will be increasingly valuable for developing intelligent, autonomous applications.

🕒 Originally published: February 14, 2026

Written by Jake Chen

AI technology analyst covering agent platforms since 2021. Tested 40+ agent frameworks. Regular contributor to AI industry publications.

