🤖AI Agents Guide

Your comprehensive resource for understanding, building, and implementing AI Agents.


Intermediate • 16 min read

LangChain Agent with Custom Tools (2026)

Build a production LangChain ReAct agent with custom tools — creating Tool objects from functions, StructuredTool with Pydantic schemas, tool error handling, and a complete customer support agent example with testing. Full Python code included.

By AI Agents Guide Team • March 1, 2026

Table of Contents

  1. What You'll Build
  2. Prerequisites
  3. Overview
  4. Step 1: Creating Tools from Functions
  5. Step 2: StructuredTool with Pydantic Schemas
  6. Step 3: Tool Error Handling
  7. Step 4: Building the ReAct Agent
  8. Step 5: Testing Your Agent and Tools
  9. Common Issues and Solutions
  10. Production Considerations
  11. Next Steps

What You'll Build

A production customer support agent that:

  • Classifies incoming support tickets using a custom classifier tool
  • Searches a knowledge base via a StructuredTool with Pydantic schema
  • Looks up account data from a simulated database
  • Escalates to human agents using a typed escalation tool
  • Handles all tool errors gracefully without crashing

This tutorial uses LangChain's ReAct agent architecture — the most battle-tested pattern for tool-using agents.

Prerequisites

  • Python 3.11+
  • OpenAI API key in .env as OPENAI_API_KEY
  • Familiarity with tool calling concepts

Install the dependencies:

pip install langchain langchain-openai langchain-community pydantic python-dotenv pytest

Overview

LangChain's tool ecosystem centers on the BaseTool abstraction. Every tool has a name, description, and either a _run method or a wrapped function. The description is critical — the LLM reads it to decide when and how to call the tool.

The ReAct agent (Reason + Act) iterates through: observe the user's request → reason about what tool to call → call the tool → observe the result → reason again → repeat until done.
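
That control flow can be sketched in plain Python. This is a schematic only: `react_loop` and `fake_reason` are hypothetical stand-ins (the stub replaces the LLM's reasoning step), and the real loop lives inside LangChain's AgentExecutor.

```python
from typing import Callable

def react_loop(
    question: str,
    tools: dict[str, Callable[[str], str]],
    reason: Callable[[str, list[str]], tuple[str, str]],
    max_iterations: int = 6,
) -> str:
    """Schematic ReAct loop: reason about the next action, act, observe, repeat."""
    observations: list[str] = []
    for _ in range(max_iterations):
        action, action_input = reason(question, observations)  # reason
        if action == "final_answer":
            return action_input
        observations.append(tools[action](action_input))       # act, then observe
    return "Stopped: iteration limit reached."

# Stub "LLM": search once, then answer from what it observed.
def fake_reason(question: str, observations: list[str]) -> tuple[str, str]:
    if not observations:
        return ("search", question)
    return ("final_answer", f"Based on the search: {observations[-1]}")

tools = {"search": lambda q: "Refunds are available within 30 days of purchase."}
print(react_loop("What is the refund policy?", tools, fake_reason))
```

The `max_iterations` cap mirrors the same safeguard you will set on AgentExecutor in Step 4.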

Step 1: Creating Tools from Functions

The @tool decorator is the fastest way to create a tool from an existing function:

from langchain.tools import tool
from typing import Optional

@tool
def search_knowledge_base(query: str) -> str:
    """Search the support knowledge base for answers to customer questions.

    Use this tool when a customer asks about product features, pricing,
    troubleshooting steps, or company policies. The query should be a
    natural language question or keywords.

    Returns a list of relevant articles with their titles and content.
    """
    # Simulated knowledge base — replace with your vector store
    kb = {
        "refund policy": "Refunds are available within 30 days of purchase for unused subscriptions.",
        "billing cycle": "Billing occurs on the 1st of each month. Pro-rated charges apply for upgrades.",
        "api rate limits": "Free plan: 100 requests/day. Pro plan: 10,000 requests/day. Enterprise: unlimited.",
        "password reset": "Click 'Forgot Password' on the login page. Reset link expires in 24 hours.",
    }

    results = []
    query_lower = query.lower()
    for topic, content in kb.items():
        if any(word in query_lower for word in topic.split()):
            results.append(f"**{topic.title()}**: {content}")

    return "\n\n".join(results) if results else "No relevant articles found for this query."

Note: The docstring IS the tool description. Write it for the LLM, not for human developers.
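
Since the simulated knowledge base matches on raw keywords, it is worth seeing how loose that rule is. A standalone replica of the matching logic (illustration only; `matching_topics` is a hypothetical helper) shows that any shared topic word counts as a hit:

```python
kb_topics = ["refund policy", "billing cycle", "api rate limits", "password reset"]

def matching_topics(query: str) -> list[str]:
    """Replicates the tutorial's rule: a topic matches if any of its words appear in the query."""
    q = query.lower()
    return [t for t in kb_topics if any(word in q for word in t.split())]

print(matching_topics("what is your refund policy"))
print(matching_topics("privacy policy"))  # still matches 'refund policy' via the shared word 'policy'
```

That false positive is harmless in a demo, but it is exactly why the code comment above says to replace the dict with a vector store in production.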

Step 2: StructuredTool with Pydantic Schemas

For tools with multiple parameters or complex validation, use StructuredTool:

from langchain.tools import StructuredTool
from pydantic import BaseModel, Field, field_validator
from typing import Literal
import json


class AccountLookupInput(BaseModel):
    """Input for looking up customer account information."""
    identifier: str = Field(
        description="Customer email address or account ID (format: ACC-XXXXX)"
    )
    fields: list[str] = Field(
        default=["plan", "status", "billing_date"],
        description="List of fields to retrieve. Options: plan, status, billing_date, usage, tickets"
    )

    @field_validator("identifier")
    @classmethod
    def validate_identifier(cls, v: str) -> str:
        v = v.strip().lower()
        if not ("@" in v or v.startswith("acc-")):
            raise ValueError("identifier must be an email or account ID starting with ACC-")
        return v


def lookup_account(identifier: str, fields: list[str]) -> str:
    """Internal function — not exposed directly."""
    # Simulated database
    accounts = {
        "customer@example.com": {
            "plan": "Pro",
            "status": "active",
            "billing_date": "2026-04-01",
            "usage": "7,432 API calls this month",
            "tickets": ["TICKET-001: Resolved 2026-02-15", "TICKET-002: Open"],
        }
    }

    account = accounts.get(identifier)
    if not account:
        return f"No account found for '{identifier}'. Verify the email or account ID."

    result = {field: account.get(field, "N/A") for field in fields}
    return json.dumps(result, indent=2)


account_tool = StructuredTool.from_function(
    func=lookup_account,
    name="lookup_account",
    description=(
        "Look up customer account information by email or account ID. "
        "Use this when a customer asks about their plan, billing, usage, or ticket history. "
        "Always look up the account before making recommendations about plan changes."
    ),
    args_schema=AccountLookupInput,
    handle_tool_error=True,  # Return errors as strings instead of raising
)


class EscalationInput(BaseModel):
    customer_email: str = Field(description="Customer's email address")
    priority: Literal["low", "medium", "high", "urgent"] = Field(
        description="Ticket priority. Use 'urgent' for billing issues or SLA violations."
    )
    category: Literal["billing", "technical", "account", "general"] = Field(
        description="Issue category for routing"
    )
    summary: str = Field(
        description="Clear 1-2 sentence summary of the issue for the human agent",
        max_length=500
    )
    context: str = Field(
        description="Full context including what was already tried",
        max_length=2000
    )


def escalate_to_human(
    customer_email: str,
    priority: str,
    category: str,
    summary: str,
    context: str
) -> str:
    """Create an escalation ticket and return the ticket ID."""
    import random
    ticket_id = f"ESC-{random.randint(10000, 99999)}"

    # In production: call your ticketing system API here
    print(f"[ESCALATION] {ticket_id}: {priority} priority {category} ticket for {customer_email}")
    print(f"Summary: {summary}")

    return (
        f"Escalation ticket {ticket_id} created successfully. "
        f"A human agent will contact {customer_email} within "
        f"{'1 hour' if priority == 'urgent' else '24 hours'}."
    )


escalation_tool = StructuredTool.from_function(
    func=escalate_to_human,
    name="escalate_to_human",
    description=(
        "Create a support ticket for human agent review. Use this when: "
        "(1) the customer has asked for a human, "
        "(2) the issue involves billing disputes over $100, "
        "(3) you've attempted 2+ solutions without resolution, "
        "(4) the issue is a security concern. "
        "Do NOT escalate for questions answerable by the knowledge base."
    ),
    args_schema=EscalationInput,
    handle_tool_error=True,
)
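
Conceptually, a StructuredTool parses and validates the LLM's raw arguments against args_schema before the wrapped function ever runs. A stdlib-only sketch of that validate-then-dispatch flow (`validated_call` is a hypothetical simplification; the real class does far more):

```python
def validated_call(raw_args: dict) -> str:
    """Sketch of schema validation before dispatch, mirroring AccountLookupInput's rule."""
    identifier = str(raw_args.get("identifier", "")).strip().lower()
    if "@" not in identifier and not identifier.startswith("acc-"):
        # Same rule as the Pydantic field_validator above
        return "Validation error: identifier must be an email or an ACC- account ID."
    fields = raw_args.get("fields", ["plan", "status", "billing_date"])
    return f"dispatch lookup_account(identifier={identifier!r}, fields={fields})"

print(validated_call({"identifier": "Customer@Example.com", "fields": ["plan"]}))
print(validated_call({"identifier": "12345"}))  # rejected before dispatch
```

The payoff of putting validation in the schema rather than the function: bad arguments are caught (and surfaced to the agent as errors) before any database or API call happens.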

Step 3: Tool Error Handling

LangChain's handle_tool_error parameter accepts three forms:

from langchain.tools import StructuredTool

def my_function(query: str) -> str:
    """Placeholder tool body for the examples below. Substitute your own function."""
    raise NotImplementedError

# Option 1: Boolean — return the exception message as a string
tool_with_bool = StructuredTool.from_function(
    func=my_function,
    handle_tool_error=True,
)

# Option 2: Static string — return this on any error
tool_with_string = StructuredTool.from_function(
    func=my_function,
    handle_tool_error="Tool temporarily unavailable. Try again or use an alternative approach.",
)

# Option 3: Callable — custom error formatting
def format_tool_error(error: Exception) -> str:
    if "timeout" in str(error).lower():
        return "Request timed out. The service may be slow. Try again with a simpler query."
    if "rate limit" in str(error).lower():
        return "Rate limit reached. Wait 30 seconds before retrying."
    if "not found" in str(error).lower():
        return f"Resource not found: {error}. Verify the input data is correct."
    return f"Tool error: {str(error)}. Consider an alternative approach."

tool_with_handler = StructuredTool.from_function(
    func=my_function,
    handle_tool_error=format_tool_error,
)

For defensive error handling inside tool functions:

from langchain.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="Natural language search query")

class RobustSearchTool(BaseTool):
    name: str = "robust_search"
    description: str = "Search with automatic retry and graceful degradation."
    args_schema: type[BaseModel] = SearchInput

    def _run(self, query: str, **kwargs) -> str:
        """Always returns a string — never raises."""
        try:
            result = self._execute_search(query)
            if not result:
                return f"No results found for '{query}'. Try broader search terms."
            return result
        except TimeoutError:
            return "Search timed out. Try a shorter, more specific query."
        except ConnectionError:
            return "Search service unavailable. Use knowledge base instead."
        except Exception as e:
            # Log the real error, return safe message to agent
            print(f"[ERROR] Search tool failed: {e}")
            return "Search failed unexpectedly. Proceed with available information."

    def _execute_search(self, query: str) -> str:
        # actual implementation
        raise NotImplementedError
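
Transient failures (timeouts, dropped connections) often succeed on a retry, so a thin retry wrapper pairs well with the never-raise rule above. A stdlib-only sketch, assuming the tool body is an ordinary callable (`with_retries` and `flaky_search` are hypothetical names):

```python
import time
from typing import Callable

def with_retries(fn: Callable[[str], str], attempts: int = 3,
                 delay: float = 0.0) -> Callable[[str], str]:
    """Retry transient errors; on final failure, return a string instead of raising."""
    def wrapped(query: str) -> str:
        for attempt in range(1, attempts + 1):
            try:
                return fn(query)
            except (TimeoutError, ConnectionError) as e:
                if attempt == attempts:
                    return f"Tool failed after {attempts} attempts: {e}"
                time.sleep(delay)  # back off before the next attempt
        return "unreachable"
    return wrapped

state = {"calls": 0}
def flaky_search(query: str) -> str:
    state["calls"] += 1
    if state["calls"] == 1:
        raise TimeoutError("slow upstream")
    return f"result for {query}"

print(with_retries(flaky_search)("refund policy"))  # succeeds on the second attempt
```

Note that only transient exception types are retried; a validation error should fail immediately so the agent can correct its input.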

Step 4: Building the ReAct Agent

from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.memory import ConversationSummaryMemory
from langchain_core.prompts import ChatPromptTemplate


def build_support_agent() -> AgentExecutor:
    """Build the customer support agent with all tools."""
    llm = ChatOpenAI(
        model="gpt-4o",
        temperature=0,  # Deterministic for support tasks
    )

    tools = [
        search_knowledge_base,  # @tool decorated function
        account_tool,           # StructuredTool
        escalation_tool,        # StructuredTool
    ]

    # Custom prompt with support-specific instructions
    prompt = ChatPromptTemplate.from_messages([
        ("system", """You are a customer support agent for Acme SaaS.
Your goal: resolve customer issues completely on first contact.

Guidelines:
- Always search the knowledge base before answering policy questions
- Always look up the customer account before discussing billing or plan details
- Escalate when: customer requests human, billing dispute > $100, 2+ failed attempts
- Be concise: customers want solutions, not explanations of what you're doing
- If you escalate, include everything the human agent will need

{tools}

Use this format:
Question: the input question you must answer
Thought: your reasoning
Action: the action to take — must be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (repeat as needed)
Thought: I now have enough information
Final Answer: the complete response to the customer
"""),
        ("placeholder", "{chat_history}"),  # filled in by ConversationSummaryMemory
        ("human", "{input}"),
        ("assistant", "{agent_scratchpad}"),
    ])

    # Conversation memory (summarizes old context)
    memory = ConversationSummaryMemory(
        llm=llm,
        memory_key="chat_history",
        return_messages=True,
        max_token_limit=1000,
    )

    agent = create_react_agent(llm, tools, prompt)

    return AgentExecutor(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=True,
        max_iterations=6,           # Prevent runaway loops
        max_execution_time=60,      # 60 second timeout
        handle_parsing_errors=True, # Don't crash on malformed outputs
        return_intermediate_steps=True,  # For tracing/debugging
    )

Step 5: Testing Your Agent and Tools

Test tools in isolation before testing the full agent:

import json

import pytest

# In a real project, import the tools and agent builder from the earlier steps,
# for example (module name is illustrative):
# from support_agent import (
#     AccountLookupInput, lookup_account, search_knowledge_base, build_support_agent,
# )


# --- Unit Tests for Individual Tools ---

class TestAccountLookupTool:

    def test_valid_email_lookup(self):
        result = lookup_account("customer@example.com", ["plan", "status"])
        data = json.loads(result)
        assert data["plan"] == "Pro"
        assert data["status"] == "active"

    def test_unknown_email_returns_not_found(self):
        result = lookup_account("unknown@test.com", ["plan"])
        assert "No account found" in result

    def test_invalid_identifier_raises_validation_error(self):
        from pydantic import ValidationError
        with pytest.raises(ValidationError):
            AccountLookupInput(identifier="not-an-email-or-id", fields=["plan"])

    def test_fields_subset_works(self):
        result = lookup_account("customer@example.com", ["plan"])
        data = json.loads(result)
        assert "plan" in data
        assert "status" not in data  # Only requested fields


class TestKnowledgeBaseTool:

    def test_finds_refund_policy(self):
        result = search_knowledge_base.invoke({"query": "refund policy"})
        assert "30 days" in result.lower()

    def test_returns_not_found_for_unknown_topics(self):
        result = search_knowledge_base.invoke({"query": "quantum computing support"})
        assert "No relevant articles found" in result


# --- Integration Test for Full Agent ---

class TestSupportAgent:

    @pytest.fixture
    def agent(self):
        return build_support_agent()

    def test_resolves_refund_question(self, agent):
        result = agent.invoke({"input": "What is your refund policy?"})
        assert "30 days" in result["output"].lower()
        # Verify knowledge base was searched
        tool_names = [s[0].tool for s in result["intermediate_steps"]]
        assert "search_knowledge_base" in tool_names

    def test_looks_up_account_for_billing_question(self, agent):
        result = agent.invoke({
            "input": "I need to check my billing date. My email is customer@example.com"
        })
        tool_names = [s[0].tool for s in result["intermediate_steps"]]
        assert "lookup_account" in tool_names

    def test_escalates_when_requested(self, agent):
        result = agent.invoke({
            "input": "I want to speak to a human agent about a billing issue. "
                     "My email is customer@example.com and I was charged twice."
        })
        tool_names = [s[0].tool for s in result["intermediate_steps"]]
        assert "escalate_to_human" in tool_names
        assert "ESC-" in result["output"]  # Ticket ID returned

Common Issues and Solutions

Issue: Agent produces malformed Action/Action Input and crashes

Set handle_parsing_errors=True on AgentExecutor. For persistent issues, add explicit formatting examples to your prompt or switch to the structured chat agent (create_structured_chat_agent), which uses JSON for tool calls.

Issue: Agent calls the same tool in a loop

Add explicit stopping conditions: "If a tool returns the same result twice in a row, stop and report what you found." Also set max_iterations to a reasonable limit (5-8 for most support tasks).
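
Beyond prompt instructions, you can detect repeats programmatically. A sketch (stdlib only; `detect_repeat` is a hypothetical helper, not a LangChain API) that flags when the same tool/input pair recurs:

```python
def detect_repeat(steps: list[tuple[str, str]]) -> bool:
    """True if any (tool, input) pair appears twice, which usually signals a loop."""
    seen = set()
    for tool_name, tool_input in steps:
        key = (tool_name, tool_input.strip().lower())
        if key in seen:
            return True
        seen.add(key)
    return False

steps = [("search_knowledge_base", "refund policy"),
         ("lookup_account", "customer@example.com"),
         ("search_knowledge_base", "refund policy")]
print(detect_repeat(steps))  # True
```

In practice you would run this over result["intermediate_steps"] (with the tool name and input extracted from each step) and bail out or re-prompt when it fires.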

Issue: Tool descriptions are too long and eat context

Trim tool descriptions to under 150 words each. The most important information is when to use the tool and what it returns — not implementation details.
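
A quick audit helper (hypothetical, stdlib only) can flag oversized descriptions before they reach production:

```python
def audit_descriptions(tools: dict[str, str], max_words: int = 150) -> list[str]:
    """Return the names of tools whose descriptions exceed the word budget."""
    return [name for name, desc in tools.items()
            if len(desc.split()) > max_words]

descriptions = {
    "search_knowledge_base": "Search the support knowledge base for answers.",
    "verbose_tool": "word " * 200,  # 200 words, over budget
}
print(audit_descriptions(descriptions))  # ['verbose_tool']
```

Run it over {t.name: t.description for t in tools} as a unit test so a bloated description fails CI instead of silently eating context.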

Production Considerations

Tracing: Integrate Langfuse or LangSmith for end-to-end agent tracing. Both capture tool calls, latencies, and token usage per run.

Streaming: Use AgentExecutor.astream_events() for real-time streaming to your frontend — customers see partial responses rather than waiting for the full agent loop.

Async: For high-concurrency support applications, use AgentExecutor.ainvoke() instead of invoke() and run inside an asyncio event loop.
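
The concurrency benefit is easy to see with a stub in place of the executor (`fake_ainvoke` is hypothetical and just simulates I/O-bound agent work):

```python
import asyncio

async def fake_ainvoke(ticket: str) -> str:
    """Stand-in for AgentExecutor.ainvoke: simulates I/O-bound agent work."""
    await asyncio.sleep(0.01)
    return f"resolved: {ticket}"

async def handle_batch(tickets: list[str]) -> list[str]:
    # All tickets run concurrently instead of one after another
    return list(await asyncio.gather(*(fake_ainvoke(t) for t in tickets)))

results = asyncio.run(handle_batch(["T-1", "T-2", "T-3"]))
print(results)
```

With the real executor, swap fake_ainvoke for agent.ainvoke({"input": ...}) and add a semaphore to cap concurrent LLM calls.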

Next Steps

  • Explore agent state for managing conversation context
  • Add human-in-the-loop approval for sensitive actions
  • Set up a full testing pipeline for your agent
  • Review LangChain in our directory for ecosystem tools
  • Build a complete customer support agent with LangGraph

Related Tutorials

How to Create a Meeting Scheduling AI Agent

Build an autonomous AI agent to handle meeting scheduling, calendar checks, and bookings intelligently. This step-by-step tutorial covers Python implementation with LangChain, Google Calendar integration, and advanced features like conflict resolution for efficient automation.

How to Manage Multiple AI Agents

Master managing multiple AI agents with this in-depth tutorial. Learn orchestration, state sharing, parallel execution, and scaling using LangGraph and custom tools. From basics to production-ready swarms for complex tasks.

How to Train an AI Agent on Your Own Data

Master training AI agents on custom data with three methods: context stuffing, RAG using vector databases, and fine-tuning. This beginner-to-advanced guide includes step-by-step code examples, pitfalls, and best practices to build knowledgeable agents for your specific needs.
