🤖AI Agents Guide
Advanced · 22 min read

Build a Customer Support Agent (LangGraph)

Build a production customer support AI agent with LangGraph — triage agent to classify tickets, knowledge base RAG for FAQ retrieval, response generator, human escalation logic, and Zendesk integration. Complete LangGraph implementation with full Python code.

By AI Agents Guide Team • March 1, 2026

Table of Contents

  1. What You'll Build
  2. Prerequisites
  3. Overview
  4. Step 1: Define the Graph State
  5. Step 2: Triage Node
  6. Step 3: Knowledge Base RAG Node
  7. Step 4: Account Lookup and Response Generation
  8. Step 5: Escalation Node and Graph Assembly
  9. Step 6: FastAPI Webhook Integration
  10. Common Issues and Solutions
  11. Production Considerations
  12. Next Steps

What You'll Build

A production customer support agent system with:

  • Triage node: Classifies ticket category and priority using structured LLM output
  • Knowledge base RAG node: Retrieves relevant FAQ and documentation chunks
  • Response generator node: Writes a complete, personalized response
  • Human escalation node: Flags tickets for human review with full context
  • Zendesk integration: Webhook receiver and ticket response poster
  • LangGraph state management: Persistent conversation state across all nodes

The system handles the full support lifecycle from ticket intake to resolution.

Prerequisites

pip install langgraph langchain langchain-openai langchain-community \
    chromadb fastapi uvicorn httpx pydantic python-dotenv
  • Python 3.11+
  • OpenAI API key
  • Basic familiarity with agentic workflows and agent state
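The Zendesk steps later in this tutorial read credentials from environment variables, and python-dotenv is already in the install list. A sample `.env` to load at startup (all values are placeholders):

```shell
OPENAI_API_KEY=sk-...
ZENDESK_SUBDOMAIN=yourcompany
ZENDESK_EMAIL=agent@yourcompany.com
ZENDESK_API_TOKEN=...
```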

Overview

The support agent is a LangGraph graph with five nodes connected by conditional edges:

[START] → [triage] → [retrieve_kb] → [account_lookup] → [generate_response] → [END]
                   ↘ [escalate] → [END]

Triage routes straight to escalation when a human is required; account_lookup runs only when the classification flags requires_account_lookup, otherwise retrieval flows directly into response generation.

State flows through the graph, accumulating ticket context, retrieved documents, and the generated response at each step.

Step 1: Define the Graph State

# support_agent/state.py
from langgraph.graph import MessagesState
from typing import Annotated, Literal, Optional
from pydantic import BaseModel, Field
import operator


class TicketClassification(BaseModel):
    """Output from the triage node."""
    category: Literal["billing", "technical", "account", "general"] = Field(
        description="Primary category of the support request"
    )
    priority: Literal["low", "medium", "high", "urgent"] = Field(
        description="Priority based on business impact and urgency"
    )
    summary: str = Field(
        description="One-sentence summary of the customer's issue",
        max_length=200
    )
    requires_account_lookup: bool = Field(
        description="True if resolving this requires customer account data"
    )
    requires_human: bool = Field(
        description="True if this should immediately go to a human agent"
    )
    routing_reason: str = Field(
        description="Brief explanation of why this category was chosen"
    )


class SupportAgentState(MessagesState):
    """Complete state for the support agent graph.

    MessagesState is a TypedDict, so fields here are annotations only;
    defaults are not supported. Supply initial values when invoking the graph.
    """
    # Ticket information
    ticket_id: str
    customer_email: str
    customer_name: Optional[str]
    customer_plan: Optional[str]  # e.g., "Pro", "Enterprise"
    raw_ticket_text: str

    # Triage results
    classification: Optional[TicketClassification]

    # Knowledge base retrieval
    retrieved_docs: Annotated[list[str], operator.add]  # Accumulated across node updates
    retrieval_queries: list[str]

    # Account data
    account_data: Optional[dict]

    # Response
    draft_response: Optional[str]
    final_response: Optional[str]

    # Escalation
    escalation_ticket_id: Optional[str]
    escalation_notes: Optional[str]

    # Tracking
    nodes_visited: Annotated[list[str], operator.add]
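The `Annotated[..., operator.add]` fields use LangGraph reducers: when a node returns a partial state update, the reducer merges the update into the existing value instead of overwriting it. The merge itself is plain list concatenation, as a quick sketch shows:

```python
import operator

# LangGraph applies the reducer to combine the existing value with a node's update:
existing = ["triage"]
update = ["retrieve_kb"]
merged = operator.add(existing, update)
print(merged)  # ['triage', 'retrieve_kb']
```

Fields without a reducer (like final_response) are simply overwritten by the latest node that writes them.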

Step 2: Triage Node

# support_agent/nodes/triage.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from ..state import SupportAgentState, TicketClassification

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(TicketClassification)

TRIAGE_PROMPT = ChatPromptTemplate.from_messages([
    ("system", """You are a support ticket triage specialist.
Classify incoming support tickets accurately.

Category definitions:
- billing: Payment issues, invoices, subscription changes, refunds, charges
- technical: Bugs, errors, product not working, API issues, data problems
- account: Login, password, permissions, team management, SSO
- general: Feature questions, how-to, pricing inquiry, general feedback

Priority rules:
- urgent: Production outage, data loss, security breach, cannot access at all
- high: Major feature broken, multiple users affected, paying customer blocked
- medium: Single user issue, workaround available, non-critical feature
- low: General questions, feature requests, minor UI issues

Requires human = true when: customer explicitly asks, billing dispute > $200,
security concern, 3rd failed resolution attempt mentioned.

Examples of correct classification:
- "I was charged twice" → billing/high/requires_account_lookup=True
- "Getting 500 error on /api/export" → technical/medium
- "How do I add team members?" → general/low
- "My entire team is locked out" → account/urgent/requires_human=True
"""),
    ("human", """Classify this support ticket:

Customer Email: {customer_email}
Customer Plan: {customer_plan}
Ticket Text: {ticket_text}
"""),
])


async def triage_node(state: SupportAgentState) -> dict:
    """Classify the incoming ticket and set routing."""
    classification = await structured_llm.ainvoke(
        TRIAGE_PROMPT.format_messages(
            customer_email=state["customer_email"],
            customer_plan=state.get("customer_plan", "Unknown"),
            ticket_text=state["raw_ticket_text"],
        )
    )

    return {
        "classification": classification,
        "nodes_visited": ["triage"],
    }
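`with_structured_output` returns a validated `TicketClassification` instance rather than raw text, so downstream nodes can use attribute access instead of parsing. A sketch of the shape you get back, using a hand-written sample rather than real model output (the model class is repeated here so the snippet is self-contained):

```python
from typing import Literal
from pydantic import BaseModel, Field

# Mirror of the TicketClassification model from Step 1.
class TicketClassification(BaseModel):
    category: Literal["billing", "technical", "account", "general"]
    priority: Literal["low", "medium", "high", "urgent"]
    summary: str = Field(max_length=200)
    requires_account_lookup: bool
    requires_human: bool
    routing_reason: str

# Hand-written example of what triage might produce for "I was charged twice":
sample = TicketClassification(
    category="billing",
    priority="high",
    summary="Customer reports a duplicate charge on their subscription.",
    requires_account_lookup=True,
    requires_human=False,
    routing_reason="Payment issue involving a specific charge",
)
print(sample.category, sample.priority)  # billing high
```

Pydantic also rejects any category or priority outside the declared Literal values, which keeps the routing functions in Step 5 safe from typos in model output.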

Step 3: Knowledge Base RAG Node

# support_agent/nodes/retrieve.py
import chromadb
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from ..state import SupportAgentState

# Initialize vector store
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# In production: load from persistent Chroma or Pinecone
def get_vectorstore() -> Chroma:
    """Get or create the knowledge base vector store."""
    client = chromadb.PersistentClient(path="./chroma_db")
    return Chroma(
        client=client,
        collection_name="support_kb",
        embedding_function=embeddings,
    )


async def populate_kb_with_sample_docs():
    """Populate KB with sample FAQ documents."""
    vectorstore = get_vectorstore()

    docs = [
        Document(
            page_content="Refunds are available within 30 days of purchase. To request a refund, email billing@acme.com with your invoice number. Refunds are processed within 5-7 business days.",
            metadata={"category": "billing", "topic": "refunds"},
        ),
        Document(
            page_content="If you see a 500 error on the API, first check our status page at status.acme.com. Common causes: rate limit exceeded, invalid API key, or temporary service issue. Check your API key has the required permissions.",
            metadata={"category": "technical", "topic": "api_errors"},
        ),
        Document(
            page_content="To add team members: go to Settings > Team > Invite Members. Enter their email and select a role (Admin, Member, or Viewer). They'll receive an invitation email. Pro plan: up to 25 members. Enterprise: unlimited.",
            metadata={"category": "account", "topic": "team_management"},
        ),
        Document(
            page_content="Password reset: click 'Forgot Password' on the login page. Enter your email. Check your inbox for the reset link (expires in 24 hours). If you use SSO, contact your administrator instead.",
            metadata={"category": "account", "topic": "password_reset"},
        ),
    ]

    vectorstore.add_documents(docs)
    return vectorstore


async def retrieve_kb_node(state: SupportAgentState) -> dict:
    """Retrieve relevant knowledge base documents."""
    classification = state["classification"]
    ticket_text = state["raw_ticket_text"]

    vectorstore = get_vectorstore()

    # Build targeted queries based on classification
    queries = [ticket_text[:500]]  # Primary query from ticket

    # Add category-specific query
    if classification:
        queries.append(
            f"{classification.category} {classification.summary}"
        )

    # Retrieve and deduplicate
    all_docs = []
    seen_content = set()

    for query in queries[:3]:  # Limit to 3 queries
        results = vectorstore.similarity_search(
            query,
            k=3,
            filter={"category": classification.category} if classification else None,
        )
        for doc in results:
            if doc.page_content not in seen_content:
                all_docs.append(doc.page_content)
                seen_content.add(doc.page_content)

    return {
        "retrieved_docs": all_docs,
        "retrieval_queries": queries,
        "nodes_visited": ["retrieve_kb"],
    }
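The dedup-across-queries loop above matters because multiple queries often return overlapping chunks; only the first occurrence of each chunk is kept, preserving retrieval order. The same logic in isolation:

```python
# Standalone version of the deduplication pass used in retrieve_kb_node.
def dedupe_keep_order(result_batches: list[list[str]]) -> list[str]:
    seen: set[str] = set()
    deduped: list[str] = []
    for batch in result_batches:
        for doc in batch:
            if doc not in seen:
                deduped.append(doc)
                seen.add(doc)
    return deduped

batches = [["refund policy", "api errors"], ["refund policy", "team limits"]]
print(dedupe_keep_order(batches))  # ['refund policy', 'api errors', 'team limits']
```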

Step 4: Account Lookup and Response Generation

# support_agent/nodes/account.py
from ..state import SupportAgentState

async def lookup_account_node(state: SupportAgentState) -> dict:
    """Look up customer account data from CRM/database."""
    email = state["customer_email"]

    # In production: query your CRM or database
    mock_accounts = {
        "customer@example.com": {
            "name": "Alice Johnson",
            "plan": "Pro",
            "status": "active",
            "billing_date": "2026-04-01",
            "open_tickets": 2,
            "account_age_days": 342,
            "mrr": 99.00,
        }
    }

    account_data = mock_accounts.get(email.lower())
    return {
        "account_data": account_data,
        "customer_name": account_data["name"] if account_data else None,
        "customer_plan": account_data["plan"] if account_data else "Unknown",
        "nodes_visited": ["account_lookup"],
    }

# support_agent/nodes/respond.py
import json
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from ..state import SupportAgentState

llm = ChatOpenAI(model="gpt-4o", temperature=0.3)

RESPONSE_PROMPT = ChatPromptTemplate.from_messages([
    ("system", """You are a customer support agent for Acme SaaS.
Write helpful, clear, empathetic responses to customer tickets.

Guidelines:
- Address the customer by name if available
- Acknowledge their issue in the first sentence
- Provide the solution or next steps clearly
- Use numbered steps for multi-step processes
- Keep responses under 300 words
- End with an offer to help further
- Sign off as "Acme Support Team"

Formatting:
- Use plain text, no markdown (this goes into email)
- Short paragraphs for readability
"""),
    ("human", """Generate a support response for this ticket:

**Customer:** {customer_name} ({customer_email}) — Plan: {customer_plan}
**Category:** {category} | **Priority:** {priority}
**Issue Summary:** {summary}

**Customer's Message:**
{ticket_text}

**Relevant Knowledge Base Articles:**
{kb_content}

**Account Information:**
{account_info}
"""),
])


async def generate_response_node(state: SupportAgentState) -> dict:
    """Generate the final support response."""
    classification = state["classification"]
    account_data = state.get("account_data", {})

    kb_content = "\n\n---\n\n".join(state["retrieved_docs"]) if state["retrieved_docs"] else "No relevant articles found."
    account_info = json.dumps(account_data, indent=2) if account_data else "Account not found."

    response = await llm.ainvoke(
        RESPONSE_PROMPT.format_messages(
            customer_name=state.get("customer_name", "there"),
            customer_email=state["customer_email"],
            customer_plan=state.get("customer_plan", "Unknown"),
            category=classification.category if classification else "general",
            priority=classification.priority if classification else "medium",
            summary=classification.summary if classification else state["raw_ticket_text"][:100],
            ticket_text=state["raw_ticket_text"],
            kb_content=kb_content,
            account_info=account_info,
        )
    )

    return {
        "final_response": response.content,
        "nodes_visited": ["generate_response"],
    }

Step 5: Escalation Node and Graph Assembly

# support_agent/nodes/escalate.py
import httpx
import json
from ..state import SupportAgentState


async def escalate_node(state: SupportAgentState) -> dict:
    """Create human escalation ticket in Zendesk."""
    classification = state["classification"]

    # Build context summary for human agent
    escalation_notes = f"""
AUTOMATED TRIAGE SUMMARY
========================
Category: {classification.category if classification else 'Unknown'}
Priority: {classification.priority if classification else 'Unknown'}
AI Assessment: {classification.summary if classification else 'N/A'}
Reason for Escalation: {classification.routing_reason if classification and classification.requires_human else 'Escalation requested'}

CUSTOMER DETAILS
Customer: {state.get('customer_name', 'Unknown')} ({state['customer_email']})
Plan: {state.get('customer_plan', 'Unknown')}

ORIGINAL MESSAGE
{state['raw_ticket_text']}

RETRIEVED KNOWLEDGE BASE ARTICLES
{chr(10).join(state['retrieved_docs'][:2]) if state['retrieved_docs'] else 'None retrieved'}

AGENT NOTES
The AI agent was unable to resolve this ticket and has escalated it for human review.
Please review the above context before responding to the customer.
"""

    # Post to Zendesk (its priority scale is low/normal/high/urgent, so map "medium" to "normal")
    priority = classification.priority if classification else "normal"
    if priority == "medium":
        priority = "normal"

    zendesk_ticket_id = await create_zendesk_ticket(
        customer_email=state["customer_email"],
        subject=f"[{priority.upper()}] {classification.summary[:100] if classification else 'Support Request'}",
        comment=escalation_notes,
        priority=priority,
        tags=["ai-escalated", classification.category if classification else "general"],
    )

    return {
        "escalation_ticket_id": zendesk_ticket_id,
        "escalation_notes": escalation_notes,
        "nodes_visited": ["escalate"],
    }


async def create_zendesk_ticket(
    customer_email: str,
    subject: str,
    comment: str,
    priority: str = "normal",
    tags: list[str] | None = None,
) -> str:
    """Create a Zendesk ticket via API."""
    import os
    zendesk_subdomain = os.environ["ZENDESK_SUBDOMAIN"]
    zendesk_token = os.environ["ZENDESK_API_TOKEN"]
    zendesk_email = os.environ["ZENDESK_EMAIL"]

    payload = {
        "ticket": {
            "subject": subject,
            "comment": {"body": comment},
            "requester": {"email": customer_email},
            "priority": priority,
            "tags": tags or [],
        }
    }

    async with httpx.AsyncClient() as client:
        response = await client.post(
            f"https://{zendesk_subdomain}.zendesk.com/api/v2/tickets",
            json=payload,
            auth=(f"{zendesk_email}/token", zendesk_token),
            timeout=15.0,
        )
        response.raise_for_status()
        return str(response.json()["ticket"]["id"])
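`create_zendesk_ticket` raises on any non-2xx response; in production you will likely want retries for transient 429/5xx errors. A minimal asyncio backoff wrapper (a sketch; the attempt count and delays are illustrative, and `with_retries` is a name introduced here, not a library function):

```python
import asyncio

async def with_retries(coro_factory, attempts: int = 3, base_delay: float = 0.5):
    """Retry an async call with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == attempts - 1:
                raise  # Out of retries: surface the original error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Usage (hypothetical): wrap the Zendesk call made in escalate_node.
# ticket_id = await with_retries(lambda: create_zendesk_ticket(...))
```

In practice you would catch only retryable errors (httpx.HTTPStatusError with 429/5xx codes) rather than bare Exception.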
# support_agent/graph.py
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from .state import SupportAgentState
from .nodes.triage import triage_node
from .nodes.retrieve import retrieve_kb_node
from .nodes.account import lookup_account_node
from .nodes.respond import generate_response_node
from .nodes.escalate import escalate_node


def should_escalate(state: SupportAgentState) -> str:
    """Routing function: escalate or continue to resolution."""
    classification = state.get("classification")
    if classification and classification.requires_human:
        return "escalate"
    return "retrieve_kb"


def needs_account_lookup(state: SupportAgentState) -> str:
    """Routing function: lookup account or skip."""
    classification = state.get("classification")
    if classification and classification.requires_account_lookup:
        return "account_lookup"
    return "generate_response"


def build_support_graph() -> StateGraph:
    """Build the complete support agent graph."""
    workflow = StateGraph(SupportAgentState)

    # Add all nodes
    workflow.add_node("triage", triage_node)
    workflow.add_node("retrieve_kb", retrieve_kb_node)
    workflow.add_node("account_lookup", lookup_account_node)
    workflow.add_node("generate_response", generate_response_node)
    workflow.add_node("escalate", escalate_node)

    # Connect START to triage
    workflow.add_edge(START, "triage")

    # After triage: escalate or retrieve KB
    workflow.add_conditional_edges(
        "triage",
        should_escalate,
        {"escalate": "escalate", "retrieve_kb": "retrieve_kb"},
    )

    # After KB retrieval: check if account lookup needed
    workflow.add_conditional_edges(
        "retrieve_kb",
        needs_account_lookup,
        {"account_lookup": "account_lookup", "generate_response": "generate_response"},
    )

    # After account lookup: always generate response
    workflow.add_edge("account_lookup", "generate_response")

    # Terminal nodes go to END
    workflow.add_edge("generate_response", END)
    workflow.add_edge("escalate", END)

    return workflow


# Compile with memory for multi-turn conversations
memory = MemorySaver()
support_graph = build_support_graph().compile(checkpointer=memory)
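Because the routing functions are plain Python over state dicts, they are easy to smoke-test without invoking any LLM. A sketch, re-declaring `should_escalate` locally and stubbing the classification with SimpleNamespace so the snippet runs standalone:

```python
from types import SimpleNamespace

# Local copy of the routing function from graph.py.
def should_escalate(state: dict) -> str:
    classification = state.get("classification")
    if classification and classification.requires_human:
        return "escalate"
    return "retrieve_kb"

# Stubs standing in for real TicketClassification instances.
human = SimpleNamespace(requires_human=True)
auto = SimpleNamespace(requires_human=False)

print(should_escalate({"classification": human}))  # escalate
print(should_escalate({"classification": auto}))   # retrieve_kb
print(should_escalate({}))                         # retrieve_kb
```

The same pattern covers needs_account_lookup; a handful of such assertions in CI catches routing regressions before they reach customers.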

Step 6: FastAPI Webhook Integration

# support_agent/api.py
from fastapi import FastAPI, BackgroundTasks
from pydantic import BaseModel
from .graph import support_graph

app = FastAPI(title="Support Agent API")


class TicketWebhookPayload(BaseModel):
    ticket_id: str
    customer_email: str
    customer_name: str | None = None
    subject: str
    body: str
    plan: str | None = None


@app.post("/webhook/zendesk")
async def zendesk_webhook(
    payload: TicketWebhookPayload,
    background_tasks: BackgroundTasks,
):
    """Receive Zendesk ticket and process asynchronously."""
    background_tasks.add_task(process_ticket, payload)
    return {"status": "processing", "ticket_id": payload.ticket_id}


async def process_ticket(payload: TicketWebhookPayload) -> dict:
    """Process a support ticket through the agent graph."""
    initial_state = {
        "messages": [],
        "ticket_id": payload.ticket_id,
        "customer_email": payload.customer_email,
        "customer_name": payload.customer_name,
        "customer_plan": payload.plan,
        "raw_ticket_text": f"Subject: {payload.subject}\n\n{payload.body}",
        "retrieved_docs": [],
        "retrieval_queries": [],
        "nodes_visited": [],
    }

    config = {"configurable": {"thread_id": payload.ticket_id}}

    result = await support_graph.ainvoke(initial_state, config=config)

    # Post response back to Zendesk
    if result.get("final_response"):
        await post_zendesk_response(
            ticket_id=payload.ticket_id,
            response=result["final_response"],
        )
    elif result.get("escalation_ticket_id"):
        # Already escalated via escalate_node
        print(f"Ticket {payload.ticket_id} escalated to {result['escalation_ticket_id']}")

    return result


async def post_zendesk_response(ticket_id: str, response: str) -> None:
    """Post agent response as Zendesk ticket comment."""
    import os, httpx
    zendesk_subdomain = os.environ.get("ZENDESK_SUBDOMAIN", "demo")
    async with httpx.AsyncClient() as client:
        resp = await client.put(
            f"https://{zendesk_subdomain}.zendesk.com/api/v2/tickets/{ticket_id}",
            json={"ticket": {
                "comment": {"body": response, "public": True},
                "status": "solved",
            }},
            auth=(
                f"{os.environ['ZENDESK_EMAIL']}/token",
                os.environ["ZENDESK_API_TOKEN"]
            ),
            timeout=15.0,
        )
        resp.raise_for_status()  # Surface auth/permission errors instead of failing silently
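The webhook endpoint above trusts any caller. Zendesk webhooks include an HMAC-SHA256 signature you can verify before processing; a sketch, assuming the signature is the base64-encoded HMAC of timestamp + raw body under your webhook signing secret (confirm the exact header names, e.g. X-Zendesk-Webhook-Signature, against Zendesk's webhook docs):

```python
import base64
import hashlib
import hmac

def verify_webhook_signature(secret: str, raw_body: bytes, timestamp: str, signature: str) -> bool:
    """Return True if signature matches HMAC-SHA256(timestamp + body) under secret."""
    digest = hmac.new(
        secret.encode("utf-8"),
        timestamp.encode("utf-8") + raw_body,
        hashlib.sha256,
    ).digest()
    expected = base64.b64encode(digest).decode("utf-8")
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(expected, signature)
```

In FastAPI you would read the raw body with `await request.body()`, pull the signature and timestamp from the request headers, and reject with a 401 when verification fails.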

Common Issues and Solutions

Issue: Triage classification is wrong for edge cases

Add explicit examples to the triage prompt for your most common misclassified cases. Review triage classifications daily from LangFuse traces and update the few-shot examples. Consider adding a confidence score to the classification and routing low-confidence tickets directly to human agents.

Issue: KB retrieval returns irrelevant documents

Filter on relevance scores. With LangChain's Chroma wrapper, similarity_search_with_relevance_scores returns (document, score) pairs, so you can drop anything scoring below roughly 0.7 (or build a retriever with search_type="similarity_score_threshold" and a score_threshold in search_kwargs). Also improve your KB document structure: shorter, more focused chunks (300-500 tokens) with clear topic labels in metadata.
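Once you have (document, score) pairs, the filtering itself is a one-liner. A sketch assuming scores normalized to [0, 1], higher meaning more relevant:

```python
# Keep only hits at or above the threshold.
def filter_by_relevance(results: list[tuple[str, float]], threshold: float = 0.7) -> list[str]:
    return [doc for doc, score in results if score >= threshold]

hits = [("refund policy", 0.91), ("team limits", 0.42)]
print(filter_by_relevance(hits))  # ['refund policy']
```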

Issue: Graph gets stuck in a loop

LangGraph graphs with memory can re-enter nodes if state is not updated correctly. Ensure every node returns a state update with at least one field changed. Pass {"recursion_limit": 10} in the invocation config (alongside the thread_id) so a runaway graph errors out instead of looping forever.

Production Considerations

State persistence: For multi-day conversations, switch from MemorySaver to PostgresSaver or RedisSaver. This lets you resume conversations across process restarts and handle webhooks that arrive hours apart.
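A sketch of the swap using the langgraph-checkpoint-postgres package; the connection string is a placeholder, and in a long-running service you would keep the checkpointer open for the process lifetime rather than scoping it to a with block:

```python
# Requires: pip install langgraph-checkpoint-postgres psycopg
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:password@localhost:5432/support_agent"

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # Creates the checkpoint tables on first run
    support_graph = build_support_graph().compile(checkpointer=checkpointer)
```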

Human-in-the-loop review: Add a checkpoint before the generate_response node for high-priority tickets. The agent pauses, a human reviews the draft, approves or edits it, then the graph resumes. See human-in-the-loop patterns.

Quality monitoring: Log classification accuracy, KB retrieval relevance (via LLM judge), and response quality scores. Track escalation rate — if it exceeds 20%, your KB is missing content or the triage model needs retraining.
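The escalation-rate alert is straightforward to compute from logged ticket outcomes; a minimal sketch (outcome labels are an assumption, match them to whatever your logging emits):

```python
def escalation_rate(outcomes: list[str]) -> float:
    """Fraction of tickets that ended in escalation."""
    if not outcomes:
        return 0.0
    return outcomes.count("escalated") / len(outcomes)

print(escalation_rate(["resolved", "escalated", "resolved", "resolved"]))  # 0.25
```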

Next Steps

  • Add human-in-the-loop approval for high-stakes responses
  • Implement agent caching for KB retrieval
  • Set up monitoring for production quality tracking
  • Build a research agent using similar patterns
  • Review the LangFuse observability tutorial
