## What You'll Build
A production n8n AI agent workflow that:
- Triggers on incoming webhook (support ticket, form submission, etc.)
- Uses the AI Agent node with GPT-4o or Claude
- Calls custom tools: HTTP Request (external API), Code (JavaScript logic), database query
- Handles errors with fallback paths
- Responds asynchronously for long-running tasks
- Deploys on self-hosted n8n with queue mode for production scale
## Prerequisites
- n8n self-hosted (v1.30+) or n8n Cloud account
- OpenAI or Anthropic API key
- Basic familiarity with n8n's workflow editor
- Docker installed (for self-hosted setup)
## Overview
n8n's AI Agent node wraps LangChain agents in a visual interface. Under the hood, it runs a ReAct agent loop — the model reasons, picks a tool, observes the result, and repeats. This means concepts from agentic workflows and tool calling apply directly to n8n agents.
The advantage of n8n for agents: you get all the integration nodes (HTTP, databases, APIs, SaaS) as ready-made tools without writing connector code.
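The reason-act-observe loop the AI Agent node runs can be sketched in a few lines of plain JavaScript. This is an illustration only — `fakeModel`, `tools`, and `runAgent` are hypothetical stand-ins, not n8n APIs; in n8n the model and tool wiring is done visually:

```javascript
// Minimal ReAct-style loop: reason -> act -> observe, until a final answer.
const tools = {
  search_knowledge_base: (q) =>
    q.includes("refund") ? "Refunds within 30 days." : "No results.",
};

// A real chat model decides from the transcript; here one tool call is hard-coded.
function fakeModel(history) {
  const lastObservation = history.find((m) => m.role === "observation");
  if (!lastObservation) {
    return { action: "search_knowledge_base", input: "refund policy" };
  }
  return { finalAnswer: `Per the knowledge base: ${lastObservation.content}` };
}

function runAgent(question, maxIterations = 5) {
  const history = [{ role: "user", content: question }];
  for (let i = 0; i < maxIterations; i++) {
    const step = fakeModel(history);                       // reason
    if (step.finalAnswer) return step.finalAnswer;
    const result = tools[step.action](step.input);         // act
    history.push({ role: "observation", content: result }); // observe
  }
  return "Max iterations reached without an answer.";
}
```

The `maxIterations` cap is the same safety valve the real node exposes: without it, a confused model can loop on tool calls indefinitely.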
## Step 1: Self-Hosted n8n Setup
For production, self-host n8n with queue mode enabled:
```yaml
# docker-compose.yml
version: "3.8"
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=your-domain.com
      - N8N_PROTOCOL=https
      - N8N_PORT=5678
      - WEBHOOK_URL=https://your-domain.com/
      - EXECUTIONS_MODE=queue        # Enable queue mode for long-running jobs
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_TIMEOUT=600       # 10 minute timeout for agent workflows
      - EXECUTIONS_TIMEOUT_MAX=1800  # 30 minute absolute max
    volumes:
      - n8n_data:/home/node/.n8n

  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    restart: always
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    scale: 3  # Run 3 workers for parallel execution

  redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis_data:/data

  postgres:
    image: postgres:15-alpine
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  redis_data:
  postgres_data:
```
```bash
# Start n8n with queue mode
docker compose up -d

# Check worker status
docker compose logs n8n-worker
```
## Step 2: Configuring the AI Agent Node
In the n8n editor, add the AI Agent node. Configure it as follows:
Agent Settings:
- Agent: Tools Agent (recommended for function-calling models such as GPT-4o and Claude); ReAct Agent suits models without native tool calling
- Prompt Type: Define Below (for custom system prompts)
- System Message: Your agent's role and instructions
Example system message for a support agent:
```text
You are a customer support agent for Acme SaaS.
Your goal is to resolve customer issues efficiently.

You have access to tools that let you:
- Search the knowledge base for FAQ answers
- Look up customer account details by email
- Create support tickets for complex issues
- Send notification emails

Guidelines:
- Always search the knowledge base before answering policy questions
- Look up the customer account when they ask about billing or their plan
- Create a ticket if you cannot resolve the issue after 2 attempts
- Be concise — customers want answers, not explanations
```
Model Configuration:
Under "Chat Model," connect an OpenAI or Anthropic credential:
- OpenAI Chat Model node → GPT-4o, temperature: 0
- Anthropic Chat Model node → Claude 3.5 Sonnet, temperature: 0
Memory (optional):
For multi-turn conversations, add a Window Buffer Memory node:
- Window Size: 10 (last 10 messages)
- For production: use Redis-backed memory with the Redis Chat Memory node
## Step 3: Adding Tool Nodes
Connect tool nodes to the AI Agent node's "Tools" input. Each tool node type serves a different purpose.
Tool 1: HTTP Request Tool (external API)
Add a Tool: HTTP Request node:
```json
{
  "name": "search_knowledge_base",
  "description": "Search the support knowledge base. Use for policy questions, how-to guides, and troubleshooting. Input: natural language question.",
  "method": "GET",
  "url": "https://api.your-kb.com/search",
  "queryParameters": {
    "q": "={{ $fromAI('query', 'The search query', 'string') }}",
    "limit": "5"
  },
  "authentication": "Generic Credential Type",
  "headers": {
    "Authorization": "Bearer {{ $credentials.kbApiKey }}"
  }
}
```
The `$fromAI()` function is n8n's way of letting the AI agent fill in parameter values dynamically.
Tool 2: Code Tool (custom JavaScript logic)
Add a Tool: Code node for complex business logic:
```javascript
// Tool: classify_ticket
// Description: Classify a support ticket by category and priority.
// Input: ticket_text (string) - the customer's message
const ticketText = $fromAI('ticket_text', 'The customer message to classify', 'string');

// Keyword lists for category and priority matching
const categories = {
  billing: ['invoice', 'charge', 'payment', 'refund', 'subscription'],
  technical: ['error', 'bug', 'crash', 'not working', 'broken', 'failed'],
  account: ['password', 'login', 'access', 'email', '2fa'],
  feature: ['request', 'suggestion', 'would like', 'can you add'],
};

const priorities = {
  urgent: ['outage', 'down', 'cannot access', 'all users', 'production'],
  high: ['broken', 'failed', 'not working', 'urgent', 'asap'],
  medium: ['slow', 'issue', 'problem', 'not as expected'],
  low: ['question', 'how to', 'documentation', 'suggestion'],
};

const textLower = ticketText.toLowerCase();

// Pick the category with the most keyword hits
let category = 'general';
let maxMatches = 0;
for (const [cat, keywords] of Object.entries(categories)) {
  const matches = keywords.filter(k => textLower.includes(k)).length;
  if (matches > maxMatches) {
    maxMatches = matches;
    category = cat;
  }
}

// Take the first (highest) priority tier with any keyword hit
let priority = 'low';
for (const [prio, keywords] of Object.entries(priorities)) {
  if (keywords.some(k => textLower.includes(k))) {
    priority = prio;
    break;
  }
}

return { category, priority, word_count: ticketText.split(' ').length };
```
Tool 3: PostgreSQL Tool (database lookup)
Add a Tool: Postgres node for account lookups:
```sql
-- Tool: lookup_customer_account
-- Description: Look up customer account by email address. Returns plan, status, and billing info.
-- Input: customer_email (string)
SELECT
  account_id,
  email,
  plan_name,
  account_status,
  billing_date,
  monthly_spend,
  support_tier
FROM customers
WHERE email = '{{ $fromAI("customer_email", "Customer email address", "string") }}'
LIMIT 1;
```
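Because this query interpolates AI-supplied text straight into SQL, it is worth guarding against injection. The strongest fix is a parameterized query; a cheap extra layer is to validate the value in a Code tool before it ever reaches the Postgres tool. A sketch (`safeEmailOrNull` is a hypothetical helper, not an n8n API):

```javascript
// Reject anything that is not a plausible email before it is used in SQL.
// Conservative pattern: no quotes, whitespace, or SQL metacharacters survive.
function safeEmailOrNull(value) {
  const email = String(value).trim().toLowerCase();
  const ok = /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$/.test(email);
  return ok ? email : null;
}
```

In the workflow, only call the database tool when this returns a value; a `null` means the agent supplied something that should never touch the query.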
## Step 4: Webhook Trigger and Async Response
For production, handle long-running agent workflows asynchronously:
```text
[Webhook Trigger]
       |
       v
[Respond to Webhook]   ← Send immediate 202 response with job_id
       |
       v
[AI Agent Node]        ← Runs asynchronously in worker queue
       |
       v
[Switch: Success/Error]
      |          |
      v          v
[HTTP Request]  [HTTP Request]
(callback URL)  (error webhook)
```
Webhook Trigger configuration:
- HTTP Method: POST
- Path: `/support-ticket`
- Response Mode: Using "Respond to Webhook" node
Respond to Webhook node (immediate response):
```json
{
  "statusCode": 202,
  "body": {
    "status": "processing",
    "job_id": "={{ $execution.id }}",
    "message": "Your request is being processed. You will receive a callback when complete.",
    "estimated_time_seconds": 30
  }
}
```
Callback HTTP Request node (after agent completes):
```json
{
  "method": "POST",
  "url": "={{ $json.callback_url }}",
  "body": {
    "job_id": "={{ $execution.id }}",
    "status": "completed",
    "result": "={{ $json.output }}",
    "tools_used": "={{ $json.intermediateSteps.length }}"
  }
}
```
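On the receiving side, the callback consumer only needs to match the `job_id` it was handed in the 202 response and branch on `status`. A minimal sketch of that logic (the function and the `pendingJobs` store are hypothetical, assumed to live in your own service):

```javascript
// Accept a completion callback only if it matches a job we actually submitted.
function handleCallback(pendingJobs, payload) {
  if (!pendingJobs.has(payload.job_id)) {
    return { accepted: false, reason: "unknown job_id" };
  }
  pendingJobs.delete(payload.job_id); // each job completes at most once
  if (payload.status !== "completed") {
    return { accepted: true, result: null, reason: payload.status };
  }
  return { accepted: true, result: payload.result };
}
```

Rejecting unknown `job_id` values also protects the callback endpoint from stray or replayed requests.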
## Step 5: Error Handling
n8n has built-in error handling that works well for agent workflows:
Node-level error handling:
On any tool node, click "Error Output" to add a fallback path:
```text
[HTTP Request Tool: search_kb]
      |               |
      v (success)     v (error)
  [AI Agent]    [Set: error_context]
                      |
                      v
                 [AI Agent]  ← Retry with error context in prompt
```
Workflow-level error trigger:
Add an Error Trigger workflow that catches all failures:
```javascript
// In the Error Trigger workflow - Code node
// The Error Trigger payload nests the error under execution.
const error = $json.execution.error;
const workflowName = $json.workflow.name;
const executionId = $json.execution.id;

// Build a Slack message describing the failure
const slackMessage = {
  text: `Agent workflow failed: ${workflowName}`,
  blocks: [
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: `*Workflow:* ${workflowName}\n*Error:* ${error.message}\n*Execution:* ${executionId}`
      }
    }
  ]
};

return [{ json: slackMessage }];
```
Retry logic with exponential backoff:
Use the Wait node between retry attempts:
- After first failure: Wait 5 seconds
- After second failure: Wait 30 seconds
- After third failure: Route to error handler
Configure this with n8n's Loop Over Items node and a counter variable in workflow static data.
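The retry schedule above reduces to a small lookup you can run in a Code node. A sketch (`backoffDelaySeconds` is a hypothetical helper; in the workflow you would persist the attempt counter in `$getWorkflowStaticData('global')` and feed the returned value to the Wait node):

```javascript
// Map a retry attempt number to the wait (in seconds) before the next try.
// Returns null when retries are exhausted and the error handler should run.
function backoffDelaySeconds(attempt, schedule = [5, 30]) {
  return attempt >= 1 && attempt <= schedule.length ? schedule[attempt - 1] : null;
}
```

Routing on `null` keeps the retry limit in one place instead of scattering it across Switch node conditions.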
## Common Issues and Solutions
Issue: AI Agent node times out on long tasks
Increase `EXECUTIONS_TIMEOUT` in your Docker environment to 600+ seconds. For tasks that regularly take over 2 minutes, implement the async callback pattern from Step 4.
Issue: Agent calls the wrong tool or calls none
Improve tool descriptions. The AI model reads the description to decide when to use a tool. Be explicit: "Use ONLY for customer email lookup. Do NOT use for product questions." Also verify tool names don't conflict with each other.
Issue: Memory fills up and degrades performance
Switch from Window Buffer Memory (keeps raw messages) to Summary Memory (compresses old context). For high-volume workflows, disable memory entirely and pass only the current message — most support queries don't need prior context.
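If you do keep memory, trimming to a sliding window before each agent call keeps the prompt bounded. The idea behind the Window Size setting, as a plain function (a sketch, not an n8n node):

```javascript
// Keep only the most recent `windowSize` messages, mirroring what
// Window Buffer Memory does with its Window Size setting.
function trimToWindow(messages, windowSize = 10) {
  return messages.length <= windowSize ? messages : messages.slice(-windowSize);
}
```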
Issue: n8n execution queue backs up
Scale workers: `docker compose up -d --scale n8n-worker=5`. Monitor queue depth in n8n's execution log. For burst traffic, set `QUEUE_WORKER_TIMEOUT` to prevent workers from holding stalled jobs.
## Production Considerations
Observability: Enable n8n's built-in execution log (kept 90 days on Cloud). For custom metrics, add a Code node at the end of each workflow that sends execution stats to your monitoring system.
Credential security: Store all API keys in n8n credentials, never in workflow nodes directly. Use environment variable credentials for keys that rotate frequently.
Workflow versioning: n8n Cloud keeps workflow history. For self-hosted, export workflows as JSON to version control after each significant change.
Cost management: Track token usage by adding a Code node that logs input/output token counts from the AI Agent node's response metadata. Set hard limits using the agent's `maxIterations` setting.
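The logging Code node can be as small as the sketch below. The per-million-token prices are placeholders (check your provider's current pricing), and the `usage` field names are an assumption about the model node's metadata shape, not a documented n8n contract:

```javascript
// Estimate request cost in USD from token counts; prices are illustrative only.
function estimateCostUSD(usage, pricePerMTokIn = 2.5, pricePerMTokOut = 10) {
  const inputCost = (usage.promptTokens / 1e6) * pricePerMTokIn;
  const outputCost = (usage.completionTokens / 1e6) * pricePerMTokOut;
  return Number((inputCost + outputCost).toFixed(6));
}
```

Logging this figure per execution makes it trivial to alert when a single agent run blows past its expected budget.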
## Next Steps
- Connect n8n to LangFuse for observability
- Build a research agent using n8n's HTTP tools
- Learn human-in-the-loop patterns for agent approval workflows
- Explore tool calling mechanics to write better tool descriptions
- Review agentic RAG to add document search to n8n agents