n8n occupies a unique position in the AI automation landscape. It is a code-optional workflow platform with native LLM integration, making it ideal for teams that need to connect AI reasoning with dozens of existing business tools — CRMs, databases, email, Slack, Google Sheets — without writing integration code from scratch.
These six examples show how to configure n8n AI agent workflows for real business problems. Each describes the node structure, key configuration settings, and the business logic the workflow implements. The configurations are described in terms of n8n's node vocabulary, which you can map directly to the visual workflow builder.
For code-first agent implementations, compare these with LangChain Agent Examples and CrewAI Agent Examples.
Example 1: Inbound Lead Qualification Agent
Use Case: Automatically score and qualify inbound leads from a web form, enrich them with company data, and route high-quality leads to the sales CRM while sending low-quality leads a nurture email.
Node Configuration:
Trigger: Webhook (POST /new-lead)
↓
HTTP Request: Clearbit Enrichment API (enrich company data by email domain)
↓
AI Agent Node (OpenAI GPT-4o-mini)
- System Prompt: "You are a B2B lead qualification specialist. Score this lead
1-10 based on: company size, industry fit, job title seniority, and form
response quality. Return JSON: {score: number, tier: 'hot|warm|cold',
reasoning: string, next_action: string}"
- Input: Merge of form data + enrichment data
↓
Switch Node (branch by tier)
├── hot/warm → HubSpot: Create Contact + Deal (Stage: Qualified)
│ Slack: Post to #sales-alerts channel
└── cold → Gmail: Send nurture email sequence (template: cold-nurture-v3)
Google Sheets: Log to lead tracking sheet
Key Configuration — AI Agent Node:
- Model: gpt-4o-mini (cost-efficient for scoring)
- Temperature: 0 (consistent scoring)
- Output Parser: JSON Schema validation on {score, tier, reasoning, next_action}
- Timeout: 10 seconds
- Retry on Fail: 3 attempts
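The Output Parser's schema check plus the Switch node's routing can be approximated in plain Python. This is a minimal sketch, not n8n's actual parser implementation; the field names come from the system prompt above, and the function names are illustrative.

```python
def validate_lead_score(payload: dict) -> dict:
    """Check the lead-qualification JSON against the prompt's schema:
    {score: number, tier: 'hot|warm|cold', reasoning: string, next_action: string}
    """
    tiers = {"hot", "warm", "cold"}
    score = payload.get("score")
    if not isinstance(score, (int, float)) or not 1 <= score <= 10:
        raise ValueError(f"score must be a number 1-10, got {score!r}")
    if payload.get("tier") not in tiers:
        raise ValueError(f"tier must be one of {tiers}")
    for fld in ("reasoning", "next_action"):
        if not isinstance(payload.get(fld), str) or not payload[fld]:
            raise ValueError(f"{fld} must be a non-empty string")
    return payload

def route_lead(payload: dict) -> str:
    """Mirror of the Switch node: hot/warm go to the CRM, cold to nurture."""
    tier = validate_lead_score(payload)["tier"]
    return "crm" if tier in ("hot", "warm") else "nurture"
```

With Temperature 0 and this validation in place, a malformed model response fails fast and the Retry on Fail setting gets another attempt instead of bad data reaching HubSpot.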
Outcome: Leads are scored and routed within 5 seconds of form submission. Sales reps see only qualified leads, while cold leads enter automated nurture sequences. Expected reduction in manual lead review time: 80%.
Example 2: Customer Support Ticket Triage and Response
Use Case: Classify incoming support tickets, generate a draft response for simple issues, and escalate complex issues to the right team with full context.
Node Configuration:
Trigger: Gmail / Zendesk Webhook (new ticket)
↓
AI Chain Node (Classification)
- Model: gpt-4o-mini
- Prompt: "Classify this support ticket into one of:
[billing, technical_api, technical_ui, account_access, feature_request, other].
Also determine urgency: [critical, high, normal, low].
Output JSON: {category, urgency, summary_50_words}"
↓
Switch Node (branch by category)
├── billing → Stripe: Lookup customer subscription
│ AI Chain: Draft billing response with subscription context
├── technical_api → GitHub: Search recent issues for matching error
│ AI Chain: Draft response with known fixes
├── account_access → Auth0: Lookup account status
│ IF locked → Auto-unlock + draft confirmation email
└── feature_request → Linear: Create feature request issue
Draft: "Thanks, added to backlog" response
↓
Human Review Gate (n8n Wait node)
- Slack: Post draft to #support-review with Approve/Edit/Reject buttons
- Timeout: 30 minutes → auto-send if no response (normal urgency only)
↓
Zendesk: Send response + update ticket status
Key Configuration — Human Review Gate:
- Critical urgency: Skip auto-send, require human approval
- Normal/Low urgency: Auto-send after 30 minutes if no review action
- All responses logged to Google Sheets for quality tracking
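The review gate's decision logic can be sketched as a small pure function. This is an assumption-labeled approximation of what the Wait node branch does, not n8n internals; the action names ("approve", "edit", "reject") and return values are illustrative.

```python
from datetime import timedelta
from typing import Optional

AUTO_SEND_TIMEOUT = timedelta(minutes=30)  # matches the Wait node's timeout

def review_gate_decision(urgency: str, review_action: Optional[str],
                         waited: timedelta) -> str:
    """Decide what happens when the Wait node resumes.

    review_action is "approve", "edit", "reject", or None (nobody clicked
    a button in Slack before the timeout).
    """
    if review_action == "approve":
        return "send"
    if review_action in ("edit", "reject"):
        return "hold_for_human"
    # Timeout path: only normal/low urgency tickets may auto-send.
    if urgency in ("normal", "low") and waited >= AUTO_SEND_TIMEOUT:
        return "send"
    return "hold_for_human"
```

Keeping this logic in an IF/Switch node rather than inside a prompt is deliberate: the auto-send rule is auditable business policy, not something the model should decide.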
Outcome: 60–70% of tickets auto-resolved with drafted responses. Human agents focus on complex cases. For the human review pattern, see Human-in-the-Loop Agent Examples.
Example 3: Automated Content Brief and SEO Outline Generator
Use Case: Accept a target keyword, research competitor content, analyze search intent, and generate a comprehensive SEO content brief ready for writers.
Node Configuration:
Trigger: Google Form submission (keyword + target URL)
↓
HTTP Request: SerpAPI (fetch top 10 results for keyword)
↓
Loop Over Items: For each SERP result
└── HTTP Request: Fetch page HTML
AI Chain: Extract H2/H3 headings, word count, topic coverage
↓
AI Agent Node (GPT-4o)
- System Prompt: "You are an SEO content strategist. Analyze the SERP data
provided and generate a comprehensive content brief that will outrank
current results. Include: title tag, meta description, recommended word count,
H2/H3 outline, key statistics to include, FAQ section ideas, internal
linking suggestions."
- Context: All extracted SERP data (passed as JSON array)
↓
HTTP Request: Ahrefs API (keyword difficulty, monthly volume, related keywords)
↓
AI Chain: Incorporate keyword data into brief, add secondary keyword recommendations
↓
Google Docs: Create new document from template, fill with brief content
Slack: Notify content team with document link
Airtable: Log brief to content calendar
Key Configuration — Main AI Agent Node:
- Model: gpt-4o (quality matters for strategic content briefs)
- Temperature: 0.3 (some creativity, mostly consistent)
- Max Tokens: 4000
- System Context: Include site style guide as static context
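The per-result "extract H2/H3 headings" step in the loop could also be done deterministically before the AI Chain sees the page, which cuts token cost. A stdlib-only sketch (hypothetical helper names, not part of n8n):

```python
from html.parser import HTMLParser
from typing import List, Optional, Tuple

class HeadingExtractor(HTMLParser):
    """Collect H2/H3 heading text from a fetched SERP result page."""
    def __init__(self):
        super().__init__()
        self.headings: List[Tuple[str, str]] = []
        self._current: Optional[str] = None
        self._buffer: List[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "h3"):
            self._current, self._buffer = tag, []

    def handle_data(self, data):
        if self._current:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == self._current:
            self.headings.append((tag, "".join(self._buffer).strip()))
            self._current = None

def extract_outline(html: str) -> List[Tuple[str, str]]:
    parser = HeadingExtractor()
    parser.feed(html)
    return parser.headings
```

In an n8n Code node this would run per loop item; the strategist agent then receives a compact JSON array of outlines rather than raw HTML.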
Outcome: A complete, research-backed SEO content brief generated in 3–5 minutes. Writers receive a Google Doc with everything they need to create ranking content.
Example 4: E-commerce Order Exception Handler
Use Case: Monitor order fulfillment for anomalies — late shipments, payment failures, inventory shortfalls — and take corrective action or notify operations teams automatically.
Node Configuration:
Trigger: Schedule (every 30 minutes)
↓
HTTP Request: E-commerce API (fetch orders with status changes in last 30 min)
↓
Filter Node: Orders with status in [payment_failed, fulfillment_delayed, out_of_stock]
↓
Loop Over Items: For each exception order
↓
AI Chain Node (GPT-4o-mini)
- Prompt: "Classify this order exception and recommend the best corrective action.
Order: {order_details}. Exception: {exception_type}.
Options: [retry_payment, contact_supplier, offer_substitute, issue_refund,
notify_customer, escalate_to_ops].
Output JSON: {action, message_to_customer, internal_note, escalate: bool}"
↓
Switch Node (by recommended action)
├── retry_payment → Stripe: Retry charge, log attempt
├── offer_substitute → Product DB: Find substitute SKU
│ Email: Send substitute offer to customer
├── issue_refund → Stripe: Process refund
│ Email: Send refund confirmation
└── escalate → Slack: Alert #ops-alerts with full context
Zendesk: Create priority ticket
↓
Airtable: Log resolution action + outcome
↓
Daily Schedule Trigger:
↓
AI Chain: Summarize exception patterns from Airtable log
Slack: Post daily exception report to #operations
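The Switch node's branches amount to a dispatch table keyed on the AI's recommended action. A hedged sketch of that routing (handler strings stand in for the Stripe/Email/Slack nodes, and the fail-safe behavior is an assumption, not an n8n default):

```python
def handle_exception_order(order: dict, decision: dict) -> str:
    """Route an exception order using the AI Chain's JSON:
    {action, message_to_customer, internal_note, escalate: bool}
    """
    handlers = {
        "retry_payment":    lambda: f"stripe.retry({order['id']})",
        "offer_substitute": lambda: f"email.substitute_offer({order['id']})",
        "issue_refund":     lambda: f"stripe.refund({order['id']})",
        "escalate_to_ops":  lambda: f"slack.alert({order['id']})",
    }
    # An explicit escalate flag overrides any other recommendation.
    if decision.get("escalate"):
        return handlers["escalate_to_ops"]()
    handler = handlers.get(decision.get("action"))
    if handler is None:
        # Unknown/unsupported action: fail safe by escalating to humans.
        return handlers["escalate_to_ops"]()
    return handler()
```

The fail-safe branch matters: a model occasionally returns an action outside the prompt's allowed list, and silently dropping the order is worse than an extra Slack alert.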
Outcome: Exception resolution time drops from hours to minutes for common patterns. Operations teams focus on genuinely complex cases. See AI Agent E-commerce Examples for more e-commerce automation patterns.
Example 5: Sales Call Intelligence Pipeline
Use Case: Process recorded sales call transcripts to extract action items, update CRM deal fields, and generate coaching feedback for sales managers.
Node Configuration:
Trigger: Webhook from Gong/Chorus (new call recording available)
↓
HTTP Request: Gong API (download transcript)
↓
AI Agent Node (GPT-4o) — Extraction
- Prompt: "Extract from this sales call transcript:
1. All action items with owner and deadline
2. Next steps agreed upon
3. Objections raised and how they were handled
4. Deal stage signals (budget confirmed, decision makers identified, etc.)
5. Sentiment: [positive, neutral, negative] with evidence
Output structured JSON."
↓
AI Chain Node (GPT-4o-mini) — Coaching
- Prompt: "Based on this call analysis, provide 3 specific coaching points
for the sales rep. Focus on: discovery quality, objection handling, next
step commitment. Be specific and constructive."
↓
Parallel Branches (merge after):
├── HubSpot: Update deal properties (stage, next steps, close date)
│ HubSpot: Create tasks for each action item
└── Notion: Create call summary page in sales wiki
Slack: Post coaching feedback to manager's DM (private)
↓
Google Sheets: Append to call quality tracking sheet
Weekly Schedule:
↓
AI Chain: Aggregate weekly patterns from tracking sheet
Slack: Post team coaching summary to #sales-management
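The extraction agent's "structured JSON" is easiest to work with downstream if you pin its shape. A sketch of one possible schema, with field names assumed from the prompt's five extraction targets (n8n itself doesn't require this; it just makes the HubSpot/Notion mapping explicit):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ActionItem:
    description: str
    owner: str
    deadline: Optional[str] = None  # ISO date if one was mentioned on the call

@dataclass
class CallAnalysis:
    """Assumed shape of the extraction agent's output JSON."""
    action_items: List[ActionItem]
    next_steps: List[str]
    objections: List[dict]   # each: {objection, handling}
    deal_signals: List[str]  # e.g. "budget confirmed"
    sentiment: str           # positive | neutral | negative

def parse_analysis(raw: dict) -> CallAnalysis:
    """Tolerantly parse the agent's JSON, defaulting missing sections."""
    return CallAnalysis(
        action_items=[ActionItem(**a) for a in raw.get("action_items", [])],
        next_steps=raw.get("next_steps", []),
        objections=raw.get("objections", []),
        deal_signals=raw.get("deal_signals", []),
        sentiment=raw.get("sentiment", "neutral"),
    )
```

Each `ActionItem` then maps one-to-one onto a HubSpot task in the parallel branch.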
Outcome: Every call automatically documented, CRM kept up to date without manual entry, and managers receive consistent coaching insights. Reps save 15–20 minutes of post-call admin per call.
Example 6: Automated Research Report Generator
Use Case: Accept a research question, search multiple sources, synthesize findings into a structured report, and distribute it to stakeholders.
Node Configuration:
Trigger: Slack Command (/research "question") or Schedule (weekly topics list)
↓
AI Chain: Break question into 4-6 specific sub-questions for parallel research
↓
Loop Over Sub-questions (parallel execution):
└── HTTP Request: Tavily Search API (3 results per sub-question)
HTTP Request: SerpAPI News (recent news on sub-question)
AI Chain: Summarize and extract key facts from search results
↓
Merge Node: Combine all sub-question research summaries
↓
AI Agent Node (GPT-4o) — Report Writing
- System: "You are a research analyst writing an executive briefing.
Use only the provided research summaries as your source. Structure as:
Executive Summary (150 words) → Key Findings → Analysis → Implications →
Sources list. Minimum 1000 words total."
- Temperature: 0.2
↓
Google Docs: Create formatted report document
↓
Switch Node (by trigger type)
├── Slack Command → Reply in thread with Google Docs link
└── Scheduled → Email distribution list
Notion: Archive in research library
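The fan-out/merge pattern at the heart of this workflow is just a parallel map. A minimal sketch of the equivalent in Python (the `research_subquestion` body is a placeholder for the real search + summarize branch):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List

def research_subquestion(subq: str) -> str:
    """Placeholder for the per-sub-question branch:
    Tavily search + SerpAPI news + AI Chain summary."""
    return f"summary for: {subq}"

def fan_out(sub_questions: List[str], max_workers: int = 6) -> List[str]:
    """Research sub-questions in parallel; map() preserves input order,
    which keeps the merged context deterministic for the report writer."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(research_subquestion, sub_questions))
```

Order preservation is the design point: the Merge node's output feeds the report-writing agent, and a stable ordering of research summaries makes report structure reproducible across runs.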
Outcome: Research reports that previously took half a day are generated in 5–8 minutes. The parallel sub-question approach ensures comprehensive coverage without sequential latency stacking.
Choosing the Right n8n AI Workflow Pattern
n8n's visual approach makes it easiest to reason about linear pipelines with branch logic. For workflows with 3–8 nodes and clear integration requirements (CRM + email + Slack + database), n8n delivers results faster than code. The key architectural decision is where to place AI nodes: use them for classification, extraction, and generation tasks, but keep business logic (routing, conditions, data transformation) in native n8n nodes where it's more transparent and auditable.
For workflows requiring complex multi-turn reasoning or stateful agent loops, consider calling out to a Python-based agent via HTTP Request node rather than trying to implement everything in n8n.
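That external agent only needs to expose an HTTP endpoint the HTTP Request node can POST to. A stdlib-only sketch of such a service, with a stubbed agent loop (the port, route shape, and `run_agent` body are all assumptions; in practice `run_agent` would wrap a LangChain or CrewAI agent):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_agent(question: str) -> dict:
    """Placeholder for a stateful, multi-turn Python agent loop."""
    return {"answer": f"processed: {question}", "steps": 1}

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # n8n's HTTP Request node POSTs a JSON body like {"question": ...}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_agent(payload.get("question", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8080), AgentHandler).serve_forever()
```

n8n then treats the whole agent as one opaque node: visual workflow for integrations and routing, Python for the reasoning loop.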
Getting Started
Install n8n via Docker (docker run -it --rm -p 5678:5678 n8nio/n8n) and configure your OpenAI API credentials. The AI Agent node and AI Chain node (for simpler single-step LLM calls) ship with n8n's advanced AI features in recent releases (1.19 and later). For code-first alternatives to these patterns, the LangChain tutorial and CrewAI tutorial cover equivalent functionality in Python.
For understanding what's happening under the hood in these workflows, What is an AI Agent explains the core loop that n8n's AI nodes implement visually.