
Integration: Intercom · Intermediate · 10 min read · Setup: 25-35 minutes

How to Integrate AI Agents with Intercom

Step-by-step guide to connecting AI agents with Intercom. Learn how to automate customer support triage, draft AI-powered replies, classify conversations, and escalate complex tickets using LangChain, n8n, and the Intercom REST API.

By AI Agents Guide Team • February 28, 2026

Table of Contents

  1. What AI Agents Can Do With Intercom Access
  2. Setting Up Intercom API Access
  3. Option 1: No-Code with n8n
  4. New Conversation Triage Workflow
  5. Option 2: LangChain with Python
  6. Build Intercom Tools
  7. Support Triage Agent
  8. Webhook Integration for Real-Time Triage
  9. Rate Limits and Best Practices
  10. Next Steps

Intercom is where customer conversations happen — and AI agents connected to Intercom can dramatically reduce first response time, improve routing accuracy, and equip human agents with AI-drafted replies that require minimal editing. For support teams handling high conversation volumes, Intercom AI integration is one of the fastest paths to measurable productivity gains.

This guide covers building an automated triage agent, drafting reply workflows, and setting up webhook-triggered automation.

What AI Agents Can Do With Intercom Access

Conversation Triage

  • Classify new conversations by topic (billing, technical, onboarding, cancellation)
  • Assign priority (urgent, normal, low) based on message content and customer segment
  • Route to the correct team automatically (billing conversations → finance team, bugs → engineering)
  • Add tags for downstream analytics and filtering
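A triage agent is easiest to keep on the rails when the classification it returns has a fixed shape. The sketch below shows one way to validate an LLM's classification output before acting on it; the field names and fallback defaults are illustrative, not part of any Intercom API.

```python
from dataclasses import dataclass

# Allowed values for the triage fields described above
CATEGORIES = {"billing", "technical", "onboarding", "cancellation"}
PRIORITIES = {"urgent", "normal", "low"}


@dataclass
class TriageResult:
    category: str
    priority: str
    summary: str


def parse_triage(raw: dict) -> TriageResult:
    """Validate an LLM classification dict, falling back to safe defaults."""
    category = raw.get("category") if raw.get("category") in CATEGORIES else "technical"
    priority = raw.get("priority") if raw.get("priority") in PRIORITIES else "normal"
    return TriageResult(category, priority, raw.get("summary", ""))
```

Validating before tagging or routing means a malformed model response degrades to a safe default instead of mis-routing a conversation.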

AI-Drafted Replies

  • Generate a draft reply based on the customer's question and your help center content
  • Post the draft as an internal note for human agents to review and send
  • Include relevant help center article links in draft responses
  • Handle common FAQ topics with templated responses

Conversation Management

  • Close resolved conversations automatically when customers confirm resolution
  • Alert team leads when SLA response time is at risk
  • Summarize long conversation threads for agents picking up mid-conversation
  • Flag churn-risk conversations based on message sentiment
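For the churn-risk flag, a production system would use a sentiment model, but even a naive keyword check can surface candidates for human review. The signal list below is purely illustrative:

```python
# Naive churn-risk heuristic -- the keyword list is illustrative, not exhaustive
CHURN_SIGNALS = ("cancel", "refund", "disappointed", "switching to", "unsubscribe")


def flag_churn_risk(message: str) -> bool:
    """Return True when the message contains a churn-risk signal."""
    text = message.lower()
    return any(signal in text for signal in CHURN_SIGNALS)
```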

Setting Up Intercom API Access#

pip install langchain langchain-openai requests python-dotenv

export INTERCOM_ACCESS_TOKEN="your-intercom-access-token"

Get your access token from Intercom Developer Hub → Your App → Configure → Authentication.
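Before wiring up workflows, it's worth confirming the token works. One quick check, sketched below: Intercom's `GET /me` endpoint returns the admin/app the token belongs to, so a 200 response means the token is valid (the `build_headers` helper name is ours, not Intercom's).

```python
import os

import requests

BASE_URL = "https://api.intercom.io"


def build_headers(token: str) -> dict:
    """Standard headers for every Intercom REST call."""
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Intercom-Version": "2.10",
    }


def check_token() -> bool:
    """GET /me identifies the token's owner; a 200 means the token is valid."""
    resp = requests.get(
        f"{BASE_URL}/me",
        headers=build_headers(os.environ["INTERCOM_ACCESS_TOKEN"]),
        timeout=10,
    )
    return resp.status_code == 200
```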


Option 1: No-Code with n8n

New Conversation Triage Workflow

  1. Webhook Trigger: Receive Intercom conversation.user.created events
  2. OpenAI: "Classify this customer message: {category: billing|technical|onboarding|general, priority: urgent|normal|low, summary: ''}"
  3. HTTP Request: Add tags to the Intercom conversation via API
  4. HTTP Request: Assign conversation to appropriate inbox based on category
  5. HTTP Request: Post internal note with AI classification and suggested reply
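The HTTP Request nodes in steps 3-5 boil down to three Intercom calls. The payload builders below sketch the request bodies; the IDs are placeholders for your workspace's values, and the assignment payload in particular should be verified against your Intercom API version.

```python
def tag_payload(tag_id: str, admin_id: str) -> dict:
    """Body for POST /conversations/{id}/tags (step 3)."""
    return {"id": tag_id, "admin_id": admin_id}


def assign_payload(admin_id: str, assignee_id: str) -> dict:
    """Body for POST /conversations/{id}/parts (step 4) -- an assignment part."""
    return {
        "message_type": "assignment",
        "type": "admin",
        "admin_id": admin_id,
        "assignee_id": assignee_id,
    }


def note_payload(admin_id: str, text: str) -> dict:
    """Body for POST /conversations/{id}/reply (step 5) -- an internal admin note."""
    return {
        "message_type": "note",
        "type": "admin",
        "admin_id": admin_id,
        "body": f"<p>{text}</p>",
    }
```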

Option 2: LangChain with Python

Build Intercom Tools

import os
import requests
from langchain.tools import tool
from dotenv import load_dotenv

load_dotenv()

INTERCOM_TOKEN = os.getenv("INTERCOM_ACCESS_TOKEN")
HEADERS = {
    "Authorization": f"Bearer {INTERCOM_TOKEN}",
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Intercom-Version": "2.10"
}
BASE_URL = "https://api.intercom.io"


@tool
def get_conversation(conversation_id: str) -> str:
    """Get full details of an Intercom conversation including messages and customer info."""
    response = requests.get(f"{BASE_URL}/conversations/{conversation_id}", headers=HEADERS)
    conv = response.json()

    messages = conv.get("conversation_parts", {}).get("conversation_parts", [])
    recent_messages = []
    for msg in messages[-5:]:  # Last 5 messages
        author_type = msg.get("author", {}).get("type", "unknown")
        body = msg.get("body", "").replace("<p>", "").replace("</p>", "\n")
        recent_messages.append(f"[{author_type}]: {body[:300]}")

    customer = conv.get("source", {}).get("author", {})
    return f"""Conversation {conversation_id}:
Customer: {customer.get('name', 'Unknown')} ({customer.get('email', 'no email')})
State: {conv.get('state', 'unknown')}
Recent messages:
{chr(10).join(recent_messages)}"""


@tool
def add_internal_note(conversation_id: str, note_text: str) -> str:
    """Add an internal note (visible only to support agents) to an Intercom conversation."""
    data = {
        "message_type": "note",
        "type": "admin",
        "admin_id": os.getenv("INTERCOM_ADMIN_ID"),  # Intercom requires the authoring admin's ID
        "body": f"<p>{note_text}</p>"
    }
    response = requests.post(
        f"{BASE_URL}/conversations/{conversation_id}/reply",
        headers=HEADERS,
        json=data
    )
    return "Internal note added" if response.status_code == 200 else f"Error: {response.text}"


@tool
def add_conversation_tags(conversation_id: str, tag_names: list[str]) -> str:
    """Add tags to an Intercom conversation for classification and routing."""
    # Look up tag IDs first -- the tagging endpoint takes IDs, not names
    tags_response = requests.get(f"{BASE_URL}/tags", headers=HEADERS)
    existing_tags = {t["name"]: t["id"] for t in tags_response.json().get("data", [])}

    tagged = []
    for tag_name in tag_names:
        if tag_name in existing_tags:
            requests.post(
                f"{BASE_URL}/conversations/{conversation_id}/tags",
                headers=HEADERS,
                json={
                    "id": existing_tags[tag_name],
                    "admin_id": os.getenv("INTERCOM_ADMIN_ID"),  # required by the tagging endpoint
                }
            )
            tagged.append(tag_name)
    return f"Tags added: {tagged}" if tagged else "No matching tags found"


@tool
def search_help_center(query: str) -> str:
    """Search Intercom Help Center articles for content relevant to a customer query."""
    # GET /articles only lists articles; the dedicated search endpoint takes a phrase
    response = requests.get(
        f"{BASE_URL}/articles/search",
        headers=HEADERS,
        params={"phrase": query, "state": "published"}
    )
    articles = response.json().get("data", {}).get("articles", [])
    if not articles:
        return f"No help center articles found for: '{query}'"

    result = []
    for article in articles[:3]:
        title = article.get("title", "Untitled")
        url = article.get("url", "")
        result.append(f"- [{title}]({url})")
    return "Relevant help center articles:\n" + "\n".join(result)


@tool
def list_open_conversations() -> str:
    """List recent open conversations waiting on the team."""
    params = {"display_as": "plaintext", "per_page": 20}
    response = requests.get(f"{BASE_URL}/conversations", headers=HEADERS, params=params)
    convs = response.json().get("conversations", [])
    open_convs = [c for c in convs if c.get("state") == "open"]

    result = [f"Open conversations ({len(open_convs)} shown):"]
    for conv in open_convs[:10]:
        customer = conv.get("source", {}).get("author", {}).get("email", "unknown")
        waiting_since = conv.get("waiting_since", 0)  # Unix timestamp of when the customer last waited
        result.append(f"- ID: {conv['id']} | Customer: {customer} | Waiting since: {waiting_since}")
    return "\n".join(result)

Support Triage Agent

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o", temperature=0.1)
tools = [get_conversation, add_internal_note, add_conversation_tags,
         search_help_center, list_open_conversations]

prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a customer support specialist assistant with access to Intercom.

When a new conversation arrives:
1. Read the conversation to understand the customer's issue
2. Search the help center for relevant articles
3. Add appropriate classification tags (billing, technical, onboarding, general)
4. Draft a helpful reply and add it as an internal note for the human agent
5. If urgent (payment issue, data loss, security), flag it prominently

Draft reply guidelines:
- Be empathetic and professional
- Include specific, actionable next steps
- Link to relevant help center articles
- Keep replies concise and clear
- Never make promises you can't keep"""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=6)

# Process a new conversation
result = executor.invoke({
    "input": "Process new conversation ID: 123456789. Classify it, find relevant help articles, and draft a reply for the human agent."
})
print(result["output"])

Webhook Integration for Real-Time Triage

import os

from flask import Flask, request, jsonify
import hmac, hashlib

app = Flask(__name__)
INTERCOM_CLIENT_SECRET = os.getenv("INTERCOM_CLIENT_SECRET")


def verify_intercom_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        INTERCOM_CLIENT_SECRET.encode(),
        payload,
        hashlib.sha1
    ).hexdigest()
    return hmac.compare_digest(f"sha1={expected}", signature)


@app.route('/intercom/webhook', methods=['POST'])
def intercom_webhook():
    signature = request.headers.get('X-Hub-Signature', '')
    if not verify_intercom_signature(request.data, signature):
        return jsonify({"error": "Invalid signature"}), 401

    event = request.json
    topic = event.get("topic", "")
    data = event.get("data", {})

    if topic == "conversation.user.created":
        conversation_id = data.get("item", {}).get("id")
        if conversation_id:
            # Triage in a background thread so the webhook can return immediately
            import threading
            def triage():
                executor.invoke({
                    "input": f"Triage new conversation {conversation_id}: classify, search help center, add note with draft reply"
                })
            threading.Thread(target=triage, daemon=True).start()

    return jsonify({"received": True})


if __name__ == "__main__":
    app.run(port=5000)
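For local testing, you can compute the signature Intercom would send for a payload and exercise the endpoint without a real Intercom event (the helper name here is ours):

```python
import hashlib
import hmac


def sign_payload(secret: str, payload: bytes) -> str:
    """Produce the X-Hub-Signature value Intercom sends: 'sha1=' + HMAC-SHA1 hex digest."""
    digest = hmac.new(secret.encode(), payload, hashlib.sha1).hexdigest()
    return f"sha1={digest}"
```

Pass the result as the `X-Hub-Signature` header when POSTing a sample event body to `/intercom/webhook` with `curl` or the Flask test client.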

Rate Limits and Best Practices

Intercom API limit | Value
------------------ | ------------------------------
Standard tier | 83 requests / 10 seconds
Plus/Premium tier | 500 requests / 10 seconds
Webhook delivery | Retried with exponential backoff

Best practices:

  • Don't auto-send replies: Always use internal notes for AI-drafted responses; let humans send
  • Avoid reply loops: Check message author type before processing — skip bot/operator messages
  • Batch tag operations: When tagging multiple conversations, sequence requests with brief delays
  • Test on staging: Use a test Intercom workspace for development to avoid impacting live conversations
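A small retry wrapper covers both the rate-limit and backoff practices above. This is a minimal sketch: the delay values are illustrative, and production code should also honor any `Retry-After` header the API returns.

```python
import time

import requests


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential delay for the Nth retry (1s, 2s, 4s, ...), capped."""
    return min(base * (2 ** attempt), cap)


def request_with_backoff(method: str, url: str, *, max_retries: int = 4, **kwargs):
    """Issue a request, retrying on HTTP 429 with exponential backoff."""
    for attempt in range(max_retries + 1):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code != 429:
            return resp
        time.sleep(backoff_delay(attempt))
    return resp  # still rate-limited after all retries; caller decides what to do
```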

Next Steps

  • AI Agents Slack Integration — Alert your team in Slack when Intercom SLAs are at risk
  • AI Agents Stripe Integration — Cross-reference payment data when handling billing conversations
  • AI Agents for Customer Service — Complete customer service agent tutorial
  • Human-in-the-Loop Agents — Why AI-drafted replies need human review
