Example · Engineering · 12 min read

CrewAI Agent Examples: 6 Multi-Agent Crews

Discover 6 concrete CrewAI multi-agent examples with full crew configurations, agent role definitions, and task pipelines. These real-world patterns show how to orchestrate specialized agents that collaborate to complete complex workflows.

By AI Agents Guide Team · February 28, 2026

Table of Contents

  1. Example 1: Content Research and Writing Crew
  2. Example 2: Competitive Intelligence Crew
  3. Example 3: Software Code Review Crew
  4. Example 4: Sales Outreach Personalization Crew
  5. Example 5: Financial Report Analysis Crew
  6. Example 6: Hierarchical Customer Success Crew
  7. Choosing the Right CrewAI Configuration
  8. Getting Started
  9. Frequently Asked Questions

CrewAI's strength is role-based multi-agent orchestration. Instead of a single agent trying to do everything, you define a crew of specialists — a researcher, an analyst, a writer — each with their own tools, goals, and backstory. The crew then executes a defined task pipeline, passing outputs between agents like a well-coordinated team.

The mental model is intuitive, but the configuration details matter enormously. Poorly defined roles lead to agents stepping on each other; vague task descriptions lead to low-quality outputs. These six examples show what well-configured crews look like in practice, covering content pipelines, research workflows, software development, and more.

Start with the CrewAI tutorial to get your environment set up, then use these examples as patterns to adapt.


Example 1: Content Research and Writing Crew

Use Case: Automate the full content creation pipeline from research to publication-ready draft, with specialized agents for each stage.

Architecture: Three agents in sequential process — Researcher (web search tools), Analyst (synthesizes findings), Writer (produces final article). Output of each task feeds the next.

Key Implementation:

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, WebsiteSearchTool

search_tool = SerperDevTool()
web_tool = WebsiteSearchTool()

researcher = Agent(
    role="Senior Content Researcher",
    goal="Find accurate, current information on {topic} from authoritative sources",
    backstory="You are an expert researcher who finds and evaluates sources critically.",
    tools=[search_tool, web_tool],
    verbose=True,
    llm="gpt-4o-mini"
)

analyst = Agent(
    role="Content Strategist",
    goal="Synthesize research into a structured content outline with key insights",
    backstory="You turn raw research into clear narrative structures for writers.",
    verbose=True,
    llm="gpt-4o-mini"
)

writer = Agent(
    role="SEO Content Writer",
    goal="Write a 1500-word article optimized for search based on the outline",
    backstory="You write engaging, well-structured articles that rank in search engines.",
    verbose=True,
    llm="gpt-4o"
)

research_task = Task(
    description="Research the topic: {topic}. Find 5 authoritative sources.",
    expected_output="A list of 5 sources with key findings from each.",
    agent=researcher
)

outline_task = Task(
    description="Create a detailed article outline based on the research findings.",
    expected_output="A structured H2/H3 outline with bullet points for each section.",
    agent=analyst,
    context=[research_task]
)

writing_task = Task(
    description="Write the full article following the outline. Include an introduction and conclusion.",
    expected_output="A 1500-word article in markdown format ready for publication.",
    agent=writer,
    context=[outline_task]
)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, outline_task, writing_task],
    process=Process.sequential,
    verbose=True
)

result = crew.kickoff(inputs={"topic": "AI agent frameworks in 2025"})

Outcome: A research-backed, publication-ready article produced in minutes. The sequential handoff ensures each agent builds on verified work from the previous stage.
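Stripped of the framework, the sequential handoff is just function composition. This dependency-free sketch (plain Python, not CrewAI API; the stage functions are stand-ins) mirrors how each task's `context` feeds the next stage:

```python
# Dependency-free sketch of the sequential handoff: each stage receives
# the previous stage's output, mirroring Task(context=[...]).
def research(topic: str) -> str:
    # Stand-in for the researcher agent's task output.
    return f"5 sources on {topic}"

def outline(research_notes: str) -> str:
    # Stand-in for the analyst agent: builds structure from research.
    return f"Outline built from: {research_notes}"

def write(article_outline: str) -> str:
    # Stand-in for the writer agent: drafts from the outline.
    return f"Article following: {article_outline}"

def run_pipeline(topic: str) -> str:
    # Process.sequential is equivalent to composing the stages in order.
    return write(outline(research(topic)))

print(run_pipeline("AI agent frameworks"))
```

The value of the framework over bare composition is that each handoff is mediated by an `expected_output` contract and the agents can use tools along the way.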


Example 2: Competitive Intelligence Crew

Use Case: Monitor competitor activity across pricing pages, blog posts, and product announcements, then produce a weekly intelligence brief.

Architecture: Scout agent (web scraper) + Analyst agent (pattern detection) + Reporter agent (formats briefing). Uses WebsiteSearchTool and custom scraping tools.

Key Implementation:

from crewai import Agent, Task, Crew, Process
from crewai_tools import WebsiteSearchTool, ScrapeWebsiteTool

scraper = ScrapeWebsiteTool()

scout = Agent(
    role="Competitive Intelligence Scout",
    goal="Scrape and collect recent updates from competitor websites: {competitors}",
    backstory="You methodically collect data from competitor sites without missing key changes.",
    tools=[scraper, WebsiteSearchTool()],
    llm="gpt-4o-mini"
)

analyst = Agent(
    role="Market Intelligence Analyst",
    goal="Identify strategic signals from competitor activity data",
    backstory="You detect pricing shifts, feature launches, and positioning changes that matter.",
    llm="gpt-4o"
)

reporter = Agent(
    role="Executive Communications Writer",
    goal="Produce a concise, actionable competitive intelligence brief",
    backstory="You distill complex competitive data into clear executive summaries.",
    llm="gpt-4o-mini"
)

collect_task = Task(
    description="Visit each competitor URL in {competitors} and collect: pricing changes, new features, blog posts from the last 7 days.",
    expected_output="Structured JSON with competitor name, URL, and list of changes.",
    agent=scout
)

analyze_task = Task(
    description="Analyze the collected data for strategic patterns, pricing moves, and positioning shifts.",
    expected_output="Analysis with 3-5 key strategic signals and their implications.",
    agent=analyst,
    context=[collect_task]
)

brief_task = Task(
    description="Write a one-page competitive intelligence brief suitable for a VP of Product.",
    expected_output="A markdown brief with sections: Key Moves, Threats, Opportunities, Recommended Actions.",
    agent=reporter,
    context=[analyze_task]
)

crew = Crew(
    agents=[scout, analyst, reporter],
    tasks=[collect_task, analyze_task, brief_task],
    process=Process.sequential
)

# Inputs are interpolated into task description strings, so pass a plain string.
result = crew.kickoff(inputs={"competitors": "competitor-a.com/pricing, competitor-b.com/blog"})

Outcome: Automated weekly competitive briefs that previously required hours of manual research. See AI Agent Research Examples for related patterns.


Example 3: Software Code Review Crew

Use Case: Automated multi-perspective code review where different agents check for security vulnerabilities, performance issues, and style compliance.

Architecture: Three reviewer agents (Security, Performance, Style) run in parallel, then a Lead Reviewer agent synthesizes their findings into a final review report.

Key Implementation:

from crewai import Agent, Task, Crew, Process
from crewai_tools import CodeInterpreterTool

code_tool = CodeInterpreterTool()

security_reviewer = Agent(
    role="Security Engineer",
    goal="Identify security vulnerabilities in the provided code",
    backstory="You specialize in OWASP Top 10, injection attacks, and secrets leakage.",
    tools=[code_tool],
    llm="gpt-4o"
)

performance_reviewer = Agent(
    role="Performance Engineer",
    goal="Identify performance bottlenecks and inefficient patterns in the code",
    backstory="You optimize for time complexity, memory usage, and I/O efficiency.",
    tools=[code_tool],
    llm="gpt-4o"
)

style_reviewer = Agent(
    role="Code Quality Reviewer",
    goal="Evaluate code readability, maintainability, and adherence to best practices",
    backstory="You enforce clean code principles: DRY, single responsibility, clear naming.",
    llm="gpt-4o-mini"
)

lead_reviewer = Agent(
    role="Lead Code Reviewer",
    goal="Synthesize all review findings into a prioritized, actionable review report",
    backstory="You have 15 years of experience leading code reviews at top tech companies.",
    llm="gpt-4o"
)

# The three review tasks run concurrently (async_execution=True);
# the synthesis task waits for all of them via its context list.
security_task = Task(
    description="Review this code for security issues: {code}",
    expected_output="List of security findings with severity (Critical/High/Medium/Low) and line numbers.",
    agent=security_reviewer,
    async_execution=True
)

performance_task = Task(
    description="Review this code for performance issues: {code}",
    expected_output="List of performance findings with estimated impact and suggested fixes.",
    agent=performance_reviewer,
    async_execution=True
)

style_task = Task(
    description="Review this code for style and quality issues: {code}",
    expected_output="List of style issues with references to best practice guidelines.",
    agent=style_reviewer,
    async_execution=True
)

synthesis_task = Task(
    description="Synthesize all review findings into a final pull request review.",
    expected_output="A structured PR review with sections: Must Fix, Should Fix, Suggestions, Approved/Rejected verdict.",
    agent=lead_reviewer,
    context=[security_task, performance_task, style_task]
)

crew = Crew(
    agents=[security_reviewer, performance_reviewer, style_reviewer, lead_reviewer],
    tasks=[security_task, performance_task, style_task, synthesis_task],
    process=Process.sequential
)

result = crew.kickoff(inputs={"code": open("src/handler.py").read()})  # illustrative path

Outcome: Consistent, comprehensive code reviews from multiple specialized perspectives. Compare with AI Agent Coding Examples for more development workflow patterns.


Workflow diagram on a whiteboard showing multi-step process orchestration

Example 4: Sales Outreach Personalization Crew

Use Case: Generate highly personalized sales outreach emails at scale by researching each prospect, identifying pain points, and drafting a tailored message.

Architecture: Prospect Researcher + Pain Point Analyst + Copywriter, running sequentially per prospect batch.

Key Implementation:

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

researcher = Agent(
    role="Prospect Research Specialist",
    goal="Research the prospect company and contact: {prospect_name} at {company}",
    backstory="You find recent news, company challenges, and individual priorities quickly.",
    tools=[SerperDevTool()],
    llm="gpt-4o-mini"
)

pain_analyst = Agent(
    role="B2B Pain Point Analyst",
    goal="Identify the top 3 business pain points this prospect likely faces based on research",
    backstory="You translate company context into specific, relevant business challenges.",
    llm="gpt-4o"
)

copywriter = Agent(
    role="B2B Sales Copywriter",
    goal="Write a personalized cold email referencing specific research and pain points",
    backstory="You write concise, non-generic emails that get replies. No platitudes.",
    llm="gpt-4o"
)

research_task = Task(
    description="Research {prospect_name}, {title} at {company}. Find: recent company news, growth stage, known challenges.",
    expected_output="A 200-word prospect profile with company context and individual priorities.",
    agent=researcher
)

pain_task = Task(
    description="Based on the research, identify the 3 most relevant pain points for our {product_category} solution.",
    expected_output="3 pain points with a 1-sentence rationale for each.",
    agent=pain_analyst,
    context=[research_task]
)

email_task = Task(
    description="Write a 150-word cold email to {prospect_name}. Reference specific research. Propose a 20-minute call.",
    expected_output="Subject line + email body. Conversational tone. No buzzwords.",
    agent=copywriter,
    context=[research_task, pain_task]
)

crew = Crew(
    agents=[researcher, pain_analyst, copywriter],
    tasks=[research_task, pain_task, email_task],
    process=Process.sequential
)

result = crew.kickoff(inputs={
    "prospect_name": "Sarah Chen",
    "title": "VP of Engineering",
    "company": "Acme Corp",
    "product_category": "developer tooling"
})

Outcome: Personalized outreach at scale that performs significantly better than templated emails. See LangChain vs CrewAI for when to use each framework.
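To run the crew over a batch, you can simply invoke it once per prospect (CrewAI also ships a `kickoff_for_each` helper for this pattern). A dependency-free sketch of the loop, with `run_crew` standing in for `crew.kickoff` and the prospect rows as hypothetical data:

```python
# Dependency-free sketch of per-prospect batching. run_crew stands in for
# crew.kickoff(inputs=...); the prospect records are illustrative.
prospects = [
    {"prospect_name": "Sarah Chen", "title": "VP of Engineering",
     "company": "Acme Corp", "product_category": "developer tooling"},
    {"prospect_name": "Raj Patel", "title": "Head of Platform",
     "company": "Globex", "product_category": "developer tooling"},
]

def run_crew(inputs: dict) -> str:
    # Stand-in for a full crew run that returns the drafted email.
    return f"email for {inputs['prospect_name']} at {inputs['company']}"

emails = [run_crew(p) for p in prospects]
print(len(emails))  # → 2
```

Each iteration re-runs the full research → pain-point → copy pipeline with fresh inputs, so outputs never bleed between prospects.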


Example 5: Financial Report Analysis Crew

Use Case: Parse an earnings report PDF, extract key financial metrics, identify risks and opportunities, and produce an investment summary.

Architecture: Extractor agent (PDF parsing) + Financial Analyst (metrics interpretation) + Risk Assessor + Investment Writer, all in sequential process.

Key Implementation:

from crewai import Agent, Task, Crew, Process
from crewai_tools import PDFSearchTool

# Tool constructor args are not templated by kickoff inputs, so pass a concrete
# path here; {report_path} still interpolates inside task descriptions.
pdf_tool = PDFSearchTool(pdf="reports/q4_earnings.pdf")

extractor = Agent(
    role="Financial Data Extractor",
    goal="Extract key financial metrics from the earnings report PDF",
    backstory="You extract structured data from unstructured financial documents precisely.",
    tools=[pdf_tool],
    llm="gpt-4o-mini"
)

analyst = Agent(
    role="Financial Analyst",
    goal="Interpret the extracted metrics: revenue growth, margins, guidance, vs. analyst expectations",
    backstory="You have 10 years of equity research experience. You focus on what moves the stock.",
    llm="gpt-4o"
)

risk_assessor = Agent(
    role="Risk Assessment Specialist",
    goal="Identify risks, red flags, and forward-looking concerns from the report language",
    backstory="You read between the lines of management commentary to spot warning signs.",
    llm="gpt-4o"
)

writer = Agent(
    role="Investment Research Writer",
    goal="Produce a concise investment summary with a clear Buy/Hold/Sell framework",
    backstory="You write institutional-quality research notes that are clear and actionable.",
    llm="gpt-4o"
)

crew = Crew(
    agents=[extractor, analyst, risk_assessor, writer],
    tasks=[
        Task(description="Extract revenue, EPS, gross margin, guidance from {report_path}.",
             expected_output="Structured JSON with financial metrics.", agent=extractor),
        Task(description="Analyze the metrics vs. prior quarter and analyst consensus.",
             expected_output="Narrative analysis with beat/miss assessment.", agent=analyst),
        Task(description="Identify 3-5 risk factors from the MD&A section language.",
             expected_output="Risk list with likelihood and impact ratings.", agent=risk_assessor),
        Task(description="Write a 400-word investment summary with recommendation.",
             expected_output="Research note with thesis, risks, and Buy/Hold/Sell.", agent=writer)
    ],
    process=Process.sequential
)

result = crew.kickoff(inputs={"report_path": "reports/q4_earnings.pdf"})  # illustrative path

Outcome: Institutional-quality earnings analysis in minutes per report instead of hours. This exemplifies how CrewAI's role separation improves both quality and auditability.


Example 6: Hierarchical Customer Success Crew

Use Case: A customer success manager agent that dynamically routes support tickets to specialist sub-agents (Technical, Billing, Onboarding) based on ticket content.

Architecture: Manager agent (ticket router + final response) + three specialist agents. Uses Process.hierarchical so the manager LLM controls delegation.

Key Implementation:

from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI

manager_llm = ChatOpenAI(model="gpt-4o")

manager = Agent(
    role="Customer Success Manager",
    goal="Route and resolve customer tickets by delegating to the right specialist",
    backstory="You assess tickets, delegate to specialists, and ensure customers get complete answers.",
    llm="gpt-4o",
    allow_delegation=True
)

technical = Agent(
    role="Technical Support Specialist",
    goal="Resolve technical issues, API errors, and integration problems",
    backstory="You have deep product knowledge and can debug technical configurations.",
    llm="gpt-4o-mini",
    allow_delegation=False
)

billing = Agent(
    role="Billing Specialist",
    goal="Handle billing questions, invoices, upgrades, and refund requests",
    backstory="You know the billing system inside out and have authority to issue credits.",
    llm="gpt-4o-mini",
    allow_delegation=False
)

onboarding = Agent(
    role="Onboarding Specialist",
    goal="Guide new customers through setup, best practices, and first-value milestones",
    backstory="You help customers get productive quickly with personalized guidance.",
    llm="gpt-4o-mini",
    allow_delegation=False
)

resolution_task = Task(
    description="Resolve this customer ticket: {ticket_content}. Ensure the customer gets a complete, accurate answer.",
    expected_output="A complete, professional customer response that resolves the issue."
    # No agent is assigned: in a hierarchical crew, the manager delegates this task.
)

crew = Crew(
    agents=[technical, billing, onboarding],  # specialists only; the manager is passed separately
    tasks=[resolution_task],
    process=Process.hierarchical,
    manager_agent=manager,
    verbose=True
)

result = crew.kickoff(inputs={"ticket_content": "I can't connect my Salesforce integration and I'm being charged for a plan I didn't upgrade to."})

Outcome: Dynamic ticket routing without hard-coded rules, with the manager agent making routing decisions based on content. For related patterns, see Human-in-the-Loop Agent Examples.


Choosing the Right CrewAI Configuration

Sequential process works for most pipelines where steps have clear dependencies. Hierarchical process is worth the added complexity when you need dynamic routing between specialists. Keep crews small (2–4 agents) unless the workflow genuinely benefits from additional specialization.

The most important configuration decision is each task's expected_output. The more specific it is, the better the inter-agent handoffs work; vague expected outputs lead to agents producing unusable inputs for downstream agents.
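To make the contrast concrete, here is a dependency-free sketch (plain Python, not CrewAI API) showing how a specific expected_output doubles as a contract that downstream code, or a downstream agent, can actually check; the section names reuse Example 2's brief format:

```python
# A vague vs. a specific expected_output for the same task. The vague version
# gives the next agent nothing to validate against; the specific one does.
VAGUE = "An analysis of the data."
SPECIFIC = (
    "A markdown brief with exactly four H2 sections, in this order: "
    "Key Moves, Threats, Opportunities, Recommended Actions."
)

REQUIRED_SECTIONS = ["Key Moves", "Threats", "Opportunities", "Recommended Actions"]

def meets_contract(markdown: str) -> bool:
    """Check a draft against the specific expected_output contract."""
    positions = [markdown.find(f"## {s}") for s in REQUIRED_SECTIONS]
    # Every section must be present, and in the stated order.
    return all(p != -1 for p in positions) and positions == sorted(positions)

draft = "## Key Moves\n...\n## Threats\n...\n## Opportunities\n...\n## Recommended Actions\n..."
print(meets_contract(draft))  # → True
```

A contract this explicit also makes failures diagnosable: when a downstream agent produces garbage, you can tell whether the upstream handoff actually delivered what was specified.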

Getting Started

The CrewAI tutorial covers environment setup and your first crew. Once you're comfortable with sequential crews, explore the OpenAI Agents SDK vs LangChain comparison to understand when a different framework might serve you better.

For production deployments, CrewAI integrates with LangSmith for observability, which is invaluable for debugging complex multi-agent interactions.
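When your agents' model calls go through LangChain chat models (as with the ChatOpenAI manager in Example 6), LangSmith tracing can typically be enabled with environment variables alone. A minimal sketch, assuming a LangSmith account; the API key and project name below are placeholders:

```python
import os

# Enable LangSmith tracing for LangChain-backed LLM calls made by the crew.
# Key and project name are hypothetical placeholders; set them before kickoff.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls-your-api-key"
os.environ["LANGCHAIN_PROJECT"] = "crewai-production"

print(os.environ["LANGCHAIN_PROJECT"])  # → crewai-production
```

Grouping runs under a named project makes it much easier to compare traces across crew configurations when debugging delegation behavior.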

Frequently Asked Questions


Related Examples

Agentic RAG Examples: 5 Real Workflows

Six agentic RAG examples with working Python code covering query routing, self-correcting retrieval with hallucination detection, multi-document reranking, iterative retrieval with web fallback, conversational RAG with memory, and corrective RAG with grade-and-retry loops.

7 AI Agent Coding Examples (Real Projects)

Discover 7 real-world AI coding agent examples covering code review, PR generation, test writing, bug diagnosis, documentation generation, and refactoring automation. Each example includes architecture details and working code for engineering teams.

AI Data Analyst Examples: 6 Real Setups

Explore 6 AI data analyst agent examples covering natural language SQL generation, automated chart creation, anomaly detection, report generation, and business intelligence workflows. Includes Python code for building production-ready data analysis agents.
