What Is the A2A Protocol?

The A2A (Agent-to-Agent) Protocol is an open standard developed by Google for enabling AI agents built by different vendors and frameworks to communicate, delegate tasks, and collaborate — without requiring shared infrastructure or custom integration code between them.

By AI Agents Guide Team • February 28, 2026

Term Snapshot

Also known as: Agent-to-Agent Protocol, A2A Standard, Agent Communication Protocol

Related terms: What Is an MCP Server?, What Are Agent Cards? A2A Discovery Docs, What Is Function Calling in AI?, What Are AI Agents?

Table of Contents

  1. Quick Definition
  2. Why A2A Exists
  3. Core A2A Concepts
       • Agent Cards
       • Task Lifecycle
       • Message Exchange Format
       • Streaming and Push Notifications
  4. A2A vs. MCP: Complementary Roles
  5. A2A in Practice: Google ADK Implementation
  6. Real-World Use Cases
       • Cross-Platform Enterprise Automation
       • AI Software Development Workflow
       • Customer Service Escalation
  7. Common Misconceptions
  8. Related Terms
  9. Frequently Asked Questions
       • What is the A2A protocol?
       • How does A2A differ from MCP?
       • What is an Agent Card in A2A?
       • What frameworks support the A2A protocol?
       • Is A2A production-ready in 2026?

Quick Definition#

The A2A (Agent-to-Agent) Protocol is an open standard released by Google in April 2025 for enabling AI agents built by different vendors and frameworks to communicate, delegate tasks, and collaborate without requiring custom integration code. A2A defines how agents discover each other's capabilities, exchange tasks and results, and manage multi-step workflows across organizational and platform boundaries.

A2A is designed to complement — not replace — the Model Context Protocol (MCP). Where MCP connects agents to tools and data sources, A2A connects agents to other agents. Together, they form a complete interoperability stack for multi-agent systems.

If you are new to multi-agent concepts, start with Multi-Agent Systems and AI Agent Orchestration before reading this page. Browse all AI agent terms in the AI Agent Glossary.

Why A2A Exists#

As AI agent ecosystems matured through 2024 and 2025, a new interoperability problem emerged: multi-agent systems were proliferating, but agents built by different companies or on different frameworks couldn't communicate without custom glue code.

An enterprise deploying AI agents faced challenges like:

  • A Salesforce agent couldn't directly delegate to a ServiceNow agent
  • A LangGraph workflow couldn't send tasks to an AutoGen agent
  • An internal AI assistant couldn't call a third-party specialized agent without custom API work

A2A addresses this by defining a standard protocol for agent-to-agent communication, similar to how HTTP standardized human-browser-server communication and MCP standardized agent-tool communication.

Key use cases A2A enables:

  • An orchestrator agent delegating subtasks to specialized agents (coding, research, analysis)
  • Enterprise AI workflows spanning multiple vendors' agent platforms
  • Human-in-the-loop workflows where a human-facing agent coordinates multiple backend agents
  • Dynamic discovery of remote agents without pre-configuration

Core A2A Concepts#

Agent Cards#

The foundation of A2A discovery is the Agent Card — a JSON document served at a standard URL (/.well-known/agent.json) that describes what an agent can do:

{
  "name": "Code Review Agent",
  "description": "Reviews code for security vulnerabilities, style issues, and bugs",
  "version": "1.0.0",
  "url": "https://code-agent.example.com",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "review_code",
      "name": "Review Code",
      "description": "Perform a comprehensive code review",
      "inputModes": ["text"],
      "outputModes": ["text"],
      "examples": ["Review this Python function for security issues"]
    }
  ],
  "authentication": {
    "schemes": ["Bearer"]
  }
}

Any A2A client can fetch this Agent Card and learn exactly how to communicate with the agent, what it can do, and what authentication it requires — without prior configuration.
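
The discovery step can be sketched as a small client helper. This is a minimal, hypothetical sketch assuming the field names in the example card above; fetch_agent_card, supports_streaming, and skill_names are illustrative helper names, not part of any official SDK:

```python
# Hypothetical discovery helpers: fetch a remote agent's Agent Card from
# the well-known URL and read the capabilities it advertises.
import json
from urllib.request import urlopen

def fetch_agent_card(base_url: str) -> dict:
    """Fetch the Agent Card served at the A2A well-known URL."""
    with urlopen(f"{base_url}/.well-known/agent.json", timeout=10) as resp:
        return json.load(resp)

def supports_streaming(card: dict) -> bool:
    """Check whether the agent advertises SSE streaming support."""
    return bool(card.get("capabilities", {}).get("streaming", False))

def skill_names(card: dict) -> list[str]:
    """List the human-readable names of the skills in the card."""
    return [skill["name"] for skill in card.get("skills", [])]
```

A client would call fetch_agent_card once per remote agent, then route tasks based on the skills and capabilities the card declares.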

Task Lifecycle#

A2A defines a standardized task model with explicit lifecycle states:

submitted → working → (input-required → working) → completed | failed | canceled

  • submitted: Task received, not yet started
  • working: Agent actively processing
  • input-required: Agent needs additional information from the caller (human-in-the-loop)
  • completed: Task finished successfully with results
  • failed: Task could not be completed
  • canceled: Task was canceled by the client

This lifecycle model handles both fast synchronous tasks (question answering) and long-running asynchronous tasks (research, document generation) with the same protocol.
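
The lifecycle above can be expressed as a small transition table. The state names come from the list above; the exact set of legal transitions is an illustrative assumption drawn from the state descriptions, not quoted from the specification:

```python
# Sketch of the A2A task lifecycle as a transition table.
# NOTE: the transition map is an illustrative assumption based on the
# state descriptions above, not the normative spec text.
TERMINAL = {"completed", "failed", "canceled"}

TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
}

def can_transition(current: str, new: str) -> bool:
    """Return True if moving from `current` to `new` is a legal step."""
    return new in TRANSITIONS.get(current, set())

def is_terminal(state: str) -> bool:
    """Terminal states accept no further transitions."""
    return state in TERMINAL
```

A client polling a long-running task would stop once is_terminal returns True, and prompt the user whenever the task enters input-required.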

Message Exchange Format#

A2A uses a structured message format with typed content parts:

# A2A client sending a task (YOUR_TOKEN is a placeholder credential)
import httpx

async def delegate_code_review(agent_url: str, code: str):
    task = {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [
                {
                    "type": "text",
                    "text": f"Review this code for security vulnerabilities:\n\n{code}"
                }
            ]
        }
    }

    async with httpx.AsyncClient() as client:
        response = await client.post(
            f"{agent_url}/tasks/send",
            json=task,
            headers={"Authorization": "Bearer YOUR_TOKEN"}
        )
        response.raise_for_status()
        result = response.json()

    # Extract the text part of the remote agent's response
    return result["result"]["parts"][0]["text"]

Streaming and Push Notifications#

A2A supports two response modes for long-running tasks:

Streaming (SSE): The agent sends partial results as a Server-Sent Events stream, useful for tasks where you want to show progress (e.g., document drafting)

Push Notifications: The agent sends a webhook callback when the task completes, useful for fire-and-forget workflows where the client doesn't maintain an open connection
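
For the streaming mode, the wire format is standard Server-Sent Events: `event:` and `data:` fields separated by blank lines. As a sketch of the parsing side only (a real client would read these lines from an open HTTP connection, e.g. with httpx's streaming API; parse_sse_events is a hypothetical helper):

```python
# Hypothetical SSE parser: groups raw event-stream lines into events.
def parse_sse_events(lines):
    """Yield {"event": ..., "data": ...} dicts from raw SSE lines."""
    event = {}
    for line in lines:
        line = line.strip()
        if not line:                         # blank line terminates an event
            if event:
                yield event
                event = {}
        elif line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            # per the SSE format, multiple data lines join with newlines
            data = line[len("data:"):].strip()
            event["data"] = (event.get("data", "") + "\n" + data).lstrip("\n")
    if event:                                # flush a trailing event
        yield event
```

Each parsed event would then be mapped onto the task lifecycle, e.g. a partial-result event while the task is working, then a final event when it completes.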

A2A vs. MCP: Complementary Roles#

| Dimension          | A2A                    | MCP                       |
| ------------------ | ---------------------- | ------------------------- |
| Primary connection | Agent ↔ Agent          | Agent ↔ Tool/Resource     |
| Use case           | Multi-agent delegation | Tool and data access      |
| Capability unit    | Skills (tasks)         | Tools, Resources, Prompts |
| Discovery          | Agent Cards            | Tool listing              |
| State management   | Full task lifecycle    | Stateless tool calls      |
| Streaming          | Built-in               | Limited                   |
| Released by        | Google (April 2025)    | Anthropic (November 2024) |
| Transport          | HTTP                   | stdio, HTTP               |

In practice: A production multi-agent system might use both protocols — MCP for each agent's access to tools and data, A2A for agents to collaborate with each other.

Orchestrator Agent
├── MCP → filesystem server (reads project files)
├── MCP → database server (queries results)
├── A2A → Code Review Agent (delegates review tasks)
└── A2A → Documentation Agent (delegates doc generation)

A2A in Practice: Google ADK Implementation#

Google's Agent Development Kit (ADK) provides native A2A support:

from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.tools.agent_tool import AgentTool

# Define a specialized research subagent.
# google_search_tool and academic_search_tool are placeholder tool
# objects you would define or import in your own project.
research_agent = LlmAgent(
    name="ResearchAgent",
    model="gemini-2.0-flash",
    instruction="You are a research specialist. Find comprehensive information on topics.",
    tools=[google_search_tool, academic_search_tool],
)

# Wrap the subagent as a tool an orchestrator can call
research_tool = AgentTool(agent=research_agent)

# Orchestrator agent that delegates to the research agent
orchestrator = LlmAgent(
    name="OrchestratorAgent",
    model="gemini-2.0-flash",
    instruction="You coordinate research and writing tasks.",
    tools=[research_tool, writing_tool],  # writing_tool is another placeholder
)

# The orchestrator can now delegate research tasks to the research agent
# following A2A task lifecycle semantics.
# session_service is a placeholder (e.g. an in-memory session service).
runner = Runner(agent=orchestrator, session_service=session_service)

Real-World Use Cases#

Cross-Platform Enterprise Automation#

A financial services firm uses an orchestrator agent to process loan applications. The orchestrator delegates via A2A to:

  • A credit check agent (built by a fintech vendor on their platform)
  • A compliance verification agent (built on the firm's LangGraph infrastructure)
  • A document extraction agent (a third-party specialized service)

Without A2A, each integration would require custom API code. With A2A, the orchestrator calls each agent through the same protocol.
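
Concretely, the firm's orchestrator can build one task payload and reuse it against every vendor's agent. A minimal sketch, with made-up agent URLs and a hypothetical build_task helper following the message format shown earlier:

```python
# Hypothetical illustration: one generic A2A payload builder reused for
# every vendor's agent, instead of per-vendor integration code.
AGENTS = {
    "credit_check": "https://credit.vendor-a.example.com",
    "compliance": "https://compliance.internal.example.com",
    "doc_extraction": "https://extract.vendor-b.example.com",
}

def build_task(task_id: str, text: str) -> dict:
    """Build the same A2A task payload regardless of the receiving agent."""
    return {
        "id": task_id,
        "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
    }

# One payload shape serves all three integrations — no per-vendor schema.
payloads = {name: build_task(f"loan-042-{name}", "Process applicant #42")
            for name in AGENTS}
```

Each payload would then be POSTed to the corresponding agent's task endpoint with that vendor's credentials; only the URL and auth differ, never the message shape.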

AI Software Development Workflow#

A development team builds a coding assistant that uses A2A to coordinate:

  • A code generation agent (writes implementation)
  • A code review agent (reviews for issues)
  • A test generation agent (writes tests)
  • A documentation agent (generates docs)

Each agent is independently deployable and upgradeable, but the orchestrator coordinates them through a standard protocol.

Customer Service Escalation#

A customer service agent handles routine inquiries directly, but uses A2A to delegate:

  • Complex billing disputes to a specialized billing agent
  • Technical issues to a technical support agent
  • Escalated complaints to a human-in-the-loop workflow

Common Misconceptions#

Misconception: A2A replaces MCP. A2A and MCP solve different problems. MCP connects agents to tools and data. A2A connects agents to agents. They are designed to work together, not compete.

Misconception: A2A requires Google infrastructure. A2A is an open protocol, not a Google-specific platform. The specification is open-source, and any vendor, framework, or team can implement A2A clients and servers. Google ADK provides the reference implementation.

Misconception: A2A is only for large-scale enterprise deployments. While A2A's standardization benefits are most valuable at enterprise scale, even small multi-agent systems benefit from the task lifecycle management, capability discovery, and structured communication A2A provides.

Related Terms#

  • Multi-Agent Systems — The architecture A2A enables at scale
  • AI Agent Orchestration — Coordinating multiple agents
  • Model Context Protocol (MCP) — The complementary tool-connection protocol
  • Agent Handoff — How agents transfer control (single-platform)
  • AI Agents — The building blocks A2A connects
  • Multi-Agent Systems Guide — Comprehensive guide to multi-agent architectures and coordination patterns
  • AI Agents vs Chatbots — Understanding AI agent capabilities in enterprise contexts

Frequently Asked Questions#

What is the A2A protocol?#

A2A (Agent-to-Agent) is an open standard released by Google in April 2025 for enabling AI agents from different vendors and frameworks to communicate, delegate tasks, and collaborate. It defines Agent Cards for capability discovery, a structured task lifecycle, and a standard message format — making multi-agent systems interoperable without custom integration code.

How does A2A differ from MCP?#

MCP standardizes agent-to-tool connections (databases, APIs, file systems). A2A standardizes agent-to-agent connections (delegating tasks between agents). They are complementary: a production multi-agent system typically uses MCP for tool access and A2A for inter-agent communication.

What is an Agent Card in A2A?#

An Agent Card is a JSON document at /.well-known/agent.json that describes an agent's capabilities, skills, input/output formats, and authentication requirements. Other agents fetch this document to discover what a remote agent can do and how to communicate with it.

What frameworks support the A2A protocol?#

Google ADK has native A2A support as the reference implementation. LangGraph and CrewAI have added A2A compatibility. 50+ partner companies including Salesforce, SAP, and ServiceNow have committed to A2A support. Community libraries exist for Python and TypeScript implementations across major agent frameworks.

Is A2A production-ready in 2026?#

The A2A specification is stable and Google ADK provides a solid reference implementation. Production readiness depends on your use case: for greenfield multi-agent architectures, A2A is a strong choice. Enterprise adoption patterns and security best practices are still actively developing as the ecosystem matures.

Tags:
a2a, architecture, protocols

Related Glossary Terms

What Is A2A Agent Discovery? (Guide)

A2A Agent Discovery is the process by which AI agents find, register, and verify the capabilities of peer agents using Agent Cards and well-known URIs in the A2A Protocol. It enables dynamic, decentralized multi-agent coordination without hardcoded routing logic.

What Are Agent Cards? A2A Discovery Docs

Agent Cards are the JSON discovery documents at the heart of Google's A2A Protocol. They describe an agent's capabilities, supported modalities, authentication requirements, and API endpoints — enabling automatic discovery and interoperability across multi-agent systems.

What Is MCP Transport?

MCP transport is the communication layer that carries messages between an MCP client and an MCP server. The three transports are stdio (local subprocess), HTTP with Server-Sent Events, and WebSocket — each suited for different deployment scenarios.

What Is AI Agent Threat Modeling?

AI Agent Threat Modeling is the systematic process of identifying, categorizing, and mitigating security risks unique to autonomous AI agents — including prompt injection, tool abuse, privilege escalation, and data exfiltration through agent outputs. Learn the frameworks and techniques used by security teams deploying agents in production.

← Back to Glossary