Intermediate · 16 min read

Build A2A Agents with Google ADK (2026)

Learn how to build A2A Protocol-compliant agents using Google's Agent Development Kit (ADK). This tutorial covers ADK's native A2A support, multi-agent coordination patterns, Agent Card configuration, and deploying A2A-connected agents to Vertex AI Agent Engine.

By AI Agents Guide Team · March 1, 2026

Table of Contents

  1. Prerequisites
  2. Installation
  3. Part 1: Creating an A2A-Exposed ADK Agent
  4. Define a Simple ADK Agent with A2A Support
  5. Start the A2A Server Locally
  6. Using ADK's Dev Server for A2A Testing
  7. Part 2: Building an Orchestrator That Calls External A2A Agents
  8. Part 3: Multi-Agent Coordination Patterns with A2A
  9. Sequential A2A Pipeline
  10. Parallel A2A Research
  11. Dynamic Agent Selection
  12. Part 4: Handling Task State and Streaming
  13. Part 5: Deploying to Vertex AI Agent Engine
  14. Observability and Debugging
  15. Security Considerations
  16. Next Steps

Building A2A Agents with Google ADK

Google's Agent Development Kit (ADK) provides first-class support for the A2A Protocol, making it the most integrated path to building A2A-compliant multi-agent systems on Google Cloud. This tutorial shows you how to build agents that expose A2A endpoints using ADK, coordinate with external A2A agents, and deploy multi-agent pipelines to Vertex AI.

If you're new to Google ADK, start with the Google ADK tutorial to understand the fundamentals before adding A2A.

Prerequisites

  • Python 3.11+
  • Google Cloud project with Vertex AI API enabled
  • gcloud CLI authenticated
  • Familiarity with Google ADK basics

Installation

pip install "google-adk[a2a]" google-cloud-aiplatform

ADK's [a2a] extra installs the A2A SDK, SSE support, and the authentication middleware for Vertex AI-integrated A2A deployments.

Part 1: Creating an A2A-Exposed ADK Agent

Define a Simple ADK Agent with A2A Support

# research_agent.py
from google.adk.agents import LlmAgent
from google.adk.tools import google_search, FunctionTool
from google.adk.a2a import A2AServer, AgentMetadata, SkillDefinition


def analyze_search_results(query: str, results: list[dict]) -> dict:
    """Analyze search results and return structured insights."""
    return {
        "query": query,
        "result_count": len(results),
        "top_sources": [r.get("url", "") for r in results[:3]],
    }


research_agent = LlmAgent(
    name="research-agent",
    description=(
        "Performs web research on any topic, synthesizes information from multiple "
        "sources, and returns structured research reports with citations."
    ),
    model="gemini-2.0-flash",
    instruction="""You are a research specialist. When given a research task:
    1. Use Google Search to find relevant, current information
    2. Synthesize information from multiple sources
    3. Return a structured report with key findings and citations
    4. Always cite your sources with URLs""",
    tools=[
        google_search,
        FunctionTool(analyze_search_results),
    ],
)

# Wrap with A2A server capabilities
a2a_server = A2AServer(
    agent=research_agent,
    metadata=AgentMetadata(
        version="1.0.0",
        skills=[
            SkillDefinition(
                id="web-research",
                name="Web Research",
                description=(
                    "Researches a topic using Google Search and returns a structured "
                    "report with findings and citations. Accepts natural language research "
                    "queries and returns JSON with summary, key_findings, and sources."
                ),
                input_modes=["text/plain"],
                output_modes=["application/json", "text/plain"],
                examples=[
                    "Research the latest developments in AI agent security",
                    "What are the best practices for implementing OAuth 2.1?",
                    "Find recent papers on multi-agent reinforcement learning",
                ],
            ),
        ],
    ),
)

Start the A2A Server Locally

# main.py
import uvicorn
from research_agent import a2a_server

if __name__ == "__main__":
    app = a2a_server.build_fastapi_app()
    uvicorn.run(app, host="0.0.0.0", port=8080)

# Test the agent card (shell)
python main.py &
curl http://localhost:8080/.well-known/agent.json | python -m json.tool

You should see the automatically generated Agent Card including your skill definitions and the agent's capabilities.
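Before pointing other agents at this endpoint, it can help to sanity-check the card programmatically. The sketch below is not part of ADK, and the required field names (name, description, version, skills) are assumptions based on common Agent Card layouts; adjust them to the A2A spec version your agents actually target.

```python
import json
from urllib.request import urlopen

# Top-level Agent Card fields assumed required here; verify against
# the A2A spec version you target.
REQUIRED_FIELDS = ("name", "description", "version", "skills")


def validate_agent_card(card: dict) -> list[str]:
    """Return a list of problems found in an Agent Card dict (empty if OK)."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in card]
    for i, skill in enumerate(card.get("skills", [])):
        if "id" not in skill or "description" not in skill:
            problems.append(f"skill {i} lacks an id or description")
    return problems


def fetch_and_validate(url: str) -> list[str]:
    """Fetch a live Agent Card (e.g. from the local server above) and validate it."""
    with urlopen(url) as resp:
        return validate_agent_card(json.load(resp))
```

Running `fetch_and_validate("http://localhost:8080/.well-known/agent.json")` against the server above should return an empty list if the card is well-formed.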

Using ADK's Dev Server for A2A Testing

# Start the ADK web server with A2A support
adk web research_agent.py --a2a

# The dev server exposes:
# - Web UI at http://localhost:8000
# - A2A endpoint at http://localhost:8000/a2a
# - Agent Card at http://localhost:8000/.well-known/agent.json

Part 2: Building an Orchestrator That Calls External A2A Agents

The real power of ADK's A2A integration is RemoteA2AAgent, an ADK tool that wraps any external A2A agent so your orchestrator can call it as naturally as a local function.

# orchestrator_agent.py
from google.adk.agents import LlmAgent
from google.adk.a2a import RemoteA2AAgent, A2AClientConfig
from google.adk.tools import FunctionTool


# Configure external A2A agents
research_remote = RemoteA2AAgent(
    name="research_specialist",
    agent_card_url="https://research-agent.example.com/.well-known/agent.json",
    auth_config=A2AClientConfig(
        auth_type="oauth2",
        token_url="https://auth.example.com/token",
        client_id="orchestrator-client",
        client_secret_env="RESEARCH_AGENT_SECRET",
        scopes=["research:read"],
    ),
    description="Performs web research and returns structured reports with citations.",
)

data_analysis_remote = RemoteA2AAgent(
    name="data_analyst",
    agent_card_url="https://data-agent.example.com/.well-known/agent.json",
    auth_config=A2AClientConfig(
        auth_type="oauth2",
        token_url="https://auth.example.com/token",
        client_id="orchestrator-client",
        client_secret_env="DATA_AGENT_SECRET",
        scopes=["data:read", "data:analyze"],
    ),
    description="Analyzes structured data, generates statistics, and creates visualizations.",
)

email_remote = RemoteA2AAgent(
    name="email_drafter",
    agent_card_url="https://email-agent.example.com/.well-known/agent.json",
    auth_config=A2AClientConfig(
        auth_type="apikey",
        api_key_env="EMAIL_AGENT_API_KEY",
        api_key_header="X-API-Key",
    ),
    description="Drafts professional emails and reports based on research findings.",
)


def format_final_report(research: str, analysis: str) -> dict:
    """Combine research and analysis into a final report structure."""
    return {
        "research_findings": research,
        "data_analysis": analysis,
        "status": "ready_for_review",
    }


# Orchestrator uses remote A2A agents as tools
orchestrator = LlmAgent(
    name="market-intelligence-orchestrator",
    description="Orchestrates research, data analysis, and report generation for market intelligence tasks.",
    model="gemini-2.0-pro",
    instruction="""You are a market intelligence orchestrator. For each request:
    1. Use research_specialist to gather information on the topic
    2. Pass relevant data to data_analyst for quantitative analysis
    3. Use email_drafter to prepare a professional summary
    4. Combine results into a final report

    Always delegate to specialized agents rather than answering directly.
    Coordinate agents in the order: research → analysis → reporting.""",
    tools=[
        research_remote,
        data_analysis_remote,
        email_remote,
        FunctionTool(format_final_report),
    ],
)

Part 3: Multi-Agent Coordination Patterns with A2A

Sequential A2A Pipeline

from google.adk.agents import SequentialAgent
from orchestrator_agent import research_remote, data_analysis_remote, email_remote  # defined in Part 2

# Sequential pipeline: each agent's output feeds the next
pipeline = SequentialAgent(
    name="research-pipeline",
    description="Sequential research, analysis, and reporting pipeline",
    sub_agents=[
        research_remote,    # Step 1: Research
        data_analysis_remote,  # Step 2: Analyze research output
        email_remote,       # Step 3: Draft report from analysis
    ],
)
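The core behavior of a sequential pipeline, where each sub-agent receives the previous one's output, can be sketched in plain Python. The stage functions below are stand-ins for the remote agent calls, not ADK code:

```python
from typing import Callable


def run_sequential(stages: list[Callable[[str], str]], task: str) -> str:
    """Feed the task through each stage in order, mirroring how a
    sequential pipeline passes one sub-agent's output to the next."""
    result = task
    for stage in stages:
        result = stage(result)
    return result


# Stand-ins for the research -> analysis -> drafting remote agents:
pipeline_result = run_sequential(
    [
        lambda t: f"research({t})",
        lambda t: f"analysis({t})",
        lambda t: f"draft({t})",
    ],
    "EV market",
)
# pipeline_result == "draft(analysis(research(EV market)))"
```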

Parallel A2A Research

from google.adk.agents import ParallelAgent
from google.adk.a2a import RemoteA2AAgent

# Run multiple research agents simultaneously
parallel_research = ParallelAgent(
    name="parallel-market-research",
    description="Simultaneously researches multiple market segments",
    sub_agents=[
        RemoteA2AAgent(
            name="competitor_researcher",
            agent_card_url="https://research-agent.example.com/.well-known/agent.json",
            task_override="Focus on competitor analysis and market positioning",
        ),
        RemoteA2AAgent(
            name="trends_researcher",
            agent_card_url="https://research-agent.example.com/.well-known/agent.json",
            task_override="Focus on industry trends and emerging technologies",
        ),
        RemoteA2AAgent(
            name="customer_researcher",
            agent_card_url="https://research-agent.example.com/.well-known/agent.json",
            task_override="Focus on customer sentiment and needs analysis",
        ),
    ],
    merge_strategy="concatenate",  # Combine all outputs for the next stage
)
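Under the hood, the parallel pattern amounts to fanning tasks out concurrently and merging the results. Here is a minimal, ADK-free sketch with asyncio; the call_agent coroutine is a hypothetical stand-in for a real remote A2A call:

```python
import asyncio


async def fan_out(task_variants: list[str]) -> str:
    """Run several research variants concurrently and concatenate the
    results, mirroring a parallel agent with a concatenate merge."""

    async def call_agent(variant: str) -> str:
        # Stand-in for a remote A2A call; a real version would await
        # the remote agent over HTTP.
        await asyncio.sleep(0)
        return f"[{variant}]"

    # gather() preserves input order, so merged output is deterministic.
    results = await asyncio.gather(*(call_agent(v) for v in task_variants))
    return "\n".join(results)


merged = asyncio.run(fan_out(["competitors", "trends", "customers"]))
# merged == "[competitors]\n[trends]\n[customers]"
```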

Dynamic Agent Selection

from google.adk.a2a import A2AAgentRegistry, DynamicA2ARouter

# Registry of available A2A agents
registry = A2AAgentRegistry(
    agent_card_urls=[
        "https://research-agent.example.com/.well-known/agent.json",
        "https://legal-agent.example.com/.well-known/agent.json",
        "https://finance-agent.example.com/.well-known/agent.json",
        "https://hr-agent.example.com/.well-known/agent.json",
    ]
)

# Dynamic router selects the best agent for each task
router = DynamicA2ARouter(
    orchestrator_model="gemini-2.0-pro",
    registry=registry,
    routing_strategy="capability-match",  # LLM matches task to agent skills
)
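The capability-match idea can be approximated without an LLM by scoring word overlap between the task and each agent's skill description. This toy router is purely illustrative; the router above delegates that matching to the orchestrator model:

```python
def route(task: str, skills: dict[str, str]) -> str:
    """Pick the agent whose skill description shares the most words with
    the task. A crude stand-in for LLM-based capability matching."""
    task_words = set(task.lower().split())

    def overlap(agent: str) -> int:
        return len(task_words & set(skills[agent].lower().split()))

    return max(skills, key=overlap)


# Hypothetical skill descriptions, keyed by agent name:
skills = {
    "research-agent": "web research reports citations search",
    "legal-agent": "contract review compliance legal analysis",
    "finance-agent": "financial modeling forecasts budgets",
}
best = route("review this contract for compliance risks", skills)
# best == "legal-agent"
```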

Part 4: Handling Task State and Streaming

# Handling streaming responses from remote A2A agents
from orchestrator_agent import research_remote  # defined in Part 2
async def run_streaming_task():
    async with research_remote.stream_task(
        "Research the latest AI agent security frameworks published in 2026"
    ) as stream:
        async for event in stream:
            if event.type == "chunk":
                print(event.delta, end="", flush=True)
            elif event.type == "status":
                print(f"\n[Status: {event.status}]")
            elif event.type == "completed":
                print(f"\n[Completed. Artifacts: {len(event.artifacts)}]")
                return event.artifacts[0]

Part 5: Deploying to Vertex AI Agent Engine

Vertex AI Agent Engine provides managed hosting for ADK agents with automatic A2A endpoint configuration:

# Build and deploy your A2A-enabled ADK agent
gcloud ai agents create \
  --display-name="Research Agent" \
  --region=us-central1 \
  --source=research_agent.py \
  --a2a-enabled \
  --auth-mode=google-iam

# After deployment, your agent is available at:
# https://{region}-aiplatform.googleapis.com/v1/projects/{project}/locations/{region}/agents/{agent-id}/a2a
# /.well-known/agent.json is automatically configured

# Configure IAM-based auth for Vertex AI A2A agents
from google.adk.a2a import RemoteA2AAgent, A2AClientConfig
from google.auth import default
from google.auth.transport.requests import Request

# Vertex AI A2A agents use Google IAM tokens
credentials, project = default()
credentials.refresh(Request())

vertex_agent = RemoteA2AAgent(
    name="vertex_research_agent",
    agent_card_url=(
        "https://us-central1-aiplatform.googleapis.com"
        "/v1/projects/my-project/locations/us-central1"
        "/agents/research-agent-123/.well-known/agent.json"
    ),
    auth_config=A2AClientConfig(
        auth_type="google-iam",
        service_account="orchestrator@my-project.iam.gserviceaccount.com",
    ),
)

Observability and Debugging

ADK provides built-in tracing for A2A task flows:

from google.adk.tracing import configure_tracing

# Enable Cloud Trace integration
configure_tracing(
    exporter="google-cloud-trace",
    project_id="my-project",
    trace_a2a_requests=True,  # Trace all A2A task requests and responses
    trace_llm_calls=True,
)

With tracing enabled, you can see the complete execution flow across all A2A agents in Cloud Trace, including latency breakdown per agent and per task.
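When Cloud Trace is not available, for example during local development, a rough per-agent latency breakdown can be collected with a small timer. This helper is an assumption for illustration, not part of ADK:

```python
import time
from contextlib import contextmanager

# Accumulated wall-clock seconds per agent name.
timings: dict[str, float] = {}


@contextmanager
def timed(agent_name: str):
    """Record wall-clock time spent inside a remote-agent call."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        timings[agent_name] = timings.get(agent_name, 0.0) + elapsed


# Usage around any remote call (the sleep stands in for real agent work):
with timed("research_specialist"):
    time.sleep(0.01)
```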

Security Considerations

When building A2A multi-agent systems with ADK:

  • Use Google Cloud IAM for agent-to-agent authentication within GCP environments
  • Apply least privilege principles — each remote agent should only receive the data it needs for its specific skill
  • Enable agent audit trails by configuring Cloud Audit Logs on your Vertex AI agents
  • Implement input validation before passing user data to remote A2A agents
  • Test your multi-agent pipeline with agent red teaming before production launch
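The input-validation point above can be made concrete with a small pre-delegation check. The specific rules here (control-character stripping, a length cap) are illustrative only; real deployments should add schema validation and policy checks:

```python
import re

MAX_TASK_LENGTH = 4000  # illustrative cap; tune for your agents


def sanitize_task(text: str) -> str:
    """Basic checks before sending user input to a remote A2A agent:
    reject empty tasks, strip control characters (keeping normal
    whitespace), and enforce a length cap."""
    if not text.strip():
        raise ValueError("empty task")
    cleaned = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    if len(cleaned) > MAX_TASK_LENGTH:
        raise ValueError("task exceeds length limit")
    return cleaned.strip()
```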

For a comprehensive security guide for production A2A deployments, see Securing AI Agents.

Next Steps

  • Review the A2A Protocol glossary entry for a full protocol overview
  • Build a custom A2A-compliant agent without ADK
  • Compare A2A vs Function Calling to understand the right tool for each use case
  • Explore MCP server integration for tool access alongside A2A agent coordination
