Munich Datageeks e.V.
Talk "MCP vs A2A: Which Protocol Is Better for AI Agents"

Datageeks Admin

MCP vs A2A: Which Protocol Is Better for AI Agents by Prajwal Nagaraja was presented at Munich Datageeks - April Edition 2025

Abstract

The Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol offer distinct advantages for AI agent interaction and resource access. MCP acts as a universal connector, enabling AI models to interact with external data sources and tools, thus extending their functionality beyond training data. A2A, on the other hand, facilitates communication between independent AI agents, allowing them to collaborate regardless of their underlying frameworks. This talk will compare MCP and A2A, highlighting their core architectures, functionalities, and use cases. We will also explore how these protocols can be utilized to enhance generative AI (GenAI) applications, enabling more dynamic and interactive AI-driven solutions.

About the speaker

Prajwal Nagaraja is a Machine Learning Engineer at BMW, specializing in generative AI (GenAI) applications. Prajwal focuses on developing GenAI solutions that solve real-world problems.

Transcript summary

Speaker Introduction

Prajwal holds a Master's degree in Data Science from Frankfurt University and has professional experience working at FEV Automotive as a data scientist and at Bosch focusing on cloud technologies. For the past three years, he has been working at BMW as a machine learning engineer with a focus on generative AI and data science. He was invited to speak at Munich Datageeks by a colleague who recommended it as an excellent venue for discussing emerging topics and receiving feedback from the data science and AI community.

The Evolution of Generative AI and the Need for Standardization

The presentation begins by tracing the development of generative AI, which started around 2019. While ChatGPT created significant excitement with its ability to write code and generate text, it initially lacked the maturity needed for industrial applications and real-world use cases. Over time, the technology evolved to support tools - mechanisms that allow AI systems to perform specific functions like calculations through calculator tools or database interactions through structured access methods. This addition of tools made AI systems more structured and better suited for practical applications across different use cases.

However, as the AI ecosystem expanded rapidly, a critical challenge emerged: the lack of standardization. Different AI services and developers each implemented their own approaches, creating fragmentation in the industry. For example, when building a RAG (Retrieval-Augmented Generation) database with question-answering chatbot capabilities, developers face numerous choices among vector database options and various language models, including OpenAI's GPT-4o and models offered through AWS Bedrock. Without a standard protocol, selecting the optimal service from this vast array becomes problematic, and every AI developer tends to believe their own approach is superior.

Introducing MCP: Model Context Protocol

MCP emerges as the missing standard for AI integration, designed to enable collaboration among multiple AI developers through a common language. The protocol addresses several key challenges including reducing application complexity, minimizing the variety of authentication methods, establishing standard procedures for security concerns, and facilitating API integration into various use cases.

The analogy drawn is that just as web development has REST API as its standard protocol, MCP now serves this role for AI applications. It enables code reusability and consistency across the community - if one AI developer in a company creates a database chat service and another developer at a different company builds similar functionality, using MCP ensures both speak the same language and follow similar code standards, promoting reusability throughout the community.

MCP Server Architecture and Components

An MCP server follows a client-server architecture where the client represents the user interface layer. Clients can take various forms including Claude Desktop (a freely available option), Visual Studio Code, GitHub integrations, or web applications built with Streamlit, Django, or Flask.

Every MCP server must include three core components by default:

  1. Tools: These define the actions the server can perform, such as retrieving and searching PDF documents, sending emails through Outlook, or executing database operations. An MCP server can contain a single tool or multiple tools stacked together depending on requirements.
  2. Resources: These represent the data that the server processes and what the client function can access within the MCP server, such as database connections or document repositories.
  3. Prompts: These are specific instructions tailored to the MCP server and its tools. Well-crafted prompts are crucial because they significantly impact accuracy. The speaker emphasizes that prompt engineering has become one of the most valuable skills in AI development - an AI agent might require eight hours of development work, but if the prompts are poorly designed, it will fail. Conversely, someone who simply fixes the prompts can make the system work effectively and receive credit for the success.

The core architecture operates through a protocol layer that automatically converts requests into JSON-RPC 2.0 format, facilitating communication between the server (containing resources, tools, and prompts) and the client.
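As a rough illustration of that protocol layer, the snippet below wraps a request in a JSON-RPC 2.0 envelope using only the standard library; the `tools/call` method name follows MCP conventions, but the `search_pdf` tool and its arguments are invented placeholders, not taken from the talk.

```python
import json

def make_jsonrpc_request(request_id, method, params):
    """Wrap a method call in a JSON-RPC 2.0 request envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# The kind of tool-invocation message an MCP client might send.
msg = make_jsonrpc_request(1, "tools/call", {
    "name": "search_pdf",                      # hypothetical tool name
    "arguments": {"query": "quarterly revenue"},
})
decoded = json.loads(msg)
print(decoded["method"])  # tools/call
```

The server answers with a matching `result` (or `error`) object carrying the same `id`, which is how requests and responses are paired across the transport.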

MCP Client Configuration

The client side offers significant flexibility in configuration. For chatbot applications, clients typically require session history management to maintain conversation context. Clients can be configured to connect to one or multiple MCP servers depending on requirements, and must handle connection issues appropriately. While the client layer is highly customizable and changeable based on specific needs, the MCP server maintains standardization as it follows established protocols for tools, resources, and prompts.
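A minimal sketch of such a client, assuming nothing beyond the points above (session history, connections to one or more servers, graceful handling of connection failures); the class and method names are invented for illustration:

```python
class ChatClient:
    """Toy sketch of an MCP-style client: keeps session history
    and tracks connections to one or more servers."""

    def __init__(self, server_urls):
        self.server_urls = list(server_urls)
        self.history = []    # conversation context for the chatbot
        self.connected = {}  # server url -> connection status

    def connect_all(self):
        for url in self.server_urls:
            try:
                self.connected[url] = self._connect(url)
            except ConnectionError:
                # Handle connection issues instead of crashing the session.
                self.connected[url] = False

    def _connect(self, url):
        # Placeholder: a real client would open an STDIO or HTTP transport.
        return True

    def ask(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        live = sum(self.connected.values())
        reply = f"(reply composed using {live} connected server(s))"
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

The point of the sketch is the asymmetry the talk describes: this layer can be reshaped freely per application, while the servers it talks to stay standardized.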

MCP Transport Protocol Evolution

MCP supports different transport mechanisms that have evolved over time. The initial implementation used STDIO (standard input/output), which is particularly useful for local development. Production deployments over HTTP add complexity, since they involve streaming (for example via Server-Sent Events) and cloud deployment, which introduces associated costs. MCP also leaves room for alternative transports suited to showcasing or testing within the open-source community, offering flexibility in implementation approaches.
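To make the STDIO idea concrete, here is a toy sketch of a newline-delimited JSON-RPC loop over standard input/output; the `tools/list` method name follows MCP conventions, but the handler is a simplified stand-in, not a real transport implementation:

```python
import json
import sys

def handle_line(line, tools):
    """Handle one newline-delimited JSON-RPC message, as an STDIO
    transport would receive it, and return the response string."""
    req = json.loads(line)
    if req.get("method") == "tools/list":
        result = {"tools": sorted(tools)}
    else:
        result = {"error": f"unknown method {req.get('method')}"}
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

def serve_stdio(tools):
    """Read one request per line from stdin, write one response per line."""
    for line in sys.stdin:
        sys.stdout.write(handle_line(line, tools) + "\n")
        sys.stdout.flush()
```

Because the framing is just lines of JSON on stdin/stdout, a client can launch the server as a local subprocess, which is what makes this transport so convenient during development.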

Growing MCP Ecosystem and Ease of Setup

The MCP ecosystem has experienced rapid growth, with the open-source community contributing somewhere between 50 and well over 100 MCP servers. A comprehensive list of reference servers is available, covering services like Google Maps and PostgreSQL databases. Integration is straightforward - developers can download Docker images or MCP servers from open-source repositories and integrate them into development environments such as VS Code or JetBrains IDEs, with GitHub integration also supported.

The ease of implementation represents a significant advantage. Developing a database chat functionality from scratch typically requires months or quarters of planning and development. However, when an AI developer has already published an MCP server for this purpose, others can simply download the Docker image, add a JSON configuration file to their IDE, and immediately use the SQL server functionality. This reusability stems from everyone using the same standardized language.
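The JSON configuration mentioned above typically looks something like the following; the exact keys vary by client, the `mcpServers`/`command`/`args` shape follows the style used by Claude Desktop-like clients, and the server name, Docker image, and connection string are purely illustrative:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm", "mcp/postgres",
        "postgresql://user:pass@localhost:5432/mydb"
      ]
    }
  }
}
```

Dropping a block like this into the IDE's configuration is the entire integration step - the client launches the container and speaks the standard protocol to it.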

Building Custom MCP Servers

While many publicly available MCP servers exist, organizations often cannot use them due to compliance and governance requirements. Taking external MCP servers and deploying them in corporate environments can create compliance risks. In such cases, organizations build their own MCP servers.

When creating custom MCP servers, developers use specific decorators before defining tools. This standardization means all developers follow the same protocol rather than each creating their own Python function structure. These decorators enable anyone using the MCP server to retrieve all defined tools. Additionally, tool handlers are implemented to manage errors gracefully and provide appropriate responses when something goes wrong.

The tools are defined with descriptions, and there is a structured way to call the tool server. For prompts, which are another crucial component, MCP servers can contain multiple prompts for different tools. These are organized in a centralized location using a list prompts structure, with multiple prompts for multiple tools stacked together. Developers can call specific prompts as needed through provided decorator methods and functions, creating a structured approach to prompt management.
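A toy sketch of that decorator pattern (not the actual MCP SDK API): a decorator registers each tool with its description so any client can list them, and the call path wraps tool handlers so errors are returned gracefully rather than raised:

```python
class MiniServer:
    """Toy sketch of decorator-based tool/prompt registration,
    mirroring the pattern MCP SDKs use (names invented here)."""

    def __init__(self, name):
        self.name = name
        self._tools = {}
        self._prompts = {}

    def tool(self, description):
        def register(fn):
            self._tools[fn.__name__] = {"fn": fn, "description": description}
            return fn
        return register

    def prompt(self, name):
        def register(fn):
            self._prompts[name] = fn
            return fn
        return register

    def list_tools(self):
        # Any client can discover every registered tool plus its description.
        return {n: t["description"] for n, t in self._tools.items()}

    def call_tool(self, name, **kwargs):
        try:
            return {"ok": True, "result": self._tools[name]["fn"](**kwargs)}
        except Exception as exc:  # tool handler: fail gracefully
            return {"ok": False, "error": str(exc)}

server = MiniServer("sql-chat")

@server.tool(description="Run a read-only SQL query")
def run_query(sql):
    return f"rows for: {sql}"
```

Because every developer registers tools through the same decorator interface rather than ad-hoc Python functions, any MCP-aware client can enumerate and invoke them without knowing the server's internals.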

Deploying MCP Servers to Cloud Providers

Once an MCP server is developed with tools, prompts, and resources, it can be deployed to cloud platforms.

AWS Deployment: The MCP server is dockerized and deployed to services such as ECS, EKS, Lambda, or Fargate for serverless implementations. The architecture includes a load balancer that handles HTTPS traffic from users hitting the API. Route 53 can be added to provide a specific DNS name for the MCP server. Both the client and server can run on EC2 instances sharing a common DNS, with users accessing the system through Streamlit or other UI frameworks like Angular. In the background, the MCP server contacts various APIs and data sources.

Azure Deployment: The process is similar but uses Azure-specific services. The API Gateway is replaced with API Management. The MCP server is dockerized and deployed to Azure Functions or dynamic apps. The server is then exposed as an API to different UI use cases including VS Code and other interfaces.

Security: Authentication is critical for MCP servers. Organizations typically implement Microsoft Entra ID or other authentication mechanisms aligned with their industry standards to establish a security layer on top of published MCP servers.

Cloud providers offer deployment advantages by providing ready-to-use GitHub deployment configurations, minimizing the effort required for cloud deployment.

MCP in Action: Tableau Example

A practical example demonstrates MCP functionality using Claude Desktop as a freely available client and a Tableau MCP server containing multiple tools and specific prompts. When a user asks a question through the client interface, the client determines which tools to use. Tool one communicates back to the client indicating which tool should be used for the particular user query and returns a response. If the client determines it needs additional information about the Tableau data, it automatically contacts another tool to gather more details. This back-and-forth interaction continues until the user receives an accurate answer, demonstrating how MCP enables the construction of agentic AI systems.
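The back-and-forth loop described above can be sketched like this; the two Tableau tools and their outputs are invented stand-ins for illustration:

```python
def agent_loop(question, tools, max_steps=5):
    """Sketch of the client/tool back-and-forth: keep consulting tools,
    accumulating context, until one signals it has a final answer."""
    context = {"question": question}
    for _ in range(max_steps):
        for name, tool in tools.items():
            result = tool(context)
            context[name] = result["data"]
            if result["final"]:
                return result["data"]
    return "no answer within step budget"

# Two hypothetical Tableau tools: one lists dashboards, one summarizes
# once the dashboard list is already in the accumulated context.
tools = {
    "list_dashboards": lambda ctx: {"data": ["Sales", "Churn"], "final": False},
    "summarize": lambda ctx: {
        "data": f"{len(ctx.get('list_dashboards', []))} dashboards found",
        "final": "list_dashboards" in ctx,
    },
}

answer = agent_loop("Which dashboards exist?", tools)
```

Each pass enriches the shared context, which is exactly the "gather more details, then answer" behavior the demo shows.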

Introduction to A2A: Agent-to-Agent Protocol

While MCP was developed by Anthropic, Google introduced another protocol called Agent-to-Agent (A2A). Though it appears similar to MCP, it serves a different purpose. The key distinction is that MCP focuses on creating standardized servers, while A2A concentrates on creating and connecting agents.

An agent represents an enhanced LLM model with additional capabilities. For example, ChatGPT as a basic LLM becomes an agent when given powers such as calculating distances between Earth and Mars or between Munich and Frankfurt. Different agents can handle different specialized tasks - one might focus on location services while another specializes in Google Maps integration. Multiple agents with various capabilities need to communicate with each other, and A2A protocol facilitates this inter-agent communication.

Agents can be built using different frameworks and providers: LangGraph (an open-source tool), a Llama model running through CrewAI, or other LLM providers with different parameters. The advantage of the A2A protocol is that all these differently built agents can communicate using the same A2A language.

A2A Agent Discovery and Communication

In an A2A system, agent communication is managed through an orchestration agent that performs agent discovery. Every agent defines its capabilities in an agent.json file - essentially a CV for each agent. The supervisor agent collects these agent.json files from all available agents to understand their skill sets and capabilities. These per-agent JSON files are what enable multiple agents to communicate with each other, increasing the robustness of AI applications.
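An agent card along these lines might look as follows; the field names are modeled on the A2A agent card format, and all values here are illustrative:

```json
{
  "name": "HR Agent",
  "description": "Handles HR tasks such as creating employee records",
  "url": "https://agents.example.com/hr",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "skills": [
    {
      "id": "create_employee",
      "name": "Create employee record",
      "description": "Adds a new hire to the HR system"
    }
  ]
}
```

The orchestrator never needs to know how an agent is implemented - the card's `skills` list is the whole contract it reads during discovery.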

A2A Use Case: Employee Onboarding

A practical example illustrates A2A functionality through an employee onboarding scenario. The system includes an orchestration agent and four specialized agents: HR agent, IT agent, Facility agent, and Hiring Manager agent. The orchestration agent handles the onboarding process for a new company employee. Using agent.json files, the orchestration agent reads what each specialized agent does - the capabilities of the HR agent, IT agent, and Facility agent. Based on user queries, it communicates between different agents to complete various tasks. Once all tasks are finished, the system reports back to the Hiring Manager agent that the new user has been successfully onboarded. This demonstrates how multiple agents with different skill sets can collaborate to accomplish complex tasks for specific use cases.
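The discovery-and-routing step can be sketched in a few lines, with invented skill names standing in for the real agents' capabilities:

```python
def discover(agent_cards):
    """Build a skill -> agent index from each agent's card (its 'CV')."""
    index = {}
    for card in agent_cards:
        for skill in card["skills"]:
            index[skill] = card["name"]
    return index

def onboard(tasks, index):
    """Route each onboarding task to the agent advertising that skill."""
    plan = []
    for task in tasks:
        agent = index.get(task, "unassigned")
        plan.append((task, agent))
    return plan

cards = [
    {"name": "HR Agent", "skills": ["create_record"]},
    {"name": "IT Agent", "skills": ["provision_laptop"]},
    {"name": "Facility Agent", "skills": ["assign_desk"]},
]
index = discover(cards)
plan = onboard(["create_record", "provision_laptop", "assign_desk"], index)
```

Once every task in the plan reports success, the orchestrator can notify the Hiring Manager agent that onboarding is complete, as in the scenario above.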

Integrating MCP and A2A

Rather than viewing MCP and A2A as competing technologies, they can be integrated to create more powerful systems. In generative AI applications, using multiple agents is typically necessary because single agents have limited problem-solving capabilities. The integration approach involves attaching each agent to an MCP server - one agent connects to one MCP server with its specific client, while another agent connects to a different MCP server. These agents then communicate using the A2A protocol.

For example, one agent might connect to a Redshift database through its MCP server, while another agent connects to an Oracle database through a different MCP server. The two agents, each connected to their respective MCP servers, communicate with each other via A2A protocol. This combination enables extensive functionality by leveraging the advantages of both protocols. Rather than debating which protocol is superior, using both together provides complementary benefits, as each has distinct advantages and disadvantages. The combined approach creates more robust and capable AI applications.
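A schematic sketch of that combination, with each agent's MCP connection stubbed out as a simple callable and `send()` standing in for an A2A message exchange (all names invented for illustration):

```python
class Agent:
    """Sketch: each agent owns one MCP connection (stubbed here) and
    exchanges messages with peers over an A2A-style send()."""

    def __init__(self, name, mcp_query):
        self.name = name
        self.mcp_query = mcp_query  # stands in for a real MCP client call
        self.inbox = []

    def send(self, peer, message):
        # A2A-style message passing between independent agents.
        peer.inbox.append((self.name, message))

    def answer(self, question):
        # Consult this agent's own data source via its MCP server.
        return self.mcp_query(question)

redshift_agent = Agent("redshift", lambda q: f"redshift result for {q!r}")
oracle_agent = Agent("oracle", lambda q: f"oracle result for {q!r}")

# One agent asks the other for data only the peer's MCP server can reach.
redshift_agent.send(oracle_agent, "need customer table")
sender, msg = oracle_agent.inbox[0]
reply = oracle_agent.answer(msg)
```

The division of labor mirrors the talk's conclusion: MCP handles each agent's vertical connection to tools and data, while A2A handles the horizontal connections between agents.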