The Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how AI applications access external tools and data sources.
Imagine you are developing a system of AI Agents that need to access information from the internet. With MCP, there’s no need to build or adapt a custom integration for each framework: since the approach is standardized, you simply add this functionality to the capabilities of the Agents you’re building.
Anthropic itself uses the analogy that MCP works like a USB port: you can access the functionalities of the “device” (in this case, the external tool) in a consistent way, regardless of the system you’re working on.
In this post, I’ll cover:
- The Architecture of MCP
- Where to find Available MCP Servers?
- MCP Transport Layers
- Practice 1: Using MCP with LangChain
- Practice 2: Using MCP with CrewAI
- Practice 3: Using MCP with Hugging Face
- Conclusion
The Architecture of MCP
MCP follows a client-server architecture. In this model, the client sends a request to the server, which processes it and returns a response.

The idea is that your AI Agent accesses the functionalities provided by the MCP server through the MCP client, as illustrated in the image below.

The MCP client is responsible for:
- Discovering the capabilities of servers;
- Receiving responses from servers;
- Managing the execution of tools.
MCP servers can provide:
- Prompt templates;
- Resources such as data files, entire file systems, traditional databases, and more;
- Tools, which can be any Python function, like API access, image processing, among others.
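Under the hood, client and server exchange JSON-RPC 2.0 messages. The sketch below illustrates the general shape of the messages for discovering and calling a tool; the exact schemas are defined by the MCP specification, and the tool name and payloads here are hypothetical.

```python
import json

# Simplified sketch of the JSON-RPC 2.0 messages an MCP client exchanges
# with a server (method names follow the MCP spec; payloads are illustrative).

# 1. The client discovers the server's capabilities (here: its tools).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server answers with the tools it exposes (hypothetical example).
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "google_search",
                          "description": "Search the web via Serper"}]},
}

# 3. The client asks the server to run one of those tools.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "google_search",
               "arguments": {"q": "Agent2Agent protocol"}},
}

# Messages are serialized as JSON on the wire.
print(json.dumps(call_request))
```

In practice, your framework's MCP client builds and parses these messages for you; the point is that every server speaks the same message format, which is what makes the "USB port" analogy work.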
In an AI application that uses MCP, you usually won’t need to develop the client, as it’s typically provided by the frameworks themselves. However, even with many MCP servers already available, it’s quite likely that you’ll need to develop your own server, depending on the specific needs of your application. This can be done easily using FastMCP, which I’ll cover in the next post.
Where to find Available MCP Servers?
On the Smithery website, you can find several available MCP servers, such as those for Serper, GitHub, Slack, and many others.
I’ll show an example of how to get the Serper MCP server, which is responsible for performing internet searches. After creating your account, type “serper” in the search bar and you’ll be taken to the page shown below.

Next, click on the JSON menu to obtain the file that will enable access to this server. Note that to access the JSON, you’ll need the Serper API key, which you can get by creating an account through this link — some free credits are provided.

In this post, I will use this server so that Agents can search for information on the internet. Therefore, when we reach this section, I will present the JSON obtained by following these steps.
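For reference, the JSON provided by Smithery follows the common mcpServers configuration format. Its general shape is roughly as follows (placeholder values instead of real credentials):

```json
{
  "mcpServers": {
    "mcp-server-serper": {
      "command": "npx",
      "args": [
        "-y", "@smithery/cli@latest", "run",
        "@marcopesani/mcp-server-serper",
        "--key", "<YOUR_SMITHERY_KEY>",
        "--profile", "<YOUR_SMITHERY_PROFILE>"
      ]
    }
  }
}
```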
MCP Transport Layers
When using MCP servers, it is necessary to define the transport, which can be SSE (Server-Sent Events) or Stdio (Standard Input/Output).
SSE is based on HTTP and ideal for web applications and distributed systems that require continuous, real-time updates. Stdio, on the other hand, is used for local processing, enabling communication between processes on the same device without the need for a network or more complex protocols.
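To make the stdio idea concrete, here is a toy sketch (not real MCP framing): a parent process talks to a child process over stdin/stdout, which is exactly the channel an MCP client uses when it launches a stdio server as a subprocess.

```python
import json
import subprocess
import sys

# Toy "server": reads one JSON line from stdin, answers on stdout.
# Real MCP stdio servers speak JSON-RPC 2.0 over this same channel.
CHILD = (
    "import sys, json;"
    "req = json.loads(sys.stdin.readline());"
    "print(json.dumps({'id': req['id'], 'result': 'pong'}))"
)

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# The "client" writes a request and reads the response; no network involved.
out, _ = proc.communicate(json.dumps({"id": 1, "method": "ping"}) + "\n")
response = json.loads(out)
print(response)  # {'id': 1, 'result': 'pong'}
```

This is why stdio needs no ports, certificates, or network configuration: the operating system pipes between the two processes are the entire transport.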
Practice 1: Using MCP with LangChain
The use case is an internet search, performed by the Agent, on the topic Agent2Agent. For this, the Serper MCP server was used (how to obtain it is explained in the “Where to find Available MCP Servers?” section). Below are the Python version used and the requirements.txt file for download.
First, the MCP client (mcp_client.py) was created, which will be responsible for connecting to the Serper MCP server. The code used for this connection is shown below.
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from dotenv import load_dotenv
import os

load_dotenv()

SMITHERY_KEY = os.getenv("SMITHERY_KEY")
SMITHERY_PROFILE = os.getenv("SMITHERY_PROFILE")


def get_mcp_client():
    # JSON obtained from Smithery, adapted into the client's dictionary format
    client = MultiServerMCPClient({
        "mcp-server-serper": {
            "command": "npx",
            "args": [
                "-y",
                "@smithery/cli@latest",
                "run",
                "@marcopesani/mcp-server-serper",
                "--key",
                SMITHERY_KEY,
                "--profile",
                SMITHERY_PROFILE,
            ],
            "transport": "stdio",
        }
    })
    return client
```
Note that LangChain provides a dedicated class for this, MultiServerMCPClient (from the langchain-mcp-adapters package), which allows connecting to MCP servers defined in a JSON-formatted dictionary. This JSON can be obtained directly from the Smithery website, as mentioned in earlier sections.
It’s worth noting that, in this use case, the transport used was stdio, which is suitable for local execution. Additionally, the environment variables SMITHERY_KEY and SMITHERY_PROFILE represent, respectively, the key and profile provided by Smithery, required for authentication and remote server execution via npx.
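These variables are read from a .env file by load_dotenv(). Its expected shape is roughly the following (placeholder values; the OpenAI key is included because the Agent created later uses an OpenAI model):

```
SMITHERY_KEY=<your_smithery_api_key>
SMITHERY_PROFILE=<your_smithery_profile>
OPENAI_API_KEY=<your_openai_api_key>
```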
Next, I created a file called researcher_agent.py, which is responsible for defining the Agent — the Researcher.
```python
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


class Researcher:
    def __init__(self, tools):
        self.__tools = tools

    def create_agent(self):
        prompt = (
            "Conduct an internet search on the topic provided by the user. Based on the most relevant, reliable, and up-to-date information found:\n"
            "- Provide a clear and concise summary of the topic.\n"
            "- Use bullet points when possible and helpful to improve readability and organization.\n"
            "- If there are public discussions, controversies, or differing viewpoints, briefly mention them.\n"
            "- If available, highlight data, statistics, or studies that help clarify the subject.\n"
            "- Maintain objective, accessible, and well-structured language.\n"
            "- At the end, list the most relevant URLs used or found during the research."
        )
        model = ChatOpenAI(model="gpt-4o-mini")
        agent = create_react_agent(
            model=model,
            tools=self.__tools,
            prompt=SystemMessage(content=prompt),
        )
        return agent
```
Note that the tools are passed in the class constructor and, in this use case, they are the tools from the Serper MCP Server. It’s also worth highlighting that the model used was OpenAI’s gpt-4o-mini.
Finally, below is the main.py file, responsible for running the workflow.
```python
import asyncio

from mcp_client import get_mcp_client
from researcher_agent import Researcher
from dotenv import load_dotenv

load_dotenv()


async def main():
    client = get_mcp_client()
    tools = await client.get_tools()
    researcher = Researcher(tools)
    agent = researcher.create_agent()
    resp = await agent.ainvoke({"messages": "What is the Agent2Agent protocol?"})
    print(resp["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```
After instantiating the MCP client, the Serper tools are obtained through the get_tools() method. These tools are then passed to the Agent’s constructor. Next, the Agent is executed using the asynchronous ainvoke() method, which takes as input the question “What is the Agent2Agent protocol?”. It’s worth noting that the main() function is asynchronous to allow the use of await calls in the code.
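The structure navigated by resp['messages'][-1].content can be pictured like this (a stand-in sketch; the real objects are LangChain message classes such as HumanMessage and AIMessage, and the content shown is illustrative):

```python
from dataclasses import dataclass


# Stand-in for LangChain's message objects, just to show the shape that
# resp["messages"][-1].content navigates. Not the real classes.
@dataclass
class Message:
    role: str
    content: str


resp = {
    "messages": [
        Message("human", "What is the Agent2Agent protocol?"),
        Message("ai", "The Agent2Agent (A2A) protocol is ..."),
    ]
}

# The final answer is the content of the last message in the agent state.
print(resp["messages"][-1].content)  # The Agent2Agent (A2A) protocol is ...
```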
Below are the generated results. First is the Agent’s response without using the tools from the Serper MCP server, that is, with researcher_agent.py modified from tools=self.__tools to tools=[]. Then comes the Agent’s response using the tools and, consequently, with internet access.
Result without Tool:
The Agent2Agent protocol is a communication framework designed for enabling decentralized and autonomous interaction among software agents or systems, particularly in environments where traditional centralized communication methods may not be effective. Here’s a structured summary of the protocol:
Overview of Agent2Agent Protocol
- Purpose: Facilitates direct communication between agents in a decentralized manner.
- Architecture: Built to support various types of agents, typically in multi-agent systems (MAS).
- Autonomy: Each agent operates independently, making its own decisions based on the information received and its internal logic.
Key Features
- Interoperability: Allows diverse agents (possibly built with different technologies) to communicate seamlessly.
- Scalability: Designed to effectively handle increased numbers of agents without significant loss in performance.
- Flexibility: Adaptable to a variety of applications, from robotics to IoT (Internet of Things) environments.
Potential Use Cases
- Smart City Initiatives: Enabling communication between different sensors and systems to improve urban living.
- Supply Chain Management: Facilitating real-time interaction among logistics, inventory, and delivery agents.
- Collaborative Learning: Agents can share and improve learning data without a central authority.
Public Discussions and Controversies
- Security Concerns: As with any decentralized system, there are concerns regarding data integrity and privacy.
- Standardization Issues: Different implementations of the protocol may lead to fragmentation, making it difficult for agents to communicate effectively across different platforms.
Data and Studies
- Although specific studies analyzing the Agent2Agent protocol were not found, the broader field of multi-agent systems has seen extensive research and academic interest. For instance, a number of studies discuss:
- Performance metrics for agent communication.
- Applications that have utilized similar protocols successfully in real-world environments.
Conclusion
The Agent2Agent protocol represents a significant advancement in the field of decentralized communications among autonomous systems, enhancing capabilities for scalability, flexibility, and interoperability. However, ongoing discussions about security and standardization highlight the challenges this protocol might face going forward.
References
As of my last training and updates until October 2023, specific URLs for detailed information were unavailable. However, here are suggested search terms to use:
- “Agent2Agent protocol multi-agent systems”
- “Decentralized communication protocols”
- “Autonomous agents communication frameworks”
For the latest information, consider checking reputable sources in AI and robotics journals, tech conference papers, or relevant databases.
Result with Tool:
The Agent2Agent (A2A) protocol is a newly introduced open standard, designed to facilitate seamless interoperability and collaboration among autonomous AI agents across various platforms and frameworks. It was launched by Google with contributions from over 50 technology partners and is managed under the governance of the Linux Foundation.
Key Features and Objectives of the A2A Protocol:
- Interoperability: A2A allows different AI agents, regardless of their underlying technologies, to communicate, collaborate, and coordinate actions. This fosters a multi-agent ecosystem where agents can work together to enhance automation and productivity.
- Secure Communication: The protocol emphasizes security with enterprise-grade authentication and authorization, ensuring that communications between agents are trusted and secure.
- Universal Integration: Built on existing standards like HTTP, SSE, and JSON-RPC, this protocol is designed for easy integration into existing IT stacks and allows agents built by different vendors to collaborate without friction.
- Support for Various Modalities: A2A accommodates different types of communication modes, including text, audio, and video, thereby enabling rich and comprehensive interactions between agents.
- Task Management: Agents can share information about their capabilities, manage tasks effectively, and negotiate responses based on user interface requirements.
- Collaboration and Discovery: Through an “Agent Card,” agents can advertise their capabilities, allowing others to engage them for specific tasks, thus enhancing functionality through collaboration.
Real-World Applications:
- A practical example includes a hiring process, where a hiring manager’s agent can interact with specialized agents to source and evaluate candidates. This multi-agent collaboration simplifies complex tasks, illustrating the potential of A2A in diverse industries.
Industry Support and Future Prospects:
- The protocol is backed by major industry players, including Microsoft, Salesforce, SAP, and others, who are integrating A2A into their systems to enhance agent capabilities and promote open standards.
- A2A is positioned to bridge the gap between diverse AI applications, paving the way for a new era of agentic AI that can operate across organizational and technological boundaries, thereby maximizing productivity and innovation.
Conclusion:
The A2A protocol represents a significant step toward enhancing the interoperability of AI agents, aiming to unlock their full potential in automating complex workflows and improving collaborative work environments.
Relevant Links:
Practice 2: Using MCP with CrewAI
The use case is the same, as is the Python version. Below is the requirements.txt file.
First, the file mcp_servers.py was created, which is similar to the mcp_client.py made for the LangChain version. Note that the JSON information is the same but adapted for the StdioServerParameters package provided by the mcp module. The transport used is also stdio. Additionally, the return is a list of MCP servers.
```python
from mcp import StdioServerParameters
from dotenv import load_dotenv
import os

load_dotenv()

SMITHERY_KEY = os.getenv("SMITHERY_KEY")
SMITHERY_PROFILE = os.getenv("SMITHERY_PROFILE")


def get_mcp_servers():
    mcp_serper = StdioServerParameters(
        command="npx",
        args=[
            "-y",
            "@smithery/cli@latest",
            "run",
            "@marcopesani/mcp-server-serper",
            "--key",
            SMITHERY_KEY,
            "--profile",
            SMITHERY_PROFILE,
        ],
    )
    mcp_servers = [mcp_serper]
    return mcp_servers
```
Next, the Agent is created (researcher_agent.py). If you want to learn more about Agents using CrewAI, click here, as my website offers a complete section of posts on this topic. The Agent and Task creation format I used is slightly different from what’s shown below, but the concept remains the same.
```python
from crewai import Agent, Task, Crew, Process


class Researcher:
    def __init__(self, tools):
        self.__tools = tools

    def create_crew(self):
        researcher = Agent(
            role="Researcher",
            goal="Perform thorough online research on any user-provided topic, summarize key information clearly, and provide relevant URLs.",
            backstory=(
                "You are an experienced internet researcher skilled in quickly finding reliable, up-to-date information, synthesizing data into "
                "clear summaries, and identifying relevant sources."
            ),
            tools=self.__tools,
            reasoning=True,
            verbose=True,
        )
        research = Task(
            description=(
                "User input: {user_input}\n\n"
                "Conduct an internet search on the topic provided by the user. Based on the most relevant, reliable, and up-to-date information found:\n"
                "- Provide a clear and concise summary of the topic.\n"
                "- Use bullet points when possible and helpful to improve readability and organization.\n"
                "- If there are public discussions, controversies, or differing viewpoints, briefly mention them.\n"
                "- If available, highlight data, statistics, or studies that help clarify the subject.\n"
                "- Maintain objective, accessible, and well-structured language.\n"
                "- At the end, list the most relevant URLs used or found during the research."
            ),
            expected_output=(
                "A clear, organized summary of the researched topic including bullet points when appropriate, mentions of any relevant discussions or "
                "controversies, important data or studies if available, and a final list of the most relevant URLs."
            ),
            agent=researcher,
        )
        crew = Crew(
            agents=[researcher],
            tasks=[research],
            verbose=True,
            process=Process.sequential,
        )
        return crew
```
Finally, here is the main.py file. Note that CrewAI provides an MCP server adapter (MCPServerAdapter). This makes it possible to access the tools and provide them to the Agent.
```python
from mcp_servers import get_mcp_servers
from crewai_tools import MCPServerAdapter
from researcher_agent import Researcher
from dotenv import load_dotenv

load_dotenv()


def main():
    mcp_servers = get_mcp_servers()
    with MCPServerAdapter(mcp_servers) as tools:
        researcher = Researcher(tools)
        crew = researcher.create_crew()
        results = crew.kickoff(
            inputs={"user_input": "What is the Agent2Agent protocol?"}
        )
        print(results.raw)


if __name__ == "__main__":
    main()
```
As in the previous section, below are the results with and without the tools.
Result without Tool:
Agent2Agent Protocol Overview:
- Definition: Agent2Agent (A2A) protocol facilitates communication and interaction between autonomous agents in a multi-agent system. It allows agents to share information, tasks, and resources effectively, optimizing their collaborative efforts.
- Architecture: A2A typically employs a decentralized architecture, meaning each agent operates independently but can communicate and coordinate with others as needed.
- Purpose: The primary goal of A2A is to enhance the efficiency of systems relying on multiple intelligent agents by providing a structured way for them to negotiate, plan, and execute tasks collectively.
Key Features:
- Scalability: A2A protocols can scale to accommodate an increasing number of agents without significant performance degradation.
- Flexibility: They can be adapted to various domains, such as robotics, IoT environments, and distributed software systems.
- Robustness: Support for fault tolerance ensures that the failure of one agent does not compromise the operation of the entire system.
Use Cases:
- Smart Cities: Agents managing traffic, public safety, and resource distribution can communicate to efficiently handle operations in urban environments.
- Supply Chain Management: Different agents can share data regarding inventory, demand, and logistics to optimize overall supply chain performance.
Current Discussions and Controversies:
- Security Concerns: As with any protocol involving autonomous agents, there are discussions about the security measures needed to protect the system from malicious entities and unauthorized access.
- Ethical Considerations: The deployment of autonomous agents raises ethical dilemmas regarding decision-making processes, particularly in sensitive areas like healthcare and law enforcement.
Statistics and Studies:
- Research indicates that systems using A2A protocols can exhibit up to a 30% improvement in task completion times compared to non-cooperative systems due to enhanced collaboration.
- Case studies in automated logistics have demonstrated notable reductions in operational costs when A2A interactions are effectively implemented.
Relevant URLs:
- Agent2Agent Protocol Overview
- Multi-Agent Systems and Security
- A2A in Smart Cities
- Ethical Implications of Autonomous Agents
This summary provides a comprehensive understanding of the Agent2Agent protocol, covering its definition, features, applications, and the ongoing discussions surrounding its implementation.
Result with Tool:
Summary of Agent2Agent Protocol (A2A)
The Agent2Agent (A2A) Protocol is an open standard developed by Google to facilitate seamless communication and collaboration between AI agents across various frameworks and platforms.
Key Features
- Interoperability: A2A allows AI agents, even those developed by different vendors or built on different frameworks, to interact and collaborate effectively.
- Data Exchange: Agents can securely exchange information while coordinating actions on enterprise applications.
- Multi-agent Ecosystem: Designed to function in dynamic environments where multiple agents work on different tasks and workflows.
Design Principles
- Agentic Collaboration: Supports agents collaborating in natural, unstructured modalities.
- Existing Standards: Built on recognized standards like HTTP and JSON-RPC to ease integration with existing IT systems.
- Secure Communication: Incorporates enterprise-grade authentication mechanisms.
- Support for Long Tasks: Capable of managing both short tasks and extended processes, providing real-time feedback.
- Modality Agnostic: Supports various types of interactions, including text, audio, and video.
How A2A Works
- Capability Discovery: Agents can advertise their functionalities via an “Agent Card” using JSON.
- Task Management: Tasks are defined with distinct lifecycles, facilitating collaboration and status updates between agents.
- Communication: Agents can send messages that contain artifacts, context, or directions.
Real-World Application: Candidate Sourcing
A hiring manager can instruct their AI agent to find candidates for a job. The agent may interact with specialized agents to gather potential candidates, schedule interviews, and manage tasks like background checks, showcasing effective multi-agent collaboration.
Collaboration and Support
More than 50 tech partners, including companies like Atlassian, Intuit, and SAP, support this initiative, contributing to its development and potential deployment across industries.
Future Perspectives
The A2A protocol aims to revolutionize agent interoperability and is positioned to enhance operational efficiency and innovation in AI systems.
Relevant URLs
- Announcing the Agent2Agent Protocol (A2A) – Google Developers Blog
- Agent2Agent Protocol Documentation – GitHub
- Linux Foundation Launches the Agent2Agent Protocol Project
- Microsoft’s Insights on the Agent2Agent Protocol
- Overview and Specifications of A2A
This protocol is expected to pave the way for a new era where AI agents can effectively collaborate and automate complex workflows across various platforms.
Practice 3: Using MCP with Hugging Face
Hugging Face provides a framework for creating Agents called Smolagents. This framework uses CodeAgents, which are Agents capable of completing Tasks by writing and executing Python code. I’ll go into more detail about this framework in future posts, but if you want to dive deeper, you can check out the official Hugging Face documentation.
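To illustrate the CodeAgent idea, here is a deliberately simplified sketch (not smolagents’ actual implementation, which sandboxes execution and iterates over multiple steps): the model emits Python source, and the framework executes it with the tools available in its namespace.

```python
# Simplified illustration of the CodeAgent loop: the LLM writes Python,
# the framework executes it. All names here are stand-ins for illustration.

# Pretend this string came back from the language model:
model_generated_code = (
    "results = search_tool('Agent2Agent protocol')\n"
    "final_answer = results.upper()\n"
)


def search_tool(query: str) -> str:
    """Stand-in for a real tool (e.g. a Serper web search)."""
    return f"stub results for: {query}"


# Execute the generated code with the tool available in its namespace.
namespace = {"search_tool": search_tool}
exec(model_generated_code, namespace)

print(namespace["final_answer"])  # STUB RESULTS FOR: AGENT2AGENT PROTOCOL
```

Expressing tool use as code (instead of structured tool-call messages) lets the Agent chain several tool calls and transformations in a single step.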
As with the other frameworks presented, the use case is the same, as is the Python version. Additionally, the requirements.txt file is provided below.
First, here is the mcp_servers.py file, which is the same as the one used in the CrewAI example.
```python
from mcp import StdioServerParameters
from dotenv import load_dotenv
import os

load_dotenv()

SMITHERY_KEY = os.getenv("SMITHERY_KEY")
SMITHERY_PROFILE = os.getenv("SMITHERY_PROFILE")


def get_mcp_servers():
    mcp_serper = StdioServerParameters(
        command="npx",
        args=[
            "-y",
            "@smithery/cli@latest",
            "run",
            "@marcopesani/mcp-server-serper",
            "--key",
            SMITHERY_KEY,
            "--profile",
            SMITHERY_PROFILE,
        ],
    )
    mcp_servers = [mcp_serper]
    return mcp_servers
```
Here is the researcher_agent.py file with the Agent’s definition and execution. Unlike the other approaches, the Agent’s execution happens within this class, as Hugging Face’s smolagents uses a synchronous interface for execution (agent.run()), without support for asynchronous calls like LangChain’s ainvoke(), for example.
```python
from smolagents import CodeAgent, OpenAIServerModel


class Researcher:
    def __init__(self, tools):
        self.__tools = tools

    def run(self, user_input: str):
        prompt = (
            "Conduct an internet search on the topic provided by the user. Based on the most relevant, reliable, and up-to-date information found:\n"
            "- Provide a clear and concise summary of the topic.\n"
            "- Use bullet points when possible and helpful to improve readability and organization.\n"
            "- If there are public discussions, controversies, or differing viewpoints, briefly mention them.\n"
            "- If available, highlight data, statistics, or studies that help clarify the subject.\n"
            "- Maintain objective, accessible, and well-structured language.\n"
            "- At the end, list the most relevant URLs used or found during the research."
        )
        model = OpenAIServerModel(
            model_id="gpt-4o-mini",
            temperature=0.1,
        )
        agent = CodeAgent(
            model=model,
            max_steps=10,
            tools=self.__tools,
            verbosity_level=2,
        )
        result = agent.run(
            task=prompt,
            additional_args={"user_input": user_input},
        )
        return result
```
Finally, here is the main.py file, which is similar to the previous approaches. Note that, in the case of smolagents, MCP support is provided by the ToolCollection class.
```python
from mcp_servers import get_mcp_servers
from smolagents import ToolCollection
from researcher_agent import Researcher
from dotenv import load_dotenv

load_dotenv()


def main():
    # smolagents runs synchronously, so no asyncio event loop is needed here
    mcp_servers = get_mcp_servers()
    with ToolCollection.from_mcp(mcp_servers, trust_remote_code=True) as tool_collection:
        tools = [*tool_collection.tools]
        researcher = Researcher(tools)
        result = researcher.run(user_input="What is the Agent2Agent protocol?")
        print(result)


if __name__ == "__main__":
    main()
```
Below are the results for the versions with and without the tool.
Result without Tool:
- The Agent2Agent protocol is a communication framework designed for agents to interact with each other in a decentralized manner.
- It facilitates the exchange of information and commands between autonomous agents, allowing them to collaborate and perform tasks efficiently.
- Key features include:
- Decentralization: No central authority is required, promoting autonomy among agents.
- Interoperability: Agents from different systems can communicate using standardized protocols.
- Scalability: The protocol can handle a growing number of agents without significant performance degradation.
- Potential applications include:
- Smart cities: Agents managing traffic, energy, and public services can communicate to optimize resource usage.
- Supply chain management: Agents can coordinate logistics and inventory management in real-time.
- Controversies or differing viewpoints may arise regarding:
- Security: Concerns about the potential for malicious agents to exploit the protocol.
- Privacy: The implications of data sharing between agents and the potential for misuse.
- Relevant studies or statistics may include:
- Research on the effectiveness of decentralized systems in various applications.
- Case studies demonstrating successful implementations of the Agent2Agent protocol in real-world scenarios.
For further information, consider exploring academic databases, technology blogs, and official documentation related to the Agent2Agent protocol.
Result with Tool:
Agent2Agent Protocol (A2A) Overview:
- The Agent2Agent (A2A) protocol is designed to enable communication between AI agents, allowing them to securely exchange information and coordinate actions across various platforms.
- It addresses the challenge of interoperability among AI agents developed by different organizations and frameworks.
Key Features:
- Interoperability: Facilitates seamless interaction between diverse AI systems.
- Security: Ensures secure communication between agents, enhancing trust and reliability.
- Open Standard: The protocol is open, promoting widespread adoption and collaboration among developers.
Recent Developments:
- The Linux Foundation has launched a project to support the A2A protocol, emphasizing its role in enabling intelligent communication between AI agents.
- Major tech companies, including Google and Microsoft, are backing the protocol, indicating its significance in the future of AI development.
Public Discussions:
- There is a growing interest in the implications of AI agent interoperability, with discussions around ethical considerations and the potential for misuse of such technologies.
Relevant URLs:
Conclusion
As mentioned and demonstrated through practical examples, MCP standardizes the way AI Agents access tools, resources, and even prompts. In the examples presented, using three different frameworks (LangChain, CrewAI, and Hugging Face), the Serper JSON used was the same; the only difference was how each framework accesses the MCP server.
In the next post about MCP, I’ll show how to create your own server using FastMCP.
