One of the most powerful collaborations between AI companies and tech giants, the Model Context Protocol (MCP) is a standard for connecting AI agents to tools and data. We need standards like MCP to orchestrate communication among AI agents, AI assistants, LLMs, and other resources. Such standards are also critical for developing more complex agentic workflows.
MCP enables two key components: The MCP server connects AI agents, makes them discoverable, and provides other operational services. The MCP gateway is a reverse proxy that serves as an interface among AI agents, MCP servers, and other services that support the protocol.
Many organizations are using AI agents from top-tier SaaS and security companies while also experimenting with ones from growing startups. DevOps teams aim to build trustworthy AI agents while avoiding the risks of rapid deployment. The AI development roadmap will likely require agent-to-agent communication with the help of MCP servers.
Below are five requirements to consider before deploying an MCP server or connecting your AI agents to one.
Requirements for MCP servers
While MCP servers share similarities with other integration technologies, they also have key differences. MCP servers act as a catalog of tools and data for AI agents to use when responding to a prompt or completing a task. They centralize authentication, schemas, error handling, and streaming semantics for processing partial responses. Operational and security teams use MCP servers to monitor activity and respond to security incidents and AI agent performance issues.
The scope and scale of services orchestrated by MCP mean teams must define their requirements inside a well-defined IT governance model.
“When using MCP to provide your agents with more tools to get their jobs done, make sure your governance requirements extend to that service,” says Michael Berthold, CEO of KNIME. “Before pointing your agent to an external MCP server, make sure you know and understand how prompts and data are processed, and potentially shared or used for other purposes. Don’t assume a tool that seems to be doing something in isolation isn’t using another AI underneath the hood.”
1. Define the MCP server’s scope
MCP servers can play a contextual role in agent-to-agent orchestrations. When an AI agent seeks other AI agents to complete a job, it can query an MCP server to identify potential resources and decide which to interface with. Defining the server’s scope helps shape its problem domain and ownership, as well as its governance, security, and other operational boundaries.
“Design your MCP servers to be narrowly focused, exposing specific and granular tools to your AI agents, instead of trying to be a general-purpose API,” says Simon Margolis, associate CTO of AI and ML at SADA, an Insights Company. “This makes it easier for the AI’s reasoning engine to discover the right tool dynamically and improves the reliability of the actions it takes. An MCP server acts as a smart adapter, translating the AI’s request into the exact command the underlying tool understands.”
“We’ve found that simple, explicit instructions, such as telling the model how to use a vendor’s command-line utility, can outperform a poorly integrated MCP server,” adds Andrew Filev, CEO and founder of Zencoder. “Overloading the model’s context with too many MCP tools can actually degrade performance, confuse the agent, and obscure reasoning paths.”
Creating separate servers for finance, HR, customer support, and IT simplifies creating access rules, monitoring operations for anomalies, and defining lifecycle management policies.
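The narrow-scope advice above can be sketched in code. The example below is a minimal, hypothetical tool catalog for a single domain (HR): each tool has an explicit name, description, and input contract so an agent can discover and invoke it, and the dispatcher rejects anything outside that scope. The names (`HR_TOOLS`, `call_tool`) are illustrative, not part of the MCP specification.

```python
# A minimal sketch of a narrowly scoped tool catalog, assuming a simple
# dict-based registry rather than a real MCP SDK. Each entry exposes one
# granular capability instead of a general-purpose API.
HR_TOOLS = {
    "get_vacation_balance": {
        "description": "Return remaining vacation days for one employee.",
        "input_schema": {"employee_id": "string"},
        "handler": lambda args: {"employee_id": args["employee_id"], "days": 12},
    },
    "list_open_positions": {
        "description": "List currently open HR job requisitions.",
        "input_schema": {},
        "handler": lambda args: {"positions": ["Recruiter", "Payroll Analyst"]},
    },
}

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call; unknown tools and bad inputs are rejected."""
    tool = HR_TOOLS.get(name)
    if tool is None:
        return {"error": f"unknown tool: {name}"}  # keeps the scope narrow
    missing = [k for k in tool["input_schema"] if k not in arguments]
    if missing:
        return {"error": f"missing arguments: {missing}"}
    return tool["handler"](arguments)
```

A finance or IT server would get its own, separate catalog, which keeps access rules and anomaly monitoring per-domain.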
2. Establish integration governance
There are different schools of thought over what resources to connect through an MCP server. For example:
- Gloria Ramchandani, SVP of product at Copado, advises teams to pull data, settings, and context from the MCP server rather than keeping their own copies. “Using the MCP as the single place your agents rely on keeps everything consistent, reduces mistakes, and makes automation smoother as your teams grow,” Ramchandani said.
- James Urquhart, field CTO and developer evangelist at Kamiwaza, recommends against relying on MCP servers for data retrieval. “RAG approaches to incorporating live data into response generation still enable better security and performance than MCP integration.”
- Tun Shwe, AI lead at Lenses, says, “Don’t expose existing web and mobile APIs directly as MCP tools. Whilst it’s a quick way to get started, these APIs tend to be fine-grained with verbose responses, characteristics that are undesirable to AI agents, since they inflate token consumption.”
- Rahul Pradhan, VP of product and strategy of AI and data at Couchbase, advises against treating MCP-connected agents with access to a database as generic, low-risk APIs. He suggests the following instead:
- Treat every tool that can read or write data as highly privileged: Enforce least-privilege roles, segregate access by data sensitivity, and separate read from write paths.
- Design prompts so agents first invoke schema introspection tools to understand scopes, collections, and fields before issuing any operations.
- Constrain agents to vetted, parameterized queries or stored procedures, and log all calls, to reduce the risk of exfiltration, corruption, and compliance failures.
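The three bullets above can be combined into one gate in front of the database. The sketch below is a hypothetical illustration, not Couchbase's implementation: agents may run only vetted, parameterized queries, a role check separates read from write paths, and every call, allowed or denied, is appended to an audit trail.

```python
# Hypothetical sketch: vetted queries only, least-privilege roles,
# and an audit log for every call. VETTED_QUERIES and the role names
# are illustrative assumptions.
VETTED_QUERIES = {
    "orders_by_customer": {
        "sql": "SELECT id, total FROM orders WHERE customer_id = %s",
        "mode": "read",
    },
    "close_ticket": {
        "sql": "UPDATE tickets SET status = 'closed' WHERE id = %s",
        "mode": "write",
    },
}

def authorize_and_log(query_name: str, params: tuple, roles: set, audit: list) -> str:
    """Return the vetted SQL if the agent's roles permit it; log everything."""
    entry = VETTED_QUERIES.get(query_name)
    if entry is None:
        audit.append(("denied", query_name, "not a vetted query"))
        raise PermissionError(f"query not vetted: {query_name}")
    if entry["mode"] not in roles:
        audit.append(("denied", query_name, f"missing role: {entry['mode']}"))
        raise PermissionError(f"role '{entry['mode']}' required")
    audit.append(("allowed", query_name, params))
    return entry["sql"]  # a real server would execute this with bound params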
3. Implement security non-negotiables
Many organizations created AI governance policies when they rolled out LLMs, then updated them for AI agents. Deploying MCP servers requires layering on new security non-negotiables related to configuration, deployment, and monitoring.
“Prioritize security because tools exposed by an MCP server can change and may not have the same level of data security an agent expects,” says Ian Beaver, chief data scientist at Verint. “Prompt injection risks exist in both tool responses and user inputs, making tool use the primary vulnerability point for otherwise static foundation models. Therefore, treat all tool use as untrusted sources: Log every tool’s input and output to enable full auditability of agent interactions.”
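Beaver's advice to log every tool's input and output can be implemented as a thin wrapper around each tool function. This is a minimal sketch under the assumption of an in-memory audit sink (`AUDIT_LOG`); a production system would ship these records to a tamper-evident log store.

```python
import json
import time

# Assumed in-memory sink for the sketch; swap for a real audit pipeline.
AUDIT_LOG = []

def audited(tool_name: str, tool_fn):
    """Wrap a tool so its input and output are recorded verbatim,
    treating tool responses as untrusted until reviewed."""
    def wrapper(arguments: dict):
        record = {"tool": tool_name, "input": arguments, "ts": time.time()}
        try:
            result = tool_fn(arguments)
            record["output"] = result
            return result
        finally:
            # Log even when the tool raises, so failures are auditable too.
            AUDIT_LOG.append(json.dumps(record, default=str))
    return wrapper
```

Because the wrapper logs in a `finally` block, a tool that throws still leaves an audit record of what it was asked to do.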
One critical place to start is defining identity, authentication, and authorization for AI agents. Because AI agents will be discoverable through MCP servers, be clear and transparent about the scope and entitlements of each agent's capabilities.
“Don’t give AI agents unrestricted access when connecting through MCP,” says Meir Wahnon, co-founder at Descope. “Even though MCP standardizes integrations, many servers still lack proper authentication or use overly broad permissions, leaving systems exposed. Apply the principle of least privilege: Grant narrow scopes, require explicit user consent, and keep humans in the loop for sensitive actions.”
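Wahnon's three rules, narrow scopes, explicit consent, and a human in the loop, reduce to a deny-by-default authorization check. The sketch below is illustrative; the scope strings and `TOOL_SCOPES` table are assumptions, not MCP-defined names.

```python
# Hypothetical sketch: OAuth-style narrow scopes per tool, plus a
# human-approval gate for sensitive actions. Deny by default.
TOOL_SCOPES = {
    "read_calendar": {"scope": "calendar:read", "sensitive": False},
    "delete_repo": {"scope": "repo:admin", "sensitive": True},
}

def can_invoke(tool: str, granted_scopes: set, human_approved: bool) -> bool:
    meta = TOOL_SCOPES.get(tool)
    if meta is None or meta["scope"] not in granted_scopes:
        return False  # unknown tool or missing scope: deny by default
    if meta["sensitive"] and not human_approved:
        return False  # keep a human in the loop for sensitive actions
    return True
```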
Other security recommendations include isolating high-risk capabilities within dedicated MCP servers or namespaces and implementing cryptographic server verification. Key principles of MCP server security governance include secure communications, data integrity assurance, and incident response integration.
Three more security recommendations:
- Vrajesh Bhavsar, CEO and co-founder of Operant AI, says, “Don’t rely on traditional security approaches that depend on static rules and predefined attack patterns—they cannot keep up with the dynamic, autonomous nature of MCP-connected systems.”
- Arash Nourian, global head of AI at Postman, adds, “Don’t treat MCP as secure out of the box because it currently has close to zero built-in security, with no standardized authentication, weak session management, and unvetted tool registries that open the door to MCP-specific attacks like prompt or tool poisoning.”
- Or Vardi, technical lead at Apiiro, adds, “Keep humans in the loop for any sensitive or business-critical tasks, and also monitor and audit MCP activity to detect misuse early.”
4. Don’t delegate data responsibilities to MCP servers
Several experts cautioned that while MCP servers provide connectivity, they do not vet the data passing through them.
“Don’t assume MCP solves your underlying data quality problems,” says Sonny Patel, chief product and technology officer at Socotra. “MCP provides the connectivity layer, but AI agents can only be as effective as the data they access. If your systems contain incomplete, inconsistent, or siloed information, even perfectly connected agents will produce unreliable results.”
Developers should also scrutinize prompts and other inputs sent to their AI agents via MCP servers and make no assumptions about upstream validation.
“Always implement runtime interception to validate MCP inputs before they reach your agent’s reasoning engine,” says Matthew Barker, head of AI research and development at Trustwise. “Attackers can poison tool descriptions, API responses, or shared context with hidden commands that hijack agent behavior. It only takes one compromised agent to cascade malicious instructions across your entire AI ecosystem through inter-agent communication.”
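A simple form of the runtime interception Barker describes is to screen tool outputs and shared context against known injection markers before they reach the reasoning step. The patterns below are illustrative assumptions, not an exhaustive defense; real deployments layer this with classifiers and allowlists.

```python
import re

# Hypothetical sketch: flag text containing common prompt-injection
# markers before it enters the agent's context window.
SUSPICIOUS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"<\s*system\s*>", re.I),
    re.compile(r"BEGIN HIDDEN PROMPT", re.I),
]

def screen_input(text: str) -> tuple[bool, str]:
    """Return (is_clean, reason). Flagged inputs should be quarantined
    and surfaced to a human reviewer, not silently dropped."""
    for pattern in SUSPICIOUS:
        if pattern.search(text):
            return False, f"matched injection pattern: {pattern.pattern}"
    return True, "ok"
```

Pattern matching alone cannot catch novel injections, which is why the quoted experts pair it with full audit logging and human review.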
Pranava Adduri, CTO and co-founder of Bedrock Data, says, “Don’t connect AI agents to data sources via MCP without first classifying data and establishing access boundaries. MCP simplifies context sharing but can amplify risk if agents query sensitive or unverified sources.”
5. Manage the end-to-end agent experience
As organizations deploy more AI agents and configure MCP servers, experts suggest setting principles around end-user and operational experiences. DevOps teams and SREs will want observability and monitoring tools in place to alert on issues and aid in diagnosing them.
Or Oxenberg, senior full-stack data scientist at Lasso Security, says to establish comprehensive observability with trusted MCP servers. “If you’re using an MCP gateway, remember it monitors only traffic going in and out of the MCP server. For full visibility, capture every interaction and user input, map and monitor the agent’s planning and actions, and track their tasks and decisions. Without this foundation, you can’t detect when agents drift from intended behavior or trace back security incidents.”
Developers should also limit an AI agent’s access to MCP servers and other agents, granting access only to those providing relevant services. Broader access can lead to erroneous results and higher costs.
“As an integrator, you are now crafting a product experience for the agent persona and should treat the modulated toolkit with the same product discipline you apply to the developer UX: clarity, alignment, and value,” says Edgar Kussberg, group product manager of AI at Sonar. “When agents are given broad or generic MCP tools, they spend too much time and tokens exploring, filtering, reasoning, and failing to provide value, wasting budget, complicating review workflows, and diluting trust in agent outputs.”
As more organizations deploy AI agents into production, I expect a growing need to configure MCP servers to support agent-to-agent communication. Establishing an upfront strategy, nonfunctional requirements, and security non-negotiables should guide smarter and safer deployments.