Microsoft has quietly introduced the Agent Governance Toolkit, an open source project designed to monitor and control AI agents during execution as enterprises move them into production workflows.
The toolkit responds to the Open Worldwide Application Security Project's (OWASP) emerging focus on AI and LLM security risks. It adds a runtime security layer that enforces policies to mitigate issues such as prompt injection, and it improves visibility into agent behavior across complex, multi-step workflows, Imran Siddique, principal group engineering manager at Microsoft, wrote in a blog post.
More specifically, the toolkit maps to OWASP’s top 10 risks for agentic systems, including goal hijacking, tool misuse, identity abuse, supply chain risks, code execution, memory poisoning, insecure communications, cascading failures, human-agent trust exploitation, and rogue agents.
The rationale behind the toolkit, Siddique wrote, stems from how AI systems increasingly resemble loosely governed distributed environments, where multiple untrusted components share resources, make decisions, and interact externally with minimal oversight.
That prompted Microsoft to apply proven design patterns from operating systems, service meshes, and site reliability engineering to bring structure, isolation, and control to these environments, Siddique added.
As a result, the Redmond-headquartered giant packaged these principles into a toolkit comprising seven components available in Python, TypeScript, Rust, Go, and .NET.
The cross-language approach, Siddique explained, is aimed at meeting developers where they are and enabling integration across heterogeneous enterprise stacks.
As for the components, the toolkit includes a policy enforcement layer named Agent OS, a secure communication and identity framework named Agent Mesh, an execution control environment named Agent Runtime, and additional modules (Agent SRE, Agent Compliance, and Agent Lightning) covering reliability, compliance, marketplace governance, and reinforcement learning oversight.
Beyond its modular design, Siddique further wrote that the toolkit is built to work with existing development ecosystems: “We designed the toolkit to be framework-agnostic from day one. Each integration hooks into a framework’s native extension points, LangChain’s callback handlers, CrewAI’s task decorators, Google ADK’s plugin system, Microsoft Agent Framework’s middleware pipeline, so adding governance doesn’t require rewriting agent code.”
This approach, the senior executive explained, would reduce integration overhead and risk, allowing developers to introduce governance controls into production systems without disrupting existing workflows or incurring the cost and complexity of rearchitecting applications.
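To make the hook-based integration concrete, here is a minimal sketch of the callback-style pattern Siddique describes, in which a governance layer attaches to a framework's native extension points (modeled here on a LangChain-like callback handler) so agent code itself is never rewritten. The class names, policy, and method signature below are illustrative assumptions, not the toolkit's actual API.

```python
# Hypothetical sketch: governance via a framework's callback extension point.
# Names (GovernanceCallbackHandler, PolicyViolation, on_tool_start) are
# illustrative, not from the Agent Governance Toolkit.

class PolicyViolation(Exception):
    """Raised when an agent action breaks a governance rule."""

class GovernanceCallbackHandler:
    """Mimics a framework callback handler that enforces tool policies."""

    def __init__(self, allowed_tools):
        self.allowed_tools = set(allowed_tools)
        self.audit_log = []  # visibility: every tool call is recorded

    def on_tool_start(self, tool_name, tool_input):
        # Record the call before deciding, so blocked attempts are auditable.
        self.audit_log.append((tool_name, tool_input))
        # Enforce a simple allow-list policy (a tool-misuse mitigation).
        if tool_name not in self.allowed_tools:
            raise PolicyViolation(f"tool '{tool_name}' is not permitted")

# In a real framework the handler would be registered with the agent runtime;
# here we invoke the hook directly to show the control flow.
handler = GovernanceCallbackHandler(allowed_tools={"search", "calculator"})
handler.on_tool_start("search", {"query": "quarterly report"})
try:
    handler.on_tool_start("shell", {"cmd": "rm -rf /tmp/scratch"})
except PolicyViolation as e:
    print(f"blocked: {e}")
```

The point of the pattern is that policy enforcement and audit logging live entirely in the handler, so swapping governance rules requires no change to the agent or its tools.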
Siddique also pointed to several framework integrations already deployed in production workloads, including LlamaIndex's TrustedAgentWorker integration.
For those wishing to explore it, the toolkit, currently in public preview, is available under an MIT license and is structured as a monorepo with independently installable components.
Microsoft plans to eventually transition the project to a foundation-led model and is already engaging with the OWASP agentic AI community to support broader governance and stewardship, Siddique wrote.