Collection · 6 entries

Security tools for AI coding agents

Sandboxes, scanners, proxies, and governance toolkits that keep autonomous agents from doing damage.

The "agent security" problem is really three problems stacked: input (prompt injection, untrusted data crossing the agent loop), execution (what the agent can run, where, and with what permissions), and output (data leaving the system through tool calls). The tools below tackle different layers: Destructive Command Guard and Zerobox at execution, AgentShield and the Microsoft Agent Governance Toolkit at config and policy, CrabTrap and LLM-Anonymization at the output and network boundary.
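To make the execution layer concrete, here is a minimal sketch of the kind of check a command guard performs before an agent's shell command runs. The patterns and the `guard_command` helper are illustrative assumptions, not the actual API of Destructive Command Guard or any other tool listed here:

```python
import re

# Hypothetical denylist of destructive command shapes. A real guard
# would be far more thorough (parsing, allowlists, path checks).
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-[a-z]*r[a-z]*f",   # rm -rf and flag-order variants
    r"\bgit\s+push\s+--force",   # force-pushing over shared history
    r"\bdrop\s+table\b",         # raw SQL table drops
]

def guard_command(cmd: str) -> bool:
    """Return True if the command looks safe to execute."""
    return not any(re.search(p, cmd, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

print(guard_command("ls -la"))           # True: benign command passes
print(guard_command("rm -rf /tmp/x"))    # False: destructive command blocked
```

Pattern matching like this is cheap and transparent, which is why it fits the execution layer: the check sits between the agent's decision and the shell, and blocks before anything runs.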

Frequently asked

Where should I start with agent security?

If you're running a coding agent locally, Destructive Command Guard or Zerobox gives you immediate execution-layer guardrails. If you're shipping an agent product, look at CrabTrap (LLM-as-a-judge proxy) and the Microsoft Agent Governance Toolkit for the policy layer.
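The LLM-as-a-judge proxy pattern mentioned above can be sketched in a few lines. This is a toy illustration of the pattern, not CrabTrap's implementation: the `judge` function stands in for a call to a separate reviewing model, and the marker strings are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    payload: str

def judge(call: ToolCall) -> bool:
    """Stand-in for an LLM judge: flag outbound payloads that look
    like they carry secrets. A real proxy would send the call to a
    separate model for review rather than string-match."""
    suspicious = ("AKIA", "BEGIN PRIVATE KEY", "password=")
    return not any(marker in call.payload for marker in suspicious)

def proxy(call: ToolCall) -> str:
    # The proxy sits at the output boundary: a tool call only
    # reaches the network after the judge approves it.
    return "forwarded" if judge(call) else "blocked"

print(proxy(ToolCall("http_post", "report: all tests green")))    # forwarded
print(proxy(ToolCall("http_post", "creds: AKIAIOSFODNN7EXAMPLE")))  # blocked
```

The design point is that the judge is a separate trust domain from the agent: even a fully prompt-injected agent cannot exfiltrate data without the proxy's approval.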

How do these compare to OWASP's Agentic Top 10?

The Microsoft Agent Governance Toolkit explicitly maps to all ten OWASP Agentic categories. Most other tools here cover specific risks: sandboxing for excessive agency, anonymization for sensitive data exposure, MCP scanners for supply chain compromise.
