Enterprise AI agents are shipping faster than governance frameworks can keep up. Teams deploy LLM-powered workflows that draft responses, classify documents, sync integrations, and evaluate retrieval quality, but the governance layer around these agents has been largely ad hoc until now.
In February 2026, the Cloud Security Alliance published the Agentic Trust Framework (ATF), the first industry-standard Zero Trust governance specification for AI agents. ATF defines 25 requirements across five elements: identity management, behavioral monitoring, data governance, segmentation, and incident response.
We implemented all 25.
The Problem ATF Addresses
Traditional AI governance focuses on model-level concerns: training data provenance, bias detection, model cards, and regulatory compliance. These are important but incomplete. They answer "is this model safe to deploy?" without addressing "is this agent behaving correctly in production right now?"
Agent governance fills the gap. When an AI agent autonomously drafts security questionnaire responses, syncs compliance data, or evaluates retrieval quality, the governance layer needs to answer questions that model governance cannot:
- Identity: Which agent performed this action? Can we prove it cryptographically?
- Behavior: Is this agent operating within its established behavioral pattern?
- Data: Did the agent access only the data it was authorized to use?
- Boundaries: Was the agent constrained to its permitted action set?
- Incidents: What happens when the agent fails repeatedly?
ATF provides the structure for answering all five.
What VeriRFP Already Had
Before ATF was published, VeriRFP already covered 16 of the 25 requirements through its existing governance stack:
- HMAC-SHA256 signed audit chains for tamper-evident agent execution records
- Ed25519 signed compliance heartbeats for cryptographic system integrity verification
- OPA policy engine evaluating governance decisions across 22+ action types
- OpenFGA authorization for relationship-based access control
- Multi-stage LLM security pipeline with prompt injection scanning and PII masking
- Evidence graph with OSCAL control mappings and framework crosswalks for data lineage
This was built for operational security rather than ATF compliance, but the overlap was substantial.
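To make the audit-chain idea concrete, here is a minimal sketch of an HMAC-SHA256 signed chain, where each record's signature covers the previous signature so any tampering invalidates everything downstream. This is illustrative only: VeriRFP's actual record schema and key handling are not published, and the `SECRET_KEY` constant here is a stand-in for a key that would live in a KMS.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical; real keys belong in a KMS

def sign_record(record: dict, prev_signature: str) -> str:
    """Sign one audit record, chaining it to its predecessor's signature."""
    payload = json.dumps(record, sort_keys=True) + prev_signature
    return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_chain(records: list[dict], signatures: list[str]) -> bool:
    """Recompute every link; one altered record breaks all later checks."""
    prev = ""
    for record, sig in zip(records, signatures):
        expected = sign_record(record, prev)
        if not hmac.compare_digest(expected, sig):
            return False
        prev = sig
    return True
```

Because each signature folds in the one before it, an attacker who edits an old record would have to re-sign every subsequent record, which requires the secret key.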
The Nine Gaps We Closed
ATF exposed nine requirements that our existing stack did not fully satisfy:
Agent Capability Manifests (I-4, I-5)
ATF requires every agent to publish a machine-readable declaration of its purpose, permitted actions, and data access scope. We had agent types enumerated in code but no formal manifest schema.
We created typed capability manifests for all 19 governed agent types. Each manifest declares the agent's purpose, permitted actions, data it reads and writes, external services it contacts, default autonomy level, and maximum impact scope.
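A typed manifest like the one described above might look like the following sketch. The field and agent names are hypothetical, inferred from the categories listed in the text; VeriRFP's actual manifest schema is not public.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityManifest:
    """Hypothetical manifest shape covering the fields named in the text."""
    agent_type: str
    purpose: str
    permitted_actions: tuple[str, ...]
    reads: tuple[str, ...]
    writes: tuple[str, ...]
    external_services: tuple[str, ...]
    default_autonomy_level: int  # 1-4, per the maturity model below
    max_impact_scope: str        # e.g. "workspace"

    def allows(self, action: str) -> bool:
        return action in self.permitted_actions

# Example manifest for an illustrative questionnaire-drafting agent.
DRAFT_AGENT = CapabilityManifest(
    agent_type="questionnaire_drafter",
    purpose="Draft security questionnaire responses",
    permitted_actions=("draft_response", "cite_evidence"),
    reads=("evidence_graph", "past_responses"),
    writes=("draft_responses",),
    external_services=("llm_provider",),
    default_autonomy_level=2,
    max_impact_scope="workspace",
)
```

At enforcement time, a gateway can reject any action absent from `permitted_actions` before the agent ever executes it, which is what makes the manifest machine-enforceable rather than documentation.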
Behavioral Baselines and Anomaly Detection (B-3, B-4)
ATF requires behavioral baselines computed from at least 7 days or 1,000 operations, with anomaly detection triggering alerts within 60 seconds.
We built a rolling baseline computation engine that tracks average duration, standard deviation, error rate, request frequency, and p95 latency per agent type per workspace. A worker processor runs hourly, computing baselines from the existing audit log and detecting statistical deviations greater than 2 sigma from established patterns.
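The core of the 2-sigma check can be sketched in a few lines. This is a simplified, single-metric version, assuming the baseline is a window of recent observations for one agent type in one workspace; the production engine described above tracks several metrics and persists baselines from the audit log.

```python
import statistics

def is_anomalous(history: list[float], value: float, sigma: float = 2.0) -> bool:
    """Flag a new observation that deviates more than `sigma` standard
    deviations from the rolling baseline of past observations."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean  # flat baseline: any change is a deviation
    return abs(value - mean) / stdev > sigma
```

For example, with a baseline of task durations clustered around one second, a ten-second run trips the detector while a 1.1-second run does not.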
Circuit Breaker and Kill Switch (R-1, R-2)
ATF requires automated agent halting on repeated failures and instant manual termination capability.
We implemented Redis-backed circuit breakers that track consecutive failures per agent per workspace. Three consecutive failures trip the circuit to "open" state, blocking execution until a configurable cooldown period expires. A half-open state allows one probe execution before closing.
The workspace-level kill switch immediately blocks all agent execution within one Redis TTL cycle. Activation requires admin role and is logged as a critical governance event.
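The breaker state machine described above (closed, open after three consecutive failures, half-open after cooldown) can be sketched as follows. This in-memory version is for illustration; per the text, the production counters live in Redis so state is shared across workers. The injectable `clock` parameter is a testing convenience, not a claim about the real implementation.

```python
import time
from enum import Enum

class State(Enum):
    CLOSED = "closed"
    OPEN = "open"
    HALF_OPEN = "half_open"

class CircuitBreaker:
    """In-memory sketch of a per-agent, per-workspace circuit breaker."""

    def __init__(self, threshold: int = 3, cooldown_s: float = 60.0,
                 clock=time.monotonic):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.clock = clock
        self.failures = 0
        self.state = State.CLOSED
        self.opened_at = 0.0

    def allow(self) -> bool:
        """May this agent execute right now?"""
        if self.state is State.OPEN:
            if self.clock() - self.opened_at >= self.cooldown_s:
                self.state = State.HALF_OPEN  # permit exactly one probe
                return True
            return False
        return True

    def record_success(self) -> None:
        self.failures = 0
        self.state = State.CLOSED

    def record_failure(self) -> None:
        self.failures += 1
        if self.state is State.HALF_OPEN or self.failures >= self.threshold:
            self.state = State.OPEN
            self.opened_at = self.clock()
```

After the cooldown, the half-open state lets a single probe execution through: a success closes the circuit, while another failure reopens it immediately.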
Autonomy Levels and Graceful Degradation (R-5, S-5)
ATF defines a four-level maturity model for agent autonomy. We implemented this as:
- Level 1 (Intern): Read-only. Observe and report. Human approves all actions.
- Level 2 (Junior): Recommend actions. Human approves before execution.
- Level 3 (Senior): Act autonomously with guardrails. Notify after action.
- Level 4 (Principal): Fully autonomous within domain.
When a circuit breaker trips, the affected agent is automatically demoted to Level 1. When the kill switch activates, all workspace agents are demoted to Level 1. Promotion back to higher levels requires passing five assessment gates covering performance, security, business value, incident record, and governance signoff.
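The demotion and promotion rules above reduce to a small amount of logic. The gate names in this sketch are hypothetical labels for the five assessment areas listed in the text.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    INTERN = 1     # read-only; human approves all actions
    JUNIOR = 2     # recommends; human approves before execution
    SENIOR = 3     # acts with guardrails; notifies after action
    PRINCIPAL = 4  # fully autonomous within domain

# Hypothetical gate names mirroring the five assessment areas.
PROMOTION_GATES = ("performance", "security", "business_value",
                   "incident_record", "governance_signoff")

def on_circuit_open(current: AutonomyLevel) -> AutonomyLevel:
    """A tripped breaker demotes the agent to read-only, whatever its level."""
    return AutonomyLevel.INTERN

def can_promote(passed_gates: set[str]) -> bool:
    """Promotion back up requires passing every assessment gate."""
    return set(PROMOTION_GATES) <= passed_gates
```

Demotion is unconditional and immediate; promotion is deliberately slow, gated on evidence rather than on the incident simply aging out.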
What This Means for Customers
For security and compliance teams evaluating VeriRFP, ATF conformance provides three concrete assurances:
Auditable governance. The conformance matrix maps each ATF requirement to a specific implementation. Security reviewers can trace each claim to actual system behavior rather than marketing assertions.
Runtime enforcement. This is not a compliance checkbox. Circuit breakers halt agents that fail. Anomaly detection alerts within 60 seconds. Kill switches take effect within one second. The governance layer actively protects workspace data.
Continuous evaluation. The AI Governance Dashboard, available on Business and Enterprise plans, provides real-time visibility into agent health, quality trends, anomaly alerts, and ATF compliance status. Every agent is scored against golden evaluation benchmarks across response accuracy, evidence citation, hallucination detection, and five additional criteria.
VeriRFP is the first RFP and trust operations platform with standards-aligned AI agent governance. We built and published this not as a marketing differentiator but as an operational necessity. When AI agents act on behalf of your organization, the governance around them should be as rigorous as the security around your data.
The full ATF conformance matrix is available at verirfp.com/ai-governance.