At RSAC 2026 in late March, the Cloud Security Alliance took its AI Safety Initiative out of the parent body and stood it up as an independent 501(c)(3): the CSAI Foundation. The mission statement is deliberately narrow — “Securing the Agentic Control Plane” — and signals a shift in scope from the defense of individual models toward governance of the autonomous agents those models are now spawning.
From initiative to institution
CSAI inherits the output of roughly two years of work: the AI Controls Matrix (AICM), the STAR for AI organizational certification, the Trusted AI Safety Expert (TAISE) credential, and a library of more than thirty research white papers. By treating that output as the seed of a standing institution rather than an ongoing working group, CSA is repeating the move it made for cloud security governance fifteen years ago — turning a body of practice into a standards-and-certification anchor.
Per the CSA press release, CEO and co-founder Jim Reavis framed the change as infrastructural: “The agentic era demands a new kind of security infrastructure — one that governs not just what AI models can do.” Endorsements at launch included Phil Venables (Ballistic Ventures; formerly Google Cloud CISO), Jen Easterly (RSAC CEO; formerly CISA Director), Cloudflare Chief Strategy Officer Stephanie Cohen, and Cisco’s Omar Santos on behalf of the CoSAI Project governing board.
Six programs, one agenda
CSAI’s launch documentation lays out six programs:
- AI Risk Observatory — telemetry across OpenClaw and MCP-server ecosystems, plus a CVE Numbering Authority scoped to agentic AI. This is the observability layer for agent behavior that, as of today, largely does not exist.
- Agentic Best Practices — identity-first controls for non-human actors, runtime authorization, privilege governance for autonomous workloads, and a taxonomy standard for agent-to-agent interactions.
- Education, Credentialing & Awareness — TAISE is being expanded into three tracks: CxO for executives, Agentic for practitioners, and Compass for high-school students. The Compass track connects to the White House Task Force for AI Education.
- CxOtrust for Agentic AI — private roundtables for CISOs, CIOs, and the emerging CAIO role, with monthly briefings and board-ready risk narratives.
- Global Assurance & Trust — expansion of STAR for AI, grounded in the AICM plus ISO 42001, ISO 27001, and SOC 2, with continuous-audit tooling CSA calls Valid-AI-ted.
- Future Forward Initiatives — including a sandbox named CSA Pod for live agent interaction, a TAISE-Agent Certification that extends professional credentials to autonomous agents themselves, and a Catastrophic Risk Annex tracking frontier-model failure modes.
The last of those is the one worth flagging, specifically the TAISE-Agent Certification. A credential attached to an agent, rather than to the human running it, reframes the procurement question. The buyer stops asking “does this product have a SOC 2” and starts asking “is this agent certified to act on behalf of a principal at privilege level X?” That is a different purchasing model, and it implies a different liability conversation.
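To make the shift concrete, here is a minimal sketch of what an agent-credential check might look like at authorization time. Every name here (`AgentCredential`, `authorize`, the privilege-level tiers) is a hypothetical illustration, not part of any published CSAI or TAISE specification.

```python
from dataclasses import dataclass

# Hypothetical credential shape: the certification belongs to the agent,
# scoped to a principal, not to the human operator. Illustrative only.
@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    certified_privilege_level: int  # assumed tier the agent was certified at
    principal: str                  # the human or org the agent acts for

def authorize(credential: AgentCredential, requested_level: int, principal: str) -> bool:
    """Gate an agent action on the agent's own certification."""
    return (
        credential.principal == principal
        and credential.certified_privilege_level >= requested_level
    )

cred = AgentCredential("agent-7", certified_privilege_level=2, principal="acme-corp")
print(authorize(cred, requested_level=3, principal="acme-corp"))  # False: under-certified
print(authorize(cred, requested_level=2, principal="acme-corp"))  # True
```

The point of the sketch is where the check lands: in the purchasing model above, the runtime asks whether the agent itself holds a sufficient credential for this principal, before any question about the vendor's own attestations.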
What this means for the forum audience
For practitioners building or buying agent platforms, two artifacts will land in procurement checklists first: the STAR for AI certification, because enterprise buyers already read CSA’s STAR registry the way they read SOC 2, and the AI Risk Observatory’s CVE Numbering Authority, because there is currently no consistent mechanism to disclose or triage an agent-specific vulnerability. A recognized numbering authority forces that discipline on both vendors and researchers.
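For a sense of the discipline a numbering authority imposes, here is a sketch of what an agent-scoped vulnerability record and triage queue could look like. The field names and states are assumptions for illustration; the CSAI CNA has not published a schema.

```python
from dataclasses import dataclass

# Illustrative record for an agent-specific vulnerability; field names are
# assumptions, not the CSAI CNA's actual format (none is published yet).
@dataclass
class AgentVulnRecord:
    cve_id: str               # identifier issued by the agentic-AI-scoped CNA
    affected_component: str   # e.g. an MCP server or agent runtime
    abused_capability: str    # the tool call or privilege the flaw exposes
    disclosure_state: str = "reported"

    def triage_key(self) -> tuple:
        # Surface unconfirmed disclosures first in a triage queue.
        order = {"reported": 0, "confirmed": 1, "patched": 2}
        return (order.get(self.disclosure_state, 3), self.cve_id)

queue = [
    AgentVulnRecord("CVE-2026-0002", "mcp-files", "arbitrary file read", "patched"),
    AgentVulnRecord("CVE-2026-0001", "agent-shell", "unscoped exec", "reported"),
]
queue.sort(key=AgentVulnRecord.triage_key)
print([r.cve_id for r in queue])  # ['CVE-2026-0001', 'CVE-2026-0002']
```

Even this toy structure shows what is missing today: without an agreed identifier and a shared notion of what an agentic flaw affects (a capability, not just a binary), vendors and researchers cannot build a common queue at all.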
For CISOs and CAIOs, CxOtrust is the low-effort on-ramp — a monthly-briefing and private-roundtable format that mirrors what CSA has run for cloud CISOs since 2009. The foundation’s site is at csai.foundation, and the next public milestone is the expanded TAISE syllabus release expected later in Q2 2026.