Standards Engagement
AI Trust Commons engages directly with the standards bodies defining AI agent governance, contributing practitioner experience to shape policy.

---
NIST
Active participant in the AI Agent Standards Initiative. Public comment submitted to the CAISI RFI on AI Agent Security (DOI: 10.5281/zenodo.18903117). Listening session request submitted.
NCCoE Identity and Authorization concept paper in preparation, addressing how AI agents authenticate and authorize across provider boundaries. Deadline: April 2, 2026.

---
OWASP
Contributing to the MCP Top 10 project and to the Agentic Security Initiative, OWASP's framework for securing autonomous AI systems.
The OWASP Top 10 for Agentic Applications identifies the most critical security risks for autonomous AI systems. AI Trust Commons maps these risks to technical controls and provides automated validation tooling.
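A risk-to-control mapping with automated checks can be sketched as follows. This is a minimal illustration only: the risk names, control IDs, and check functions are hypothetical, not the actual OWASP codes or the AI Trust Commons schema.

```python
# Hypothetical sketch of mapping agentic-AI risks to technical controls
# and validating an agent configuration against them. All identifiers
# below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    control_id: str
    description: str
    check: Callable[[dict], bool]  # validates one agent config dict

# Illustrative risk -> controls mapping (not real OWASP risk codes)
RISK_CONTROLS = {
    "Excessive Agency": [
        Control("CTL-SCOPE", "Tool allowlist is defined and non-empty",
                lambda cfg: bool(cfg.get("tool_allowlist"))),
    ],
    "Identity Spoofing": [
        Control("CTL-AUTH", "Agent presents a verifiable identity credential",
                lambda cfg: cfg.get("identity_credential") is not None),
    ],
}

def validate(agent_config: dict) -> dict:
    """Return {risk: [failed control IDs]} for the given config."""
    failures = {}
    for risk, controls in RISK_CONTROLS.items():
        failed = [c.control_id for c in controls
                  if not c.check(agent_config)]
        if failed:
            failures[risk] = failed
    return failures

report = validate({"tool_allowlist": ["search"],
                   "identity_credential": None})
# report flags only the failing risk, with the controls that failed
```

Keeping controls as plain data with attached check callables is one simple way to make the mapping both human-auditable and machine-validatable.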

---
EU AI Act
Article 50 transparency compliance tooling for AI systems. Compliance deadline: August 2, 2026.
The EU AI Act imposes specific transparency and documentation requirements on AI systems operating in the European Union. AI Trust Commons provides tooling that helps organizations generate the required documentation from their existing governance controls.
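Generating a disclosure document from existing governance records might look like the following sketch. The record fields and output layout are assumptions for illustration; they are neither the AI Trust Commons tooling nor a legally prescribed Article 50 template.

```python
# Hypothetical sketch: render a transparency disclosure from a
# governance record. Field names and format are assumptions.
def render_disclosure(record: dict) -> str:
    """Produce a simple markdown disclosure from one governance record."""
    yn = lambda flag: "yes" if flag else "no"
    lines = [
        f"# Transparency Disclosure: {record['system_name']}",
        "",
        f"Provider: {record['provider']}",
        f"AI interaction disclosed to users: {yn(record['user_notice'])}",
        f"Synthetic content labeled: {yn(record['content_labeling'])}",
    ]
    return "\n".join(lines)

doc = render_disclosure({
    "system_name": "Example Assistant",
    "provider": "Example Org",
    "user_notice": True,
    "content_labeling": True,
})
```

The point of this shape is that disclosure text is derived from the same structured records an organization already maintains for its controls, rather than hand-written separately.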
Public Record
| Submission | Channel | Reference |
|---|---|---|
| NIST RFI on AI Agent Governance (~5,000 words) | regulations.gov | DOI: 10.5281/zenodo.18903117 |
| NCCoE Identity and Authorization concept paper | AI-Identity@nist.gov | In preparation (April 2, 2026) |