Who Is Shaping the Global Standards for AI Governance in Cybersecurity?
In 2025, global standards for AI governance in cybersecurity are being shaped not by one entity, but by a multi-stakeholder ecosystem. This includes governmental bodies like the EU (with the AI Act) and US NIST (with the AI RMF), international standards organizations like ISO/IEC, and practitioner-led industry groups like OWASP. This analysis identifies the key players creating the rules for safe and secure AI, explains how their roles differ (from high-level legislation to specific technical controls), outlines the challenges of harmonizing these efforts, and offers a CISO's guide to navigating this complex landscape through a multi-framework, risk-based approach.
Table of Contents
- The Architects of Trust: A Multi-Stakeholder Effort
- The Old vs. The New: Applying General Frameworks vs. Creating AI-Specific Governance
- Why This Is the Urgent Governance Question of 2025
- How the Standards Stack Works Together
- Comparative Analysis: The Key Standard-Bearers in AI Cybersecurity
- The Core Challenge: Harmonization and the Pace of Innovation
- The Future of Governance: Dynamic Frameworks and Compliance as Code
- CISO's Guide to Navigating the New Standards Landscape
- Conclusion
- FAQ
The Architects of Trust: A Multi-Stakeholder Effort
In 2025, the global standards for AI governance in cybersecurity are not being shaped by a single entity, but by a complex, multi-stakeholder ecosystem of influential players. The primary architects fall into three distinct categories: governmental and supranational bodies like the European Union and the U.S. National Institute of Standards and Technology (NIST) setting the legal and risk management agenda; international standards organizations like ISO/IEC creating detailed technical specifications; and agile, industry-led consortia like OWASP and the Cloud Security Alliance providing practical, on-the-ground guidance for developers and security professionals.
The Old vs. The New: Applying General Frameworks vs. Creating AI-Specific Governance
The traditional approach to governing technology involved applying general, all-purpose cybersecurity frameworks, such as the NIST Cybersecurity Framework (CSF) or ISO/IEC 27001, to systems that happened to incorporate AI. This approach treated AI as just another piece of software and failed to address its unique failure modes, such as training-data poisoning, model evasion, and prompt injection.
The new model of 2025 is one of AI-specific governance. This involves creating and adopting frameworks that are purpose-built to address the unique challenges of AI systems. This includes the NIST AI Risk Management Framework (AI RMF), which focuses on the entire AI lifecycle, and the OWASP Top 10 for Large Language Model Applications, which identifies novel vulnerabilities like prompt injection. The focus has shifted from securing the container to governing the logic and data of the AI model itself.
Why This Is the Urgent Governance Question of 2025
The drive to establish clear standards for AI in cybersecurity has become urgent for several critical reasons.
Driver 1: Rapid Deployment in Critical Systems: AI is no longer experimental. It is being actively deployed in critical infrastructure, financial markets, and autonomous defense systems. The lack of clear safety and security guardrails for these deployments poses a significant societal risk.
Driver 2: The Weaponization of AI by Threat Actors: As attackers increasingly use AI to power their own campaigns, defenders must use AI in response. This creates a new arms race, and governance is needed to ensure defensive AI is used safely and ethically.
Driver 3: The Need for Market Trust and Interoperability: For businesses to confidently buy and sell AI-powered security products, there must be a common set of standards to measure their safety, reliability, and effectiveness. Standards foster market trust and enable products from different vendors to work together.
How the Standards Stack Works Together
The different governance efforts are not so much competing as forming a multi-layered "stack."
1. The Legislative Layer (The "What You Must Do"): At the top, bodies like the European Union with its EU AI Act set legally binding obligations, especially for systems deemed "high-risk." This dictates the legal requirements for market entry.
2. The Framework Layer (The "How to Think About It"): In the middle, frameworks like the NIST AI RMF provide a structured, voluntary process for organizations to think about, identify, measure, and manage their AI-related risks.
3. The Technical Standard Layer (The "Specific Controls"): Below that, formal standards organizations like ISO/IEC create detailed, auditable technical specifications (e.g., ISO/IEC 27090 on AI security) that define the specific controls an organization can implement.
4. The Practical Layer (The "On-the-Ground Reality"): At the foundation, practitioner-led groups like OWASP provide constantly updated lists of the most common, real-world vulnerabilities that developers need to defend against right now.
Comparative Analysis: The Key Standard-Bearers in AI Cybersecurity
This table breaks down the roles of the most influential organizations.
| Organization/Entity | Type | Key Contribution to AI-Cybersecurity Governance | Primary Audience |
|---|---|---|---|
| The European Union (EU AI Act & ENISA) | Governmental/Regulatory | Sets legally binding requirements for "high-risk" AI systems, making security and robustness a legal mandate. | AI Developers, Businesses Operating in the EU. |
| U.S. NIST | Governmental Standards Agency | The AI Risk Management Framework (RMF), a widely adopted voluntary framework for identifying, managing, and governing AI risks. | All Organizations Developing or Using AI. |
| ISO/IEC JTC 1/SC 42 | International Standards Body | Develops formal technical standards for AI security (ISO/IEC 27090) and privacy (ISO/IEC 27091), plus the certifiable ISO/IEC 42001 AI management system standard. | Engineers, Compliance Officers, Auditors. |
| OWASP | Industry Non-Profit Consortium | The OWASP Top 10 for Large Language Model Applications, the de facto, practitioner-focused list of critical LLM vulnerabilities. | Developers, Application Security Professionals. |
The Core Challenge: Harmonization and the Pace of Innovation
The single greatest challenge in global AI governance is harmonization. A multinational corporation today may be legally bound by the EU AI Act in Europe, while its engineering teams use the NIST AI RMF as their guiding framework in the US, and its customers demand certification against an ISO standard. These different frameworks do not always align perfectly, creating a complex and sometimes contradictory compliance burden. Furthermore, the deliberate, consensus-driven process of creating formal standards often cannot keep pace with the blistering speed of AI development, creating a constant gap between the technology and its governance.
The Future of Governance: Dynamic Frameworks and Compliance as Code
The future of AI governance will have to become more dynamic to keep pace. We will see a move away from static, point-in-time certifications and toward "living" frameworks that are updated continuously. The most significant shift will be the rise of "compliance as code." This practice involves embedding governance and security rules directly into the automated MLOps pipeline. The system can then continuously and automatically check an AI model against these rules throughout its lifecycle, ensuring a state of constant compliance rather than relying on periodic manual audits.
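To make the idea concrete, below is a minimal compliance-as-code sketch: a pipeline gate that evaluates a model's metadata against codified governance rules before deployment. Everything here is an illustrative assumption; the rule set, thresholds, and metadata fields (`risk_tier`, `robustness_score`, and so on) are hypothetical, not taken from NIST, ISO, or the EU AI Act.

```python
# Minimal compliance-as-code sketch: a pipeline gate that checks a model's
# metadata against codified governance rules before allowing deployment.
# All rule names, thresholds, and metadata fields are illustrative
# assumptions, not requirements from any specific standard.

from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    """Metadata an MLOps pipeline might track for each model version."""
    name: str
    version: str
    risk_tier: str                      # e.g. "high", "limited", "minimal"
    training_datasets: list[str] = field(default_factory=list)
    robustness_score: float = 0.0       # from automated adversarial testing
    has_model_card: bool = False


# Governance rules expressed as (description, predicate) pairs so the
# pipeline can evaluate them automatically on every run.
RULES = [
    ("High-risk models need a model card",
     lambda m: m.risk_tier != "high" or m.has_model_card),
    ("Only approved datasets may be used",
     lambda m: set(m.training_datasets) <= {"customer-v3", "telemetry-v7"}),
    ("Robustness score must meet the minimum bar",
     lambda m: m.robustness_score >= 0.8),
]


def compliance_gate(model: ModelRecord) -> bool:
    """Return True if the model passes every rule; log failures otherwise."""
    failures = [desc for desc, check in RULES if not check(model)]
    for desc in failures:
        print(f"FAIL [{model.name}:{model.version}] {desc}")
    return not failures


if __name__ == "__main__":
    candidate = ModelRecord(
        name="phishing-detector", version="2.4.1", risk_tier="high",
        training_datasets=["customer-v3"], robustness_score=0.85,
        has_model_card=True,
    )
    # In a real pipeline this exit code would block or allow the deploy step.
    raise SystemExit(0 if compliance_gate(candidate) else 1)
```

The value of the pattern is that the gate runs on every pipeline execution, so a model that drifts out of policy is blocked immediately rather than discovered in a periodic manual audit.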
CISO's Guide to Navigating the New Standards Landscape
CISOs must build a pragmatic governance program that draws from this complex ecosystem.
1. Do Not Wait for a Single, Unified Global Standard: A single, all-encompassing global law for AI is highly unlikely in the near future. The best strategy is to build a unified controls framework that maps to the requirements of the key standards relevant to your business (e.g., NIST, ISO, and EU regulations); a minimal sketch of such a mapping follows this list.
2. Adopt the NIST AI Risk Management Framework (RMF) as Your Foundation: For most organizations, the NIST AI RMF is the most practical starting point. Its risk-based, voluntary, and flexible approach allows you to tailor a governance program to your specific use cases rather than forcing a rigid, one-size-fits-all set of rules.
3. Treat the OWASP LLM Top 10 as a Mandatory Control Set: For any team in your organization that is developing or deploying applications that use Large Language Models, adherence to the OWASP Top 10 for LLMs should be considered a mandatory part of your secure development lifecycle policy.
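As referenced in step 1, here is a minimal sketch of a unified controls framework expressed as data. The control IDs and clause references are hypothetical placeholders; a real mapping would cite the exact articles and clauses of each framework.

```python
# Illustrative sketch of a unified controls framework: each internal control
# maps to the external requirements it helps satisfy. Control IDs and clause
# references are hypothetical placeholders, not verified citations.

UNIFIED_CONTROLS = {
    "AIC-01": {
        "control": "All production AI models have a documented risk assessment",
        "maps_to": {
            "NIST AI RMF": ["MAP", "MEASURE"],          # RMF core functions
            "EU AI Act": ["high-risk system obligations"],
            "ISO/IEC 42001": ["AI risk assessment clauses"],
        },
    },
    "AIC-02": {
        "control": "LLM applications are tested against the OWASP LLM Top 10",
        "maps_to": {
            "OWASP": ["Top 10 for LLM Applications"],
            "NIST AI RMF": ["MANAGE"],
        },
    },
}


def coverage(framework: str) -> list[str]:
    """List internal control IDs that contribute to a given external framework."""
    return [cid for cid, c in UNIFIED_CONTROLS.items()
            if framework in c["maps_to"]]


print(coverage("NIST AI RMF"))  # -> ['AIC-01', 'AIC-02']
```

The payoff is that each internal control is implemented and audited once, then reported against every external framework it maps to.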
Conclusion
The global standards for AI governance in cybersecurity are being forged in a dynamic and collaborative process led by a diverse group of stakeholders. While governmental bodies like the EU and US NIST are setting the high-level agenda on risk and legal liability, the detailed technical and practical guidance is coming from international standards bodies like ISO and practitioner-driven consortia like OWASP. The path forward for any organization is not to pick one standard, but to weave the principles from all of them into a unified, risk-based governance program that can manage the unique challenges of AI while keeping pace with its rapid evolution.
FAQ
What is AI Governance?
AI Governance is the process of directing, managing, and monitoring an organization's development and use of AI to ensure it aligns with ethical principles, legal requirements, and business objectives.
What is the EU AI Act?
It is a landmark piece of European Union legislation that regulates AI systems based on their level of risk. Systems deemed "high-risk," which can include cybersecurity tools, face strict requirements on data quality, transparency, and security.
What is the NIST AI RMF?
The NIST AI Risk Management Framework is a voluntary framework developed by the U.S. National Institute of Standards and Technology to help organizations identify, measure, manage, and govern the risks associated with AI systems.
What is ISO/IEC JTC 1/SC 42?
It is the international joint technical committee responsible for standardization in the area of Artificial Intelligence. They develop formal standards for things like AI terminology, frameworks, and security.
What is the OWASP Top 10 for LLMs?
It is a document created by OWASP (the Open Worldwide Application Security Project) that identifies the 10 most critical security vulnerabilities found in applications that use Large Language Models.
Is the NIST AI RMF legally required?
No, it is a voluntary framework in the United States. However, it is so influential that it is becoming a de facto standard for demonstrating due care in AI governance.
What does "high-risk" AI mean in the EU AI Act?
It refers to AI systems that could have a significant impact on people's safety, livelihoods, or fundamental rights. This includes AI used in critical infrastructure, medical devices, and law enforcement.
What is ENISA?
ENISA is the European Union Agency for Cybersecurity. It provides guidance and best practices on cybersecurity topics, including the specific security requirements for AI systems under the EU AI Act.
What is the role of the Cloud Security Alliance (CSA) in AI governance?
The CSA focuses on the unique challenges of securing AI workloads that run in the cloud, providing best practice frameworks for cloud-specific AI security and governance.
What about standards from China?
China is also heavily involved in creating its own national standards for AI governance, which in some cases may diverge from or compete with Western standards, contributing to the challenge of global harmonization.
How does MITRE ATLAS relate to governance?
MITRE ATLAS is a knowledge base of adversary tactics and techniques targeting AI systems, modeled on MITRE ATT&CK. While not a governance standard itself, it provides a crucial vocabulary for identifying threats, which is a key input into any risk management and governance process.
What is "compliance as code"?
It is the practice of codifying compliance rules and security controls into automated scripts and templates, which can then be used to automatically check a system for compliance within a CI/CD pipeline.
Why is harmonization of standards so difficult?
Different regions have different legal traditions, ethical priorities, and economic goals, which leads to divergence in how they approach AI regulation and governance.
Does AI governance apply only to cybersecurity AI?
No, AI governance applies to all uses of AI. However, the governance for AI used in cybersecurity is especially critical due to its potential impact on safety and security.
What is a "unified controls framework"?
It is an internal set of security controls that an organization creates by mapping the requirements from multiple external standards (like ISO, NIST, and PCI-DSS) into a single, comprehensive set of internal policies.
Is there an official certification for AI governance?
Certifications are emerging, particularly around ISO standards. For example, a company could get certified as compliant with the ISO/IEC 42001 AI management system standard.
What is the first step for a company starting with AI governance?
The first step is to create an inventory of all AI systems currently in use or in development within the organization to understand the scope of the governance challenge.
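As a hedged sketch, an inventory entry can start as simple as the record below; the field names are common-sense assumptions for illustration, not a mandated schema.

```python
# Hypothetical minimal schema for an AI system inventory entry; field names
# are assumptions for illustration, not a mandated format.
from dataclasses import dataclass


@dataclass
class AISystemEntry:
    name: str              # e.g. "email-triage-llm"
    owner: str             # accountable business or engineering owner
    purpose: str           # what the system does and why
    risk_tier: str         # e.g. "high" per EU AI Act criteria, or internal tiers
    data_sources: list[str]
    vendor_or_inhouse: str


entry = AISystemEntry(
    name="email-triage-llm", owner="security-engineering",
    purpose="Classify and prioritize inbound abuse reports",
    risk_tier="limited", data_sources=["abuse-mailbox"],
    vendor_or_inhouse="in-house",
)
```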
Who is responsible for AI governance in a company?
It is a shared responsibility, typically led by a CISO or Chief Risk Officer, but involving legal, compliance, data science, and business line leaders.
How does AI governance relate to data governance?
AI governance is an extension of data governance. It includes all the principles of data governance (like privacy and quality) but adds new considerations specific to models, such as fairness, explainability, and robustness.
Where can I find these standards?
The NIST AI RMF and OWASP Top 10 for LLMs are available for free online. ISO standards must be purchased from the International Organization for Standardization or national standards bodies.