Your organization is probably using AI already. Maybe it’s ChatGPT in the marketing department, Copilot in the development team, or an AI-powered feature quietly embedded in a SaaS tool you renewed last quarter. The question isn’t whether AI is in your environment - it’s whether you have any governance around it.

The OWASP Top 10 for LLM Applications team published a Cybersecurity and Governance Checklist that gives security leaders a structured starting point for building an AI governance program. It’s practical, it’s free, and it covers the gaps that most organizations don’t realize they have until something goes wrong.

This article summarizes the main areas from that checklist, adds context from the EU AI Act and provides actionable best practices you can start applying today. We’ve also created a free AI Governance Self-Assessment that you can use to evaluate your maturity, track progress, and generate a PDF report.


Why AI governance matters now

Three forces are making AI governance non-optional.

Regulatory pressure is real and imminent. The EU AI Act entered into force in August 2024 and is being phased in through 2026. It classifies AI systems by risk level and imposes specific obligations for high-risk systems - including conformity assessments, transparency requirements, and human oversight mandates. Organizations operating in Europe or serving European customers cannot afford to ignore this.

The attack surface is larger than before. LLMs introduce a fundamentally different kind of vulnerability. The control and data planes cannot be strictly isolated. Models are nondeterministic by design, meaning the same prompt can produce different outputs. Prompt injection, data poisoning, and model theft are not theoretical - they’re happening in production environments today.

Shadow AI is your biggest immediate risk. Before worrying about sophisticated adversarial attacks, most organizations need to address the fact that employees are already using unapproved AI tools, pasting sensitive data into public models, and installing browser plugins that introduce LLM features without going through any approval process.


The OWASP LLM governance framework

The OWASP checklist organizes AI governance into 13 areas. Here’s what matters in each one and how to act on it.

1. Adversarial risk assessment

Before deploying any AI solution, understand how adversaries might exploit it - and how AI-enhanced attacks could target your existing systems.

Best practices:

  • Assess how competitors are investing in AI - falling behind creates its own business risk
  • Review whether existing security controls (voice-based authentication, CAPTCHAs) still work against GenAI-enhanced attacks
  • Update your incident response plan specifically for AI-related incidents: prompt injection exploits, data exfiltration through AI tools, deepfake-based social engineering

The OWASP checklist emphasizes that organizations face threats from both using AI and not using it. A competitor advantage gap is a legitimate risk to evaluate alongside technical threats.

2. Threat modeling for AI

Standard threat modeling applies to AI systems, but the threat surface is different. LLMs blur the boundary between code and data - user inputs directly influence model behavior in ways that traditional input validation cannot fully address.

Questions for your threat model:

  • How could GenAI accelerate attacks against your organization? Think hyper-personalized spear phishing at scale
  • Can you detect and neutralize harmful inputs to your LLM solutions?
  • Are all trust boundaries between LLM components and existing systems secured?
  • Do you have insider threat mitigation for authorized AI users?
  • Can you prevent unauthorized access to proprietary models or training data?

3. AI asset inventory

You cannot secure what you don’t know about. This is the single most important first step, and most organizations fail here.

What to catalog:

  • Every AI service, tool, and platform in use (including “shadow AI” tools employees adopted on their own)
  • AI components in your Software Bill of Materials (SBOM)
  • Data sources feeding AI systems, classified by sensitivity level
  • Ownership assignments for each AI asset

Best practices:

  • Add an “AI” tag to your existing asset management system - in CISO Assistant, this means creating assets specifically for AI tools and linking them to the relevant risk scenarios
  • Create a formal AI solution onboarding process that requires security review before adoption
  • Scan SSO logs and expense reports quarterly to catch unauthorized AI tool adoption

4. AI security and privacy training

Generic security awareness training is not enough. AI introduces novel risk categories that employees across all levels need to understand.

Training should cover:

  • Ethics, responsibility, and legal implications of AI use (copyright, licensing, warranty)
  • Updated threat awareness: voice cloning, image deepfakes, AI-enhanced spear phishing
  • Clear acceptable use policies for different AI tools (what data can be shared, what cannot)
  • Specialized training for developers on secure AI pipeline practices (MLSecOps)

The OWASP checklist emphasizes creating a culture of transparent communication about how the organization uses AI - both internally and with customers. This is becoming a regulatory requirement under the EU AI Act’s transparency obligations, and good governance regardless.

5. Business case validation

Not every AI use case is worth the risk. Establish clear business cases before deploying AI solutions.

Common legitimate use cases:

  • Better customer experience
  • Operational efficiency and automation
  • Knowledge management and document processing
  • Market research and competitor analysis
  • Code assistance and development acceleration

Best practice: For each proposed AI use case, require a documented risk-benefit analysis that evaluates data sensitivity, regulatory impact, and fallback procedures if the AI system fails or produces incorrect outputs.

6. Governance structure

This is the organizational backbone of your AI program. Without clear ownership and accountability, everything else falls apart.

What your governance structure needs:

  • RACI chart for AI - who is responsible, accountable, consulted, and informed for each AI system
  • Data management policies - classify what data can and cannot be used with AI tools, with technical enforcement where possible
  • AI-specific policy - a standalone policy or significant extension of your information security policy covering AI acceptable use, data handling, and risk management
  • Acceptable use matrix - a clear, accessible document that tells employees which AI tools are approved for which purposes

The OWASP checklist recommends documenting the sources and management of any data the organization uses from generative LLM models. This matters for both GDPR compliance and EU AI Act conformity.

AI introduces legal risks that most organizations haven’t addressed. The OWASP checklist identifies several areas that require legal review.

Legal areas to review:

  • Product warranties - who is responsible when AI-generated output is incorrect or harmful?
  • EULA review - AI platform EULAs vary widely in how they handle user prompts, output rights, data privacy, and liability
  • Intellectual property - AI-generated code or content may not be copyrightable and could contain infringing material from training data
  • Indemnification - establish clear guardrails for determining liability between AI provider and user
  • Insurance coverage - traditional D&O and general liability policies may not cover AI-specific risks
  • Employment law - AI tools used in hiring or employee management can create discrimination liability

Best practice: Conduct a legal AI risk assessment with your legal team or external counsel. Don’t wait until an incident forces the conversation.

8. Regulatory compliance

The regulatory space for AI is moving fast. The EU AI Act is the most comprehensive framework so far, but it’s not the only one.

EU AI Act risk classification:

Risk LevelExamplesRequirements
UnacceptableSocial scoring, real-time biometric surveillanceProhibited
High-riskAI in hiring, credit scoring, critical infrastructureConformity assessment, human oversight, transparency, documentation
Limited riskChatbots, AI-generated contentTransparency obligations (users must know they’re interacting with AI)
Minimal riskSpam filters, AI-powered searchNo specific requirements

Best practices:

  • Classify every AI system against the EU AI Act risk tiers
  • For high-risk systems, prepare conformity assessment documentation
  • Track regulatory developments in your operating jurisdictions - US states are passing AI-specific laws, Canada’s AIDA is pending
  • Document how AI tools used in hiring or employee management comply with anti-discrimination requirements

9. Secure implementation

When deploying LLM solutions, security must be embedded in the architecture, not bolted on afterward.

OWASP implementation checklist highlights:

  • Threat model all LLM components and trust boundaries
  • Classify and protect data based on sensitivity - models should only access data at the minimum classification level of any user
  • Implement least-privilege access controls with defense-in-depth
  • Validate all inputs and filter/sanitize all outputs
  • Secure the training pipeline: data governance, model integrity, algorithm verification
  • Include LLM-specific vulnerability assessments in your release process
  • Monitor and log all LLM interactions with tamper-proof audit records

Don’t overlook supply chain security. Request third-party audits and penetration testing from AI providers - both initially and on an ongoing basis. Check for known vulnerabilities in LLM models and their dependencies.

10. Testing, evaluation, verification, and validation (TEVV)

The NIST AI Framework recommends continuous TEVV throughout the AI lifecycle. This isn’t a one-time check - it’s an ongoing process.

Best practices:

  • Establish TEVV as a continuous process, not a pre-deployment gate
  • Include AI system operators, domain experts, designers, users, and auditors in the evaluation
  • Provide regular executive metrics on AI model functionality, security, reliability, and robustness
  • Recalibrate and retest after any model update, data refresh, or configuration change

11. Model cards and risk cards

Model cards provide standardized documentation about an AI model’s design, capabilities, training data, performance metrics, biases, and limitations. Risk cards supplement this with potential negative consequences.

Best practices:

  • Review the model card for any AI model before deployment
  • If no model card exists, treat it as a red flag requiring additional due diligence
  • Maintain internal model cards for any models you deploy, including third-party models
  • Document known biases, limitations, and recommended use cases vs. prohibited use cases

12. RAG optimization security

Retrieval-Augmented Generation (RAG) is increasingly used to ground LLM outputs in organizational knowledge bases. While it improves accuracy, it introduces its own security considerations.

Concerns to address:

  • Vector databases storing document embeddings must be secured like any other data store
  • Access controls on the knowledge base must be enforced at the retrieval layer - the LLM should not be able to retrieve documents the user isn’t authorized to see
  • Poisoned documents in the knowledge base can manipulate LLM outputs

13. AI red teaming

AI red teaming simulates adversarial attacks against AI systems to identify exploitable vulnerabilities. Multiple regulatory bodies, including the Biden administration’s executive order and the EU AI Act, recommend or require it for high-risk systems.

Best practices:

  • Incorporate red team testing as a standard practice for all AI models and applications
  • Combine red teaming with other TEVV activities - red teaming alone does not validate all real-world harms
  • Test for prompt injection, jailbreaking, data extraction, and model manipulation
  • Document findings and track remediation

Mapping to CISO Assistant

If you’re already using CISO Assistant for your compliance program, AI governance maps naturally to the existing structure.

Risk assessment: Create AI-specific risk scenarios in your risk register - AI data leakage, AI hallucination, prompt injection, model poisoning. These should link to your AI tool assets and applicable controls.

Asset management: Register each AI tool as an asset with appropriate security objectives. Pay special attention to privacy ratings - employees share more data with AI tools than most organizations realize.

Vendor management: Treat AI providers as high-priority third parties. Use your vendor security process to assess their data handling, model security, and compliance posture.

Compliance mapping: The EU AI Act and ISO 42001 (AI Management System) frameworks are available in CISO Assistant, letting you map your controls to AI-specific requirements alongside ISO 27001 and NIS2.

Want to see this in practice? Try our CISO Assistant demo - it has AI governance frameworks pre-loaded and ready to explore.


Building your AI governance procedure

We’ve created a free AI Governance Self-Assessment that synthesizes the OWASP checklist, EU AI Act requirements, and practical implementation experience into an interactive checklist you can use to evaluate your readiness. It covers:

  • AI governance structure and RACI assignments
  • AI risk classification aligned with the EU AI Act
  • Acceptable use policies for AI tools
  • AI asset inventory requirements
  • Data handling rules for AI systems
  • Security requirements for LLM deployment
  • Incident response procedures for AI-specific events
  • TEVV and monitoring requirements
  • Legal and regulatory compliance obligations

Generate a customized PDF with your organization’s details and use it as the foundation of your AI governance program.


Common mistakes to avoid

Treating AI governance as purely an IT problem. AI governance requires collaboration between security, legal, HR, compliance, and business units. The RACI chart is not optional.

Focusing only on external threats. Shadow AI - employees using unapproved tools with sensitive data - is the most common and most immediate risk. Address this first.

Creating policies nobody reads. An acceptable use matrix pinned to the intranet is better than a 40-page policy document nobody opens. Make governance accessible.

Ignoring the EU AI Act because “we’re not in Europe.” If you serve European customers or process their data, the regulation applies to you. The extraterritorial reach is broad.

Waiting for perfect governance before using AI. The risk of not using AI (competitive disadvantage, operational inefficiency) is also real. Build governance iteratively alongside adoption, not as a blocker to it.


AI governance should make AI adoption secure and compliant, not block it. Use the OWASP checklist as your structure, the EU AI Act for regulatory context, and CISO Assistant to manage it all in one place. Start with an asset inventory and build out from there.

If you need help integrating AI governance into your existing ISMS or deploying CISO Assistant with AI-specific frameworks, that’s what we do. Get in touch - we’ve helped organizations across Europe with exactly this.