10 questions. Instant score out of 100. Graded across five dimensions aligned with NIST AI RMF, ISO 42001, and EU AI Act requirements. Takes 4 minutes.
NIST AI RMF · ISO/IEC 42001 · EU AI Act · SR 11-7 (Financial Services) · HIPAA AI Guidance
Your industry (affects regulatory context)
Your industry determines which specific regulations apply to your AI governance gaps.
01 / 10 · NIST AI RMF: Govern · ISO 42001: Clause 4
Does your organization maintain a live inventory of every AI system currently deployed or in active development?
Why this matters: ISO 42001 Clause 4 and EU AI Act Article 6 both require organizations to identify and classify all AI systems before any governance framework can apply. You cannot govern what you have not inventoried. This is the foundational control.
Yes — we have a documented, maintained register of all AI systems including owner, purpose, data inputs, and decision impact
Partial — we have a list but it is informal, incomplete, or not regularly updated
We know our core AI systems but have not catalogued shadow AI or department-level tools
No — we do not have a formal AI inventory
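To make "documented register" concrete, here is a minimal sketch of what one inventory entry might capture, written as a hypothetical Python dataclass. The field names mirror the strongest answer above (owner, purpose, data inputs, decision impact) but are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI system register (illustrative fields)."""
    system_id: str
    name: str
    owner: str              # a named individual, not a team
    purpose: str
    data_inputs: list[str]  # categories of data the system consumes
    decision_impact: str    # e.g. "advisory", "consequential", "automated"
    status: str             # "development", "production", "retired"
    last_reviewed: date

registry = [
    AISystemRecord(
        system_id="ai-007",
        name="Loan pre-screening model",
        owner="jane.doe@example.com",
        purpose="Rank applications for manual review",
        data_inputs=["application form", "credit bureau feed"],
        decision_impact="consequential",
        status="production",
        last_reviewed=date(2025, 1, 15),
    ),
]
```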
02 / 10 · NIST AI RMF: Govern · ISO 42001: Clause 5 (Leadership)
Is there a named individual — not a committee — who is personally accountable for your AI governance program?
Why this matters: ISO 42001 Clause 5 requires top management to assign specific roles and responsibilities. The EU AI Act requires each high-risk AI system to have a named accountable owner by August 2026. Committees create diffused accountability. A single named owner creates enforceable responsibility.
Yes — a named C-level or senior leader owns AI governance with defined authority and board-level reporting
A role exists but it is shared across legal, IT, and compliance with no single decision-maker
AI governance is part of someone's job informally but not their primary responsibility
No designated owner — governance is handled reactively when issues arise
03 / 10 · NIST AI RMF: Map · EU AI Act: Articles 9 & 11
Before deploying a new AI system, does your organization produce and store documentation covering its purpose, training data, validation results, and known limitations?
Why this matters: EU AI Act Articles 9 and 11 require technical documentation to exist before a high-risk AI system is deployed — not after. SR 11-7 (Federal Reserve) requires model documentation for any model influencing financial decisions. Documentation created retroactively is a compliance liability.
Yes — all AI deployments require a model card / technical documentation package before go-live, stored in a central repository
We document high-priority systems but not all AI deployments systematically
Documentation exists but is created after deployment, incomplete, or stored inconsistently
No systematic pre-deployment documentation process
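A simple way to operationalize "documentation before go-live" is a gate that blocks deployment until the package is complete. A minimal sketch, assuming a model card stored as a plain dictionary; the required section names are assumptions for illustration.

```python
REQUIRED_SECTIONS = {
    "purpose", "training_data_sources", "validation_results",
    "known_limitations", "owner", "approval_date",
}

def documentation_complete(model_card: dict) -> bool:
    """True only if every required section is present and non-empty."""
    return all(model_card.get(section) for section in REQUIRED_SECTIONS)

# Hypothetical model card for illustration.
card = {
    "purpose": "Triage inbound support tickets",
    "training_data_sources": ["2023-2024 ticket archive"],
    "validation_results": {"accuracy": 0.91, "f1": 0.88},
    "known_limitations": "Degrades on non-English tickets",
    "owner": "ops-ml@example.com",
    "approval_date": "2025-02-01",
}

assert documentation_complete(card), "Deployment blocked: documentation incomplete"
```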
04 / 10 · NIST AI RMF: Measure · EU AI Act: Article 10 · EEOC AI Guidance
Do your AI systems undergo formal bias and fairness testing before deployment and on a scheduled basis in production?
Why this matters: EU AI Act Article 10 requires data governance and bias mitigation for high-risk AI systems. NYC Local Law 144 requires annual bias audits for AI used in hiring. EEOC guidance holds organizations accountable for disparate impact regardless of whether bias was intentional. Testing after an incident is too late.
Yes — structured bias and fairness testing runs before deployment and quarterly in production, with documented results
We test for bias before deployment but do not monitor continuously in production
Ad hoc testing only — done when a concern is raised, not systematically
No formal bias or fairness testing process
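One widely used screen is the EEOC's four-fifths rule of thumb: flag any group whose selection rate falls below 80% of the highest group's rate. A self-contained sketch with hypothetical numbers; this is a screening heuristic, not a complete fairness audit.

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants). Returns the
    groups whose selection rate is below `threshold` times the best rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical audit numbers, for illustration only.
print(four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)}))
# -> {'group_b': 0.625}: below 0.8, flag for review
```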
05 / 10 · NIST AI RMF: Manage · EU AI Act: Article 14 · ISO 42001: Clause 8
Is there a formally designated individual with explicit authority to pause, halt, or roll back any AI system in production — without requiring escalation?
Why this matters: EU AI Act Article 14 requires high-risk AI systems to be designed so that humans can intervene effectively, including the ability to interrupt or halt the system. This "stop authority" is the operational test of whether governance is real: if no named individual can halt an AI system unilaterally, governance exists on paper only. Under the Act's phased timeline, the requirement becomes binding for high-risk systems in August 2026.
Yes — named individuals with documented stop authority for each production AI system, tested in our incident response plan
Stop authority exists in theory but the process is unclear and has not been tested
We can pause systems but it requires escalation through multiple approvals — no single person has clear authority
No defined stop authority or human oversight mechanism in place
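Operationally, stop authority can be as simple as a flag that a named owner can flip and that every inference call checks before serving. A minimal sketch; the in-memory set stands in for whatever shared store a real deployment would use, and all names are illustrative.

```python
HALTED: set[str] = set()  # in practice, a shared store, not process memory

def halt(system_id: str, authorized_by: str) -> None:
    """Record a stop decision; the authorizer is logged for accountability."""
    print(f"{system_id} halted by {authorized_by}")
    HALTED.add(system_id)

def predict(system_id: str, features: dict) -> dict:
    """Every inference call checks the flag before touching the model."""
    if system_id in HALTED:
        raise RuntimeError(f"{system_id} is halted; use the fallback path")
    return {"score": 0.5}  # placeholder for the real model call

halt("ai-007", authorized_by="jane.doe@example.com")
try:
    predict("ai-007", {"income": 52000})
except RuntimeError as err:
    print(err)
```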
06 / 10 · NIST AI RMF: Measure · ISO 42001: Clause 9 (Performance Evaluation)
Are your production AI systems monitored for performance degradation, data drift, and unexpected outputs — automatically and continuously?
Why this matters: ISO 42001 Clause 9 requires performance evaluation and continual improvement. SR 11-7 requires ongoing monitoring for model risk in financial services. AI systems degrade over time as real-world data drifts from training data. Manual periodic reviews miss drift that happens between review cycles.
Yes — automated monitoring with alerts for drift, performance degradation, and anomalous outputs, with a dashboard visible to the governance team
Periodic manual reviews — quarterly or semi-annual performance assessments
Monitoring exists for infrastructure (uptime, latency) but not for model accuracy or output quality
No systematic production monitoring of AI model behavior
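A common drift signal is the Population Stability Index (PSI), which compares the production score distribution against the distribution seen at validation time. A minimal sketch with hypothetical bin proportions; the 0.10 and 0.25 thresholds are conventional rules of thumb, not regulatory requirements.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over binned proportions (each sums to 1)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

baseline = [0.25, 0.35, 0.25, 0.15]  # score distribution at validation time
today = [0.10, 0.30, 0.30, 0.30]     # hypothetical production distribution

score = psi(baseline, today)
if score > 0.25:
    print(f"PSI={score:.2f}: significant drift, alert the governance team")
elif score > 0.10:
    print(f"PSI={score:.2f}: moderate drift, schedule a model review")
```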
07 / 10 · NIST AI RMF: Map · ISO 42001: Clause 8 · GDPR: Article 5
Can you trace the lineage of training data for your AI systems — where it came from, how it was prepared, what personal data it contains, and what consent was obtained?
Why this matters: Model governance without data governance is a house with no back wall. GDPR Article 5 requires lawful basis for processing personal data used in AI training. EU AI Act Article 10 requires data governance documentation for high-risk systems. Without data lineage, you cannot answer a regulator's first question.
Yes — full data lineage documented for all AI training data, with PII inventory, consent records, and data source agreements
Lineage documented for primary training data but not for all data sources or enrichment layers
We know the general sources but do not have formal documentation or PII inventory
Data lineage is not documented — we cannot fully trace training data origins
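For illustration, a lineage record for one training dataset might capture just enough to answer a regulator's first question: where the data came from, how it was prepared, what personal data it contains, and on what lawful basis. The field names and values below are hypothetical.

```python
# Hypothetical lineage record for one training dataset; field names are
# illustrative, not a prescribed schema.
lineage_record = {
    "dataset": "customer_transactions_2024",
    "source": "internal CRM export",
    "collected": "2024-01-01 to 2024-12-31",
    "preparation": ["deduplicated", "PII pseudonymised", "outliers capped"],
    "personal_data": ["name (pseudonymised)", "postcode"],
    "lawful_basis": "contract (GDPR Art. 6(1)(b))",
    "consent_ref": "DPA-2024-117",
    "used_by_models": ["ai-007"],
}
```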
08 / 10 · NIST AI RMF: Measure · ECOA · FINRA Rule 3110
When your AI systems make or influence decisions that affect individuals or regulated outcomes, can you explain the specific reasons for those decisions?
Why this matters: ECOA requires an adverse action notice explaining why a credit decision was made — AI cannot satisfy this with "the model said so." FINRA Rule 3110 requires documented rationale for AI-assisted investment recommendations. Explainability must be designed in — it cannot be retrofitted into a black-box model.
Yes — all consequential AI decisions include documented explanations, stored in audit logs accessible to regulators
We can explain decisions at the model feature level but not in human-readable terms for non-technical stakeholders
Some systems have explainability; others are black-box with no interpretability layer
Our AI systems cannot provide decision-level explanations
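As a sketch of how decision-level explanations can be produced and logged: map the features that pushed a decision toward denial (e.g., per-feature attributions from SHAP or a similar method) onto the human-readable reason codes an adverse action notice needs. Feature names and values are hypothetical.

```python
REASON_TEXT = {
    "debt_to_income": "Debt-to-income ratio too high",
    "credit_history_months": "Insufficient length of credit history",
    "recent_delinquencies": "Recent delinquent payments",
}

def top_reasons(contributions: dict[str, float], n: int = 2) -> list[str]:
    """Return human-readable text for the n features that pushed the
    decision furthest toward denial."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_TEXT[name] for name, _ in ranked[:n]]

# Hypothetical per-feature attributions (e.g. from SHAP or similar).
contribs = {"debt_to_income": 0.42, "credit_history_months": 0.18,
            "recent_delinquencies": 0.05}
audit_entry = {"decision": "deny", "reasons": top_reasons(contribs)}
print(audit_entry)  # stored in the audit log alongside the decision
```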
09 / 10 · NIST AI RMF: Manage · ISO 42001: Clause 10 (Improvement)
Does your organization have a documented AI incident response process — covering what constitutes an AI incident, how it is escalated, and what remediation steps are required?
Why this matters: ISO 42001 Clause 10 requires nonconformity and corrective action processes. EU AI Act requires post-market surveillance and incident reporting for serious incidents. Without a defined incident response process, the first time you need it is the first time you are building it — under pressure, during the incident.
Yes — documented AI incident response plan, tested in a tabletop exercise, with clear escalation paths and regulatory notification triggers
An informal process exists — people know what to do generally, but it is not documented or tested
AI incidents are handled through our general IT incident process, not an AI-specific framework
No defined AI incident response process
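One piece of such a process can be expressed directly as code: a triage table that classifies an incident type by severity and routes it to a defined escalation path. The categories and routes below are illustrative, not a recommended taxonomy.

```python
# Severity tiers and escalation routes are illustrative only.
ROUTES = {
    "harmful_output": ("sev1", "halt the system, notify the AI governance owner"),
    "bias_signal": ("sev2", "open a fairness review within 24 hours"),
    "drift_alert": ("sev3", "schedule a model review this cycle"),
}

def triage(incident_type: str) -> tuple[str, str]:
    """Classify an AI incident and return its escalation route; anything
    unrecognised escalates to the governance owner by default."""
    return ROUTES.get(
        incident_type,
        ("sev2", "unclassified AI incident: escalate to governance owner"),
    )

print(triage("harmful_output"))
```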
10 / 10 · NIST AI RMF: Map · ISO 42001: Clause 8.4 (Externally provided AI)
Do you apply governance controls to AI systems provided by third-party vendors — including SaaS tools using AI, APIs, and AI embedded in enterprise software?
Why this matters: ISO 42001 Clause 8.4 requires governance to extend to externally provided AI components. EU AI Act holds deployers responsible for third-party AI they put into production under their own authority. Using OpenAI, Anthropic, or any AI API in a customer-facing product makes you a deployer — with full governance obligations — regardless of who built the model.
Yes — all third-party AI tools are inventoried, assessed for risk, and covered under our governance framework with vendor agreements reviewed
Major third-party AI tools are assessed but SaaS tools using AI and embedded AI in enterprise software are not consistently reviewed
We review AI vendors during procurement but do not maintain ongoing governance for deployed third-party AI
Third-party and vendor AI is not covered by our governance framework
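In practice, this can reuse the register from question 1, extended with the fields third-party systems need: who provides it, how it is embedded, and your role under the EU AI Act. A hypothetical entry, with illustrative field names:

```python
# Hypothetical third-party entry for the same register as question 1;
# field names and values are illustrative.
vendor_ai = {
    "system_id": "vendor-012",
    "name": "CRM lead-scoring add-on",
    "provider": "ExampleVendor Inc.",
    "deployment": "embedded in CRM",   # SaaS, API, or embedded
    "our_role": "deployer",            # EU AI Act role: we deploy, they build
    "risk_tier": "limited",
    "contract_reviewed": True,
    "data_shared_with_vendor": ["contact records"],
    "last_assessment": "2025-03-01",
}
```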
Your governance score has been calculated.
Enter your work email to see your full score, category breakdown, regulatory urgency assessment, and prioritized recommendations. Full PDF report delivered to your inbox within 60 seconds.
No spam. No forced sales call. Unsubscribe any time.
See the specific gaps in your environment.
This score identifies where you stand against NIST AI RMF, ISO 42001, and industry-specific regulations. A RERIGHT governance audit gives you a specific remediation plan in 5 business days.
Framework references:
NIST AI Risk Management Framework (AI RMF 1.0, January 2023) — four core functions: Govern, Map, Measure, Manage.
ISO/IEC 42001:2023 — AI management system standard, seven mandatory clauses.
EU AI Act (Regulation (EU) 2024/1689) — risk-based classification; GPAI obligations effective August 2025; high-risk system requirements from August 2026.
SR 11-7 — Federal Reserve / OCC model risk management guidance.
FINRA Rule 3110 — supervisory obligations.
ECOA — adverse action notice requirements.
NYC Local Law 144 — bias audits for AI used in hiring.
EEOC AI hiring guidance (2024).
Scoring methodology: 10 questions across 5 governance dimensions, each scored 0–10, for a maximum of 100 points. Grades: A (85–100), B (70–84), C (55–69), D (40–54), F (below 40).
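For concreteness, the stated methodology maps directly to a few lines of code: sum ten 0–10 answers and apply the published grade cutoffs. A minimal sketch:

```python
def grade(answers: list[int]) -> tuple[int, str]:
    """Score 10 answers (each 0-10) and map the total to a letter grade."""
    assert len(answers) == 10 and all(0 <= a <= 10 for a in answers)
    total = sum(answers)
    for cutoff, letter in [(85, "A"), (70, "B"), (55, "C"), (40, "D")]:
        if total >= cutoff:
            return total, letter
    return total, "F"

print(grade([10, 7, 5, 3, 10, 7, 5, 3, 10, 7]))  # (67, 'C')
```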