
April 4, 2026 · By Sultan Meghji

NIST AI RMF in practice: a regulated-industry playbook

How to operationalize the NIST AI Risk Management Framework in a regulated enterprise — and why treating it as a checklist is the wrong instinct.

The NIST AI Risk Management Framework (AI RMF 1.0) is the most widely referenced non-binding AI governance standard in the United States. It is voluntary. It is also, increasingly, the lingua franca that federal agencies, large customers, and auditors use when they ask a regulated firm, “How do you govern AI?”

I was inside the FDIC as federal agencies began to reckon with the draft framework in 2021. The firms that have succeeded with it since have not treated it as a standard to conform to; they have treated it as a diagnostic to run against themselves. And since July 2024, the picture has gotten more concrete: NIST’s Generative AI Profile (AI 600-1) extended the RMF with specific risks and controls for generative systems, which most regulated firms now have in production whether or not they have a governance program around them.

Most firms treat the AI RMF as a checklist. That is the wrong instinct. The framework is organized around four functions — Govern, Map, Measure, Manage — that are meant to interlock continuously, not be satisfied once. This piece translates each function into something an operating team can actually run.

Govern

Most firms under-build Govern. It is not a policy document; it is the organization’s ability to make and re-make decisions about AI in light of new risks. The minimum viable Govern capability is:

  1. A named executive owner for AI risk, with the authority to approve, restrict, or retire systems.
  2. A recurring governance forum with a standing agenda and real decision rights, not a briefing.
  3. A risk-tiering scheme that classifies every AI system by the harm it can cause, so that controls scale with the tier (sketched below).
  4. A defined escalation path for AI incidents and near-misses.
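To make the tiering concrete, here is a minimal sketch in Python of what tier assignment can look like. The tier names and criteria are illustrative assumptions, not anything the RMF prescribes; the real criteria belong to the governance forum.

    from enum import Enum

    class Tier(Enum):
        TIER_1 = 1  # informs credit, employment, or other consequential decisions
        TIER_2 = 2  # customer-facing, or feeds a Tier 1 system
        TIER_3 = 3  # internal tooling with human review of every output

    def assign_tier(informs_consequential_decision: bool,
                    customer_facing: bool,
                    feeds_tier1_system: bool) -> Tier:
        """Illustrative criteria only; the governance forum owns the real ones."""
        if informs_consequential_decision:
            return Tier.TIER_1
        if customer_facing or feeds_tier1_system:
            return Tier.TIER_2
        return Tier.TIER_3

The point is not the code; it is that the criteria are written down, deterministic, and owned by the forum rather than by whoever built the model.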

Map

Mapping is where most firms find out they do not actually know what they have. The function requires understanding each AI system in context: data sources, downstream consumers, the human decisions it informs, the failure modes that matter, and the affected parties. A good Map artifact for a single system fits on two pages and is written in plain English, not ML jargon. If a senior examiner cannot understand it, it is not a Map artifact yet. For generative systems specifically, AI 600-1 adds three things a good Map needs to address: provenance of training data, confabulation risk, and the human decisions downstream of model output that may have been silently delegated to the model.
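The two-page narrative is the artifact, but capturing the same fields as structured data keeps the inventory queryable. A minimal sketch, with field names that are illustrative assumptions rather than anything the RMF prescribes:

    from dataclasses import dataclass, field

    @dataclass
    class MapArtifact:
        """One record per AI system; accompanies the two-page narrative."""
        system_name: str
        business_purpose: str                # plain English, examiner-readable
        data_sources: list[str]              # where the inputs come from
        downstream_consumers: list[str]      # systems and teams that act on output
        human_decisions_informed: list[str]  # decisions the output feeds
        failure_modes: list[str]             # the failures that matter, not all of them
        affected_parties: list[str]          # customers, employees, counterparties
        # Generative-AI additions, per the AI 600-1 profile:
        training_data_provenance: str = "unknown"
        confabulation_risk: str = "unassessed"
        silently_delegated_decisions: list[str] = field(default_factory=list)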

Measure

Firms most often confuse technical metrics with risk metrics at this stage. Model accuracy, F1, ROC-AUC are necessary but not sufficient. The metrics that actually belong in a risk report are:

  1. Human override rate: how often operators reject or correct the system’s output.
  2. Drift: how far production inputs and outputs have moved from the population the system was validated on.
  3. Complaints: customer or employee complaints traceable to an AI-informed decision.
  4. Incident MTTR: how long it takes to detect, contain, and recover when an AI system misbehaves.

All of these are observable. Most firms do not observe them.
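As an existence proof that two of the four are computable from routine logs, here is a minimal sketch. The log field names are assumptions, and the population stability index (PSI) stands in for whichever drift statistic the firm has actually validated.

    import math

    def override_rate(decisions: list[dict]) -> float:
        """Share of AI-informed decisions a human overrode.
        Assumes each log record carries a boolean 'overridden' field."""
        if not decisions:
            return 0.0
        return sum(d["overridden"] for d in decisions) / len(decisions)

    def psi(baseline: list[float], production: list[float], bins: int = 10) -> float:
        """Population stability index between validation-time and production
        score distributions; above roughly 0.25 is a common 'investigate' threshold."""
        lo, hi = min(baseline + production), max(baseline + production)
        width = (hi - lo) / bins or 1.0  # guard against a degenerate range
        def shares(xs: list[float]) -> list[float]:
            counts = [0] * bins
            for x in xs:
                counts[min(int((x - lo) / width), bins - 1)] += 1
            return [max(c / len(xs), 1e-6) for c in counts]  # floor to avoid log(0)
        b, p = shares(baseline), shares(production)
        return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))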

Manage

This is where the framework meets the real world. Manage is incident response, model retirement, escalation, and — critically — the ability to turn the thing off. Every AI system in production should have a documented kill-switch procedure, an identified person who can execute it, and a rehearsed exercise of actually executing it. An AI system that has never been turned off in an exercise has not been Managed. This is a rule I apply to my own company’s production systems; I will not deploy an AI capability until I have confirmed I can revoke it.
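The mechanics of a kill switch can be simple; what matters is that it exists, is centrally controlled, and has been exercised. A minimal sketch, assuming a feature-flag pattern (the flag store and system name here are hypothetical, not a specific product):

    class FlagStore:
        """In-memory stand-in for a centrally controlled feature-flag service."""
        def __init__(self) -> None:
            self._flags: dict[str, bool] = {}
        def enable(self, name: str) -> None:
            self._flags[name] = True
        def disable(self, name: str) -> None:  # this is the kill switch
            self._flags[name] = False
        def is_enabled(self, name: str) -> bool:
            return self._flags.get(name, False)  # fail closed: unknown flags are off

    def handle_request(request, flags: FlagStore, model, fallback):
        """Every call checks the flag before invoking the model; flipping it off
        routes traffic to the pre-AI fallback with no deploy required."""
        if not flags.is_enabled("document-summarizer"):  # hypothetical system name
            return fallback(request)  # e.g., route to the human queue
        return model(request)

The rehearsal is then literal: disable the flag in production, confirm the fallback absorbs the load, and record who executed it and how long recovery took.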

The shortest possible RMF program that works

For a mid-sized regulated firm:

  1. One executive owner for AI risk.
  2. One monthly governance forum with a standing agenda.
  3. One Map artifact per Tier 1 or Tier 2 system.
  4. Four risk metrics reported quarterly: override rate, drift, complaints, incident MTTR.
  5. One documented and rehearsed kill-switch per Tier 1 system.

Those five artifacts, honestly maintained, cover most of what the NIST AI RMF asks for and nearly everything an examiner or customer audit cares about. The rest is refinement.


Virtova advises regulated enterprises on AI governance programs aligned to NIST AI RMF, the EU AI Act, and U.S. banking supervision. If you’re standing up or hardening an AI governance program, book a discovery call.

