AI Governance For Boards: The Minimum Viable Oversight Model

Two questions every board should be able to answer:

  1. Where is AI being used across your business right now - including by staff on tools you haven't approved?

  2. Which parts of your revenue are most exposed to AI changing what customers expect, or what competitors can charge?

If the first answer is vague and the second answer is "we're keeping an eye on it" - you've got a governance gap. Not a dramatic, front-page-news gap. Just the quiet kind where risk accumulates and strategic drift sets in until one day it isn't quiet anymore.

The good news: you don't need a 90-page policy suite or a standing committee of AI philosophers. You need a minimum viable oversight model - something that makes AI governable without creating a bureaucracy that slows everything down.

Here's what that actually looks like.

Two lanes, not one

Most AI governance conversations stop inside the organisation. Policies, data rules, training, risk registers. All sensible. All necessary. All incomplete.

Because AI isn't just changing how you operate - it's changing what your customers expect, what your competitors can deliver, and whether your current business model holds up in three years.

So the model has two lanes:

  • Lane 1: Internal - how you use AI, and whether it's visible, controlled, and auditable.

  • Lane 2: External - how AI is reshaping your market, your margin, and your value chain.

Most organisations are somewhere in Lane 1. Very few are running Lane 2 with any discipline. That's where the strategic exposure lives.

Lane 1: Getting your house in order

Can you actually see where AI is being used?

You cannot govern what you cannot see. Which sounds obvious, but most organisations have AI scattered across three categories simultaneously: tools they've approved, tools they haven't approved but staff use anyway, and AI embedded in vendor platforms they've never thought to ask about.

A useful board-level test: can management name the top ten AI uses in the business today, and the top ten planned for the next twelve months? If not, the board is governing in the dark.

You need a living register of AI use - not perfect, but good enough to triage. What's being used, by whom, for what, and with what data involved.

Board artefact: A living AI register with simple risk classification (Low / Material / High), owned by management and reviewed quarterly.
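
For teams that want to start the register before buying a governance tool, a minimal sketch of one register entry is below. It's illustrative only - the field names, tier labels, and the AIUseCase name are assumptions for the sketch, not a prescribed schema - and a spreadsheet with the same columns does the job just as well.

    from dataclasses import dataclass
    from datetime import date

    # Illustrative sketch only: field names and tier labels are assumptions,
    # not a prescribed schema. A spreadsheet with these columns works too.
    @dataclass
    class AIUseCase:
        name: str                 # e.g. "Drafting customer service replies"
        owner: str                # the accountable manager, not the vendor
        purpose: str              # what the AI assists with or decides
        data_involved: list[str]  # e.g. ["customer PII", "internal documents"]
        tier: str                 # "Low", "Material", or "High"
        approved: bool = False    # has the named approver signed off?
        last_reviewed: date | None = None  # when it was last checked

    register: list[AIUseCase] = []  # the living register, reviewed quarterly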

Is your risk appetite actually written down?

"We'll be careful" is not a risk appetite statement. If AI is already in the business - and it almost certainly is - you need something more concrete:

  • Where can AI assist decisions versus make them?

  • What data is never going into an external tool?

  • What customer-facing uses require disclosure?

  • What always needs a human sign-off?

A one-page statement, reviewed annually, is enough to anchor the conversation. Without it, every AI decision gets made in isolation by whoever happens to be involved.

Board artefact: A one-page AI risk appetite statement, reviewed at least annually and whenever AI usage changes materially.

Are you classifying use-cases by actual risk?

Not all AI is created equal. AI summarising internal meeting notes is not the same as AI influencing a customer's eligibility decision. Treat them differently.

A simple three-tier model works well:

  • Tier 1 (Low) - Internal drafting, generic summarisation, productivity aids with no sensitive data.

  • Tier 2 (Material) - Customer communications, operational decisions, sensitive data, supplier AI dependencies.

  • Tier 3 (High) - AI influencing regulated, safety-critical, or high-volume customer decisions; privacy exposure at scale.

The board's job is not to govern Tier 1. It's to make sure Tier 3 has proper approval, testing, and monitoring - and that nothing sits there by accident.

Board artefact: A tier model that determines which use-cases require approval, testing, and ongoing monitoring.

Do your controls match the tier?

Once you can see AI and classify it, you can set proportionate controls. A baseline applies everywhere:

  • Data rules - what data can be used, where, and how it's protected

  • Human oversight - when humans must review, approve, or explain

  • Transparency - when customers or stakeholders must be informed

  • Security - access control, logging, incident handling, supplier constraints

  • Quality - testing, monitoring, and version control for prompts and workflows

  • Recordkeeping - evidence sufficient to show how decisions were made

Higher tiers get additional requirements on top. If you want an external benchmark, ISO/IEC 42001 sets out what an AI management system should cover - you don't need to adopt it wholesale, but it's a useful reference for what "good" looks like.

Board artefact: A short AI minimum controls standard (2–4 pages) with tiered requirements.
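
To make "baseline plus tiered add-ons" concrete, here is one way it could be expressed if you track the requirements in a structured form. The control names and add-ons below are illustrative assumptions for the sketch, not drawn from ISO/IEC 42001 or any particular standard.

    # Illustrative only: the baseline applies everywhere; higher tiers add requirements.
    BASELINE_CONTROLS = [
        "data rules", "human oversight", "transparency",
        "security", "quality", "recordkeeping",
    ]

    TIER_ADDONS = {
        "Low": [],
        "Material": ["pre-deployment testing", "named approver", "vendor assurance review"],
        "High": ["board-visible approval", "independent testing",
                 "ongoing monitoring", "incident playbook"],
    }

    def required_controls(tier: str) -> list[str]:
        # Unknown or unclassified use-cases are treated conservatively as High.
        return BASELINE_CONTROLS + TIER_ADDONS.get(tier, TIER_ADDONS["High"])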

What does your board actually receive?

Most AI reporting is either too technical or too vague. What boards need is simpler:

  • What changed this quarter - new uses, new vendors, new exposures

  • Top risks - three to five priority exposures and the reduction plan

  • Incidents - data issues, errors, customer complaints, near-misses

  • Control coverage - how many use-cases are registered, classified, tested, monitored

  • Decisions needed - risk appetite shifts, Tier 3 deployments, major vendor dependencies

Trend lines and exceptions. Not anecdotes and activity updates.

Board artefact: A quarterly AI governance dashboard with trend-based reporting.
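
One practical benefit of keeping the register structured (as in the earlier sketch) is that the "control coverage" line of the dashboard can be computed rather than compiled by hand. A sketch, assuming the illustrative AIUseCase records from the register example:

    # Sketch only: derives dashboard coverage counts from the illustrative register above.
    def coverage_summary(register):
        return {
            "registered": len(register),
            "classified": sum(1 for u in register if u.tier in ("Low", "Material", "High")),
            "high_tier_approved": sum(1 for u in register if u.tier == "High" and u.approved),
            "ever_reviewed": sum(1 for u in register if u.last_reviewed is not None),
        }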

Are you treating third-party AI as a governance issue?

In many organisations, the biggest AI risk isn't what they build. It's what they buy. AI is embedded in vendor platforms everywhere now - often without the vendor making much noise about it.

Board-level due diligence should cover:

  • Where AI is used in vendor services, and how it's governed

  • What data is stored, used for training, or shared

  • What assurance evidence exists - testing, incident reporting, control documentation

  • What happens if the vendor changes pricing, policy, or capability

AI clauses belong in contracts and renewals, not just in internal policies.

Board artefact: AI clauses and assurance expectations embedded into vendor onboarding, renewals, and major change notices.

Lane 2: Watching the market shift

This is the lane most boards are underinvesting in. Internal controls reduce operational risk. They don't protect you from your business model becoming less relevant.

Where does AI hit your industry first?

There's a small number of pressure points where AI typically changes industry economics before anything else:

  • Price and packaging - work shifts from bespoke to productised, fixed-fee, or subscription-like

  • Cycle time - competitors deliver in hours where you delivered in weeks

  • Quality parity - "good enough" becomes widely available, shifting differentiation toward trust, context, and accountability

  • Distribution - buyers change how they find and evaluate providers

  • Substitution - customers self-serve parts of what they used to pay for

You don't need to monitor all of AI. You need to monitor these pressure points applied to your specific market.

Board artefact: A one-page AI landscape heatmap, reviewed quarterly - pressure point, what's changing, likely impact, response.

What are your business model invariants?

This is a discipline worth introducing. Ask management to define four to six things that must remain true for the business model to work. For example:

  • We must retain the trust of enterprise clients

  • We must maintain a quality premium

  • We must protect key customer relationships and data flows

  • We must keep delivery economics within these tolerances

Then ask: which of those are threatened by what AI is doing in the market?

That converts "AI disruption" from an abstract concern into a concrete board agenda. What breaks first? What do we do about it?

Board artefact: An invariants-and-threats summary, refreshed as part of the strategy cycle.

What early warning signals are you tracking?

Not AI activity metrics - those tell you about internal adoption, not commercial exposure. The signals that matter are:

  • Win/loss narratives shifting toward speed, price, or "we can do it internally now"

  • Discounting frequency increasing in competitive deals

  • Scope compression as buyers carve off parts they now self-serve

  • Procurement language changing - requests for AI-enabled delivery, audit trails, disclosure

  • Competitor packaging shifting from capability talk to outcome promises

  • Client risk questions rising - IP, provenance, defensibility

If you're not tracking these, you may be a few quarters behind by the time the pattern is obvious.

Board artefact: A quarterly commercial AI signals report, integrated into normal strategy reporting.

What's your posture, by service line?

"We should adopt AI tools" is often necessary but not a strategy. For each service line or segment facing pressure, management should be able to name a clear posture:

  • Defend - Protect the premium; strengthen assurance; deepen differentiation.

  • Match - Re-engineer delivery to meet new price and speed norms.

  • Reposition - Change packaging, pricing, or target segments.

  • Partner - Embed into platforms where distribution is shifting.

  • Exit - Stop serving segments being commoditised.

Without posture decisions, you get drift.

Board artefact: A posture map by segment or service line, with 6–12 month execution plans.

Where will you be better than AI-enabled competitors in 18 months?

This is the most useful board question in the whole model.

Ask management three things:

  1. What are the three things we'll be better at than AI-enabled competitors?

  2. What makes those hard to copy?

  3. How will clients recognise and pay for them?

In most professional services contexts, durable advantage concentrates in:

  • Trust and assurance - evidence, defensible judgement, liability clarity

  • Domain depth - knowing the client's actual constraints, stakeholders, and real trade-offs

  • Integrated execution - moving from advice to implementation and outcomes

  • Risk transfer - standing behind results, not just producing analysis

AI tends to shift value away from producing content and toward standing behind decisions. That's worth building strategy around.

Board artefact: A short differentiation thesis that's testable in market messaging and sales conversations.

What "minimum viable" actually looks like

By the end of your next planning cycle, you should have:

  • A clear AI risk appetite statement

  • A living AI register with tier classification

  • Baseline controls with tiered add-ons

  • A quarterly assurance dashboard

  • AI clauses in supplier due diligence

  • Role-based AI literacy aligned to risk tiers

  • A quarterly external AI landscape heatmap

  • Business model invariants and early-warning indicators

  • A posture decision per service line

Not perfection. Not theatre. Just governance you can actually operate - and oversight that covers both what AI is doing inside your organisation, and what it's doing to your market.

Questions worth putting to management - right now

On internal governance:

  • Where is AI used today across products, operations, and suppliers - and where is it planned next?

  • Which use-cases are Tier 2 or Tier 3, and who approves them?

  • What data is prohibited from external AI tools, and how is that enforced?

  • What does an AI incident look like for us, and who is accountable for response?

  • What evidence could we produce if a regulator, client, or partner asked how we govern AI?

On external oversight:

  • Which parts of our offer are most likely to be commoditised by AI in the next 12–24 months?

  • What are our business model invariants, and which are now at risk?

  • What early signals would tell us the market has shifted - and are we tracking them?

  • What will we be better at than AI-enabled competitors, and how will we prove it to clients?

If management struggles to answer, it's rarely a capability issue. It's usually a visibility and operating model issue.

Binary Refinery works with boards and leadership teams on AI governance and business model resilience. If you'd like to work through where the gaps are for your organisation - internal exposure and external business model risk - get in touch. That conversation usually takes less time than you'd expect, and tends to surface things worth knowing.


About the Author

Kat Mac is the founder of Binary Refinery, where she translates complex AI and technology topics into practical, business-led guidance for organisations. Her focus is simple: clarity, integrity, and strategy that genuinely helps leaders move forward.

Disclaimer: This article is for general information only. It isn’t legal, financial, or technical advice. Every organisation is different – get tailored guidance before making decisions that affect your people, data, or systems.
