
There are two ways to be an AI-enabled director. Only one works.

10 min read
May 13, 2026 8:43:23 AM

BoardPro recently asked directors across hundreds of boards a simple question: where is the biggest gap between what your board says about AI and what's actually happening inside your organisation? The responses were strikingly consistent. Directors didn't describe a problem of opinion. They described a problem of position.

"The board is AI-aware, but not AI-enabled."

"Management is leading the board on AI."

"We don't know what we don't know."

Those lines, taken from a recent BoardPro survey of directors and board members (the full findings will be published shortly), capture a sentiment most governance professionals will recognise immediately. And the pattern is global: Deloitte's 2025 survey of nearly 700 directors across 56 countries found 31% still don't have AI on the board agenda at all, and two-thirds describe their boards as having "limited to no knowledge or experience" with it.

The question facing most boards in 2026 isn't whether AI matters. It isn't even whether AI should be allowed. AI is already in the building, being used by staff, embedded in the tools your organisation pays for, shaping the papers you read – and the board hasn't yet caught up to its own organisation.

The harder question, the one most directors are quietly asking themselves, isn't in the board papers. It's this: what do we do now that we're behind?

That's the question this article addresses. Not "should we adopt AI?" That's already been decided, by absence as much as by intention. The question now is what good adoption looks like.

There are two very different versions of an AI-enabled director, and the choice between them will determine which boards govern well over the next five years.

Two paths

Every board adopting AI is choosing, knowingly or not, between two paths:

The automation path uses AI to replace work. Summaries replace reading the papers. Recommendations replace forming a view. Outputs replace effort. The director gets faster. They cover more ground in less time. But the work being replaced is precisely the work that builds judgment: the slow read, the mental cross-reference to last quarter's pack, the question that surfaces only because you sat with the numbers for twenty minutes. Over time, the director on this path becomes shallower. Their judgment atrophies because it isn't being exercised.

The augmentation path uses AI to sharpen work. The director still reads the papers, but uses AI to surface the question they didn't know to ask, the precedent in last quarter's pack they'd forgotten, the financial pattern across three years of board reports that no single paper would reveal. The director walks into the meeting better prepared than ever. They get sharper because AI handles the search and recall, freeing them to do the thinking that only a director can do.

Read BoardPro's upcoming survey and you'll find directors articulating the augmentation path themselves, almost word for word. "AI is an enabler." "AI is already demonstrating its value not as a replacement for people, but as a tool that strengthens human work." "Productivity gains to allow humans to use their time to be more strategic."

This is a natural position the profession is arriving at. What's missing is the bridge between holding that view and acting on it.

One clarification, because this matters: not all automation is bad. No one needs to manually format a meeting agenda or hand-type minutes from a recording. The point isn't that machines shouldn't do clerical work. They should. The point is about judgment work. The reading. The questioning. The forming of a view. That work belongs to the director, and AI's job is to make it sharper, not to do it for them.

What good adoption actually looks like

If the augmentation path is the right one, what does it look like in practice? Four principles. Each one answers a tension the survey surfaced. Each one is something a director can carry into the next board meeting without needing a strategy retreat or a six-figure consultancy.

1. Adopt around a task, not a tool

One of the most consistent findings in the survey was the gap between talking about AI and doing anything with it. Directors described their boards as engaged in "much discussion and planning, little action," or "only superficially scratching the surface."

The fix is to stop adopting AI as a category and start adopting it task by task.

"Should our board use AI?" is a question with no useful answer. "What specific task can AI make better?" is a question that produces decisions. Pre-reading a 200-page board pack and flagging key risks. Recall from past papers. Drafting first-pass minutes for the secretary to refine. Pattern-spotting in three years of financial reports. Each of those tasks has a different risk profile, a different confidentiality consideration, and a different right answer. Treating them as one undifferentiated "AI question" is how boards end up in the discussion-without-action loop the survey describes.

A board that has decided AI is appropriate for board pack pre-reading but not for forming positions on strategic decisions has done more useful governance work than a board that has spent six months "exploring AI."

2. Use AI to sharpen director judgment, not substitute it

AI should deepen the thinking directors bring to the room, not replace it.

The richest use of AI by a director is to arrive at the meeting having interrogated the papers more thoroughly than they could have alone: finding the gap, the inconsistency, or the question management didn't expect. That kind of preparation makes the boardroom conversation deeper. The director brings sharper questions. The discussion goes somewhere it otherwise wouldn't.

The thinnest use is the director who pulls out their phone mid-meeting and asks an AI tool what they should do.

The first deepens the room. The second hollows it out, and worse, it confuses the source of the judgment being applied. The boardroom is where directors think together. AI's place is in the work that supports that thinking, including the preparation beforehand, and increasingly the capture and recall around it, not in forming the judgment itself.

As AI moves further into the meeting itself, through minute-takers, retrieval tools, and real-time summarisers, the test stays the same: is this tool helping directors think, or thinking for them?

This principle also responds to a concern the survey surfaced repeatedly: that boards risk becoming dependent on management to interpret AI for them. Using AI on your own terms, to sharpen your own judgment, is how a director closes that gap themselves rather than waiting for management to close it for them.


3. Verify before you trust

The most dangerous director isn't the one who refuses to use AI. It's the one who trusts it without verification.

Treat AI output the way you'd treat advice from a junior analyst: useful, often right, occasionally confidently wrong, and always something you check before acting on. AI tools hallucinate. They invent citations. They mis-summarise nuance. They are particularly prone to giving you the answer they think you want.

This is also where the survey's strongest scepticism lands, and rightly so. One respondent put it bluntly: "both the board and staff do not fully know what they don't know and both are unwittingly using AI and potentially exposing the organisation to risk." That instinct to be cautious is a good director's instinct. It shouldn't be argued out of. It should be channelled into a discipline.

The discipline is verification. If AI surfaces a number, you check the number against the source paper. If it summarises a position, you read enough of the original to confirm the summary is fair. If it suggests a question, you ask whether the question is the right one for the room, not just the most interesting one for the model.

This is slower than just trusting the output. It's also the difference between using AI well and being used by it.

4. Have a board position, not just personal habits

This is the principle the survey grounds most firmly in evidence, and the one most boards have not yet acted on.

Director after director described AI being used widely across their organisations with no policy, no governance framework, and no board position. "Adhoc AI use with no governance." "We have yet to write up a policy for AI use however we are all using it." "Shadow AI use is something that should have more concern at a Board level."

Boards that leave AI adoption to individual director preference, with some quietly using ChatGPT, some refusing to touch it, none of them talking about it, end up with a fragmented mess. There's no shared understanding of what's acceptable, no record of what's being uploaded where, and no framework for when something goes wrong. The board has effectively delegated the question to whoever happens to be most comfortable with the technology, which is rarely the same person as the one most attuned to the governance risk.

A short, written board position on AI use is now part of governance hygiene, alongside conflict registers and confidentiality protocols. It doesn't have to be elaborate. It needs to cover what tools are sanctioned, what data must never be uploaded, what's expected of directors using AI for board work, and how the position will be reviewed.

Note: Boards looking to put a position in writing don't have to start from a blank page. BoardPro's whitepaper The AI-enabled board – written with governance experts Helen van Orton and Alexie O'Brien – sets out the foundational concepts, the questions to put to your board, and the practical use cases that separate ad-hoc AI use from AI use that's governed appropriately. It also includes a policy template you can adapt.

The survey makes the cost of not doing this clear. Boards without a position aren't avoiding the AI question. They're answering it accidentally, one director at a time.


The risk of doing nothing

It's tempting to read all of this and conclude that "wait and see" is the cautious option. It isn't. Wait-and-see is its own answer to the AI question, and it's an answer the survey suggests most directors are already uneasy about.

While boards deliberate, AI is shaping board practice. Directors are using it privately to interrogate papers, with no shared norms about disclosure or appropriate use. Management is using it to prepare the papers the board reads, sometimes by pasting confidential material into free public tools the board has never sanctioned. Minute-takers, summarisers, and meeting tools are arriving in boardrooms whether or not the board has formed a view on them. And beyond the boardroom, AI is moving through the organisation in staff workflows, third-party tools, and customer interactions the board has limited visibility into. One respondent captured the consequence in a single line: "the board doesn't know what's happening in the organisation."

The longer the gap between board position and actual practice persists, the more the board cedes its governance role, not by deciding to, but by not deciding. The cost of that drift is now measurable. IBM's 2025 Cost of a Data Breach Report found that organisations with high levels of shadow AI (staff using AI tools outside any policy) paid an average of US$670,000 more per breach than those with little or none. ASIC has formally warned of "a governance gap that risks widening if AI adoption outpaces governance." When the corporate regulator and the breach economics both point to the same conclusion, "wait and see" stops being a neutral position.

Decisions about how AI is used in and around your boardroom are being made right now. If the board is absent from them, they're being made elsewhere, by people whose job is to move quickly rather than to govern.

This is fixable, and the fix isn't heroic. It doesn't require a consultancy, a strategy retreat, or a board with deep technical expertise. It requires the four principles above, a willingness to start adopting AI around specific tasks, and a written board position that other directors can read, push back on, and improve.

A board that does those four things in the next two meetings has closed more of the AI gap than most of its peers will close in a year.

The directors who'll thrive

This article opened with the question: what do we do now that we're behind? The answer isn't a strategy retreat or a six-figure consultancy. It's a different question for the next board meeting. Not "should we allow AI?" – that question is already past its expiry. The better question is: what kind of board do we want to be when we use it?

The directors who'll thrive in the next five years won't be the ones using AI the most. They'll be the ones using it to be sharper, which means better prepared, more thoughtful, asking the questions only a director who has done the work can ask. They'll be the ones who treat AI as a way to honour the seriousness of the role, not a way to skip past it.

That's a choice every board can make, starting right now.

The next step: From principles to practice

The four principles in this article are a starting point. Putting them into practice, including choosing which tasks AI is right for, agreeing on what verification looks like, and writing a position the whole board can stand behind, is where most boards get stuck.

1. Download the whitepaper
BoardPro's whitepaper The AI-enabled board: Leveraging secure AI to transform governance, improve productivity, and reduce the cost of board operations is written for that next step. It covers which AI tools are appropriate for board-level content, the risks and safeguards that matter, the questions to put to your board, and a policy template you can adapt.

Download the whitepaper

2. Stay tuned for the full research

The insights shared in this article are just a preview of our extensive study of hundreds of directors from New Zealand, Australia and beyond. The full results will be published shortly, providing a comprehensive look at the state of AI in the boardroom in 2026.

If you haven’t already, follow us on LinkedIn to be the first to receive the full report and see exactly where the "governance gap" lies – and how to close it.

Take this article into your next board meeting

Get the PDF version, formatted for printing or sharing with your board.
