AI

AI risks for boards: What do directors need to know?

13 min read
May 5, 2026 10:35:06 AM

Summary

AI risks for boards are no longer theoretical. From shadow AI and deepfake fraud to autonomous AI agents acting without human oversight, the risks sitting inside organisations right now are real, financial, and — in most cases — ungoverned. This article, based on a BoardPro webinar with governance experts Helen van Orton and Alexie O'Brien, maps the nine AI risks boards need to track, explains the four most urgent in depth, and sets out what good AI governance looks like in practice.

  • AI is already active in tools your organisation pays for — Microsoft Copilot, Google Gemini, Slack, Canva — often without a board-level decision to turn it on
  • Nearly half of employees globally are using AI in ways that don't comply with company policy, if one exists at all
  • Agentic AI — systems that act autonomously without human review — is already embedded in mainstream tools and raises serious questions about authorisation, permissions, and audit trails
  • Deepfake fraud is scaling rapidly; voice cloning now requires as little as three seconds of audio
  • Good AI governance does not require technical expertise at board level — it requires the right questions, visibility, and a consistent framework

What you'll learn 

 

“If your board hasn’t set a framework for AI use, your people have already built their own,” says Helen van Orton. “They’re using whatever tools they find useful, however they see fit. The question isn’t whether your board is going to lead the conversation on AI — it’s whether you’ll lead it, or discover afterwards that something has gone wrong.”

The risk that AI poses to organisations is rapidly moving from theoretical to real. Combined with the gap between what’s happening on the ground and what’s actually being governed from the top, there’s a need to assess how boards mitigate AI risk.

This article explains the AI risks directors need to know and offers steps for governing AI from governance experts Helen van Orton and Alexie O’Brien.

Why is AI a board issue?

Employees no longer need to search for AI tools; AI is now embedded in the tools an organisation already uses. Microsoft Copilot is in your Microsoft environment. Gemini is in your Google Workspace. Canva, Slack, and a growing list of other platforms all have AI capabilities switched on. And in most cases, a conscious, board-level decision to turn AI on was never made.

“Almost half of employees globally are using AI in ways that don’t comply with a company policy, if a policy even exists,” explains Alexie O’Brien. “And they’re not trying to be malicious — they’re actually trying to be productive. But without guardrails, that productivity comes at a governance cost.”

The financial stakes are no longer hypothetical either. According to IBM/Ponemon’s Cost of a Data Breach Report 2025, organisations with a high level of shadow AI (the use of AI without the organisation’s permission) paid an average of $670 000 more per breach than those with little or no shadow AI. Furthermore, only a minority of organisations have the technical controls in place to prevent employees from uploading confidential data to public AI tools.

When management pushes back on board questions about AI, saying that it’s too operational, that’s actually not correct, says O’Brien.

“As boards, we need to make sure we’ve got a clear strategy around AI. We’re not asking how you’re configuring the tools or even how you’re using them. We’re asking have we got the appropriate guardrails around the technology that touches on data privacy, on cyber risk, IP protection, regulatory governance, as well as workforce strategy. This points to the fact that AI is squarely a board issue.”

The risks boards need to track

There are five AI risks that your board should already have on their radar, and four more urgent risks.

Monitoring the following five risks should already be a business-as-usual activity.

Hallucinations

Hallucinations are when AI tools present false information with complete confidence. When using this information, the liability sits with your organisation, regardless of whether a human or an algorithm made the call. ‘AI told me’ is not a legal defence.

Bias and discrimination

AI can make unfair decisions about people, and at scale. Again, if your organisation relies on a tool trained on biased data, then the board still carries the associated liability.

Undocumented AI infrastructure

AI infrastructure becomes risky when people build workflows, automations, and custom AI tools that encode critical business logic with no IT oversight. If that person leaves the organisation, the work they did using AI can’t be understood, maintained, or audited. Continuity becomes impossible.

Vendor and supply chain risk

AI is embedded in the products your organisation is purchasing, often without disclosure. Your HR platform, CRM or accounting software may be making AI-powered decisions without your express knowledge.

Insurance coverage gaps

D&O and professional liability policies are increasingly uncertain regarding AI-related incidents. Some major US insurers have taken the step to introduce explicit AI exclusions. It’s important to check your coverage for AI use and have a conversation with your broker.

The following four risks — shadow AI, agentic AI, prompt injection, and deepfake are often 'invisible' risks that require urgent attention from organisations.

Blog images - inline (4)
What are the 9 AI risks boards need to know?
Hallucinations, bias and discrimination, undocumented AI infrastructure, vendor and supply chain risk, insurance cover gaps, shadow AI, agentic AI, prompt injection, and deepfake.

Shadow AI: Do you know what tools your organisation is using?

Shadow AI is the use of AI tools, often free, publicly available ones, without the organisation’s approval, visibility, or controls. And it does not look like a breach. It looks like your best people trying to do their best work.

Shadow AI is the financial analyst building a reconciliation dashboard and pasting ledger data into ChatGPT. It’s the executive using a browser extension to summarise every meeting, including ones they shouldn’t be recording. It’s also the team member who uploads a board pack to a free AI tool to get a quick summary before a meeting. None of these people are going rogue — but none of the information they are sharing with the AI tools sits within your governance framework either.

With shadow AI, the tools are not vetted, and no one really knows where the data goes. What makes shadow AI particularly hard to govern is that it has moved beyond standalone apps. For example, Microsoft Copilot has been found reading emails marked confidential, bypassing data loss prevention policies organisations had put in place specifically to prevent that activity.

“Even if you have all your machines locked down, AI is multi-modal,” says van Orton. “There is nothing to stop someone from taking a photo of a screen and putting it into their own GPT. This is why the use of AI is a people-led conversation as much as a policy-led one.”

Meetings also deserve more than a mention.

“If you are recording meetings that sit outside your governance frameworks, there’s a whole piece around confidentiality and smoking guns — and that data being available in court is particularly relevant when you’re recording a board meeting,” cautions van Orton.

Ask yourself and your board: if someone asked your board tomorrow what AI tools your organisation is currently relying on, could you answer?

Agentic AI: the risks when AI works without you

Most people understand AI tools as reactive — you ask a question, and you get an answer. Agentic AI is fundamentally different, and it is already inside your organisation whether you have consciously deployed it or not.

The role of an AI agent is not only to respond, but also to act. It executes tasks, triggers workflows, and makes decisions — often without a human reviewing the action. It might be your CRM responding to a customer inquiry, your accounting software categorising invoices, or your calendar making scheduling decisions. These aren’t dramatic deployments. They are quiet, incremental automations that are increasingly switched on by default.

The governance questions this raises have serious implications. Who authorised the AI tool to take the action? What are the boundaries of its permissions? What happens when it acts on bad or hallucinated data? Is there an audit trail?

“Unless you’re super clear on where to stop and where you want the human in the loop, it will follow your instructions — and what you’ve inadvertently asked it to create and do,” cautions O’Brien.

Gartner has projected that by 2029, at least 50% of knowledge workers will need to develop skills to work with, govern, and create AI agents for complex tasks. That is not an IT prediction. It’s a prediction about your frontline people in finance, operations, marketing, and customer experience.

“The systems are doing exactly what we tell them to do,” says van Orton. “We just might not have meant exactly what we wanted them to do.”

A question for your board — have you defined where AI agents are permitted to operate in your organisation, what permissions they hold, and where the human checkpoint sits?

Prompt injection: when AI follows instructions you never gave

Prompt injection sounds technical, but the concept is straightforward. It is the AI equivalent of social engineering — except the target is the AI, not the person.

Here is how it works. A bad actor hides an instruction inside a document, an email, or a data source. The instruction may be invisible to a human reader — white text on a white background, for example. But when your AI processes that content, summarising it or responding to it, it reads the hidden instruction and follows it. The result could be data leaking from your systems, a security policy being ignored, or a misleading summary being generated — all without anyone realising.

Spreadsheets carry the same risk. Researchers have demonstrated that AI tools reading large spreadsheets — which may contain hundreds of thousands of rows — can be tricked into taking actions like exfiltrating financial data by hidden instructions. If your teams are using AI to process files from external sources, the attack surface is real.

"If someone in your team downloads a skill or plugin from an unauthorised source, there could be a prompt injection embedded in it that they've unknowingly brought into your organisation,” cautions van Orton.

The governance implication: anywhere AI can read content it didn't write — spreadsheets, PDFs, emails, slides — the attack surface exists. Your teams need to understand this risk and apply appropriate caution to the files they feed AI tools.

Deepfake: learn to question what’s real and what’s not

Deepfake fraud is scaling rapidly. According to Deloitte Insights, one report found that deepfake incidents in fintech increased by 700% in 2023. Generating a convincing voice clone now requires as little as three seconds of audio from a source such as a conference presentation, a webinar recording, or a voicemail. Anyone with a public profile can be exposed.

For boards, the governance implications are direct. Voice and video can no longer be treated as reliable authentication. Financial controls that depend on verbal authorisation need to be reviewed. Your teams need to know that a video call showing the CEO, CFO, and several other executives — all requesting an urgent transfer — may not be real.

O’Brien notes a compounding risk."If one of your team members has already leaked data about a current project through shadow AI, a fraudster has current, accurate information about your organisation to make their impersonation more convincing."

The question is not just whether your board knows this risk exists. It is whether your financial controls have been updated to account for a world where seeing and hearing are no longer enough.

What is good AI governance? Moving from awareness to action

Good AI governance does not require technical expertise at the board level. But it does need the right questions, consistent visibility, and a framework that reflects how the organisation actually operates, rather than what the policy document states.

Five elements that move a board from awareness to action are:

  1. Get AI onto your risk matrix: Ensure AI is more than a single line item, and is across the dimensions that matter: data security, privacy, strategic alignment, and insurance coverage. If AI isn't on your risk register, you haven't made a conscious decision about it.

  2. Define your risk appetite for AI: Some parts of AI represent significant opportunity, not just risk. Your board needs to have an explicit conversation about where your organisation sits on that spectrum and reflect it in your risk appetite statement.

  3. Make AI a standing board agenda item: AI needs to be a regular, structured conversation, not a one-off deep dive or a crisis response. The landscape is moving too fast for annual reviews.

  4. Add AI literacy to your board skills matrix: This does not need to be a technical deep dive, but does need to be detailed enough to ask the right questions and set direction with confidence. Your ability to govern what you don't understand is limited.

  5. Get honest visibility on what AI tools are actually in use: Know the tools your people are using on personal accounts, personal devices, and through embedded features in sanctioned software — not just the tools that the IT team has approved. You cannot govern what you cannot see.

For practical advice on AI and governance, download the whitepaper The AI-enabled board: Leveraging secure AI to transform governance, improve productivity, and reduce the cost of board operations. 

Make sure you are paying for whatever you use, with a plan that has the model not trained on your data and enterprise security around it. That’s the most important guidance we can give you.

The next step — reviewing your AI use and policies

When your risk committee reviews AI, van Orton suggests four lenses:

  1. Risk appetite — have you explicitly discussed what your organisation's appetite for AI risk is? This conversation needs to happen before an incident, not after.

  2. Visibility — do you actually know what AI is in use across your organisation? This includes tools IT has sanctioned, tools people are using on personal accounts, and AI features embedded in the software you're already paying for.

  3. Strategy — AI is not only a risk conversation. It affects your business model, your competitive position, and your strategic plan. Boards that are only discussing AI as a risk are not governing well. The opportunity side of the conversation is equally important.

  4. Assessment — what framework does your organisation use to distinguish between high-risk and low-risk AI applications? Not all AI carries the same risk. Your governance approach needs to reflect that.

AI regulations in New Zealand and Australia

The regulatory picture is still developing. In New Zealand and Australia, current obligations sit primarily within existing frameworks — the Companies Act, Privacy Act, and relevant sector-specific regulation. Australia's Privacy Act is being updated, with changes due to take effect in December 2026.

For organisations operating in or selling into the European Union, the EU AI Act creates binding obligations now. Fines for non-compliance reach up to 7% of global annual revenue or €35 million — whichever is higher — with the largest penalties applying to breaches related to bias and fairness. If your organisation has any EU nexus, this is not a future consideration.

The pace of case law is also accelerating. Directors' duties apply regardless of how the information was processed.

FAQ

What is shadow AI?

Shadow AI is the use of AI tools by employees without the knowledge, approval, or oversight of their organisation's IT or governance teams. It typically happens not out of malice but because employees are trying to be productive and approved tools aren't keeping pace. The risk is that sensitive data — client information, financial data, strategic plans — leaves your organisation's security perimeter without anyone realising.

What is agentic AI?

Agentic AI refers to AI systems that don't just respond to queries but take autonomous actions — executing tasks, triggering workflows, and making decisions without requiring a human to review each step. It is increasingly embedded in mainstream tools, from CRM systems to accounting software, often switched on by default.

What is prompt injection?

Prompt injection is a type of attack where hidden instructions are embedded in content that an AI will process — a document, email, or spreadsheet. When the AI reads the content, it picks up the hidden instruction and follows it, potentially leaking data, bypassing a security policy, or generating a misleading output. It doesn't require a human to click anything; it exploits the way AI systems read and process information.

What should boards do about deepfake fraud?

Boards should ensure financial controls no longer rely on voice or video as authentication. Any process where a verbal or video instruction from a senior leader can authorise a significant transaction needs to be reviewed. Staff who handle financial transfers should be trained to verify through a second, independent channel — ideally by calling a known number directly, not one provided in the original communication.

Does good AI governance require technical expertise at board level?

No. Effective board-level AI governance is about asking the right questions, maintaining visibility over what AI is in use across the organisation, setting a clear risk appetite, and ensuring management is accountable for the answers. Technical depth sits with management. Governance sits with the board.

What's the difference between AI governance and AI security?

AI security focuses on the technical controls that protect AI systems — access management, monitoring, patching. AI governance is the broader framework: policies, accountability, risk appetite, oversight, and strategic alignment. Both matter, but they are different responsibilities. Security is largely a management function. Governance is a board function.

What should an AI policy include?

At a minimum, a policy should include tools approved for use, what data can and cannot be entered into AI tools, how outputs must be verified before use, what approval process exists for new tools, and how the organisation will monitor compliance. The policy needs to be communicated actively and not just published, and managers need to be equipped to have the conversation with their teams.

Meet the experts

Helen van Orton

Helen van Orton is the founder of Directorly and one of New Zealand’s leading voices at the intersection of governance and artificial intelligence. Through Directorly, she works with boards and executive teams across New Zealand and internationally to build AI fluency and strengthen governance capability, including through her senior leaders programme, The AI Empowered Leader. She is a current director of Co-operative Bank, Co-operative Life, and Centrix Group, with a governance career spanning former chair roles at the HR Institute of New Zealand and Answers Services Ltd, and directorships across financial services, retail, and technology sectors. Her executive career built deep experience in digital transformation, marketing, and customer experience. She is a graduate of the IoD’s Advanced Director Programme, a mentor in the Te Ara Tāwhaki governance programme, leads BoardPro’s AI Masterclass, and speaks regularly for BoardPro alongside Alexie on AI and governance. She is a contributor to BoardRoom magazine and the IoD and Govern 365 podcasts.

 

Alexie O'Brien

Alexie O'Brien is the Director of Leadership Academy.ai and works at the intersection of AI, leadership, and board effectiveness. A former retail and financial services executive with senior roles at lululemon and Rip Curl, she brings practical commercial experience to her governance and advisory work. Alexie is a current board director, a Graduate of the Australian Institute of Company Directors, and holds qualifications in Commerce and professional certifications in coaching, DISC, and Emergenetics profiling.
Through her consultancy and Leadership Academy.ai, Alexie works with boards and executive teams across Australia, New Zealand, and internationally to build AI fluency, strengthen governance capability, and navigate the practical challenges of responsible AI adoption. She specialises in helping organisations move from AI curiosity to confident, governed use.

Resources for your board

Take your next steps by downloading the free AI Policy Template, or watching the on-demand webinar 'Unseen and unchecked: The real AI risks facing your board'. You can also visit BoardPro, sign up for a free trial, and learn more about how board management software enables you to run refreshingly simple board meetings.

CTA icon (3)
Free AI Policy Template
A simple, adaptable policy template
Download
Helen-van-Orton
Free webinar on the risks of AI
Learn from the experts
Watch now
bp-logo-whitebg-1
Sign up for a free trial of BoardPro
Run more effective board meetings
Try for free

Get your CEO report guide and template

Get Email Notifications