AI in board governance whitepaper: a practical guide for directors
The AI-enabled board: Leveraging secure AI to transform governance, improve productivity, and reduce the cost of board operations
For directors, chairs, and executives navigating the real risks and opportunities of AI in governance
Executive summary
AI is already in your boardroom. The problem is, it’s not being governed.
Right now, directors are using AI tools on your board papers. Yet most boards have no policy on AI use, no visibility into the tools being used, and no framework for managing risk.
This is not a technology problem. It is a governance problem.
While reports show that the majority of directors already use AI tools such as ChatGPT, Copilot, and Gemini for board work, few have a policy governing that use. This matters because board documents are not ordinary workplace content; they include strategic plans, legal advice, executive performance data, and commercially confidential financials.
And while privacy regulators in Australia and New Zealand have issued explicit guidance against entering sensitive information into publicly available AI tools, it continues to happen.
This AI whitepaper covers:
- How boards need visibility, policy, and assurance over the use of AI across their organisation, by everyone, including directors.
- How, when used with discipline, AI can strengthen governance practice, with the ability not only to summarise information, but also to interrogate complex board material, surface patterns across time, and support higher-quality challenges.
- How the ideal tool embeds AI within a controlled, purpose-built governance environment with clear data handling, permissions, and auditability.
Download the whitepaper to learn the practical outcomes of working with AI, including which AI tools are appropriate for board-level content, the risks they carry, and the safeguards for their use.
What you and your board will learn in this AI whitepaper
AI governance is now board work
AI is no longer an operational issue. Boards need visibility, policy, and assurance over how AI is being used, including by directors themselves.
The real opportunity goes beyond summaries
Summarisation is just the starting point: used well, AI doesn't replace judgement, it sharpens it. The real value of AI lies in its ability to:
- Interrogate complex board material
- Surface patterns over time
- Strengthen challenge and decision-making
Not all AI is equal
There is a critical difference between the ad hoc use of public AI tools and AI embedded within a secure, governance-focused environment. The risk profile is not the same, and boards need to understand the distinction.
What does the AI whitepaper include?
In this whitepaper, you’ll find:
- Research from professional bodies on the real-world use of AI in organisations
- Questions to ask your board to promote discussion around AI use and risk
- Practical use cases to demonstrate the ‘how’
Why does AI governance matter?
Board materials are among the most sensitive information in any organisation. They include:
- Strategic plans
- Legal advice
- Executive performance data
- Commercially sensitive financials
Privacy regulators across Australia and New Zealand have already warned against entering sensitive information into public AI tools.
And yet, it’s happening informally, invisibly, and without control.
This whitepaper provides guidelines for the use of AI by directors, chairs, executives, and other governance professionals.
Meet the experts
Helen van Orton
Helen van Orton is the founder of Directorly and one of New Zealand’s leading voices at the intersection of governance and artificial intelligence. Through Directorly, she works with boards and executive teams across New Zealand and internationally to build AI fluency and strengthen governance capability, including through her senior leaders programme, The AI Empowered Leader. She is a current director of Co-operative Bank, Co-operative Life, and Centrix Group, with a governance career spanning former chair roles at the HR Institute of New Zealand and Answers Services Ltd, and directorships across financial services, retail, and technology sectors. Her executive career built deep experience in digital transformation, marketing, and customer experience. She is a graduate of the IoD’s Advanced Director Programme, a mentor in the Te Ara Tāwhaki governance programme, leads BoardPro’s AI Masterclass, and speaks regularly for BoardPro alongside Alexie on AI and governance. She is a contributor to BoardRoom magazine and the IoD and Govern 365 podcasts.
Alexie O’Brien
Alexie O'Brien is the Director of Leadership Academy.ai and works at the intersection of AI, leadership, and board effectiveness. A former retail and financial services executive with senior roles at lululemon and Rip Curl, she brings practical commercial experience to her governance and advisory work. Alexie is a current board director, a Graduate of the Australian Institute of Company Directors, and holds qualifications in Commerce and professional certifications in coaching, DISC, and Emergenetics profiling.
Through her consultancy and Leadership Academy.ai, Alexie works with boards and executive teams across Australia, New Zealand, and internationally to build AI fluency, strengthen governance capability, and navigate the practical challenges of responsible AI adoption. She specialises in helping organisations move from AI curiosity to confident, governed use.

