Extra Q&A Session: Practical use cases for AI in governance
Your Questions Answered
We often receive more questions than we can answer during our webinars. To address this, we host a follow-up session with our expert panel to unpack and answer these questions, then share the responses with you, our audience. Watch the recording below, read the transcript, or watch the full webinar here.
Your Expert Panel
Simon Fawkes
Simon Fawkes is a specialist in AI transformation, education, and strategy who helps organisations identify the opportunities and risks of AI implementation.
With more than 20 years of business coaching experience, including strategy and marketing for hi-tech exporters, Simon helps leaders develop new innovations and deliver strategic objectives through AI. A recognised norm challenger, Simon has been praised for his ability to stretch people's thinking, and he has helped organisations realise the potential of AI to solve their core business problems. His teaching style is practical and hands-on, helping leaders grasp how to use AI in the right way to support their strategies and develop future-ready plans.

Alexie O'Brien GAICD
Alexie O’Brien is a seasoned Chair, Non-Executive Director, and AI Transformation Expert with a passion for integrating AI to enhance governance, productivity, and innovation.
As the founder of Leadership Academy.AI, Alexie helps boards and executive teams leverage AI to drive better decision-making and operational efficiency. Her leadership in AI stems from pioneering efforts in organisational transformation and executive coaching, and she continues to be a thought leader in using AI to unlock human potential and build future-ready organisations.
Helen van Orton
Helen is a seasoned Board Director, Entrepreneur, and AI expert with extensive boardroom experience, chairing multiple boards, including Answer Services Ltd, and serving on boards including The Co-operative Bank Ltd and Centrix Group Ltd.
As the owner of Directorly, a consulting business that helps boards and executive teams optimise their Boardroom performance, Helen specialises in integrating AI to enhance governance and decision-making. Her leadership in AI began with pioneering digital humans and RPA initiatives in her Exec days at Vodafone, and she continues to be a thought leader in using AI to drive boardroom innovation and operational efficiency.
Q&A with our experts: your questions answered
ANSWER: Boards are increasingly recognising the importance of understanding AI, yet many have not formally discussed it at the board level. Studies by Forbes and Deloitte indicate that while 83% of companies see AI as a priority, nearly half have not addressed it in board meetings.
To approach AI strategically, boards should consider frameworks from institutions like the Institute of Directors in Australia, New Zealand, and the US. Instead of focusing on specific tools, boards must take a governance-first approach, ensuring they have the right strategic oversight and capability.
Additionally, cross-functional involvement from finance, risk, and IT teams is crucial. Boards can also leverage established readiness frameworks, such as Gartner’s AI Maturity Model and IBM’s AI Ladder, to guide AI discussions and strategy.
ANSWER: This webinar has provided an AI policy framework as a resource. It was developed through detailed and complex prompts rather than a simple AI-generated request. The framework is intended to be reviewed, adapted, and customised to fit an organisation's specific needs, ensuring buy-in at all levels.
Additional AI governance frameworks worth considering include:
- The Australian Government's AI Ethics Principles
- Singapore’s AI Verify and AI Governance Policy, which balances governance with innovation
- OECD’s AI Policy Observatory
- NIST’s AI Risk Management Framework
A cross-functional approach is essential: beyond compliance, legal, and IT, operational roles should be involved to ensure AI is effectively embedded across the organisation.
For organisations looking to develop a customised AI policy, Google's NotebookLM is recommended. It allows companies to input their own data alongside existing frameworks to generate a tailored policy. However, reviewing and verifying the AI-generated output is crucial to ensure accuracy and alignment with organisational goals.
ANSWER: Data privacy and security are key concerns in AI governance. Boards should ensure AI governance charters align with regulatory requirements, covering data retention limits, sharing protocols, accountability structures, encryption, and access controls.
For financial data security, organisations must use compliance-ready AI platforms such as Azure or AWS and ensure providers meet standards like ISO 27001. Financial institutions must also comply with regulatory guidelines (e.g., APRA). Additional security solutions, such as LayerX and Zscaler, can enhance AI oversight by providing visibility into AI usage, monitoring data inputs, and sandboxing environments.
AI systems inherently generate digital footprints, including data inputs, decision-making logs, and training data sets. Organisations must establish strong governance frameworks to manage these digital records and ensure compliance. Guidance from the OAIC stresses restricting sensitive data inputs and training teams on AI policies and acceptable use.
Boards should prioritise auditability in AI governance. A PwC AI legal report highlights key considerations for boards, emphasising the need for thorough research and understanding to ensure proper oversight and compliance.
ANSWER: The board's role is not to evaluate AI tools—that is the responsibility of management. Instead, the board should focus on governance by asking the right strategic questions, such as:
- What is the return on investment (ROI)?
- What are the opportunities and risks?
- How can AI be effectively deployed within the organisation?
- Is there a cultural change program in place to support implementation?
While the board can approve management’s recommendations, its primary role is oversight, not selecting specific tools.
ANSWER: The nature of work is constantly evolving, and AI is accelerating these changes. While certain tasks traditionally done by junior professionals are being automated, the focus should be on AI-assisted roles rather than full replacement.
A historical perspective shows that just as computers transformed work decades ago, AI is reshaping fields like law and accounting today. For example, in the U.S., the OSCAR legal LLM framework allows junior lawyers to quickly access case law, shifting their focus from manual research to critical thinking and application.
Similarly, in finance and accounting, AI is removing repetitive tasks, enabling junior professionals to develop analytical and strategic skills. The workplace structure will continue evolving, but integrating AI should prioritise learning, mentorship, and skill development rather than eliminating foundational roles.
ANSWER: At the board level, it is crucial to document baseline operational metrics such as costs, productivity, error rates, and revenue streams. Boards should guide management in pilot-testing AI initiatives on a small scale to measure their impact and extrapolate return on investment (ROI).
Management should provide regular updates, such as quarterly or biannual reports, detailing AI’s influence on cost savings, revenue growth, productivity, and innovation. Non-financial metrics, including customer satisfaction, improved insights, risk mitigation, and new revenue channels, should also be considered.
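As a hypothetical illustration of the extrapolation step described above, a pilot's measured monthly savings can be scaled to the wider organisation and compared against total cost to produce a first-year ROI estimate. All figures, names, and cost categories below are illustrative placeholders, not from the webinar:

```python
# Illustrative sketch: extrapolating first-year ROI from a small AI pilot.
# All figures are hypothetical placeholders, not benchmarks.

def pilot_roi(monthly_savings, pilot_team_size, org_team_size,
              annual_licence_cost, one_off_setup_cost, months=12):
    """Scale a pilot's monthly savings to the whole organisation
    and return simple first-year ROI as a fraction of total cost."""
    scaled_annual_savings = monthly_savings * (org_team_size / pilot_team_size) * months
    total_cost = annual_licence_cost + one_off_setup_cost
    return (scaled_annual_savings - total_cost) / total_cost

# Example: a 10-person pilot saving $4,000/month, scaled to 100 staff.
roi = pilot_roi(monthly_savings=4_000, pilot_team_size=10, org_team_size=100,
                annual_licence_cost=120_000, one_off_setup_cost=80_000)
print(f"Estimated first-year ROI: {roi:.0%}")  # prints "Estimated first-year ROI: 140%"
```

In practice the extrapolation should be more conservative than a simple linear scale-up (pilot teams are often self-selected enthusiasts), which is one reason the baseline metrics above matter: they let management test whether the scaled estimate holds as the rollout widens.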
However, successful AI adoption requires more than implementation alone; it must be supported by cultural change within the organisation. Without the right mindset and adoption strategies, AI projects may fail to deliver their full potential.
ANSWER: AI can generate inaccurate information, known as hallucinations, because it prioritises providing answers over correctness. Human oversight is essential to verify AI-generated content. Microsoft's "Copilot" branding reinforces that AI should assist rather than replace human decision-making.
Shadow AI occurs when employees use unauthorised AI tools without organisational approval. While this can boost productivity, it also risks exposing sensitive information and bypassing established risk and ethical guidelines. Boards must address this by ensuring oversight and governance of AI usage.
Regulatory compliance is increasingly important, with regulators such as New Zealand's Privacy Commissioner focusing on AI explainability. Boards must understand applicable legal frameworks, such as the GDPR where it applies, and ensure AI decisions are transparent and unbiased. Collaborations, such as working with academic institutions, can help ensure compliance.
A clear, adaptable AI policy is crucial. Organisations should maintain a living document that guides AI usage and aligns with their specific needs. Webinar resources include an AI policy framework as a starting point for organisations to tailor to their requirements.