AI Governance for SMEs and Non‑Profits in New Zealand
When AI Goes Wrong: A Lesson from Deloitte
Earlier this year, Deloitte was forced to partially refund the Australian Government after delivering a $440,000 report that relied heavily on generative AI ("Deloitte to refund government, admits using AI in $440k report", Financial Review, 5 October 2025). The report contained errors, fabricated citations, and even a fake court judgment. The fallout was swift: headlines, public scrutiny, and reputational damage for one of the world's largest consultancies.
If a global firm with vast resources can stumble this badly, what does that mean for SMEs and non‑profits in Aotearoa/New Zealand that are already using AI — often without realising it?
From Xero’s invoice automation to Canva’s design suggestions and Microsoft 365’s Copilot features, AI is already embedded in the tools you rely on. The difference is whether your organisation has AI governance in place to use it responsibly.
Why AI Governance Matters for Small Organisations
For SMEs and non‑profits, AI governance isn’t about slowing down innovation — it’s about protecting your people, your clients, and your mission. Unlike large corporates with compliance teams, smaller organisations often rely on lean staff and limited resources. That makes it even more important to have responsible AI governance in place.
Strong governance ensures:
- Privacy protection under the NZ Privacy Act 2020.
- Audit‑ready policies that meet compliance and sector quality standards.
- Equity and cultural safety, so AI doesn’t reinforce bias or undermine trust.
- Defensible decision‑making, where outputs can be explained and justified.
Common Risks: Privacy, Compliance, Reputation
AI can deliver efficiency and insight, but without governance it also creates risks:
- Privacy breaches if sensitive data is entered into public tools.
- Compliance gaps if outputs don’t meet legal and quality standards.
- Reputational damage if AI‑generated outputs are inaccurate, biased, or misleading.
As the Deloitte case shows, when AI is used without checks the fallout isn't just technical: it damages public trust and organisational credibility.
Shadow AI: The Hidden Risk in Your Workplace
Even if your organisation hasn’t formally adopted AI, chances are your staff are already using it. This is known as Shadow AI — the unsanctioned use of tools like ChatGPT or Gemini without oversight.
Shadow AI happens because staff want to get their work done, but it creates blind spots for IT and leadership in your organisation. Without policies, you can't control what data is shared, how outputs are used, or whether your compliance obligations are being met.
Practical First Steps for Boards and Managers
- Map usage — Identify where AI is already operating in your organisation.
- Set boundaries — Define what data can and cannot be entered into AI tools.
- Provide safe alternatives — Offer approved, customised AI tools that meet compliance standards.
- Educate staff — Train teams on risks, responsibilities, and defensible use.
- Monitor — Put systems in place to track AI use and keep tools aligned with operational needs.
- Review regularly — Governance is not “set and forget” — it must evolve with the tools.
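Even a simple technical control can support the "set boundaries" step. The sketch below is purely illustrative: it screens text for a few patterns (an email address, an IRD‑style number, a local phone number) before staff paste it into an external AI tool. The pattern names and regular expressions are assumptions for demonstration, not validated rules; a real policy would define its own sensitive‑data categories with legal and privacy advice.

```python
import re

# Illustrative patterns only -- a real policy would define these categories
# with legal and privacy advice, and the regexes here are rough sketches.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IRD number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),
    "phone number": re.compile(r"\b0\d{1,2}[ -]?\d{3}[ -]?\d{4}\b"),
}

def check_before_sharing(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: flag text that should not leave the organisation.
flags = check_before_sharing("Client IRD is 12-345-678, email jo@example.org")
if flags:
    print("Do not paste into an external AI tool:", flags)
```

A check like this is no substitute for policy and training, but it turns an abstract boundary ("no client identifiers in public tools") into something staff can actually run.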
Building Defensible AI Policies in Aotearoa/NZ
AI governance is not just about risk management — it's a trust signal. Funders, regulators, and communities want to know that your organisation is using AI responsibly. Clear policies and staff training to guide use are key.
At The Policy Place we help SMEs and non‑profits across Aotearoa/New Zealand move from “unaware AI use” to responsible, defensible AI governance.

Call us now