AI Governance for Australian Nonprofits: Privacy, Risk & Compliance Guide
AI is rapidly reshaping Australia’s social and community services. Tools for case note summarisation, triage, transcription and safeguarding alerts are increasingly used by frontline teams.
But these benefits come with serious risks like privacy breaches, bias, inaccurate inferences, poor transparency and the potential for harm in sensitive service areas. As a result, AI governance is now a core leadership and board responsibility.
This guide explains current Australian expectations, risk considerations, and practical governance steps for NGOs, NFPs and community health providers.
Why AI Governance Matters for Australian NGOs (Regulations & Risks)
Australian regulators have strengthened expectations around AI use across both public agencies and funded social services. Key obligations include:
- The Privacy Act 1988 applies to all personal information used or created by AI – including inferred and inaccurate (“hallucinated”) data.
- OAIC guidance warns organisations not to input personal or sensitive information into public AI tools like ChatGPT or Gemini.
- Government AI guidelines (including procurement rules) now influence requirements for funded NGOs, even if not legally mandated.
- States like Queensland require structured AI governance, transparency and documentation.
Practice Requirements
- Trauma‑informed practice, cultural safety, and frameworks like MARAM cannot be automated.
- AI must never replace professional judgment in high‑risk or complex client scenarios (family violence, mental health, child safety, disability).
For organisations handling highly sensitive data, these safeguards are essential.
Common Types of AI Used in Australian Community Services (and Related Risks)
1. Productivity, Case Notes & Documentation Tools
Frontline staff commonly use AI to summarise notes, generate letters or draft reports.
- Microsoft 365 Copilot: Ensure correct tenant configuration, data residency and governance.
- ChatGPT & Gemini (public versions): High risk due to data transfer to external servers—no identifiable information should ever be entered (check OAIC guidance).
- Transcription apps (Otter.ai, Fireflies, Read.ai): Check consent requirements, recording laws and overseas storage.
2. Client & Case Management Systems
Platforms like Lumary, SupportAbility and CareMaster increasingly embed rules‑based and machine‑learning features (eg predictive rostering, pattern detection).
Governance requirement: Treat embedded automation as AI, especially when influencing client outcomes or service decisions.
3. Intake, Triage & Crisis Navigation Tools
Some NGOs are using AI for crisis navigation, service triage and call summarisation.
Risks:
- People may not know AI is collecting or processing their information.
- AI must never replace practitioner-led risk assessment in family violence or mental health contexts.
4. Safeguarding, Incident Monitoring & Pattern Detection
Emerging AI systems detect crisis escalation, repeated contacts, anomalies or risk patterns.
Governance implications:
- These uses must meet principles of fairness, accountability and contestability.
- Boards must require human oversight, explainability and escalation pathways.
5. Internal Knowledge Assistants & Policy Tools
Lower‑risk tools that summarise policies, guide staff to procedures or assist with compliance.
These still require:
- Role‑based access
- Documentation
- Privacy impact assessments where personal information is involved
Legal, Ethical & Sector Requirements
Privacy & Data Protection
NGOs must:
- Address potential AI bias and impacts on marginalised communities
- Respect Indigenous Data Sovereignty principles
- Disclose AI use in privacy policies when it:
  - influences service decisions
  - collects or processes personal information
  - generates inferred client data.
Sector-Specific Obligations
Particularly for family violence, youth services, mental health, disability and addictions:
- Trauma‑informed practice must guide all AI-supported activities
- MARAM and clinical governance frameworks cannot be automated
- Human review is mandatory for all decisions that affect client safety and wellbeing.
Board Responsibilities and Governance Checklist
1. Oversight
- Maintain an AI register.
- Require Privacy Impact Assessments (PIAs) and AI Impact Assessments
- Approve procurement standards for AI‑enabled tools
2. Safety & Ethics
- Define decisions that must remain human-led (risk assessment, clinical decisions).
- Ensure AI use supports trauma‑informed, client‑centred practice.
3. Cultural Safety & Equity
- Respect First Nations data governance principles.
- Recognise the prevalence and impacts of bias in AI tools
- Ensure cultural safety in policies and practices for Responsible AI use
4. Risk & Documentation
- Integrate AI into your Risk Register.
- Maintain records of your decisions, training and approvals.
- Ensure role‑based access and data controls.
5. Transparency
- Update privacy policies.
- Inform service users when AI is used in their data processing.
Summary: For Leaders and Boards
- AI use triggers significant privacy obligations.
- Boards – not IT – hold governance responsibility.
- Sensitive data must never enter public AI tools.
- AI must not replace practitioner risk assessment.
- Cultural Safety is a core requirement of responsible AI use.
- Transparency and human oversight are non‑negotiable.
Frequently Asked Questions
Can Australian NGOs use ChatGPT for case notes?
Usually no. Personal, sensitive or identifiable client information must not be entered into any publicly accessible AI tool, including ChatGPT, Gemini, low‑cost AI bots, or consumer‑tier versions of Copilot.
Case notes should always be:
- written contemporaneously with the event
- specific to the client and context
- based on the practitioner’s professional judgement
- defensible and able to be explained by the writer if reviewed
The OAIC explicitly warns organisations not to input identifiable information into public AI tools, as these systems may store, transmit or reuse data outside your control. Because case notes contain highly sensitive client information, they cannot be safely or lawfully created, summarised or drafted using public AI applications.
Secure, organisation‑approved tools with correct governance, data residency and access controls must be used instead.
Do NGOs need AI Impact Assessments?
Increasingly, yes. Expectations set for public agencies are flowing through to funded NGOs via funding agreements and procurement requirements.
Assessments help identify and manage risk and demonstrate responsible AI use.
What AI decisions must always remain human-led?
Family violence risk assessment, clinical judgment, safety planning and any decisions affecting a person’s wellbeing.
Are transcription apps safe for sensitive meetings?
Only with explicit informed consent, lawful recording and secure data storage.
Responsible AI Governance in NZ & Australia: A Board-Ready Guide
Artificial intelligence (AI) is no longer a futuristic concept. It is embedded in everyday operations across health, disability, Māori/iwi, community and creative sectors — from automated scheduling tools to generative platforms like ChatGPT and Copilot. For boards and managers, this shift brings both opportunity and risk. The challenge is clear: how do we harness AI responsibly while ensuring compliance, equity, and defensibility?
In Aotearoa New Zealand and Australia, the governance of AI is rapidly becoming a board-level priority. Regulators, funders, and communities expect organisations to demonstrate not only efficiency but also accountability. For New Zealand boards, this means aligning AI use with the Privacy Act, Te Tiriti o Waitangi obligations, and Māori data sovereignty principles. In Australia, boards must consider the AI Ethics Principles, NDIS and other health and human service compliance requirements, and emerging state-level procurement signals. Across both jurisdictions, the expectation is the same: AI governance must be practical, transparent, and audit-ready.
Why AI Governance Matters
AI tools promise efficiency gains, cost savings, and new ways of engaging with clients. Yet without governance, they also introduce significant risks:
- Compliance breaches: Using AI without clear policies can expose organisations to regulatory violations and penalties in areas like privacy, discrimination and copyright.
- Reputational harm: A poorly managed AI incident can erode trust with funders, clients, and communities.
- Cybersecurity vulnerabilities: AI platforms often rely on cloud-based data processing, creating new attack surfaces for malicious actors.
Boards cannot afford to treat AI as a technical issue delegated to IT teams or, worse, left to staff self-management. Governance is about oversight, accountability, and ensuring that every tool deployed aligns with organisational values and legal obligations. Responsible AI governance is not optional — it is a strategic necessity.
The Legal and Regulatory Landscape
Aotearoa/New Zealand
Boards must ensure AI use complies with the Privacy Act 2020, which sets clear expectations around data collection, storage, and use. Te Tiriti o Waitangi obligations require respect and protection for rangatiratanga, iwi/Māori rights and active steps to address historical and ongoing impacts of bias. The principles of Te Mana Raraunga emphasise that Māori data is a taonga, requiring governance frameworks to respect tikanga and intergenerational inclusion.
Australia
The Australian Government has released AI Ethics Principles, encouraging organisations to adopt fairness, transparency, and accountability in AI deployment. For disability providers, NDIS compliance adds another layer of responsibility, requiring defensible policies that protect vulnerable clients. In Queensland, procurement signals increasingly favour organisations that can demonstrate responsible AI governance as part of their compliance frameworks.
Boards operating across both jurisdictions must recognise that AI governance is not just about technology — it is about embedding defensibility, dignity, and cultural safety into every decision.
Board Responsibilities in AI Governance
Boards play a critical role in setting the tone for responsible AI use. Their responsibilities include:
- Oversight: Ensuring risk registers and vendor matrices are in place to track AI tools and their compliance obligations.
- Accountability: Embedding equity, dignity, and cultural safety into AI policies and practices.
- Practical tools: Approving compliance checklists, defensibility matrices, and audit-ready frameworks that staff can apply in daily operations.
By taking ownership of AI governance, boards signal to funders, regulators, and communities that their organisation is prepared, proactive, and principled.
Operationalising AI Governance
Responsible AI governance requires operational tools that staff and boards can apply daily. The following tools help make governance defensible and practical:
- Tool-by-tool compliance matrices: Each AI platform (ChatGPT, Gemini, Copilot, Otter, Fireflies, Read.ai) should be assessed against NZ and Australian legal requirements (a structured sketch follows this list). For example, ChatGPT may raise privacy concerns under privacy legislation in both countries, while Copilot’s integration with Microsoft tools requires careful vendor risk evaluation.
- Risk registers: Boards should maintain a live register of AI risks to assess, monitor and manage risk categories such as data privacy, cybersecurity, reputational impact, and bias/equity. (For more on risks, see Policies in the Age of AI Hallucinations.)
- Vendor matrices: Potential AI providers should be evaluated against your organisation’s compliance obligations, transparency of data use, and cultural safety standards.
- Policy & Procedure Oversight: All organisations need policies to guide safe and responsible use of AI. Boards don’t need to write them, but they do need to ensure that all staff have access to good policy advice that is regularly reviewed and updated as AI use in the organisation evolves. Training, as well as user-friendly resources like charts and forms, will help staff understand and adhere to the policy guidance.
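As a rough, hedged illustration of what one row of a tool-by-tool matrix or AI register might look like in structured form, here is a minimal Python sketch. Every field name, tool entry and rating below is hypothetical; most organisations would keep this in a spreadsheet, but the structure is the point.

```python
from dataclasses import dataclass, field

@dataclass
class ToolAssessment:
    """One row of a tool-by-tool compliance matrix (illustrative fields only)."""
    tool: str
    data_residency: str        # where the vendor stores and processes data
    privacy_status: str        # e.g. "PIA completed; reviewed annually"
    risk_rating: str           # e.g. "low", "medium", "high"
    approved_uses: list = field(default_factory=list)
    prohibited_data: list = field(default_factory=list)

# A live register is simply a list of these rows, reviewed by the board.
ai_register = [
    ToolAssessment(
        tool="Copilot (enterprise tenant)",
        data_residency="Tenant-bound, AU/NZ region",
        privacy_status="PIA completed; reviewed annually",
        risk_rating="medium",
        approved_uses=["drafting internal documents", "summarising meetings"],
        prohibited_data=["identifiable client information"],
    ),
]

# The register can then answer simple governance questions, such as
# which tools carry high risk or lack a completed privacy assessment.
print([t.tool for t in ai_register if t.risk_rating == "high"])  # [] here
```

Whether it lives in code, a spreadsheet or a governance platform, the value is the same: one row per tool, with residency, privacy status, risk rating and permitted uses visible at a glance.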
Embedding Māori Governance Principles
For New Zealand organisations, AI governance must reflect Te Tiriti o Waitangi and Māori data sovereignty. Governance must therefore incorporate human monitoring and oversight of AI outputs for Western bias, and alignment with Te Mana Raraunga data principles. In this way, AI governance will be not only legally defensible but also culturally grounded and equitable.
Five Steps to Audit-Ready AI Governance
Boards can adopt a simple, repeatable framework:
1. Identify AI tools in use across the organisation.
2. Map risks and compliance obligations for each tool.
3. Develop defensible policies and registers aligned with NZ and Australian standards.
4. Train staff/kaimahi with accessible materials that translate complex compliance into plain language.
5. Review and refine regularly, ensuring board oversight and audit readiness.
This framework will ensure that governance is proactive, practical, and defensible.
Case Study: Disability Service in Practice
A disability service in Aotearoa New Zealand introduced AI scheduling tools to manage staff rosters. Initially, the tool raised privacy concerns around client data. By applying a risk register, vendor matrix, and staff training, the board ensured compliance with the NZ Privacy Act 2020 and embedded cultural safety principles. The outcome was improved efficiency, reduced audit risk, and stronger trust with funders and clients.
Conclusion & Next Steps
AI governance is no longer optional. Boards in New Zealand and Australia must lead with defensibility, dignity, and cultural safety. By adopting practical tools — risk registers, vendor matrices, and training resources — organisations can harness AI responsibly while meeting compliance obligations.
Next Steps:
- Download our AI Governance Checklist to get started.
- Book your free consultation with The Policy Place to discuss your AI governance and other policy needs.
Policies in the Age of AI Hallucinations
In everyday life, we wouldn’t rely on an adviser who is known to hallucinate from time to time. For the same reason, we cannot rely solely on generative AI tools for policy advice and development.
In this post, we focus particularly on the AI risk of “hallucination” and error and how best to manage these risks.
Hallucinations
Hallucinations are a well-known risk of using generative AI. They occur when an AI model makes up facts to respond to a prompt. They reflect the fact that AI models are predictive systems designed to produce the most probable and plausible answer, not necessarily the most accurate or truthful one.
AI hallucinations can be hard to identify because they are typically framed in a convincing way.
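To see why a purely predictive system produces confident falsehoods, consider this deliberately tiny Python sketch. It is not how a real model works internally (real models use neural networks over tokens), but it captures the core behaviour: return the statistically most likely continuation, with no concept of truth. All the training text is invented.

```python
from collections import Counter, defaultdict

# Toy "language model": counts which word follows which in training text,
# then always predicts the most frequent continuation.
corpus = (
    "the report was finished in march . "
    "the report was finished in april . "
    "the report was finished in march ."
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word: str) -> str:
    # Returns the likeliest continuation: plausible and confident,
    # but never checked against any source of truth.
    return following[word].most_common(1)[0][0]

print(most_probable_next("in"))  # "march": the most common answer, true or not
```

The model answers “march” with complete confidence even if the report was actually finished in April; frequency in the training data, not accuracy, drives the output.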
Why are AI outputs so convincing when they are wrong?
I asked ChatGPT this question.
In its own words, the chatbot explained that “it was trained to sound convincing, not to be right.” In other words, the chatbot hallucinates because its training taught it that a confident answer is more likely to be viewed as helpful than a hesitant one, and that an answer with the indicators of expertise (like tone and terminology) is more likely to be seen as credible and reliable. AI has learned to reflect the shape and appearance of expertise without necessarily having the expertise itself.
This is quite a different scenario from how AI is sold – see, for example, the description of GPT‑5 by the CEO of OpenAI as like having a “team of Ph.D. level experts in your pocket.” (NBC News, Aug. 8, 2025)
AI at The Policy Place
At The Policy Place we use AI to assist our policy development and review work. We treat it like a junior policy assistant who can help us with a range of tasks like initial drafts and summaries. We also use other sources, such as legislation, regulations, government websites, academic research and court and tribunal decisions, for developing, reviewing and updating policies and procedures.
We have previously posted about the highly publicised Deloitte case where AI-generated citations used in a report for the Australian Government were found to be wrong and included fictitious citations. There have also been a number of legal cases reported overseas of AI used in cases and found to have produced fictional case citations and other inaccuracies. See here for a good list of Australian examples.
We understand how easily mistakes like this could happen. Unlike with our other sources, we find that checking AI outputs for hallucinations and errors is hugely time-consuming.
And it is not only hallucinations that are hard to spot. When using AI, we have noticed that our prompting generates more material than if we did the whole task by hand. Sometimes this is helpful and right on point. Other times it can be completely superfluous and tie us up needlessly in checking and re-checking.
So we’re still a work in progress, striving for the productivity and efficiency gains of AI use while wanting to maintain our high standards for accuracy and quality in our policies.
Can AI check and verify?
If only we could rely on AI to do this. But we can’t.
At best we can ask AI to verify its outputs against its own training data. It cannot check and verify its outputs against sources like legislation, organisational documents, academic databases and expert reports. It cannot assess the truth or veracity of something.
With RAG – Retrieval-Augmented Generation – things are better. Hallucination risks are significantly reduced because AI answers are grounded in authorised content, and AI outputs are more consistent. But the truth and reliability of AI outputs still depend on the quality of that authorised content and data.
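As a rough illustration of the RAG idea, here is a minimal, self-contained Python sketch. All passages, names and wording are hypothetical, and the retriever is a naive keyword-overlap one where real systems use vector embeddings and a document store; the grounding principle is the same.

```python
# Minimal RAG sketch: ground the model's answer in authorised content.
authorised_passages = [
    "Leave requests must be approved by a manager within five working days.",
    "Client records are retained for seven years and then securely destroyed.",
]

def retrieve(question: str) -> str:
    # Naive keyword-overlap retriever; production systems use embeddings.
    q_words = set(question.lower().split())
    return max(authorised_passages,
               key=lambda p: len(q_words & set(p.lower().split())))

def build_grounded_prompt(question: str) -> str:
    context = retrieve(question)
    # Constraining the model to the retrieved, authorised text is what
    # reduces (but does not eliminate) the hallucination risk.
    return ("Answer using ONLY the approved policy text below. "
            "If it does not contain the answer, say so.\n\n"
            f"Policy text: {context}\n\nQuestion: {question}")

print(build_grounded_prompt("How long are client records retained?"))
```

Note the caveat built into the prompt: if the authorised content is wrong or out of date, the grounded answer will be too, which is why the quality of that content still matters.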
Thinking about AI for policies?
If you’re thinking about using AI for your policies, think beyond the promises and “sell” of AI. Ensure you have the expertise and knowledge to check the AI outputs for quality, accuracy and hallucinations. Be proactive about managing the risks of hallucinations and errors, and ensure you have good policy guidelines for effective governance and management of AI.
Wanting to outsource your policies and procedures and the assurance of relevant policy expertise? Contact us NOW at The Policy Place.
AI Governance for SMEs and Non‑Profits in New Zealand
When AI Goes Wrong: A Lesson from Deloitte
Earlier this year, Deloitte was forced to partially refund the Australian Government after delivering a $440,000 report that relied heavily on generative AI (see “Deloitte to refund government, admits using AI in $440k report”, Financial Review, 5 October 2025). The report contained errors, fabricated citations, and even a fake court judgment. The fallout was swift: headlines, public scrutiny, and reputational damage for one of the world’s largest consultancies.
If a global firm with vast resources can stumble this badly, what does that mean for SMEs and non‑profits in Aotearoa/New Zealand who are already using AI — often without realising it?
From Xero’s invoice automation to Canva’s design suggestions and Microsoft 365’s Copilot features, AI is already embedded in the tools you rely on. The difference is whether your organisation has AI governance in place to use it responsibly.
Why AI Governance Matters for Small Organisations
For SMEs and non‑profits, AI governance isn’t about slowing down innovation — it’s about protecting your people, your clients, and your mission. Unlike large corporates with compliance teams, smaller organisations often rely on lean staff and limited resources. That makes it even more important to have responsible AI governance in place.
Strong governance ensures:
- Privacy protection under the NZ Privacy Act 2020
- Audit‑ready policies that meet compliance and sector quality standards.
- Equity and cultural safety, so AI doesn’t reinforce bias or undermine trust.
- Defensible decision‑making, where outputs can be explained and justified.
Common Risks: Privacy, Compliance, Reputation
AI can deliver efficiency and insight, but without governance it also creates risks:
- Privacy breaches if sensitive data is entered into public tools.
- Compliance gaps if outputs don’t meet legal and quality standards.
- Reputational damage if AI‑generated outputs are inaccurate, biased, or misleading.
As we saw with the Deloitte case, when AI is used without checks the fallout isn’t just technical — it damages public trust and organisational credibility.
Shadow AI: The Hidden Risk in Your Workplace
Even if your organisation hasn’t formally adopted AI, chances are your staff are already using it. This is known as Shadow AI — the unsanctioned use of tools like ChatGPT or Gemini without oversight.
Shadow AI happens because staff want to get their work done, but it creates blind spots for IT and leadership in your organisation. Without policies, you can’t control what data is shared, how outputs are used, or whether your compliance obligations are being met.
Practical First Steps for Boards and Managers
- Map usage — Identify where AI is already operating in your organisation.
- Set boundaries — Define what data can and cannot be entered into AI tools.
- Provide safe alternatives — Offer approved, customised AI tools that meet compliance standards.
- Educate staff — Train teams on risks, responsibilities, and defensible use.
- Monitor — Ensure systems are in place to monitor AI use and the ongoing alignment of tools to operational needs.
- Review regularly — Governance is not “set and forget” — it must evolve with the tools.
Building Defensible AI Policies in Aotearoa/NZ
AI governance is not just about risk management — it’s a trust signal. Funders, regulators, and communities want to know that your organisation is using AI responsibly. Policies and training to guide use are key.
At The Policy Place we help SMEs and non‑profits across Aotearoa/New Zealand move from “unaware AI use” to responsible, defensible AI governance.
DIY vs Expert: Why Audit‑Ready Policies Outperform AI Templates
When times are tough, it’s tempting for SMEs, trusts, volunteer boards, and operational managers to draft their own governance policies — using online templates or free AI tools. But when it comes to audit‑ready, defensible policies, shortcuts can cost more than they save.
At The Policy Place, we bridge the gap between good intentions and best practice. Here’s why engaging us isn’t just helpful — it’s essential.
🚫 Why DIY Policies Put Your Organisation at Risk
DIY policies often miss critical compliance nuances — especially audit requirements under NZ standards like NZS 8134:2021, SSAS levels 1–4, HealthCERT, HDC, and CHP. Common risks include:
- ❌ Missed legal obligations that leave you exposed
- ❌ Generic templates that don’t reflect your kaupapa or community
- ❌ Audit failures due to vague clauses or incomplete frameworks
- ❌ Equity blind spots, especially around Te Tiriti o Waitangi and cultural safety
DIY may feel cost‑effective, but it can undermine credibility, funding, and trust.
✅ The Policy Place Advantage: Audit‑Ready Policies
We don’t just write policies — we craft audit‑ready, sector‑aligned frameworks that hold up under scrutiny. Our work provides:
- Legally Sound Policies – aligned with NZ law and sector‑specific regulations.
- Defensible & Updated Policies – designed to withstand audits and board review.
- Follow‑Up Support – helping you address audit findings and embed improvements.
- Operationally Usable Guidance – clear, accessible, and actionable policies for frontline staff.
Policies and procedures are our superpower — we’re doing them every day, across multiple sectors.
🤖 AI Is a Tool — Not a Substitute
AI has benefits. It can:
- Speed up drafting
- Test readability
- Generate ideas quickly
But AI is not a substitute for expertise. At its core, AI is a system built on algorithms — a very fast recipe‑follower. It can mix ingredients and present something that looks like a finished dish.
What AI cannot do is:
- Understand your organisation, obligations, and risks
- Embed Te Tiriti principles or cultural safety
- Balance compliance, privacy, and equity
- Anticipate the scrutiny of auditors and funders
That’s why relying on AI alone for governance policies is risky.
🧩 Responsible AI Use: How We Support Safe Practice
At The Policy Place, we use AI responsibly — as one tool among many. We combine AI’s efficiency with human expertise, legal research, and sector knowledge. Our process includes:
- Checking against legislation, court decisions, Waitangi Tribunal decisions, and sector standards like NZS 8134:2021 and SSAS levels 1–4
- Embedding ethical, equitable, and culturally safe practice
- Developing policies and procedures that guide safe AI use in your organisation
This ensures every policy we deliver is not just well‑written, but legally, ethically, and culturally defensible. (For more, see How to Use AI for Writing Policies (Without Getting Burned).)
🌱 Value That Goes Beyond Templates
When you engage The Policy Place, you’re not just buying a document. You’re investing in:
- Strategic Clarity – connecting board strategy to operational practice
- Sector Leadership – reflecting emerging best practice, not just minimum compliance
- Collaborative Support – working with you to embed policies and prepare for audits
- Ongoing Value – access to our online platform with regularly updated policy content
🗣️ Final Word: DIY Is Brave. But Sector‑Ready Is Smarter.
We respect the drive to do it yourself. But managers and board members tell us they want to focus on their people and communities — not toil away on policies that never stop needing updates.
👉 If you want to lighten your load and protect your organisation with audit‑ready, defensible governance policies, partner with The Policy Place Ltd.
How to Use AI for Writing Policies (Without Getting Burned)
Artificial Intelligence (AI) tools like ChatGPT and Gemini are changing the way we work — and policy writing is no exception. These tools can help you get started quickly, giving you draft content in seconds instead of days. But when it comes to policies and procedures for your business or organisation, speed alone isn’t enough.
At The Policy Place, we think AI can be useful — if you know how to use it safely. Here’s how to make the most of AI tools to draft policy content without putting your organisation at risk, and where expert support still matters.
Step 1: Use AI to Get a First Draft or Template
AI tools are great at producing a basic structure. You can ask something like:
“Write a policy on remote working for a small not-for-profit organisation in New Zealand.”
You’ll often get a reasonable starting point: a definition, purpose, roles, responsibilities, and maybe a few procedures. This can help overcome blank-page syndrome and give you something to work from.
Good uses:
- Exploring structure and headings
- Drafting general content
- Brainstorming risks or responsibilities
But a warning:
AI content is often vague, outdated, or based on generic international templates that don’t reflect NZ laws or your specific sector. Sometimes it’s wrong.
Step 2: Review the Content Critically
Just because something is well written doesn’t mean it’s accurate or compliant. You really have to review the content to check that it is consistent with the law and other regulatory criteria that apply to your organisation, and that it properly reflects your mission, values and purpose (kaupapa).
Remember, unless instructed, AI doesn’t know:
- Which New Zealand legislation applies to your organisation/business
- Your funding contracts or audit requirements
- Your operational needs
- Your kaupapa (mission, values, aims)
- Whether content is up to date with sector standards
That’s where the risks start. Many AI-generated policies can look good but won’t hold up in an audit — or worse, if something goes wrong.
Step 3: Ask for Help to Make It Real
This is where we come in. At The Policy Place, we don’t throw out your AI-generated draft — we aim to get the most from AI to build faster and better policies for your organisation. We:
- Review and provide online policies to support compliance, clarity, and accuracy
- Enable you to tailor policies to your needs and legal obligations
- Check and monitor your online policies for quality and currency
- Update them as laws, contracts, and sector standards change
AI is a tool. It is not an advisor or an auditor, and it does not have regulatory and quality expertise or the expertise that comes from real-life management of diverse agencies.
Example: AI Draft vs Expert Review
AI version:
“All staff are expected to comply with data protection laws.”
Expert-reviewed version:
“Staff must comply with the Privacy Act 2020 and the Information Privacy Principles. The Privacy Officer is responsible for managing access requests, ensuring privacy training is completed annually, and reporting breaches to the Privacy Commissioner where required.”
Spot the difference? That’s the value of combining AI efficiency with real-world expertise.
Use AI — But Don’t Go It Alone
We encourage people to explore and use AI tools. They can be immensely helpful. They’re fast. But they don’t know your risks, obligations, or context. That’s why AI should be a starting point, not your final product.
If you want peace of mind knowing that your policies are developed, checked and reviewed against relevant standards by real-life humans with relevant expertise and experience, or if you want to build a system where AI and expert review work together, talk to us.
Ready to Future-Proof Your Policies?
Let’s work together to make your policies smart, practical, and compliant — with or without AI.
📩 Contact The Policy Place today — your policies are too important to leave to chance.
Why You Need a Comprehensive AI Policy
Artificial intelligence (AI) is becoming integral to many industries in Aotearoa, including social and health services. While AI offers benefits, it also poses significant risks that need to be addressed through comprehensive AI policies. That’s why we at the Policy Place have recently released our new AI policy for our online policy clients.
In this blog we consider the importance of having an AI policy in social and health service agencies, the risks of not having a policy and some of the key things to cover in an AI policy for community, social and health services. For our previous post on AI use in social and health services see here.
The Rise of AI in Workplaces
Artificial intelligence is no longer a futuristic concept; it is actively shaping how organisations operate.
The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn, released in May 2024, found that AI use is now pervasive in workplaces worldwide and that it brings benefits in time savings, efficiency gains and greater enjoyment of work.
However, the Report also identified a pervasive risk: in workplaces without an AI policy or other guidance, 78% of employees had taken things into their own hands and were bringing their own AI tools to work.
The Risks of AI Use without AI policies and guidance include:
- Data Security Risks: AI systems can be vulnerable to cyber-attacks, which can lead to data breaches and loss of sensitive information. Without an AI policy, staff may input personal information and sensitive organisational data.
- Ethical and Legal Risks: AI use can lead to ethical dilemmas and legal issues, such as unauthorised use of personal data, breach of copyright and AI-driven decisions that are biased and breach human rights.
- Operational Risks: Relying on AI without proper oversight can lead to operational inefficiencies, errors, and potential harm to clients.
- Cultural Risks: AI data may not be sufficiently responsive to diverse cultural contexts and needs of different communities. Without proper AI policies and guidance, AI use risks undermining important cultural practices and values, particularly those protected by Te Tiriti o Waitangi.
The Importance of an AI Policy
An AI policy is the minimum starting point for a workplace to address these risks:
- Ensuring Ethical Use of AI: An AI policy helps ensure that AI tools are used ethically and responsibly. This is crucial in social, community and health services, where decisions made by AI can significantly impact individuals’ lives and well-being.
- Protecting Client Privacy: An AI policy guides how staff should use AI in alignment with the Privacy Act 2020 and privacy policies. This is particularly important for social, health and community services dealing with highly sensitive and confidential data.
- Maintaining Accountability: Clear guidelines within an AI policy guide staff on how they may use AI in their decisions and their duty of reasonable care. This is particularly important in health and social services, where transparency and trust are paramount.
- Preventing Discrimination: An AI policy will include checks that staff must make on AI-generated data before relying on it, and prohibitions against reliance on biased or unverified data.
- Honouring Te Tiriti o Waitangi: AI policies must recognise and protect Treaty of Waitangi rights. This includes ensuring that AI use does not disadvantage the iwi and whānau Māori that health and community services work with, and that data sovereignty and cultural considerations are respected.
Strategies to support an AI Policy
An AI policy is just the beginning for a workplace wanting to use AI. Like any policy, your AI policy needs to be backed up by a strong implementation strategy that includes the following:
- Regular Audits and Assessments: Conduct regular audits of AI systems to ensure they operate as intended and comply with ethical standards.
- Training and Awareness: Provide training for staff on the responsible use of AI and raise awareness about potential risks and ethical considerations.
- Bias Mitigation Strategies: Implement strategies to identify and reduce biases in AI systems, eg data checking, surveys and, if affordable, bias detection algorithms.
- Robust Security Measures: Apply strong cybersecurity protocols to protect AI systems from threats and ensure the integrity of data.
- Transparent Decision-Making: Ensure through training and policy that staff responsibilities for AI use are clearly articulated, and AI-driven decisions are transparent and explainable.
- Cultural Safety and the Treaty: Use strategies like training, bias detection systems and iwi/community consultation to ensure that the rights of tangata whenua under Te Tiriti o Waitangi are respected and protected when AI is used.
Conclusion
AI brings benefits as well as risks especially for the social, community and health services we work with. To get the most out of AI and help protect against the risks, an AI policy is a “must.” It’s arguably the beginning of a new policy era when, in response to rapidly evolving technology, we need to revise and evolve policies at an equally fast pace.
AI Policy and Procedures – issues for social and health services
As Artificial Intelligence (AI) rapidly evolves, health and social service organisations need AI policy and procedures to guide their ethical and responsible use of AI.
We derive many benefits from AI. In the social service and health sectors, these include support with decision-making and diagnostics, efficiency gains, improved record management and evidence-based practice.
But there are ethical risks of AI use, which vary across different AI applications.
If not addressed in organisational policies and procedures, these risks could threaten the heart and soul of social and health services. We therefore need AI fit-for-purpose policies and procedures to guide us.
AI policy and procedures
Key issues to think about and canvass in your AI policy include data management, transparency, roles and responsibilities, misinformation, and legal and regulatory compliance.
Data management
Used in health and other service areas, AI systems may collect a large amount of highly sensitive information about a person. This could be misused or used for non-consented or malicious purposes. The security of information collected and used through AI must therefore be addressed in policies and procedures, as must access to and sharing of that information.
In Aotearoa/NZ, the Privacy Act 2020 and other regulatory standards like the Health Information Privacy Code 2020 must be complied with.
Transparency in your AI policy
If we’re using generative AI for advice, diagnosis and to help make decisions that affect people’s lives, then it’s important that we understand the basis of the advice and information it provides.
The right to give or refuse informed consent is integral to quality care and service. We also want to provide person-centred care. Before relying on AI to help us with service provision, we should therefore understand, and be able to explain to those we serve, the criteria and information on which the AI is based. This should be reflected in our policies and procedures.
Other transparency issues to address in your AI policy include responsibilities for using AI, how the organisation uses AI in its services and activities, and the associated risks.
Roles and Responsibilities
AI isn’t for everyone, and it’s unrealistic to expect everyone in your organisation to have a good grasp of it. However, with AI likely to play an increasingly important role in your organisation, it’s important to think about AI responsibilities and to include these in your AI policy. Key responsibilities to cover include AI policy review and AI risk management.
Misinformation safeguards in your AI policy and procedures
As we already know, it is becoming harder to distinguish fact from fiction as the technology used to generate misinformation evolves so rapidly.
The risks are potentially disastrous. They range from reputational damage and loss of trust in individuals and organisations through to grievous harm when misinformation is relied on as fact.
AI policy addressing data security is crucial. But it’s not enough.
If we’re going to rely on generative AI, we need safeguards in our AI policy for AI output to be checked and verified.
The strategies will vary depending on the nature of the AI information, but can include the following (a simple illustration follows this list):
- monitoring and tracking to confirm that AI achieves its intended purpose
- monitoring error rates
- review of AI-generated advice by subject matter experts
- checking AI information against other sources known to be reliable and credible
- monitoring and evaluating feedback from clients about the impacts of AI
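As a simple, hypothetical illustration of checking AI outputs against known-reliable sources, the Python sketch below flags citations in an AI draft that do not appear on a human-verified source list. The source names and the draft are invented; flagged items would go to a subject-matter expert for review, not be rejected automatically.

```python
# Hypothetical safeguard: flag AI-cited sources that are not on a
# human-verified list, so a reviewer checks them before the draft is used.
verified_sources = {
    "Privacy Act 2020",
    "Health Information Privacy Code 2020",
}

def flag_unverified(ai_citations):
    # Returns the citations a human reviewer must verify.
    return [c for c in ai_citations if c not in verified_sources]

draft_citations = ["Privacy Act 2020", "Data Accuracy Act 2018"]  # second is invented
print(flag_unverified(draft_citations))  # ['Data Accuracy Act 2018']
```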
Without these checks, your organisation becomes highly vulnerable to the serious impacts of misinformation.
Keeping your AI policy and procedures fit for purpose
AI technology evolves quickly, which makes it challenging to keep our policies and procedures fit for purpose.
Social and health services are required to keep their policies regularly reviewed and updated. At The Policy Place we support our members through regular reviews and updates of their policies on a two‑ or three‑year cycle.
However, with your AI policy and procedures, it may be necessary to prescribe more frequent reviews and updates.
Conclusion
Given the rapid rate of change, we could easily think we should just wait and see before we launch into an AI policy. But don’t be fooled. There’s nothing to wait for.
We might not get it right the first time, or even the second time, with our policy. But we can change and evolve our policies and procedures as we gain more understanding of AI systems and risks.