AI Governance for Australian Nonprofits: Privacy, Risk & Compliance Guide

AI is rapidly reshaping Australia’s social and community services. Tools for case note summarisation, triage, transcription and safeguarding alerts are increasingly used by frontline teams.

But these benefits come with serious risks like privacy breaches, bias, inaccurate inferences, poor transparency and the potential for harm in sensitive service areas. As a result, AI governance is now a core leadership and board responsibility.

This guide explains current Australian expectations, risk considerations, and practical governance steps for NGOs, NFPs and community health providers.

Why AI Governance Matters for Australian NGOs (Regulations & Risks)

Australian regulators have strengthened expectations around AI use across both public agencies and funded social services. Key obligations include:

  • The Privacy Act 1988 applies to all personal information used or created by AI – including inferred and inaccurate (“hallucinated”) data.
  • OAIC guidance warns organisations not to input personal or sensitive information into public AI tools like ChatGPT or Gemini.
  • Government AI guidelines (including procurement rules) now influence requirements for funded NGOs, even if not legally mandated.
  • States like Queensland require structured AI governance, transparency and documentation.

Practice Requirements

  • Trauma‑informed practice, cultural safety, and frameworks like MARAM cannot be automated.
  • AI must never replace professional judgment in high‑risk or complex client scenarios (family violence, mental health, child safety, disability).

For organisations handling highly sensitive data, these safeguards are essential.

Common Types of AI Used in Australian Community Services (and Related Risks)

1. Productivity, Case Notes & Documentation Tools

Frontline staff commonly use AI to summarise notes, generate letters or draft reports.

  • Microsoft 365 Copilot: Ensure correct tenant configuration, data residency and governance.
  • ChatGPT & Gemini (public versions): High risk due to data transfer to external servers—no identifiable information should ever be entered (check OAC guidance)
  • Transcription apps (Otter.ai, Fireflies, Read.ai): Check consent requirements, recording laws and overseas storage.

2. Client & Case Management Systems

Platforms like Lumary, SupportAbility and CareMaster increasingly embed rules‑based and machine‑learning features (eg predictive rostering, pattern detection).

Governance requirement: Treat embedded automation as AI, especially when influencing client outcomes or service decisions.

3. Intake, Triage & Crisis Navigation Tools

Some NGOs are using AI for crisis navigation, service triage and call summarisation.

Risks:

  • People may not know AI is collecting or processing their information.
  • AI must never replace practitioner-led risk assessment in family violence or mental health contexts.

4. Safeguarding, Incident Monitoring & Pattern Detection

Emerging AI systems detect crisis escalation, repeated contacts, anomalies or risk patterns.

Governance implications:

  • These uses must meet principles of fairness, accountability and contestability.
  • Boards must require human oversight, explainability and escalation pathways.

5. Internal Knowledge Assistants & Policy Tools

Lower‑risk tools that summarise policies, guide staff to procedures or assist with compliance.
They still require:

  • Role‑based access
  • Documentation
  • Privacy impact assessments where personal information is involved

Legal, Ethical & Sector Requirements

Privacy & Data Protection

NGOs must:

  • Address potential AI bias and impacts on marginalised communities
  • Respect Indigenous Data Sovereignty principles
  • Disclose AI use in privacy policies when it:
    • influences service decisions
    • collects or processes personal information
    • generates inferred client data.

Sector-Specific Obligations

Particularly for family violence, youth services, mental health, disability and addictions:

  • Trauma‑informed practice must guide all AI-supported activities
  • MARAM and clinical governance frameworks cannot be automated
  • Human review is mandatory for all decisions that affect client safety and wellbeing.

Board Responsibilities and Governance Checklist

1. Oversight

  • Maintain an AI register.
  • Require Privacy Impact Assessments (PIAs) and AI Impact Assessments
  • Approve procurement standards for AI‑enabled tools

2. Safety & Ethics

  • Define decisions that must remain human-led (risk assessment, clinical decisions).
  • Ensure AI use supports trauma‑informed, client‑centred practice.

3. Cultural Safety & Equity

  • Respect First Nations data governance principles.
  • Recognise the prevalence and impacts of bias in AI tools
  • Ensure cultural safety in policies and practices for Responsible AI use

4. Risk & Documentation

  • Integrate AI into your Risk Register.
  • Maintain records of your decisions, training and approvals.
  • Ensure role‑based access and data controls.

5. Transparency

  • Update privacy policies.
  • Inform service users when AI is used in their data processing.

Summary: For Leaders and Boards

  • AI use triggers significant privacy obligations.
  • Boards – not IT – hold governance responsibility.
  • Sensitive data must never enter public AI tools.
  • AI must not replace practitioner risk assessment.
  • Cultural Safety is a core requirement of responsible AI use.
  • Transparency and human oversight are non‑negotiable.

Frequently Asked Questions

Can Australian NGOs use ChatGPT for case notes?

Usually no. Personal, sensitive or identifiable client information must not be entered into any publicly accessible AI tool, including ChatGPT, Gemini, low‑cost AI bots, or consumer‑tier versions of Copilot.

Case notes should always be:

  • written contemporaneously with the event
  • specific to the client and context
  • based on the practitioner’s professional judgement
  • defensible and able to be explained by the writer if reviewed

The OAIC explicitly warns organisations not to input identifiable information into public AI tools, as these systems may store, transmit or reuse data outside your control. Because case notes contain highly sensitive client information, they cannot be safely or lawfully created, summarised or drafted using public AI applications.

Secure, organisation‑approved tools with correct governance, data residency and access controls must be used instead.

Do NGOs need AI Impact Assessments?

Yes—expectations for public agencies now flow directly to funded NGOs.

Assessments help identify and manage risk and demonstrate responsible AI use.

What AI decisions must always remain human-led?

Family violence risk assessment, clinical judgment, safety planning and any decisions affecting a person’s wellbeing.

Are transcription apps safe for sensitive meetings?

Only with explicit informed consent, lawful recording and secure data storage.

Hybrid Work & AI Governance for NGOs in AU & NZ

Across Aotearoa and Australia, hybrid and remote work are now woven into how many organisations operate. Community services and not‑for‑profits increasingly rely on digital collaboration to stay connected to the communities they serve.

At the same time, AI tools are quietly becoming part of daily work – whether or not organisations have formally adopted them. Staff may be using tools like Copilot, Otter.ai, or ChatGPT to draft reports, take notes, or streamline administration. This informal use – known as “shadow AI” – usually starts with good intentions: saving time, improving workflows, or reducing burnout.

But it also introduces risks around privacy, cultural safety, ethics, and compliance, especially for organisations working with vulnerable communities and sensitive data. Governance needs to catch up before issues arise.

Why Your Organisational Handbook Matters for Mission‑Driven Work

In values‑driven organisations, a handbook is more than a set of policies – it’s a living expression of who you are.

It is where culture, social purpose, and operational clarity meet. When done well, it supports staff, volunteers, trustees, and community partners to work safely, consistently, and in alignment with your purpose.

For today’s hybrid and AI‑enabled workplaces, your policy handbook needs to include:

Hybrid and flexible work practices

  • Clear expectations for remote and in‑community work
  • Onboarding grounded in organisational values and protocols
  • Support for wellbeing, supervision, and connectedness.

AI governance that protects people and purpose

  • Approved and non‑approved uses of AI
  • Cultural safety, equity, and bias considerations
  • Expectations for safe, ethical, privacy‑compliant use.

Privacy, cybersecurity, and data stewardship

  • Privacy Act 2020 (NZ) and Privacy Act 1988 (AU) compliance
  • Protection of community and client information, especially where it is sensitive or relates to vulnerable people
  • Secure digital practices for teams working across locations.

Regulatory and sector‑specific compliance

  • Funding contract requirements
  • WHS/HSW obligations
  • Audit and Accreditation criteria

A well‑crafted policy handbook ensures that even in distributed teams, people feel grounded, supported, and connected to the organisation’s purpose.

Two Ways We Support NGOs and Community Organisations

At The Policy Place, we work alongside organisations across Australia and Aotearoa that carry deep responsibilities to people and place. We offer two flexible solutions:

1. Bespoke One‑Off Policy Handbook

  • Built to reflect your values, governance structures, and compliance needs
  • Delivered as a complete, ready‑to‑implement resource
  • Ideal for organisations wanting clarity and cohesion in a single, fixed document

2. Online Policy Suite

  • A living, always‑current platform for staff, volunteers, governance and managers
  • Includes policies, templates, checklists, and practical tools
  • Updated as legislation, technology, and sector practice evolve
  • Perfect for organisations wanting ongoing alignment with best practice and emerging technologies

Both options are designed to support truly people‑centred organisations – those who balance compliance with culture, community expectations, and collective wellbeing.

Shadow AI: The Hidden Risk for People‑Focused Organisations

For NGOs, iwi, and social sector teams, the risks of unmanaged AI can be heightened because the information we hold often carries whakapapa, trauma histories, or sensitive personal data.

Key risks include:

Privacy and data protection

Staff may unknowingly input sensitive, client‑level, or culturally significant information into tools that store it offshore.

Cultural safety and equity

Generative AI can misrepresent cultural narratives, reinforce bias, or produce content misaligned with community values.

Accuracy and accountability

AI outputs can appear authoritative but be incorrect, harmful, or non‑defensible—particularly in care, health, education, justice, and social service settings.

Clear policy and guidance protect staff, clients, families and communities by outlining what is safe, ethical, and appropriate.

How We Support You

Whether you choose a bespoke handbook or an online policy suite, we help your organisation create:

  • Culturally responsive, values‑aligned policies
  • Clear guidance for hybrid and flexible working
  • AI‑safe practices that honour privacy, ethics, and community expectations
  • Accessible, plain‑language resources for staff, volunteers, and governance groups
  • Compliance‑ready frameworks that support the different contractual, legislative, and funding expectations that apply across both countries

Our goal is simple: to help your organisation stay confident, compliant, and grounded in its kaupapa—today and into the future.

Frequently Asked Questions (FAQ)

What is shadow AI?

It’s when staff use AI tools without formal approval or policy guidance. For community‑focused organisations, this is especially risky if sensitive or culturally significant information is involved.

Why create AI policies if we haven’t adopted AI?

Because staff likely already use these tools informally. Policies protect your organisation, your people, and your communities.

How does a handbook help manage AI risk?

It gives clear boundaries, guidance, and ethical expectations, helping staff use AI safely and responsibly.

Is AI becoming normal in NGO and iwi workplaces?

AI use is rapidly increasing, even if not formally acknowledged. Shadow AI is already part of everyday workflows, making proactive governance essential.

What’s the difference between a bespoke handbook and an online suite?

A bespoke handbook is a tailored, static document.
An online suite is a dynamic, regularly updated resource with ongoing improvements.

Mobile Device Policies for Social Services in 2026: Protecting Staff, Data and Vulnerable Clients

In 2026, effective mobile device policies for social services are essential for protecting staff, data, and vulnerable clients. Mobile devices have become indispensable in social and community services. Whether your staff are supporting tamariki, engaging with whānau, or working with highly vulnerable individuals, smartphones and tablets enable fast communication, real‑time documentation, and safer lone‑worker practice.

But with increased mobility comes increased risk, especially when managing sensitive or trauma‑related client information. A modern mobile device policy is now a core safeguarding tool, not a technical add‑on.

This 2026 guide outlines the latest best practices to ensure your organisation stays secure, compliant, and client‑centred.

Why Your Mobile Device Policy Matters More Than Ever (2026)

Mobile security threats have evolved dramatically. Attacks now often start through phishing texts, malicious apps, QR codes, and compromised public Wi‑Fi – and staff may never notice their device has been breached.

A strong mobile device policy protects:

  • Highly sensitive client histories
  • Family harm risk assessments
  • Case notes and safety plans
  • Staff safety in field environments

And critically, it protects the trust your clients place in your organisation.

Benefits of Mobile Devices for Social Service Work

1. Improved service delivery and responsiveness

Staff can access client files, referrals, and safety information instantly while in the field.

2. Greater flexibility for hybrid and community‑based work

Mobile systems support modern work patterns and improve staff retention by providing autonomy and flexibility.

3. Increased accuracy and compliance

Real‑time documentation ensures reliable records, reduces errors, and supports quality practice.

4. Enhanced staff safety

Phones provide instant access to support, directions, and emergency communication during home visits or crisis situations.

Critical Risks When Working With Vulnerable Clients

Social service organisations need mobile device policies that align with modern security expectations and client‑safety requirements.

1. Cybersecurity threats

Mobile‑first cyberattacks now target individuals, not systems. Social service workers are particularly vulnerable due to field‑based work and high email/SMS communication.

2. Privacy & confidentiality breaches

Social service organisations handle extremely sensitive data. A single breach can cause real harm to clients experiencing trauma, violence, or instability.

3. Device loss or theft

Mobility increases risk. A misplaced phone can expose client photos, contact details, case timelines, or legal information.

4. Blurred professional boundaries

Unclear expectations around mobile use can lead to burnout and inappropriate after‑hours contact with clients.

Essential Components of a 2026 Mobile Device Policy

1. Device Ownership, Resourcing & Liability

Policies should specify:

  • Whether devices are organisation‑issued, BYOD, or hybrid
  • Responsibility for loss, theft, and repairs
  • Data ownership and privacy obligations
  • Return processes and update requirements
  • Approved apps and system access rules

2. Acceptable Use: Clear Rules for Professional Practice

Your policy should outline:

  • Approved communication methods with clients
  • Prohibited behaviours (e.g., unapproved apps, personal backups)
  • Legal compliance (including the Harmful Digital Communications Act)
  • Restrictions on using devices for non‑work activities in client environments
  • Prohibited sharing of devices with family or tamariki

3. Accessing Client Information Safely

Your policy should cover:

  • Required login methods
  • MFA on all mobile access
  • Cloud vs local storage rules
  • Offline access restrictions
  • Sync expectations for case management systems

4. Security & Confidentiality Requirements

Your policy should enforce the following (a simple compliance‑audit sketch follows this list):

  • Device encryption
  • Zero‑trust security principles
  • Strong passcodes or biometrics
  • VPN use when offsite
  • Automatic locking and remote wipe
  • No public Wi‑Fi
  • Approved messaging platforms only
  • No screenshots or photo storage outside approved apps
  • Physical protection of devices from unauthorised users
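
These controls can also be checked programmatically where a device inventory or MDM/UEM report is available. The sketch below is a minimal, hypothetical Python illustration: the field names, the 5‑minute lock threshold and the idea of auditing from a flat device record are assumptions for discussion, not a real MDM schema or vendor API.

    # Illustrative only: audit a hypothetical device record against the policy
    # requirements listed above. All field names are assumptions, not a real schema.
    from dataclasses import dataclass

    @dataclass
    class DeviceRecord:
        device_id: str
        encrypted: bool              # full-device encryption enabled
        passcode_or_biometric: bool  # strong passcode or biometric lock
        mfa_enforced: bool           # MFA on all mobile access
        vpn_required_offsite: bool
        auto_lock_minutes: int       # automatic locking timeout
        remote_wipe_enabled: bool
        approved_messaging_only: bool

    def audit_device(d: DeviceRecord) -> list[str]:
        """Return a list of policy gaps for one device (empty list = compliant)."""
        gaps = []
        if not d.encrypted:
            gaps.append("Device encryption is not enabled")
        if not d.passcode_or_biometric:
            gaps.append("No strong passcode or biometric lock")
        if not d.mfa_enforced:
            gaps.append("MFA is not enforced for mobile access")
        if not d.vpn_required_offsite:
            gaps.append("VPN is not required when offsite")
        if d.auto_lock_minutes > 5:  # illustrative threshold only
            gaps.append("Automatic lock timeout exceeds 5 minutes")
        if not d.remote_wipe_enabled:
            gaps.append("Remote wipe is not enabled")
        if not d.approved_messaging_only:
            gaps.append("Unapproved messaging platforms are permitted")
        return gaps

    if __name__ == "__main__":
        example = DeviceRecord(
            device_id="field-tablet-07", encrypted=True, passcode_or_biometric=True,
            mfa_enforced=False, vpn_required_offsite=True, auto_lock_minutes=10,
            remote_wipe_enabled=True, approved_messaging_only=True,
        )
        for gap in audit_device(example):
            print(f"{example.device_id}: {gap}")

In practice these flags would come from your MDM/UEM reporting rather than being entered by hand; the point is that each policy requirement becomes something you can check, not just state.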

5. Work/Life Balance & Staff Wellbeing

Give thought to addressing these aspects of staff wellbeing:

  • Define working hours and after‑hours expectations
  • Limit client texting/messaging to approved windows
  • Distinguish crisis contact vs general communication
  • Clarify expectations for notifications and email monitoring

Policy Review and Updates

Updating mobile device policies for social services helps ensure your organisation stays compliant, secure, and able to protect vulnerable people. The following should be considered when updating:
  • Regular mobile security audits
  • Regular OS and app updates
  • Staff training on phishing and cyber safety
  • Quarterly reviews of access permissions
  • Incident reporting processes

Need Help Creating or Updating Your Mobile Device Policies?

Managing policies in the social services sector is complex and time‑consuming, especially when your organisation supports vulnerable clients whose safety depends on rigorous data protection. The Policy Place provides human service policy specialists who can help you with:

  • Policy development, review, and updates
  • Mobile device policy management
  • Remote‑access policy
  • Cyber‑safety policies and procedures
  • Compliance alignment with law and your accreditation/audit requirements (e.g. SSAS, HSQF, NDIS).

📞 CONTACT US TODAY — we welcome your call and are ready to support your organisation.

FAQ 1: Why do social service organisations need a mobile device policy in 2026?

Because mobile devices now handle highly sensitive client information, a clear mobile device policy helps protect vulnerable clients, prevent data breaches, support staff safety, and ensure legal and ethical compliance. It also establishes expectations around security, access, and staff wellbeing in hybrid and field‑based work.

FAQ 2: What are the biggest mobile security risks for social workers in the field?

The highest risks include phishing texts, malicious apps, insecure public Wi‑Fi, device theft, unauthorised access by family or tamariki, and accidental storage of client information on personal apps or cloud accounts. Without strong controls, these risks can directly compromise client safety and confidentiality.

FAQ 3: Should we allow staff to use their personal phones (BYOD) for client work?

It depends on your MDM/UEM capabilities. BYOD can work if you have strict controls like remote wipe, secure work profiles, approved apps, and encrypted communication channels. Without these, BYOD increases the risk of confidential client data being exposed.

FAQ 4: What security features should all work‑related mobile devices have?

Essential protections include device encryption, biometric login or strong passcodes, MFA (multi‑factor authentication), VPN for offsite access, automatic locking, remote wipe, and the use of approved secure messaging or case‑management apps. These safeguards reduce both cyber and physical security risks for staff and clients.

FAQ 5: How can mobile device use affect work/life boundaries in social services?

Mobile devices make it easy for client communication and work notifications to spill into personal time. A clear policy should define when staff are not required to be available, how after‑hours contact is managed, and what constitutes appropriate communication outside of scheduled hours.

FAQ 6: How often should our organisation review mobile device policies?

At least annually — but ideally more often. Mobile threats, legislation, and technology shift rapidly. Regular reviews ensure your organisation stays compliant, protected, and aligned with best practice for safeguarding vulnerable clients.

FAQ 7: What training should staff receive around mobile device safety?

Training should cover phishing awareness, safe app use, how to report suspicious activity, storing client data securely, using approved communication tools, and how to stay safe when working alone or in the community.

FAQ 8: Do mobile devices improve safety for frontline staff?

Yes. Mobile devices allow staff to request support, share their location, access safety plans, receive emergency updates, and maintain communication when working in homes or unfamiliar environments. Your policy should clarify expectations around carrying and using devices for safety.

Policy Handbook – Your single source of truth for remote and hybrid work

The Revolution Is Here

Remote and hybrid work are no longer experiments born of the pandemic – they’re the standard across industries. Staff/kaimahi expect flexibility, organisations rely on digital collaboration, and AI tools are increasingly part of daily workflows.

Even if your organisation hasn’t formally adopted AI, it’s almost certain that staff are already using it informally – what’s often called shadow AI. From drafting emails with Copilot to transcribing meetings with Otter.ai, these tools are being used for work purposes whether or not policies exist. That reality carries risks, and it should be a wake‑up call for leaders to get ahead of governance.

Why Your Handbook Matters More Than Ever

Sid Sijbrandij, CEO and co‑founder of GitLab, famously called the organisational handbook the “single source of truth.” For distributed teams, it’s the bible of the organisation: mission, values, policies, processes, training, and communication tools all in one place.

Today, that handbook must go further. It needs to cover:

  • Hybrid work practices – onboarding, supervision, wellbeing, and performance in flexible settings.
  • AI governance – not just for formal adoption, but to address shadow AI use that’s already happening.
  • Cybersecurity and privacy – protecting staff and client data in digital environments.
  • Compliance and regulation – ensuring your organisation meets evolving standards across Aotearoa New Zealand and Australia.

Without this foundation, staff will lack guidance and may end up impeded rather than empowered. Organisations risk confusion, inefficiency, or even regulatory breaches.

Two Ways to Build Your Handbook

At The Policy Place, we recognise that organisations have different needs and resources. That’s why we offer two approaches:

1. Bespoke one‑off handbook

  • Tailored to your organisation’s mission, values, and compliance requirements.
  • Delivered as a complete resource you can use immediately.
  • Ideal for organisations that want a fixed, customised reference without ongoing updates.

2. Online suite of policies and guidance

  • A dynamic, accessible platform that staff can reach anytime, anywhere.
  • Includes policies, procedures, and additional tools such as checklists, forms, and templates.
  • Designed for continuous improvement, with scheduled reviews and updates when regulations or technologies change.
  • Perfect for organisations that want a living, iterative resource aligned with hybrid work and emerging AI use.

Shadow AI: The Hidden Risk

Even if your organisation hasn’t formally adopted AI, shadow AI use is already here. Staff may be using transcription tools, chatbots, or generative platforms without oversight. That creates risks around:

  • Data privacy – sensitive information being entered into external tools.
  • Accuracy and defensibility – outputs that may be flawed or non‑compliant.
  • Equity and cultural safety – tools that don’t reflect organisational values or obligations.

Your handbook should explicitly address these risks, setting boundaries and guidance so staff know what’s acceptable and what isn’t.

How The Policy Place Helps

Whether you need a bespoke one‑off handbook or a living online suite of policies and guidance, The Policy Place helps organisations in Aotearoa and Australia build compliance‑driven, AI‑ready frameworks. We ensure your handbook empowers staff, meets compliance obligations, and adapts to the realities of hybrid work and shadow AI use.

Frequently Asked Questions (FAQ)

What is shadow AI?

Shadow AI refers to staff using AI tools informally – without official approval or policies. Examples include using transcription apps, generative chatbots, or AI writing assistants to complete work tasks. While often helpful, shadow AI carries risks around privacy, compliance, and accuracy for organisations.

Why does my organisation need AI policies if we haven’t formally adopted AI?

Even if your organisation hasn’t rolled out AI tools, kaimahi/staff are likely to be already using them. Without clear policies, this use is unmanaged and potentially risky. A handbook or policy suite ensures staff know what’s acceptable, protects sensitive data, and keeps your organisation compliant.

How can a handbook help manage AI risks?

A handbook provides a single source of truth. It sets boundaries for AI use, outlines compliance requirements, and gives staff practical guidance. Whether bespoke or online, it helps organisations move from unmanaged shadow AI to responsible, defensible adoption.

Is AI business as usual in New Zealand workplaces?

Not yet. Many organisations are cautious, but AI use is increasing – both formally and informally. Shadow AI means it’s already part of daily work, even if not officially recognised. That makes proactive governance essential.

What’s the difference between a bespoke handbook and an online policy suite?

  • Bespoke handbook: A one‑off, tailored resource that reflects your organisation’s mission, values, and compliance needs.
  • Online policy suite: A dynamic, accessible platform with policies, checklists, forms, and templates. It’s updated regularly to reflect regulatory changes and evolving technologies like AI.

What Is Organisational Justice and Why It Matters for Psychosocial Safety

Organisations work best when people feel respected, valued, and treated fairly. At The Policy Place, we focus on helping workplaces build fairness through clear, consistent, and inclusive policies. We have recently updated our health and safety and risk management policies to better address psychosocial hazards, including the effects of poor organisational justice.

This guide explains what organisational justice means, why it matters, and how employers can build fair and healthy workplaces in Queensland and New Zealand.

What Is Organisational Justice?

Organisational justice is about fairness at work. It includes how decisions are made, how people are treated, and how policies are applied.

When workers feel their workplace is fair, they are more likely to:

  • trust their leaders
  • feel safe speaking up
  • be engaged and productive
  • work well with others

When fairness is missing, it can create stress, conflict, confusion, and even psychological harm.

Signs of Poor Organisational Justice

Poor organisational justice can show up in many ways. Common examples include:

1. Privacy Breaches

Sharing personal information without consent or discussing performance in front of others.

2. Inconsistent Policies

Applying rules differently to different people

3. Unfair Penalties

Blaming workers for issues outside their control.

4. Cultural Insensitivity

Ignoring cultural needs or practices.

5. Lack of Reasonable Accommodations

Failing to support staff with accessibility needs or health concerns.

6. Discrimination

Treating some groups unfairly or applying policies unevenly.

7. Poor Handling of Misconduct

Not investigating complaints or failing to follow due process.

8. Unfair Work Allocation

Favouring certain people for shifts or opportunities.

9. No Clear Decision Process

Not explaining why decisions are made or what criteria were used.

How to Build and Maintain Organisational Justice

Below are practical, policy-aligned strategies to reduce psychosocial risks and promote fairness in your workplace.

1. Monitor Bias in Processes

Regularly review recruitment, promotion, and decision-making processes to identify and reduce bias.

2. Ensure Clear Workplace Expectations

Make sure everyone understands your Kaupapa, organisational values, Code of Conduct, and performance standards.

3. Strengthen Privacy & Confidentiality

Use training, policies, and clear procedures to ensure staff understand their obligations.

4. Provide Reasonable Accommodations

Create accessible, equitable workplaces that support all workers—including during onboarding.

5. Make Reporting Safe & Transparent

Offer clear pathways for raising concerns, including anonymous options, and ensure timely follow-up.

6. Maintain Open Communication

Share updates about organisational changes, policies, and decisions regularly and transparently.

7. Prevent Nepotism & Favouritism

Use transparent recruitment and selection processes and actively manage conflicts of interest.

8. Provide Regular Feedback

Adopt a “no surprises” approach to performance management by offering frequent, constructive feedback.

9. Use Fair Disciplinary Processes

Ensure disciplinary actions follow proper procedures and meet standards of procedural and substantive fairness.

10. Promote Cultural Competency

Offer training and guidance that improves cultural awareness and helps prevent unconscious bias.

11. Support Hauora / Wellbeing

Include psychosocial hazards in health and safety planning and give workers a say in risk controls.

12. Build Inclusive Policies

Review policies regularly to ensure they reflect Te Ao Māori, cultural safety, and equity principles.

13. Encourage Peer Support & Development

Create opportunities for debriefing, supervision, and collaborative problem‑solving.

14. Provide Mentorship

Support new staff and underrepresented groups with structured mentoring and development pathways.

15. Keep Communication Channels Open

Use hui, surveys, anonymous feedback tools, and suggestion boxes to encourage dialogue.

16. Celebrate Diversity

Recognise cultural events and promote a workplace where everyone feels valued.

17. Model Strong Leadership

Leadership must demonstrate transparency, fairness, and accountability—one standard for all.

The Risks of Poor Organisational Justice

Poor organisational justice is a psychosocial risk. It can lead to:

  • stress
  • burnout
  • psychological injury
  • low morale
  • high turnover
  • poor team culture

Like any hazard, it must be identified, monitored, and either eliminated or controlled.

Conclusion

Fairness is essential for wellbeing, safety, and productivity. When workers feel respected and included, they are more engaged, more trusting, and more committed. Strong organisational justice creates a safer workplace and a healthier culture.

At The Policy Place, we support organisations to build fairness through clear policies, practical tools, and culturally responsive guidance.

Frequently Asked Questions

1. What is organisational justice in the workplace?

Organisational justice refers to the perception of fairness in workplace processes, decisions, and interactions. It includes fair treatment, transparent communication, and consistent application of policies. High organisational justice supports wellbeing, trust, and positive workplace culture.

2. Why is organisational justice important for health and safety?

Organisational justice is a recognised psychosocial factor that influences worker wellbeing. Poor fairness can increase stress, lower morale, and contribute to psychological injury. Fair and transparent processes help create safer, healthier workplaces.

3. What are examples of organisational injustice?

Common examples include inconsistent disciplinary decisions, privacy breaches, favouritism, cultural insensitivity, unfair work allocation, and poorly managed complaints. These issues can harm wellbeing and undermine workplace trust.

4. How can employers improve organisational justice?

Employers can improve organisational justice by creating clear policies, applying decisions consistently, preventing bias, providing transparent communication, offering safe reporting channels, ensuring cultural competency, and involving workers in decision‑making.

5. Are psychosocial hazards linked to organisational justice?

Yes. Poor organisational justice is considered a psychosocial hazard because it can cause stress, burnout, and psychological harm. Managing organisational justice is part of meeting Work Health and Safety obligations in both Queensland and New Zealand.

6. How does organisational justice benefit workplaces?

Benefits include higher trust, stronger engagement, increased productivity, reduced turnover, fewer conflicts, healthier teams, and overall improved organisational performance.

Responsible AI Governance in NZ & Australia: A Board-Ready Guide

Artificial intelligence (AI) is no longer a futuristic concept. It is embedded in everyday operations across health, disability, Māori/iwi, community and creative sectors — from automated scheduling tools to generative platforms like ChatGPT and Copilot. For boards and managers, this shift brings both opportunity and risk. The challenge is clear: how do we harness AI responsibly while ensuring compliance, equity, and defensibility?

In Aotearoa New Zealand and Australia, the governance of AI is rapidly becoming a board-level priority. Regulators, funders, and communities expect organisations to demonstrate not only efficiency but also accountability. For New Zealand boards, this means aligning AI use with the Privacy Act, Te Tiriti o Waitangi obligations, and Māori data sovereignty principles. In Australia, boards must consider the AI Ethics Principles, NDIS and other health and human service compliance requirements, and emerging state-level procurement signals. Across both jurisdictions, the expectation is the same: AI governance must be practical, transparent, and audit-ready.

Why AI Governance Matters

AI tools promise efficiency gains, cost savings, and new ways of engaging with clients. Yet without governance, they also introduce significant risks:

  • Compliance breaches: Using AI without clear policies can expose organisations to regulatory violations and penalties in areas like privacy, discrimination and copyright.
  • Reputational harm: A poorly managed AI incident can erode trust with funders, clients, and communities.
  • Cybersecurity vulnerabilities: AI platforms often rely on cloud-based data processing, creating new attack surfaces for malicious actors.

Boards cannot afford to treat AI as a technical issue delegated to IT teams or, worse, to staff self‑management. Governance is about oversight, accountability, and ensuring that every tool deployed aligns with organisational values and legal obligations. Responsible AI governance is not optional – it is a strategic necessity.

The Legal and Regulatory Landscape

Aotearoa/New Zealand

Boards must ensure AI use complies with the Privacy Act 2020, which sets clear expectations around data collection, storage, and use. Te Tiriti o Waitangi obligations require respect and protection for rangatiratanga, iwi/Māori rights and active steps to address historical and ongoing impacts of bias.  The principles of Te Mana Raraunga emphasise that Māori data is a taonga, requiring governance frameworks to respect tikanga and intergenerational inclusion.

Australia

The Australian Government has released AI Ethics Principles, encouraging organisations to adopt fairness, transparency, and accountability in AI deployment. For disability providers, NDIS compliance adds another layer of responsibility, requiring defensible policies that protect vulnerable clients. In Queensland, procurement signals increasingly favour organisations that can demonstrate responsible AI governance as part of their compliance frameworks.

Boards operating across both jurisdictions must recognise that AI governance is not just about technology — it is about embedding defensibility, dignity, and cultural safety into every decision.

Board Responsibilities in AI Governance

Boards play a critical role in setting the tone for responsible AI use. Their responsibilities include:

  • Oversight: Ensuring risk registers and vendor matrices are in place to track AI tools and their compliance obligations.
  • Accountability: Embedding equity, dignity, and cultural safety into AI policies and practices.
  • Practical tools: Approving compliance checklists, defensibility matrices, and audit-ready frameworks that staff can apply in daily operations.

By taking ownership of AI governance, boards signal to funders, regulators, and communities that their organisation is prepared, proactive, and principled.

Operationalising AI Governance

Responsible AI governance requires operational tools that staff and boards can apply daily. The following tools help make governance defensible and practical:

  • Tool‑by‑tool compliance matrices: Each AI platform (ChatGPT, Gemini, Copilot, Otter, Fireflies, Read.ai) should be assessed against NZ and Australian legal requirements. For example, ChatGPT may raise privacy concerns under privacy legislation in both countries, while Copilot’s integration with Microsoft tools requires careful vendor risk evaluation.
  • Risk registers: Boards should maintain a live register of AI risks to assess, monitor and manage risks across data privacy, cybersecurity, reputational impact, and bias/equity (for risks, check out Policies in the Age of Hallucinations). A minimal sketch of a register entry follows this list.
  • Vendor matrices: Potential AI providers should be evaluated against your organisation’s compliance obligations, transparency of data use, and cultural safety standards.
  • Policy & Procedure Oversight: All organisations need policies to guide safe and responsible use of AI. Boards don’t need to write them, but they do need to ensure that all staff have access to good policy advice that is regularly reviewed and updated as AI use in the organisation evolves. Training, as well as user‑friendly resources like charts and forms, will help staff understand and adhere to the policy guidance.

Embedding Māori Governance Principles

For New Zealand organisations, AI governance must reflect Te Tiriti o Waitangi and Māori data sovereignty. Governance must therefore incorporate human monitoring and oversight of AI outputs for Western bias, and alignment with Te Mana Raraunga data principles. In this way, AI governance will be not only legally defensible but also culturally grounded and equitable.

Five Steps to Audit-Ready AI Governance

Boards can adopt a simple, repeatable framework:

  1. Identify AI tools in use across the organisation.
  2. Map risks and compliance obligations for each tool.
  3. Develop defensible policies and registers aligned with NZ and Australian standards.
  4. Train staff/kaimahi with accessible materials that translate complex compliance into plain language.
  5. Review and refine regularly, ensuring board oversight and audit readiness.

This framework will ensure that governance is proactive, practical, and defensible.

Case Study: Disability Service in Practice

A disability service in Aotearoa New Zealand introduced AI scheduling tools to manage staff rosters. Initially, the tool raised privacy concerns around client data. By applying a risk register, vendor matrix, and staff training, the board ensured compliance with the NZ Privacy Act 2020 and embedded cultural safety principles. The outcome was improved efficiency, reduced audit risk, and stronger trust with funders and clients.

Conclusion & Next Steps

AI governance is no longer optional. Boards in New Zealand and Australia must lead with defensibility, dignity, and cultural safety. By adopting practical tools — risk registers, vendor matrices, and training resources — organisations can harness AI responsibly while meeting compliance obligations.

Next Steps:

Best System for Keeping Policies Up to Date in NZ

Keeping policies current isn’t just about compliance — it’s about protecting your organisation, your staff, and the people you serve. In social and health care agencies, outdated policies can expose you to risk, compromise care, and weaken trust (see here for our blog about risks of outdated policies).

So, what’s the best system for keeping policies updated? Let’s compare four common approaches.

📄 Using General Templates

Pros:

  • Low upfront cost (often free or cheap).
  • Quick to access and download.

Cons:

  • Too generic & often inappropriate for social or health care.
  • Rarely updated with legislation changes.
  • High audit risk.

Verdict: Suitable only as a temporary fix.

🛠 Doing It Yourself

Pros:

  • Full control and tailoring.
  • Embeds your values and sector priorities.

Cons:

  • Time‑intensive, requires specialist knowledge (expensive when costed).
  • Risk of missing updates.
  • Vulnerable if staff leave.

Verdict: Works if you have governance expertise, but risky for most agencies. (DIY v Expert)

💼 Expensive HR Software

Pros:

  • Automated updates and reminders.
  • Integrated with HR systems.

Cons:

  • Very high subscription costs (priced per user).
  • Designed for corporate HR, not social care & not-for-profits.
  • Policies don’t cover all operational and governance areas.

Verdict: Reliable for HR, but poor fit for human services and sector‑specific compliance.

🌐 The Policy Place Online Policies

Pros:

  • Tailored for social, health, disability, iwi, and creative sectors.
  • Audit‑ready and aligned with NZ law, regulations (eg Ngā Paerewa) and Te Tiriti.
  • Regularly updated for legislative and sector changes.
  • Affordable compared to HR software and the cost of time and effort with DIY.

Cons:

  • Requires subscription or purchase.
  • Best for organisations valuing defensibility and cultural safety.

Verdict: Balanced option for agencies needing reliable, sector‑specific policies.

Comparison

Option | Cost | Suitability (Social/Health Services) | Reliability
General Templates | Low | Poor – too generic | Low
Do It Yourself (DIY) | Hidden/high | Moderate – depends on expertise | Variable
Expensive HR Software | Very high | Low – corporate focus; narrow scope | High (for HR)
Policy Place Online | Moderate/fair | High – sector specific | High

 

FAQs

Q: How often should policies be updated in social and health care agencies?
At least annually for any fast-moving area, or whenever legislation or sector standards change.

Q: Are free policy templates safe to use?
They can be a starting point, but they rarely meet audit or compliance standards.

Q: What makes The Policy Place different from HR software?
Policy content is specifically designed for the human services – i.e. social services, health and a range of community services. It’s aligned with NZ regulatory frameworks without the high corporate charges, and covers policies in Governance, Health and Safety, Quality Assurance, Service Delivery, Integrity, HR, Cybersecurity and more.

 

Conclusion

For agencies in social and health care, the real question isn’t just “how do we keep policies up to date?” but “how do we keep them defensible, sector‑specific, and practical?” General templates and DIY approaches often fall short. HR software is costly and misaligned. The Policy Place offers a middle ground: affordable, reliable, and tailored to the realities of your sector.

👉 Explore The Policy Place online policies today — designed for agencies like yours.

What Policies Do We Need? A Simple Guide for NZ Agencies

One of the most common questions we’re asked – especially by small teams, community and social services – is “What policies do we need?” Organisations want to be compliant with the law and regulatory standards but don’t want to drown in paperwork and needless policies.

In this post, we look at some of the “must-have” policies that every organisation needs, plus service-specific policies that depend on what you do.  Our focus is on policies for Aotearoa New Zealand that reflect local legal and contractual requirements.

Why Having the Right Policies Matters

We’ve written before about the need for current and relevant policies in your workplace. (See here –The Risks of Outdated Policies; The Policy Place: Audit Ready Policies for Health and Social Services)

But just to recap – having the right policies in place protects your organisation, your staff, and the people you serve.

Poorly designed or missing policies are one of the most common findings in audits, accreditation processes, and investigations. Getting the right policies in place is one of the simplest ways to strengthen your organisation and keep your agency compliant with the law and relevant regulations.

Core Policies Every NZ Community or Social Service Organisation Needs

There are a few “must-have” policies.  If nothing else, make sure these are in place and up to date.

– Health and Safety Policy

Covers your obligations under the Health and Safety at Work Act 2015. It will include how you identify hazards, manage risks, protect staff/kaimahi, manage incidents and emergency preparedness.

– Privacy and Confidentiality Policy

Required under the Privacy Act 2020 and the Health Information Privacy Code 2020 for any organisation that collects and deals with personal or health information. It covers information safeguards, information sharing and breach of privacy management.

– Code of Conduct

Outlines expected behaviour for governance, management and staff.  It supports values in action, organisational standards and disciplinary processes.

– Complaints and Feedback Policy

This is a must-have for transparency and ongoing learning and improvement in an agency. It supports client rights and is required in most funding arrangements. The staff version – Grievance and Disputes – encourages early resolution of workplace issues, guiding how concerns should be raised and addressed.

– Conflict of Interest Policy

For charitable and NFP organisations, a conflict of interest policy is essential. It supports transparency and integrity in board decisions and processes, and is particularly important for governance entities like boards and trusts and for organisations where whānau and community relationships overlap.

– Equity/Diversity Policy

This supports an organisation to give effect to Te Tiriti o Waitangi and comply with the Human Rights Act 1993 and Employment Relations Act 2000. It is a cornerstone for an inclusive and equitable culture.

– Recruitment, Safety Checking, and Police Vetting Policy

This is a legal and regulatory requirement for services working with children/rangatahi and vulnerable people.

Operational Policies Most Organisations Need

This next group of policies is not necessarily required by law but can be critical for smooth operations, audit readiness, and consistency.

-Information Technology and Cyber Security Policy

Covers safe and responsible use of devices, access control, cyber risks, management of data breaches, and secure disposal of equipment.

-Records Management Policy

Guides how documents are stored, accessed, retained, and destroyed.

-Finance and Delegated Authority Policy

Sets financial limits, outlines controls and steps to prevent and detect fraud.

-Child Protection/Safeguarding Policy

Guides the identification and reporting of child abuse concerns and is mandatory for children’s services.

-Professional Development Policy

This isn’t mandatory but will help organisations maintain standards and help workers maintain competence and meet professional requirements.

Service-Specific Policies You May Need

The policies you need depend on the activities and services you provide. Examples include:

– Work with children/tamariki or rangatahi

  • Child Protection Policy
  • Safer Recruitment
  • Incident Reporting and Escalation

-Health and Disability Services

  • Informed Consent
  • Medication Management
  • Infection Prevention and Control
  • Use of Restraint/Enablers
  • Behaviour Support
  • Crisis or Critical Incident Management
  • Emergency Preparedness

-Digital or remote services

  • Telehealth or Teleconferencing
  • Lone Working

-Home-based support or mobile services

  • Home Visiting Safety
  • Lone Worker Safety
  • Travel and Transport

Governance Policies for Boards and Trusts

Good governance depends on having a few key policies:

  • Guidance on Roles and Responsibilities
  • Board Procedures
  • Conflict of Interest
  • Financial/Risk Management
  • Trustee Responsibilities

How to Work Out What Your Organisation Needs

A simple checklist:

  • Do we have legal obligations that require specific policies?
  • Do our funders or accreditation standards specify required policies?
  • Do we deliver services that involve safety risks or vulnerable people?
  • Do we have areas that create confusion or inconsistency?
  • Do staff ask for clarity around key processes?

If you answer yes to any of these, you likely need a policy to cover it.

Common Mistakes Organisations Can Make

  • Having too many policies nobody reads
  • Copying policies from larger organisations
  • Outdated policies that don’t match current law and regulatory standards
  • Missing core policies required for audits or accreditation

How The Policy Place Can Help

We provide a one-stop shop for all your policy needs, whether in HR, employment, health and safety, privacy, cybersecurity, complaints and more. Our service is particularly effective for SMEs and community, health, and social service organisations across Aotearoa that want:

  • Policies to support compliance

  • Policies spanning all areas of governance and operations
  • Monthly reviews and updates to keep policies compliant
  • Policies you can customise
  • Full policy suites aligned with NZ law, standards, and contracts

If you’re unsure what policies your organisation needs, we can help you figure it out quickly and painlessly.

Your Policy Platform Just Got Smarter: Here’s What’s Changing

Policies shouldn’t feel rigid. They should move with your organisation — just like your people do. That’s why we’ve upgraded our policy platform: to give clients more flexibility, more tailoring, and more confidence in the way they manage compliance.

Our upgrade isn’t just a refresh. It’s a significant advance aimed at providing a smarter, more client-responsive service. Clients will notice a big difference!

Whether you have a small or large team or run a complex operation, you will find that the customisation and additional functions we now provide mean your policies fit more seamlessly into the way you work – not the other way around.

The New Norm – customisable online policies

We’ve introduced two new feature‑based plans, both with customisation included as standard. That means:

  • Your plan is flexible – no more piecing together of add‑ons
  • Better alignment of policies with your organisation’s size and complexity
  • Simpler, more transparent pricing — making it easier to budget and plan.

For the details of each plan see here.

Here’s the translation from the “old” plans to the new ones.

CURRENT PLANS → NEW PLANS

  • Essentials → Empower
    Existing customers will retain Analytics (Statistics), but for new customers this function will now sit with our Elevate plan.

  • Enhance / Quarterly Enhance (add paragraph + manage files) and Edit (edit core policy text) → Elevate
    Going forward, this plan will include some of the new functions we have recently rolled out:
      • Display policy lead
      • Limit policy visibility by role
      • Custom roles/permissions
      • Templates starter pack

  • Expand (add policy pages + categories)
  • Kit & Caboodle (full customisation suite)

What’s Changing

We’ve kept all the things our clients rely on – in particular, core policy content and resources to support compliance – but streamlined the way plans work:

  • New Plan Structure: Both plans now include customisation options as standard. Some add‑ons remain, but they’re content‑based — like additional policy suites or our Template Form Starter Pack.
  • Additional Functionality for clients already customising
  • Updated Pricing: After holding our price steady for five years, we’ve introduced a small increase. From 1 January 2027, we’ll move to annual CPI‑related adjustments (around 2.5%) instead of larger jumps every 4–5 years.

This approach means fewer surprises, more transparency, and pricing that grows steadily with you.

What This Means for You

Here’s the great news:

  • You can start exploring your new customisation options right now.
  • No functions will be removed from your current plan.
  • Any edits you make during this interim period will remain available, regardless of the plan you choose.
  • From 1 March 2026, we’ll transition existing customers to the new plan that most closely matches their current subscription, unless you let us know your preference beforehand.

In short: you keep everything you already have, plus you gain the flexibility to tailor policies in ways that better fit your organisation.

See It in Action

Want to get the best out of your upgraded platform? Join one of our upcoming webinars. We’ll walk you through:

  • How the platform works with new customisation features
  • The streamlined review process
  • Practical tips for tailoring your policies with confidence

It’s the easiest way to see how these changes will make policy management smoother and more responsive for your team and a great opportunity to have your questions answered.

Webinar Schedule

Friday 21 November 10 am

Monday 24 November 2pm

Wednesday 26 November 10am

Friday 28 November 11am

Why We Made These Changes

We’ve always believed that policies should empower organisations, not hold them back. Over time, we learned that many clients wanted more flexibility than our original plans offered. Instead of asking them to buy add‑ons for every adjustment, we’ve built customisation into the core service.

Our goal is simple: to give you a policy platform that grows with your organisation, adapts to your needs, and makes compliance easier to manage.

Policies in the age of AI Hallucinations

In everyday life, we wouldn’t rely on advice from a person known to hallucinate from time to time. For the same reason, we cannot rely solely on generative AI tools for policy advice and development.

In this post, we focus particularly on the AI risks of “hallucination” and error, and how best to manage them.

Hallucinations

Hallucinations are a well-known risk of using generative AI. They occur when an AI model makes up facts to respond to a prompt. They reflect that AI models are predictive systems designed to produce the most probable and plausible answer, not necessarily the most accurate or truthful answer.

It can be hard to identify an AI hallucination because they are typically framed in a convincing way.

Why are AI outputs so convincing when they are wrong?

I asked ChatGPT this question.

In its own words, the chatbot explained that “it was trained to sound convincing, not to be right.” In other words, the chatbot hallucinates because its training rewards answers that sound confident (a confident answer is more likely to be viewed as helpful than a hesitant one) and that carry the markers of expertise, such as tone and terminology. AI has learned to reproduce the shape and appearance of expertise without necessarily having the expertise itself.

This is quite a different scenario from how AI is sold – see, for example, the description of GPT‑5 by the CEO of OpenAI as like having a “team of Ph.D. level experts in your pocket.” (NBC News, Aug. 8, 2025)

AI at The Policy Place

At The Policy Place, we use AI to assist our policy development and review work. We treat it like a junior policy assistant who can help us with a range of tasks, such as initial drafts and summaries. We also use other sources – legislation, regulations, government websites, academic research, and court and tribunal decisions – for the development, review and updating of policies and procedures.

We have previously posted about the highly publicised Deloitte case where AI-generated citations used in a report for the Australian Government were found to be wrong and included fictitious citations. There have also been a number of legal cases reported overseas of AI used in cases and found to have produced fictional case citations and other inaccuracies. See here for a good list of Australian examples.

We understand how easily mistakes like this could happen. Unlike other sources we use, we find that checking AI outputs for hallucinations and errors is hugely time-consuming.

It is not only hard to spot hallucinations. When using AI, we have noticed that our prompting generates more material than if we did the whole task by hand. Sometimes this is helpful and right on point. Other times it is completely superfluous and ties us up needlessly in checking and re‑checking.

So we’re still a work in progress, striving for the productivity and efficiency gains of AI use while wanting to maintain our high standards for accuracy and quality in our policies.

Can AI check and verify?

If only we could rely on AI to do this. But we can’t.

At best we can ask AI to verify its outputs against its own training data. It cannot check and verify its outputs against sources like legislation, organisational documents, academic databases and expert reports. It cannot assess the truth or veracity of something.

With RAG – Retrieval‑Augmented Generation – things are better. Hallucination risks are significantly reduced because AI answers are grounded in authorised content, and outputs are more consistent. But the truth and reliability of those outputs still depend on the authorised content/data.
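
To make this concrete, here is a toy, self‑contained Python sketch of the retrieval step behind RAG: the prompt sent to the model is built only from passages retrieved from an authorised corpus, so answers are grounded in approved content. Simple keyword overlap stands in for a real vector search, and the model call itself is omitted – this illustrates the principle, not a production pipeline, and the sample documents are invented.

    # Toy illustration of Retrieval-Augmented Generation (RAG): ground the prompt in
    # authorised content before asking a model. Keyword overlap stands in for a real
    # vector search; the model call is left out.
    AUTHORISED_DOCS = {
        "privacy-policy": "Personal information must not be entered into public AI tools.",
        "ai-use-policy": "Staff must have all AI-assisted documents reviewed by a supervisor.",
        "records-policy": "Client records are retained for seven years and stored in NZ.",
    }

    def retrieve(question: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
        """Return the top_k passages sharing the most words with the question."""
        q_words = set(question.lower().split())
        scored = sorted(docs.items(),
                        key=lambda kv: len(q_words & set(kv[1].lower().split())),
                        reverse=True)
        return [text for _, text in scored[:top_k]]

    def build_grounded_prompt(question: str) -> str:
        """Assemble a prompt that tells the model to answer only from the sources."""
        sources = retrieve(question, AUTHORISED_DOCS)
        context = "\n".join(f"- {s}" for s in sources)
        return (f"Answer using ONLY these authorised sources. "
                f"If they do not contain the answer, say so.\n{context}\n"
                f"Question: {question}")

    print(build_grounded_prompt("Can staff enter personal information into public AI tools?"))

The key point for governance is the instruction that the model must answer only from the supplied sources – which is exactly why the quality and currency of that authorised content determines the quality of the answers.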

Thinking about AI for policies?

If you’re thinking about using AI for your policies, think beyond the promises and “sell” of AI. Ensure you have the expertise and knowledge to check the AI outputs for quality, accuracy and hallucinations. Be pro-active about managing the risks of hallucinations and errors and ensure you have good policy guidelines for effective governance and management of AI.

Want to outsource your policies and procedures with the assurance of relevant policy expertise? Contact us NOW at The Policy Place.

Contact the Policy Place 0224066554

Call us now