Why You Need a Comprehensive AI Policy

Artificial intelligence (AI) is becoming integral to many industries in Aotearoa, including social and health services. While AI offers benefits, it also poses significant risks that need to be addressed through comprehensive AI policies. That’s why we at the Policy Place have recently released our new AI policy for our online policy clients.

In this blog we consider the importance of having an AI policy in social and health service agencies, the risks of not having a policy, and some of the key things to cover in an AI policy for community, social and health services. For our previous post on AI use in social and health services, see here.

The Rise of AI in Workplaces

Artificial intelligence is no longer a futuristic concept; it is actively shaping how organisations operate.

The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn, released in May this year, found that AI use is pervasive in workplaces worldwide and beneficial in terms of time savings, efficiency gains and greater enjoyment of work.

However, the Report also identified a pervasive risk: in workplaces without an AI policy or other guidance, 78% of employees had taken things into their own hands and were bringing their own AI tools to work.

The risks of AI use without AI policies and guidance include:

  • Data Security Risks: AI systems can be vulnerable to cyber-attacks, which can lead to data breaches and loss of sensitive information. Without an AI policy, staff may input personal information and sensitive organisational data.
  • Ethical and Legal Risks: AI use can lead to ethical dilemmas and legal issues, such as unauthorised use of personal data, breach of copyright and AI-driven decisions that are biased and breach human rights.
  • Operational Risks: Relying on AI without proper oversight can lead to operational inefficiencies, errors, and potential harm to clients.
  • Cultural Risks: AI data may not be sufficiently responsive to diverse cultural contexts and needs of different communities. Without proper AI policies and guidance, AI use risks undermining important cultural practices and values, particularly those protected by Te Tiriti o Waitangi.

The Importance of an AI Policy

An AI policy is the minimum starting point for a workplace to address these risks:

  • Ensuring Ethical Use of AI: An AI policy helps ensure that AI tools are used ethically and responsibly. This is crucial in social, community and health services, where decisions made by AI can significantly impact individuals’ lives and well-being.
  • Protecting Client Privacy: An AI policy guides how staff should use AI in alignment with the Privacy Act 2020 and privacy policies. This is particularly important for social, health and community services dealing with highly sensitive and confidential data.
  • Maintaining Accountability: Clear guidelines within an AI policy guide staff on how they may use AI in their decisions and their duty of reasonable care. This is particularly important in health and social services, where transparency and trust are paramount.
  • Preventing Discrimination: An AI policy will include checks that staff must carry out on AI-generated data before relying on it, and prohibitions against relying on biased or unverified data.
  • Honouring Te Tiriti o Waitangi: AI policies must recognise and protect Treaty of Waitangi rights. This includes ensuring that AI use does not disadvantage iwi and whānau Māori that health and community services work with, and that data sovereignty and cultural considerations are respected.

Strategies to support an AI Policy

An AI policy is just the beginning for a workplace wanting to use AI. Like any policy, your AI policy needs to be backed by a strong implementation strategy that includes the following:

  • Regular Audits and Assessments: Conduct regular audits of AI systems to ensure they operate as intended and comply with ethical standards.
  • Training and Awareness: Provide training for staff on the responsible use of AI and raise awareness about potential risks and ethical considerations.
  • Bias Mitigation Strategies: Implement strategies to identify and reduce biases in AI systems, e.g. data checking, surveys and, if affordable, bias-detection algorithms.
  • Robust Security Measures: Apply strong cybersecurity protocols to protect AI systems from threats and ensure the integrity of data.
  • Transparent Decision-Making: Ensure through training and policy that staff responsibilities for AI use are clearly articulated, and AI-driven decisions are transparent and explainable.
  • Cultural Safety and the Treaty: Use strategies like training, bias-detection systems and iwi/community consultation to ensure that the rights of tangata whenua under Te Tiriti o Waitangi are respected and protected in AI use.

Conclusion

AI brings benefits as well as risks especially for the social, community and health services we work with. To get the most out of AI and help protect against the risks, an AI policy is a “must.” It’s arguably the beginning of a new policy era when, in response to rapidly evolving technology, we need to revise and evolve policies at an equally fast pace.

AI Policy and Procedures – issues for social and health services


As Artificial Intelligence (AI) rapidly evolves, health and social service organisations need AI policy and procedures to guide their ethical and responsible use of AI.

We derive many benefits from AI. In the social service and health sectors, these include support with decision-making and diagnostics, efficiency gains, improved record management and evidence-based practice.

But there are ethical risks of AI use, which vary across different AI applications.

If not addressed in organisational policies and procedures, these risks could threaten the heart and soul of social and health services. We therefore need fit-for-purpose AI policies and procedures to guide us.

AI policy and procedures

Key issues to think about and canvass in your AI policy include data management, transparency, roles and responsibilities, misinformation, and legal and regulatory compliance.

Data management

Used in health and other service areas, AI systems may collect large amounts of highly sensitive information about a person. This could be misused or used for non-consented or malicious purposes. The security of information collected and used through AI must therefore be addressed in policies and procedures, as must access to and sharing of that information.

In Aotearoa/NZ, the Privacy Act 2020 and other regulatory standards like the Health Information Privacy Code 2020 must be complied with.

Transparency in your AI policy 

If we’re using generative AI for advice, diagnosis and to help make decisions that affect people’s lives, then it’s important that we understand the basis of the advice and information it provides.

The right to give or refuse informed consent is integral to quality care and service. We also want to provide person-centred care. Before relying on AI to help us with service provision, we should therefore understand, and be able to explain to those we serve, the criteria and information on which the AI's output is based. This should be reflected in our policies and procedures.

Other transparency issues to address in your AI policy include responsibilities for using AI, how the organisation uses AI in its services and activities, and the associated risks.

Roles and Responsibilities

AI isn’t for everyone. It’s unrealistic to expect everyone in your organisation to have a good grasp of it. However, with AI likely to play an increasingly important role in your organisation, it’s important to think about AI responsibilities and to include these in your AI policy. Key responsibilities to cover include AI policy review and AI risk management.

Misinformation safeguards in your AI policy and procedures

As we already know, it is becoming harder to distinguish fact from fiction as the technology to generate misinformation so rapidly evolves.

The risks are potentially disastrous. They range from reputational damage and loss of trust in individuals and the organisation through to grievous harm when misinformation is relied on as fact.

AI policy addressing data security is crucial. But it’s not enough.

If we’re going to rely on generative AI, we need safeguards in our AI policy for AI output to be checked and verified.

The strategies will vary, depending on the nature of the AI information, but can include:

  • monitoring and tracking to check that AI achieves its intended purpose
  • monitoring error rates
  • review of AI-generated advice by subject matter experts
  • checking AI information against other sources known to be reliable and credible
  • monitoring and evaluating feedback from clients about the impacts of AI

Without these checks, your organisation becomes highly vulnerable to the serious impacts of misinformation.

Keeping your AI policy and procedures fit for purpose

AI technology evolves quickly, which makes it challenging to keep our policies and procedures fit for purpose.

Social and health services are required to keep their policies regularly reviewed and updated. At the Policy Place we support our members through regular reviews and updates of their policies on a two- or three-year cycle.

However, with your AI policy and procedures, it may be necessary to prescribe more frequent reviews and updates.

Conclusion

Given the rapid rate of change, we could easily think we should just wait and see before we launch into an AI policy. But don’t be fooled. There’s nothing to wait for.

We might not get it right the first time, or even the second time, with our policy. But we can change and evolve our policies and procedures as we gain a better understanding of AI systems and risks.