By Dr Ruby Manukia Schaumkel

Artificial Intelligence (AI) in New Zealand

AI is covered by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act, and obligations under te Tiriti o Waitangi also apply.

In New Zealand there is no AI-specific regulation or legislation, and internal MBIE (Ministry of Business, Innovation and Employment) documents say there are "no rules or guidelines for all government agencies about staff use of AI tools."

AI Law

Artificial Intelligence Law (AI Law) regulates the development, deployment, and use of AI. While it is easy to confuse AI law with lawyers using AI, or with how AI affects the legal profession, AI law refers to how the law applies to AI itself.

With world-leading researchers and practitioners, New Zealand plays an important role in the global AI community and has seen significant achievements in AI in recent years.

What is AI for the environment in New Zealand?

AI for the Environment in New Zealand focuses on key environmental outcomes where AI can deliver meaningful solutions, such as preserving and bolstering biodiversity, understanding the impacts of changing land use, and reducing pollution from our activities.

Regulating AI is a challenge

AI regulation is a hot topic internationally. Leaders of some of the big players in the AI world are signing open letters and talking to politicians and journalists about the need to regulate AI to prevent serious harms.

Some experts say focusing on hypothetical future harms ignores actual harms already occurring with AI use and detracts from immediate action that can be taken to develop AI safely and maximise its benefits.

Perhaps the greatest existing harm is that biased AI reproduces and entrenches inequalities. There is an abundance of data available to AI developers, but it is of varying quality. When the data used to build an algorithm is not representative, the resulting AI will be biased and can produce discriminatory outcomes.

AI’s ability to rapidly analyse large volumes of data also raises privacy concerns. In the facial recognition cases, the harm caused by the poorly performing technology was facilitated by a degree of surveillance. Beyond surveillance, a key privacy issue is the collection and use of the data that AI requires. There is an asymmetry of power and information between individual consumers and the companies using their data, which means people have little control over how their data is processed; for example, they cannot prevent the creation of sophisticated profiles that enable powerful targeting of advertising or services for a company’s gain.

Finally, the data processing capabilities of AI create a risk that data sets previously considered anonymous because they contained no obviously identifying information (such as names or phone numbers) can be matched to individuals. Understanding how AI might affect traditional ideas of privacy is necessary to eliminate harm.

How are other countries managing the issue?

The EU is leading the pack on AI regulation with its AI Act. The AI Act classifies AI applications based on risk and imposes different rules depending on the risk level, including banning applications deemed to have potentially unacceptable consequences.

In contrast, no other jurisdiction has bespoke, overarching AI legislation. China has developed individual laws for specific AI applications. Canada has a bill moving through its parliamentary process. In the US, the White House has released the AI Bill of Rights, which outlines principles but carries no legal force. Australia has started a public consultation on how AI should be regulated.

New technologies also create gaps in existing law, and writing new laws is not simple. Although there are important gaps in current regulation, there is no universal agreement around the world that new AI-specific law is urgently needed.

There are some principles that can guide the regulation of AI. Among the most important will be transparency: that people know when AI has been used to make a decision that affects their lives, and potentially when they are exposed to AI-generated content in marketing. Also crucial will be ensuring that accountability is built into processes that use AI, so that humans are ultimately responsible for the outcomes produced.

AI has the potential to benefit New Zealand, but realising those benefits without entrenching inequalities and harms will require careful and strategic action by policymakers and other stakeholders. Although there is a range of views on the optimal nature and pace of AI regulation, it is clear that strengthening general privacy and data protection obligations is a critical step.

New Zealand, like many countries, has seen growing interest in and application of artificial intelligence (AI) across various sectors. This makes AI an interesting conversation for faith communities considering its ethical and societal concerns.

As in many countries, there is an ongoing debate in New Zealand about the ethical implications of AI, particularly concerning data privacy, surveillance, and job displacement. Efforts are being made to ensure AI development adheres to ethical standards and best practices.