The Government has announced new AI standards for businesses

The Federal Government has announced a new set of voluntary AI standards for businesses wanting to use the new technology.
Industry and Science Minister Ed Husic has announced the new AI standards

The Federal Government has unveiled its plan to regulate and introduce standards on the use of artificial intelligence (AI).

The 10 new “guardrails” for organisations and businesses include cybersecurity and transparency standards.

The plan is based on overseas models, including standards used in the European Union and Japan.

The new AI standards aren’t compulsory, but the Government is considering making them mandatory.

Here’s what you need to know.

Context

AI technology can automate processes and break down large datasets. It’s used to simplify research and generate content on platforms like ChatGPT.

As a relatively new field of technology, there are no formal rules governing AI usage in Australia.

Internationally, governments have grappled with how to use and control rapidly growing AI technologies.

This week, the Government released a new paper on “mandatory guardrails” for AI, following calls for stronger regulation.

Regulating AI

The report – from the Department of Industry, Science and Resources – found AI can help drive productivity and economic growth.

However, it identified a need for more regulation to prevent harm from AI – for example, deepfake pornography, or recruitment algorithms that vet CVs in ways that “create bias based on a person’s race, gender or age.”

The Government hopes a new set of AI guardrails will “ensure Australians enjoy the rewards of AI while managing the risks.”

Guardrails

The Government said, “only 29% of businesses are implementing AI safely and responsibly.”

Its new guardrails are a set of standards on how businesses and organisations can use AI safely and responsibly. They are:

  1. Have an AI strategy
  2. Identify the risks of using AI
  3. Maintain cybersecurity measures to protect AI data
  4. Test AI models before using them professionally
  5. Human control – make sure humans stay in charge of AI
  6. Inform others when using AI – label AI content and tell clients when it’s used in business (e.g. chatbots)
  7. Tell AI developers when there are issues with their products
  8. Share data with other organisations about best-practice AI use
  9. Keep records showing compliance with the guardrails
  10. Alert AI developers when bias is detected

AI bias

The Government said its strategy includes measures to “stop AI-enabled disinformation campaigns that could jeopardise our democracy.”

It also identified facial recognition technology (FRT) as a possible area of concern.

According to a recent Cambridge University study, FRT can contribute to racial discrimination because its data “does not include sufficient representation”.

Government remarks

Industry and Science Minister Ed Husic said the guardrails will help businesses that have been calling for better regulation of AI.

“Australians know AI can do great things but people want to know there are protections in place if things go off the rails,” he said.

The Government is considering legislation to make AI guardrails mandatory, although it’s not clear when that would be.

Australia’s federal science agency has proposed ways to develop an AI Act, similar to legislation in the EU.
