
Government publishes Consultation on the Mandatory Guardrails for AI in high-risk settings

17 September 2024
Sinead Lynch, Partner, Sydney

In a busy period for the Federal Government on technology and data law reform, and complementing both its ongoing reviews of AI technologies and systems and the Voluntary AI Safety Standard unveiled by Minister Ed Husic, the Government recently released its ‘Proposals Paper for introducing mandatory guardrails for AI in high-risk settings’ (Paper) as an initial step towards Australia’s regulation of AI.

The Paper follows on from the Government’s interim response to the ‘Safe and Responsible AI in Australia’ discussion paper, in which it committed to developing a regulatory environment ‘with community trust and AI promotion in mind’.

The Paper proposes a principles-based, risk-based approach, with a focus on ex ante (preventative) measures balanced by ex post (remedial) measures. Broadly, the Paper proposes a high-level definition of ‘high-risk’ AI by reference to a set of ‘proposed principles’, and the application of these principles to ‘general purpose AI’ (GPAI) models. Only a short consultation period is proposed, with submissions from interested parties requested on or before 5.00pm AEST on Friday 4 October 2024.

The Paper is informed by the work of the AI Expert Group (a 12-person expert panel appointed by the Federal Government earlier this year to guide the introduction of high-risk AI ‘guardrails’), with an eye on how governments and regulators around the world are tackling the harms arising from AI technology. A specific set of ten mandatory guardrails has been proposed to mitigate harm arising out of the development and use of AI systems, including anticipated protections on accountability, governance and transparency, the implementation of risk management processes, and an acknowledgment that human-in-the-loop interventions are key.

Curiously, the word ‘ethics’ does not expressly appear in these ten listed guardrails, although one might argue that the focus on transparency, and the requirement to build processes to challenge outcomes, is a nod in an ethical direction. That said, it is somewhat disappointing that ethical considerations were not expressly called out as one of the guardrails, particularly given the focus on high-risk AI.

The Consultation seeks feedback on the best way to transpose these principles and guardrails into mandatory legislation, with three options offered:

Option 1: Implement through changes to existing regulation (e.g. the ACL, Privacy Act and ASIC Act);
Option 2: Introduce a new principles-based framework, which ‘trickles down’ and is enforced under existing laws;
Option 3: Take a whole-of-economy approach and introduce new, AI-specific stand-alone legislation (e.g. an ‘AI Act’).

Current indications are that Option 2 may be the preferred route for many (already heavily) regulated sectors, but we await the outcome.


What are the main areas of focus?

Proposed definition of ‘high-risk AI’

A principles-based approach is being taken: the Paper seeks feedback on a set of ‘Proposed Principles’ for determining high-risk AI settings, and on the potential application of these Proposed Principles to GPAI models.

The ‘Proposed Principles’ include that, in designating an AI system as high-risk due to its use, regard must be given to the risk of adverse impacts, as well as to their severity and extent.

Adverse impacts to the following should be considered:

  • an individual’s rights recognised in Australian human rights law (without justification), as well as Australia’s international human rights law obligations;
  • an individual’s physical or mental health or safety;
  • an individual, through adverse legal effects, defamation or similarly significant effects;
  • groups of individuals, or the collective rights of cultural groups; and
  • the broader Australian economy, society, environment and rule of law.

Ten proposed ‘mandatory guardrails’

To reduce the likelihood of harms arising from the development and use of high-risk AI systems, the following ten ‘mandatory guardrails’ are proposed (in summary):

Guardrail 1: Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance.
Guardrail 2: Establish and implement a risk management process to identify and mitigate risks.
Guardrail 3: Protect AI systems, and implement data governance measures to manage data quality and provenance.
Guardrail 4: Test AI models and systems to evaluate model performance, and monitor the system once deployed.
Guardrail 5: Enable human control or intervention in an AI system to achieve meaningful human oversight.
Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI, and AI-generated content.
Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes.
Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
Guardrail 9: Keep and maintain records to allow third parties to assess compliance with the guardrails.
Guardrail 10: Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

Three regulatory options are proposed to mandate these ten ‘mandatory guardrails’:

Option 1: Adopt the guardrails within existing regulatory frameworks, enforced by the current applicable regulators.
Option 2: Introduce new framework (top-level) legislation which would ‘trickle down into’ existing laws, with associated amendments to existing legislation.
Option 3: Take a whole-of-economy approach by introducing a cross-economy AI-specific law (such as an Australian AI Act), to be regulated and governed by a new or dedicated AI regulator.

Privacy Act reforms – automated decision-making

Worth mentioning alongside these proposed changes to the regulation of high-risk AI, the Privacy and Other Legislation Amendment Bill 2024 (Bill) was introduced into the House of Representatives (First Reading) on 12 September 2024. In addition to other changes to the Privacy Act, the Bill proposes detailed reforms on automated decision-making (ADM) using personal information. If enacted as proposed, entities using AI technology that encompasses ADM – not only high-risk AI systems – would need to ensure transparency to individuals about:

  • the kinds of personal information used in the operation of ADMs;
  • the kinds of decisions made solely by ADMs; and
  • the kinds of decisions for which something substantially and directly related to making the decision is done by the operation of ADMs.

Further information on ADM and these proposed new ADM requirements is set out in our separate Privacy Act Reforms Insight here.

Next steps on ‘high-risk’ AI consultation

The Paper includes various consultation questions on each of the proposals. The Consultation is open for four (4) weeks, closing at 5.00pm AEST on Friday 4 October 2024. Examples of the questions include:

  • Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove?
  • Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?
  • Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately? For example, are the requirements assigned to developers and deployers appropriate?
  • Which legislative option do you feel will best address the use of AI in high-risk settings? What opportunities should the government take into account in considering each approach?

For more information on the Proposals Paper, click here. If you would like to make a submission, or to contribute to a submission being made, or for any additional information on any of these proposals, please contact any member of our team.

If you found this insight article useful and you would like to subscribe to Gadens’ updates, click here.


Authored by:

Sinead Lynch, Partner
Lucy Hardyman, Lawyer
Wen Wong, Lawyer

This update does not constitute legal advice and should not be relied upon as such. It is intended only to provide a summary and general overview on matters of interest and it is not intended to be comprehensive. You should seek legal or other professional advice before acting or relying on any of the content.
