HTI submission on mandatory guardrails for AI
The path to regulating artificial intelligence (AI) is not straight or narrow. Complex social and technological challenges require careful, deliberative reform processes. We have watched this play out over the past few years in the European Union with the development of the EU AI Act, and other jurisdictions such as Canada and the UK are now working towards their own AI regulation.
Here in Australia, the Albanese Government is also pressing forward, acknowledging the need to introduce legislation for AI in a way that ‘builds community trust and promotes innovation and adoption while balancing critical social and economic policy goals’.
Last month, the Government released a Proposals Paper for introducing mandatory guardrails for AI in high-risk settings, outlining possible regulatory approaches to three critical issues:
a definition of what is meant by ‘high-risk AI’, including proposed principles to guide the assessment of what may, or may not, be ‘high-risk’
mandatory guardrails to increase the safety and accountability of high-risk AI
the legislative options for mandating the guardrails.
These proposed reforms are intended to complement other reforms that the Government is undertaking, including in privacy law, copyright, automated decision making and digital platforms.
In its submission to the Department of Industry, Science and Resources consultation on the Proposals Paper, HTI recommends that any legislation targeting high-risk AI in Australia should:
set out an objective to protect people from harm, and to support innovation for economic benefit and societal wellbeing
take a principles-based approach to defining ‘high risk’, grounded in Australia’s obligations under international human rights law
avoid broad exemptions for defence and national security bodies
recognise that some AI technologies may pose unacceptable risks of harm, and may therefore need to be prohibited
set clear guardrail requirements for both developers and deployers to improve risk mitigation
include additional requirements for engaging with stakeholders and for safely decommissioning high-risk AI systems
include a rebuttable presumption of legal liability
provide for enforcement through mechanisms including regulator oversight and a ‘piggy-back’ provision
support organisations in complying with the guardrails through a range of measures.
These recommendations are intended to support regulation for high-risk AI that will incentivise responsible, human-centred AI development and deployment in Australia. While the path toward AI reform might not be straight or narrow, it must be taken, and submissions to the Proposals Paper mark an important milestone along the way.