Australia must seize the momentum for AI reform
HTI welcomes the Australian Government’s release of a Proposals Paper to introduce new legal requirements for artificial intelligence (AI) in high-risk settings, and a new Voluntary AI Safety Standard for Australian businesses.
On the Government’s Safe and Responsible AI Proposals Paper
The Human Technology Institute has engaged extensively with stakeholders from civil society and industry over the last two years. Australians see the many opportunities presented by AI, but the community needs more effective legal protections to address the real risk of harm.
The Australian Government has committed to reform that fills gaps in existing law, by adopting a risk-based approach for the development and deployment of AI. A risk-based approach means that the greater the risk of harm posed by AI, the stricter the legal requirements will be.
“A risk-based approach to AI reform would bring Australia into line with other jurisdictions, such as the European Union and Canada. But Australia has been slow to act, and the Government should commit to introducing legislation by 2025 at the latest,” said Professor Edward Santow, Co-Director of the Human Technology Institute at UTS.
“While reform is overdue, regulators should do more now to enforce the laws we already have. Our existing anti-discrimination, consumer protection and other laws apply to the use of AI just as they do to all other technologies. Those existing laws need to be enforced and applied more effectively.” - Professor Edward Santow.
On the Government’s Voluntary AI Safety Standard
The publication today of the Government’s Voluntary AI Safety Standard (the Voluntary Standard) is a key milestone in the Australian Government’s developing approach to safe and responsible AI. The Human Technology Institute, with support from KPMG and Gilbert+Tobin, was pleased to partner with the National AI Centre, Gradient Institute and Data61 to contribute to the Standard.
Aligned with existing international approaches on AI governance and current legal requirements, the Voluntary Standard supports organisations to incorporate principles of AI governance into existing policies, procedures and processes. It provides practical guidance that can be used by Australian businesses to unlock the potential of AI, while minimising the risk of harm to their customers, users and the wider community.
In its work on the Voluntary Standard, HTI’s primary focus was to underpin its ten guardrails with a human-centred approach. This means prioritising the safety of people and the protection of their human rights; upholding principles of diversity, inclusion and fairness; incorporating human-centred design; and ensuring the system is trusted by users and the wider community.
HTI Co-Director and Industry Professor Nicholas Davis today welcomed the publication of the Voluntary Standard, saying, “HTI has consistently heard from corporate leaders that they want guidance on how to address the risks posed by AI deployment. For Australians to get the societal and economic benefit AI promises, businesses need to be confident they can successfully innovate without causing harm.
“Starting today, organisations can work through the Voluntary Standard’s ten guardrails to identify and manage both the benefits and risks of AI systems. This is an important tool for businesses of any size to evaluate and communicate the trustworthiness of AI systems they use, rely on or offer.” - Professor Nicholas Davis.
Read the Voluntary AI Safety Standard