Human rights and artificial intelligence
Industry partner
- Australian Human Rights Commission (AHRC)

Research centres
- Centre for Social Justice and Inclusion
- TD School
- UTS Design Innovation Research Centre
- Human Technology Institute

Funding source
- Murdoch University funded the project through its innovation partnership with Aurecon

Project dates
- 2018–2021
Posted on 15 Jan 2025
How a transdisciplinary approach led to privacy, technology and human rights reforms.
As artificial intelligence (AI) becomes ever more prominent, it’s essential to put robust human rights safeguards in place.
Key takeaways
- Taking a transdisciplinary approach, UTS worked with the Commission to develop a robust understanding of the human rights implications of AI and related new and emerging technologies.
- Supported by UTS’s deep technical expertise, the Commission recommended modernising Australia’s regulatory system to extend to the use of AI.
- A number of the project’s key recommendations are currently being adopted and implemented by government.
AI is enabling significant advances in technology and rapid improvements in our lives – but it also presents real and profound threats. What are the potential harms of autonomous decision-making and algorithmic bias?
How can we leverage the benefits of new and emerging technologies while guarding against the risks they pose to human rights?
To answer these questions, the Australian Human Rights Commission partnered with UTS on an ambitious, three-year project.
The Technology and Human Rights project
Partnering with UTS on this groundbreaking project gave the Commission access to:
- Specialist expertise in new and emerging technologies
- The university’s proven capacity for transdisciplinary thinking.
While the Commission had extensive experience in human rights, it lacked deep technical expertise in new technologies and AI.
The project succeeded in building the Commission’s understanding of AI and its implications for human rights, exploring the risks and opportunities of emerging technologies.
Together the project partners then worked to communicate these implications to policymakers, legislators and the public.
The project culminated in a final report to government, recommending policy reforms around the regulation of AI and stronger laws against the misuse of technologies such as facial recognition.
A truly transdisciplinary approach
Regulation for new technology is a complex, multi-dimensional problem. To effectively explore the challenges presented by AI, the Commission needed a holistic, transdisciplinary approach.
UTS was instrumental in this, providing a methodology for bringing different disciplines and perspectives together to approach the problem in a collaborative manner.
“It’s a classic case of a complex challenge with significant multi-dimensionality. It requires different sorts of thinking and different skill sets.”
Key to this approach is thinking beyond individual faculties and departments.
To respond to an issues paper put out by the Commission, UTS’s Centre for Social Justice and Inclusion provided a space for academics from across the university to come together and contribute.
“The world has problems that cross [disciplinary] borders,” says Nicole Vincent, submission co-lead and senior lecturer in the TD School.
The resulting submission was a truly whole-of-university response, integrating multiple perspectives rather than relying on expertise from any single discipline.
No technology exists in isolation from other technologies, or from the political, economic, social, legal and environmental context.
Another mark of UTS’s approach is to put the project ahead of organisational pride.
This involved looking beyond the university’s four walls to create a forum for senior government, civil society, industry and academic representatives to collaborate. Participants worked together in a set of workshops to explore issues and develop a human rights approach to emerging technologies.
“The partnership with UTS allowed us to bring disparate groups together, creating a welcoming and inclusive environment, not just for AHRC staff but also for academics and experts from other institutions.”
This transdisciplinary approach was complemented by UTS’s technical expertise, enabling the Commission to go deep on issues such as algorithmic bias.
Ongoing impact
The partnership with UTS was essential for shaping the Commission’s ideas and its proposals to government.
In addition to providing a submission in response to the issues paper, UTS supported the Commission to develop its final report. As a major partner, UTS provided peer review and advice on drafts, with then-Executive Director of Social Justice at UTS, Verity Firth, sitting on the expert reference group.
"UTS’s collaborative approach enabled the Commission to engage thought leaders in human rights and technology across the country to co-design solutions."
The Commission’s final recommendations are at the heart of a number of policy reforms government is currently making, including:
- Updating the Privacy Act to require transparency around automated decision-making
- Establishing an artificial intelligence expert group
- Putting mandatory safeguards in place for high-risk AI use.
Additional outcomes
Through the partnership with AHRC, UTS has contributed to frameworks that promote human rights, working across disciplines to ensure values are integrated into technology design and giving the community a strong voice in decision-making.
UTS is also leading a human rights masterclass with NSW Government staff and designing capability courses to be delivered in Indonesia, Malaysia, Vietnam and Thailand on behalf of the Department of Foreign Affairs and Trade (DFAT).
These courses are targeted at helping local departments in those countries discuss issues around ethical and responsible AI.
Ed Santow, the Commissioner at the time of the project, has since joined UTS to co-direct the Human Technology Institute.