AI Governance Lighthouse Case Study Series: UTS
HTI’s Lighthouse Case Study Series shines a light on organisations that are actively exploring different approaches to human-centred AI governance. In the third case study, HTI presents insights from UTS.
UTS AI governance and stakeholder engagement
HTI’s third Lighthouse Case Study explores the stakeholder engagement undertaken by the University of Technology Sydney (UTS) and the subsequent development and implementation of its artificial intelligence (AI) policies, procedures and governance.
UTS places students at the heart of the learning experience and is committed to student engagement and contribution to decision-making. Driven by a desire to build trust with both students and staff that AI is being, and will be, used responsibly, UTS has made stakeholder engagement a key focus of its approach to AI governance. UTS undertook detailed consultation processes with students and staff to identify the principles that should govern the use of AI and analytics at UTS, as well as how new technologies, like generative AI and predictive AI tools, can be used responsibly.
Building on the lessons learnt and insights gathered, UTS invested time and effort in creating a clear and transparent AI Operations Policy and Procedure to guide the use, procurement, development and management of AI at UTS. It also created the AI Operations Board to oversee the use of AI at UTS and implementation of this policy and procedure.
“Stakeholder engagement is a challenging area of AI governance for many organisations. Yet, it provides a powerful opportunity to learn what people really want from AI systems. UTS has listened to and heard its students and staff in deciding how to use and govern AI systems,” said HTI Co-Director and Industry Professor Nicholas Davis.
This case study explores how UTS is using AI, the role of stakeholder engagement, and the policies, procedures and governance structures developed to manage the risks and secure the benefits of AI systems. The key governance insights include:
Stakeholder engagement is an important and valuable process that can improve policies, build stakeholder trust, and increase the confidence of decision-makers in relation to the adoption and use of AI systems.
Putting in place clear policies and procedures to govern the adoption and use of AI systems allows for better and easier decision-making by senior leadership.
Successful AI governance requires an interdisciplinary approach, and it is essential that staff with different backgrounds and expertise have an opportunity to come together to discuss issues and approve solutions.
In April 2024, HTI launched its Lighthouse Case Study Series, as part of its AI Corporate Governance Program (AICGP), to highlight the insights generated and challenges faced by organisations on the frontier of human-centred AI development and deployment.
This is the third case study in the series. The first case study featured Telstra, highlighting its approach to AI governance, particularly the governance structure and complementary investments that support the development, use and oversight of AI systems within that organisation. The second case study featured KPMG Australia and its development, use and governance of an internal generative AI agent, KymChat.