AI Leadership Summit Workshop: Impact of AI on Workers
At last week’s AI Leadership Summit, HTI’s AI Governance team facilitated an AI Explorer Workshop examining the impact of AI on workers, the importance of engaging deeply with stakeholders to harness AI’s full potential, and the people, skills and culture needed to support the safe and responsible use of AI in organisations.
On Tuesday 22 October 2024, HTI’s Gaby Carney, Llewellyn Spink, and Myfanwy Wallwork led a dynamic half-day workshop in Melbourne exploring the critical role of workforce engagement and providing practical advice on the roles and skills required for responsible AI adoption. This session was one of five workshops held during the second day of Australia's AI Leadership Summit, organised by CEDA and the National AI Centre (NAIC).
Participants came from diverse sectors, including industry, government, civil society and academia. Their common goal was to gain practical insights into how to meaningfully engage with workers, cultivate necessary AI skills and culture, and implement effective governance to manage the transformational impact of AI.
Throughout the session, attendees shared insights into how their organisations are:
- balancing robust governance policies and technical controls with workers’ enthusiasm for AI tools
- empowering employees to explore AI through experimentation, while managing the risks of ‘shadow AI’
- leveraging informal governance processes to provide early advice and feedback on AI use cases, and
- adopting a bottom-up approach to share successful generative AI applications that drive productivity.
In response to the question, "What do you need to create a culture of safe and responsible AI?", the top themes that emerged were governance, trust, and training, as seen in the word cloud below.
At the workshop, HTI’s team shared key findings from recent and upcoming research:
- Myfanwy emphasised how responsible and safe use of AI can increase trust, mitigate harms, and maximise benefits. Drawing on the Australian Responsible AI Index 2024 by NAIC and Fifth Quadrant, she highlighted the gap between AI ethics principles and their practical implementation, and how AI governance should reflect the specific characteristics of AI systems.
- Llewellyn presented insights from HTI’s innovative qualitative research, Invisible Bystanders: how Australian workers experience the uptake of AI and automation. He explained why workers feel like ‘invisible bystanders’ in AI adoption and how organisations can engage with them to unlock the productivity promises of AI. The University of Technology Sydney’s engagement with staff and students on AI was highlighted as a leading example (see HTI’s AI Governance Lighthouse Case Study: UTS).
- Gaby discussed why people, skills and culture are an essential element of AI governance. She highlighted the roles needed for best practice AI governance, why a Chief AI Officer can be a critical resource, and the importance of organisational culture in supporting formal governance frameworks for safe and responsible use of AI.
HTI will soon release its latest AI Governance snapshot, which will explore these topics further, emphasising that strategic investment in people, skills and culture is essential for fostering safe and responsible AI adoption.