In May 2024, the Human Technology Institute (HTI) released innovative qualitative research into Australian workers' experience of AI and automation. The study revealed that workers are being excluded from discussions around AI development and deployment, leaving organisations without their expert insights and exposed to additional governance risks.
Invisible Bystanders: Workers’ experience of AI and automation
From February to April 2024, HTI and Essential Research conducted a series of deliberative engagements with workers, including in-depth interviews, online journal entries over two weeks, and focus group discussions. The workers were drawn from three key industries: nursing, retail, and the Australian Public Service. Combined, these industries represent over a quarter of the Australian workforce.
Key findings
Workers are ‘invisible bystanders’ in the adoption of AI and automation into their working lives, as they are not being consulted on the development, training or deployment of these systems.
Workers initially had a low understanding of AI, low awareness of how AI was being deployed in their industry, and low trust that AI systems would be implemented in the interests of workers, customers, patients, or the general public.
However, after being given information and engaging in discussion, workers were able to offer valuable and nuanced insights into the legal, ethical, and operational issues raised by these systems.
Workers are not inherently opposed to the adoption of AI and see the benefits and opportunities for AI to improve systems, reduce menial tasks, and complement human intelligence and labour.
By failing to engage with workers, organisations lose the benefit of deep worker expertise, including insights into opportunities for higher productivity, ethical boundaries, and the broader impact of AI systems on colleagues and customers. This increases the risk that organisations will adopt AI solutions that automate processes without augmenting workers, potentially harming workers, investors and the public alike.
The report recommends that boards, senior executives and policymakers explore a range of strategies to address these issues:
establishing industry-wide AI works councils, to embed worker voices into the development and deployment of these technologies
creating a general duty of care on organisations, equivalent to workplace safety obligations
law reform that clarifies limits on how and why workers are subject to surveillance by AI systems
establishing industrial guardrails, such as nurse-to-patient ratios, to ensure that technology augments rather than replaces workers.