Invisible Bystanders: Workers’ experience of AI
New research from the UTS Human Technology Institute reveals that workers are being treated as “invisible bystanders” regarding artificial intelligence, leaving organisations, employees and the public at risk of missed opportunities and significant harms.
Workers' experience of AI and automation
From February to April 2024, HTI and Essential undertook innovative qualitative research to explore the experience of Australian workers regarding AI and automation. To complement industry's recent focus on the impact of AI on employees in the IT sector, HTI and Essential engaged with nurses, retail workers, and public servants.
HTI co-director Professor Nicholas Davis said that the research is the first of its kind in which workers are taken on a reflective journey around AI, producing findings that support economists' latest modelling of the economic challenges posed by automation-focused technologies.
"The research finds workers are not opposed to AI. In fact, they see opportunities for improving many parts of their work, especially around reducing the burden of paperwork and compliance." – Professor Nicholas Davis
"But the study also shows that workers – when provided with the right tools to reflect and discuss – are both nuanced and expert when it comes to how AI tools can be used productively and responsibly." – Professor Nicholas Davis
Participants expressed significant concerns about the impact of AI on their work:
the impact of automated decisions on patient care in nursing, such as the dispensing of drugs and triage diagnosis.
resistance amongst public servants, where trust and licence have been undermined by Robodebt.
a fundamental change in work for retail workers in stores where automated checkouts have been implemented.
The key findings from this research are:
Workers are ‘invisible bystanders’ in the adoption of AI and automation into their work lives, because they are not being consulted on the development, training or deployment of these systems.
Workers initially had a low understanding of AI, low awareness of how AI was being deployed in their industry, and low trust that it will be implemented in the interests of workers and the customers, patients, or the public that they serve.
However, when engaged on these issues, workers were able to provide valuable and nuanced insights into many of the ethical, legal, and operational issues raised by these systems.
Workers are not inherently opposed to the adoption of AI and see the benefits and opportunities for AI to improve systems, reduce menial tasks and complement human intelligence and labour.
By failing to engage with workers, organisations are unable to benefit from the expertise of their workers, particularly their insights into the harms and benefits of AI systems. This will lead to poorer adoption and implementation of AI systems, creating increased governance risks for organisations.
The report suggests a range of policy opportunities to address these issues:
establishing industry-wide AI works councils to embed the worker voice in the development and deployment of these technologies
imposing a general duty of care on organisations equivalent to workplace safety obligations
limits on how and why workers are subject to surveillance
establishing industrial guardrails, such as nurse-to-patient ratios, to ensure that technology improves rather than replaces workers.