What kinds of research do we need to humanise AI futures?
On Friday, the UTS Data and AI Ethics Research Cluster held a symposium at UTS presenting research aimed at “Humanising AI Futures”. As a network, we have been thinking about what UTS should do to support research into the social and ethical implications of data and AI technologies, and the symposium was intended to spark further conversations about this within the university community.
Professor Heather Horst from the ARC Centre of Excellence for Automated Decision Making and Society delivered an inspiring keynote with examples from her research on the impact of data and AI technology in multiple countries.
She highlighted that the context (including its politics, history and cultures) in which technology is deployed is vital in determining the outcomes of that technology.
It is not inevitable that data and AI technologies lead to wealth extraction, alienation or increased inequalities. The futures of data and AI technologies are being determined as we speak – as technologies are being adopted, developed, extended and rejected in particular contexts around the world.
Twelve speakers from the Data and AI Ethics Research Cluster, representing eight research groups and five faculties and schools, presented current research projects in three panels.
These groups focused on three distinct challenges around data and AI:
understanding
governing
reimagining
As the symposium concluded, I remarked on how extraordinary it was that lawyers, engineers, creative practitioners, anthropologists, social scientists, computer scientists, humanists, digital humanists, designers, literary scholars and historians could agree on so much.
Ethical AI
It is clear that there are two processes that require research support in the area of ethical AI.
The first is in the period before technology is deployed: we need research that helps engineers understand the (social, political, historical and cultural) contexts in which technology will be deployed, so that we can a) build technology that fits well into the ways that people live and work and b) mitigate unintended consequences by anticipating the impact of that technology on current policy and practice.
The second is in the period after technology is deployed: we need research that examines how people are using (or not using) technology in situ and for that knowledge to be fed back to engineers and legislators to improve technology (and sometimes to abandon it completely).
In both cases, research requires meaningful conversations between researchers, and between stakeholders and researchers. Ethical AI, in other words, is determined by the vitality and comprehensiveness of dialogue about it. Finding the best methods for facilitating those conversations, I’m starting to realise, is one of the most valuable areas of research to support ethical AI, and it is in this area that I think UTS is really excelling.
Across multiple groups, we work to facilitate conversations between stakeholders, such as:
The Human Technology Institute with corporate Australia
The Disability Research Network with people with disability
The Connected Intelligence Centre and the Centre for Research on Education in a Digital Society with students and educators
The Centre for Media in Transition with journalists and media policy makers
The Data Science Institute with human resources services and job seekers.
These conversations lead to the development of participatory, democratic AI tools, processes and policies.
We design, re-design and re-think how oaths, guidelines, principles, sensitising questions, participatory models, ethical edge cases and critical questions might be used to develop humanistic AI. We create channels that stakeholders can use to have conversations about ethical AI in the future. What seemed to be disagreement over whether AI principles or guidelines work, for example, was in fact a demonstration of how we are able to evaluate these processes in context and towards the public interest.
AI in the public interest
Another area where UTS excels is in the way that we practise data analytics towards public interest goals. Excellent research by researchers in the Faculty of Design, Architecture & Building, for example, shows how experimentation with AI image generators can creatively reflect on how these tools work and in whose interests.
Work in the Faculty of Arts and Social Sciences shows how data technologies can be used to surface inequalities in supposedly representative data and produce reflexive tools that can be used by data workers in their daily practice to improve the quality of hidden data.
The Connected Intelligence Centre is working on analytics that can improve student retention, but is doing so within a larger ethical framework that reflects the needs of multiple stakeholders.
Humanising AI futures is, in short, about building better ways to talk about humans’ needs in relation to technological affordances and about getting our hands dirty by using current tools to make data that surprises, that astounds, that complicates the ways in which we think about that data.
This short symposium really helped to set the stage for the kinds of cross-disciplinary work that can not only mitigate the risks of AI, as our Deputy Vice-Chancellor (Research), Professor Kate McGrath, articulated in her welcome, but also develop technological futures that are better engineered, managed and governed for all stakeholders.