Recording: Humanising Technology
Artificial intelligence is transforming our world. It’s revolutionising how governments and companies make decisions.
AI aims to remove human prejudice and produce better, data-driven decisions. But too often, the reality is far from this vision, with horrifying consequences. We've seen algorithms make it harder for women and people of colour to get a home loan or a job. And 'Robodebt' involved a faulty system of government debt collection that pushed thousands of the most vulnerable people in our country into poverty or worse.
In this session, Dr Alondra Nelson (head of the White House Office of Science and Technology Policy) joins Prof Edward Santow and Prof Nick Davis (co-directors of the Human Technology Institute) to discuss how we can ensure human values are at the heart of how new technology is designed, used, and overseen.
PROF. VERITY FIRTH: Hello, everyone who's joining us. We'll just wait for about another 30 seconds while a few more people enter the virtual room and then we'll begin.
All right, we have around 120 participants already in the virtual room and I know that's going to keep climbing, so I will begin today's event because we are really very excited about our special guest that we have with us today.
Before I begin, I'd like to acknowledge that for those of us in Australia, we are all on the traditional lands of First Nations peoples. This land was never ceded, and I want to acknowledge the Gadigal people of the Eora Nation, upon whose ancestral lands the UTS City campus now stands and, of course, where I am joining from today. I want to pay respect to Elders past and present and acknowledge them as the First Nations owners, with ongoing connection to this land, its waterways and culture. They're the traditional custodians of knowledge upon which this university is built. I further acknowledge the traditional owners of the country where all of you are joining us from and pay respect to their Elders.
I'm Professor Verity Firth and I'm the Pro Vice-Chancellor Social Justice & Inclusion at the University of Technology Sydney, where I lead our social impact and engagement. It is my pleasure to be joined today by a world-leading expert on the human impact of technology, Dr Alondra Nelson. She's also the head of the White House's Office of Science and Technology Policy. This webinar will also feature the co-directors of the Human Technology Institute, Professor Ed Santow and Professor Nicholas Davis.
But there are a couple of housekeeping items that I need to let you know about first. Today's event is being live captioned. To view the captions, click on the "CC" (closed captions) button at the bottom of your screen in the Zoom control panel. We're also posting a link in the chat now which will open the captions in a separate internet window if you would prefer.
If you have any questions during today's event, please type them into the Q&A box, which you'll also find in the Zoom control panel. You can like questions that others have asked and that will push them up the top of the list. But please do try to keep them short and relevant to the topics that we're discussing here today.
Artificial intelligence is transforming our world. It is revolutionising how governments and companies make decisions and AI is increasingly everywhere, from banking, recruitment, law enforcement, to social welfare.
The promise of AI is that it will remove human prejudice and produce better, more data-driven decisions. Sometimes this is true, but too often the reality is far from this vision. We've seen in all the areas I just mentioned how AI can replicate and even worsen existing inequality. The consequences can be horrifying, especially for our human rights. We've seen algorithms make it harder for women and people of colour to get a home loan or a job and in Australia, Robodebt involved a faulty system of government debt collection that pushed thousands of the most vulnerable people in our country into poverty or worse.
At this crucial moment, UTS has established the Human Technology Institute. The HTI is working with leaders from civil society, government, and the private sector to build a future that applies human values to new technology. I especially want to acknowledge some key collaborators who have been with us from day one, or even day zero, as we've been building this new institute. They are Gilbert + Tobin, KPMG, Atlassian, LexisNexis, Humanitech, Transport for NSW, and Microsoft. We'll have a lot more to say about our wonderful partners at our formal launch event in October.
Today we will be talking about humanising technology and how we can ensure that human values are at the heart of how new technology is designed, used and overseen.
It is now my honour to introduce the founders of the Human Technology Institute, Ed Santow and Nick Davis. Ed Santow is Industry Professor – Responsible Technology at UTS, where he leads our initiative on building Australia's capability on ethical AI. From 2016 to 2021, Ed was Australia's Human Rights Commissioner, where he led the Commission's new work on AI and new technology, among other areas of responsibility. Welcome, Ed.
Nicholas Davis is Industry Professor – Emerging Technology at UTS. From 2015 to 2019, Nick was Head of Society and Innovation and a member of the Executive Committee at the World Economic Forum in Geneva, Switzerland. More than anyone else, he has developed the idea of the Fourth Industrial Revolution and how we as a world community should respond. Welcome, Nick.
And now I'm very excited to introduce our guest of honour, Dr Alondra Nelson. Dr Nelson leads the White House Office of Science and Technology Policy and is Deputy Assistant to President Joe Biden. As a scholar of science, technology, medicine, and social inequality, she has contributed to US national policy discussions on inequality and the social implications of new technologies, including artificial intelligence, big data, and human gene editing. Welcome, Alondra. I'm now going to hand the proceedings over to Ed.
PROF. EDWARD SANTOW: Thank you so much, Verity. Gosh, this is such an honour to have you, Dr Alondra Nelson. I can't hide my enthusiasm and excitement.
But I'm going to start with kind of a mixture of the personal and the professional because most careers take a winding path, but very few of us end up anywhere near, let alone in, the White House. So many people I know take enormous inspiration from you and your role leading the OSTP. Can you tell us a little bit about your path to the White House?
DR ALONDRA NELSON: Yes, it's been a winding path indeed, but first let me say thank you to you and Verity and Nick for the invitation to be here. It's really a pleasure to be with you all and I have learned so much from your work, Ed, about these exact topics and so it's a real honour to be here with you all today.
So let me just say at the top, the headline here is that I never expected to be working in the White House and so I pinch myself most days as I go to the White House campus and find myself there, but it is a great privilege and an honour to be doing public service and to be doing it in this extraordinary Biden-Harris Administration.
So, you know, it's not a total accident. I mean, as Verity said in her very kind introduction, as a scholar, as a researcher, most of my work has been around, you know, effectively science and technology policy, but really thinking about the sort of social implications of science and technology. And, you know, more recently I have been working on a book about the White House Office of Science and Technology Policy during the Obama years, so the office that I came to lead, at least temporarily, is an office that I knew very well as a kind of historical and organisational structure. And now I'm there every day working with an incredible team of about 140 people on everything from AI to quantum science to climate innovation and energy science to thinking about, you know, how we get more diverse and innovative STEM fields.
You know, I think I'm the second woman ever to lead the office, on an interim basis, and certainly the only person of colour ever to lead the office, you know, so I do understand that my appointment by President Biden in this role is historic. And I also bring with me, I think, an appreciation in the work of science and technology policy both for the ways that technology can be so net positive, very much productive and generative in people's lives, and the ways that technology, science and innovation can cause harm, you know, historically and in the present, for certain communities, particularly disadvantaged communities, underrepresented communities like the African American community, of which I'm a member.
So the great thing about this administration, which on day one issued an executive order on equity and on, you know, the work of government being used to drive equity in American society, is that there doesn't have to be sunlight between thinking about science and technology policy and thinking about issues of equity and inclusion and democracy. There's a real understanding and an attempt to draw these things together.
So, I think it's really the particular vision of this Administration that really made it possible for someone with my particular interests and trajectory to be a part of science and technology policy making in this really wonderful moment.
PROF. EDWARD SANTOW: I think that's a wonderful way of setting up the conversation and I think what you've done is you've highlighted some of the things that we're all really excited about when it comes to the rise of AI and other new and emerging tech, but also some of the things that we should be fearful of.
So I'm going to just lean in more to that secondary category first. For people who are new to this area, how can unfairness or even discrimination arise when artificial intelligence is being used to make decisions?
DR ALONDRA NELSON: Yeah, that's such a great question because, you know, we see, and I know probably many folks here have been following or have used DALL·E 2, these examples of the use of data science brought into machine learning and artificial intelligence that are magical, that are entertaining, and some, like DALL·E 2, that are even beautiful. But, you know, AI as we use it in day-to-day life and as it really impacts people's lived experience is often a lot more brittle; it's not as elegant and as beautiful as something like DALL·E 2, and so we have a long way to go, and part of how that manifests itself are forms of, you know, discrimination.
I mean, part of how we're using AI is as a kind of robot gatekeeper to various kinds of resources and services in, you know, Australian society and in US society. And, you know, because the data that we use to train machine learning and AI is often, you know, historical data, this kind of historical precedent can embed past prejudice into these technologies and enable present-day discrimination. So as much as we would like to think that there's a kind of ultimate objectivity that comes with artificial intelligence that frees us from that, we're finding that it increasingly bakes it in, bakes in discrimination and bakes in, you know, discriminatory patterns from the past.
Verity referenced a few of these in her introduction, you know, that there are hiring tools that learn from a company's prior employees. So we might think about a very gender-segregated field like computer science, in which past successful employees in a place like the United States or a place like Australia have almost always been men and, in many instances, men of European descent. So if you're trying to train an algorithm on that historical data about what success looks like for this particular field because you want to have "objective recruiting", that means that women computer programmers, for example, will potentially fall out of what that algorithm understands to be somebody who would be predicted to be well qualified for this particular role.
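To make the mechanism Dr Nelson describes concrete, here is a minimal, hedged sketch in Python using entirely synthetic data (the variables, group sizes and effect sizes are invented for illustration and are not drawn from any real hiring system). It shows how a model trained on historically skewed "hired" labels can score two otherwise identical candidates very differently.

```python
# Illustrative sketch only: synthetic data, hypothetical hiring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)             # 0 = male, 1 = female (synthetic)
skill = rng.normal(0.0, 1.0, n)            # skill distributed equally across groups
# Historical "hired" labels favoured men regardless of skill.
hired = (skill + 1.5 * (gender == 0) + rng.normal(0.0, 1.0, n)) > 1.0

features = np.column_stack([gender, skill])
model = LogisticRegression().fit(features, hired)

# Two candidates with identical skill get very different predicted scores.
print(model.predict_proba([[0, 0.5]])[0, 1])   # male candidate
print(model.predict_proba([[1, 0.5]])[0, 1])   # female candidate
```

The specific numbers are not the point; the pattern is. The model simply reproduces the historical preference it was trained on, which is the "baking in" of past discrimination described above.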
Obviously, there are issues around housing. You know, mortgage approval algorithms, you know, are used to determine creditworthiness, and in the United States they often use home zip codes, and on the face of it, you know, a census tract or a zip code should be a kind of neutral data point that we can place into an algorithm to help us know more or better. But because of the, you know, extensive generation upon generation of housing discrimination in the United States, part of what zip codes do is that, you know, they can be correlated with race, they can be correlated with poverty, they can be correlated with forms of historic racial segregation and ethnic segregation, and they really extend decades of housing discrimination into the digital age.
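A related, hedged sketch (again with synthetic data and invented numbers) shows the proxy problem: even if a protected attribute is excluded from a model, a supposedly neutral feature such as a zip-code cluster can remain strongly correlated with it, which is one simple thing an auditor can check before trusting the feature as "neutral".

```python
# Illustrative proxy-feature check on synthetic data; not any real lending system.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                     # protected attribute (synthetic)
# Residential segregation assumed: group predicts zip-code cluster 85% of the time.
zip_cluster = np.where(rng.random(n) < 0.85, group, 1 - group)

# Even if `group` is dropped from the model, `zip_cluster` carries much of it.
corr = np.corrcoef(group, zip_cluster)[0, 1]
print(f"correlation between zip cluster and protected group: {corr:.2f}")
```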
So there's been, you know, other examples of this, but I think that, you know, the challenge we face, and I think the opportunity for innovation actually, is facing these challenges head on and understanding that the technology alone is never going to solve some of the big problems and challenges that we need it to solve, and that we can also think about innovation in a way that leans into equity and democracy. So that if we truly think that this technology, this technique, is innovative, there are things that should come with that, and that should include being maximally beneficial and, you know, minimising harms to folks.
PROF. EDWARD SANTOW: That's an incredibly useful description. I want to just kind of zero in on a point you just made. I think what you said was technology alone won't solve all of our society's problems and part of the issue we have to wrestle with there is that technology exists as part of decision-making systems.
So very briefly I want to give a personal story about how I as a human rights lawyer first saw how artificial intelligence can threaten human rights and I saw this as a lawyer. The situation arose over 10 years ago. The state police here in New South Wales were using an algorithmic tool to create a list of young people who might be at risk of, to use their terminology, "descending into a life of crime". So, the police targeted the kids on this list. Police officers would come to their homes between midnight and 6am, officers would check on these kids at school and at work and, understandably, they hated it. The kids hated being on this police list.
Let's put to one side for a moment whether that is even an acceptable approach to policing, but I want to focus on which kids were on that police list because over time we noticed that literally all of our clients had dark skin, every single one of them. Later it emerged that 55% of the young people on that list were Indigenous, even though less than 3% of the population here is Indigenous. So that seemed a clear case of precisely the phenomenon that you've just described, where the police technology reflected and then entrenched an historical injustice.
Again, being very personal and not particularly professional, a decade on when I think about that, I don't really think about it as a lawyer, I think about it as a human. I'm sickened. These were kids. Some were as young as 11 or 12. Many had never been convicted of anything serious and yet this system, this algorithmic system, resulted in really significant intrusions in their basic rights. To some of them it would have been the defining experience of their life in the worst possible way. They were traumatised, it was terrible.
Have you come across a particular situation in the US that keeps you awake at night? Is there something like this that is your kind of origin story about these concerns?
DR ALONDRA NELSON: Yeah. You know, sadly, there's been too many and I think, you know, thank you for sharing that story and also thank you so much for the work that you did at the Australian Human Rights Commission. I'm just such an admirer of you and your work, that work and this new work as well.
You know, sadly, in the United States, particularly with regards to black and brown communities, we know that there is this history often of disproportionate, you know, negative impacts of policing and, you know, the challenge that we face in this moment is that it's carried into the sort of digital age, so you know, and there are many examples and we are trying to think about these in our policy making.
For example, in Chicago there was an algorithm used by police that reused previous arrest data and the outcome of this was that it repeatedly sort of sent police to the same neighbourhoods again and again, predominantly black and brown neighbourhoods, even when those neighbourhoods didn't have at the moment the highest crime rates. So again, this is that kind of historical into the present challenge.
You know, we also have, you know, some of the challenges around the historical data, but the other issues we really face that keep me up at night, to use your phrase, are really about, you know, privacy and about consent. So, you know, the challenge that we are facing now is growing kinds of surveillance in communities that may not feel empowered to be vocal about asking questions. So we've had a few instances in the United States in which facial recognition systems have been installed at entrances of housing complexes, you know, to assist law enforcement, to monitor when people are coming and going.
But the output of that, or the outcome of that, is a kind of continuous surveillance of certain kinds of communities, in this case in a public housing authority. So already, you know, poor, under-resourced communities are really subject to that kind of persistent surveillance and sometimes, because people are poor, we think we don't have to ask their permission. Like, we would never think of doing that kind of automated surveillance in, you know, more well-off communities without consent. So there are consent issues that we need to think about as well.
And, you know, there's been so-called predictive policing systems that claim to identify or "predict" people who could be aggressors, and in these cases, you know, in some instances we have the privacy challenges, in some instances we have the kind of historical precedent as a proxy for the present or the future, and in some instances we just have the kind of black box, in which, you know, communities are unable to ask for redress or to ask questions or to ask for an explanation about how a certain system reached its conclusion.
So I think there are a few, you know, we talk about sort of AI and civil rights in the United States context or human rights internationally or democracy issues. You know, it is this kind of Gordian Knot of lots of issues that we care about in government, including issues of consent, of surveillance, of privacy, and of equality.
PROF. EDWARD SANTOW: I think that's a great description, a Gordian Knot. So now you and your colleagues in the Biden Administration are responsible for solving these problems. Can you tell us a little bit about how the Biden Administration is approaching these problems of AI?
DR ALONDRA NELSON: Well, I would say we and the world, including you and your colleagues at the new Human Technology Institute, are going to have to wrestle with this, right? These are, you know, big governance challenges.
So I would say a few things. I mean, I think government can do a few things. When I came into the Biden-Harris Administration, so this was late January of 2021, the prior Administration had just stood up what's called the National AI Initiative Office, and so it fell to myself and my colleagues to stand that office up and to really implement the sort of framework that had been passed by Congress, which included a lot of work for government around, you know, responsible AI and really tasking departments and agencies and the Federal Government with, you know, working together to define what that is in practice and to come up with kind of discrete ways that different agencies and departments would move that forward.
So that National AI Initiative Office sits within the Office of Science and Technology Policy that I lead right now. So we've got that project, which is really trying to coordinate and understand and map and also leverage the current and potential uses of artificial intelligence for government.
And there's another piece of that work, which is what's called a research resource taskforce, which is trying to broaden access to resources for automation. Certainly in industry, part of the challenge that we face is that there's a very often homogenous group of folks who are making algorithms, who are designing automated systems, and who have access to the kind of compute and data resources that really are driving the sort of AI turn, and this is an attempt to really, you know, small-d democratise those resources and make them available to researchers at smaller institutions, emerging institutions. And, you know, the sort of theory of the case here is that, you know, we can do a better job with all sorts of technologies if we have more people at the innovation table, people who, you know, think about some of the challenges we face around discrimination, who maybe even have experienced it firsthand, as part of the process of creating design parameters and creating visions for what technology looks like in the world. So that's part of what we're doing.
I think, you know, government can also be, you know, a bully pulpit, you know, we can show leadership by I think offering a vision of what we want technology and science to sort of do and be in the world and, you know, we've been really excited over the last few weeks because we've had some historic legislation pass in the United States, including something called the CHIPS and Science Act, and there's also been what's called the Inflation Reduction Act, which has the biggest US investments in sort of climate science and energy innovation, energy technology, you know, ever in the history of the United States. And there's been a few other pieces of legislation as well.
But taken all together, I mean, what they do so powerfully is sort of say that, you know, the Biden-Harris Administration has a vision for how science and technology and innovation can be used in the world and how it can, you know, create jobs, how it can be used to support institutions that aren't typically supported at the same levels as other, you know, larger institutions, so be those, you know, small businesses versus large businesses or, you know, minority-serving institutions as we call them in the United States or historically black colleges and universities, and making sure they're a part of this new S&T, science and technology, kind of innovation ecosystem as it's being built out.
So part of that kind of leadership and vision piece is different from a kind of regulatory piece and, you know, I think at its best government can offer us, often through legislation but not exclusively, these sorts of visions for what, you know, technology might look like at its best. And, you know, part of what we've been trying to do at OSTP is develop what we've been calling, or what's become called, the AI Bill of Rights. Actually we picked another name for it, and through a lot of extensive consultation with the interagency in government, you know, with the American public, with folks in industry, that's become the name that folks have called it.
And we're really trying to, you know, over the last year think about a way to design and develop the use of automated systems in ways that ensure that technologies, you know, really promote and reflect and respect kind of democratic values. And what's great about the Bill of Rights framework, which is one of the kind of foundational documents of the United States, is that it is these high-level aspirations, you know, that we should expect and we can envision through our aspirations a world where systems are safe and not harmful, where, you know, algorithms aren't developed and used and deployed in ways that place the American public and other publics at risk, that algorithms are used in a way that preserves our privacy, preserves, you know, our data, and that our data is used in accordance with our wishes.
Those are hard things, but I think, you know, as we're building out these new systems, and let's be very clear, you know, DALL·E 2 notwithstanding, a lot of automated technologies and AI and machine learning are very much in their nascent stage. So what a tremendous opportunity to be able to work upstream to create, you know, systems, parameters, conversations and ideals for the technologies and for how people should be treated and engage with them, as opposed to, you know, waiting for the kinds of downstream challenges and bad outcomes for certain communities, like the examples that you talked about, Ed, and I shared, to then be our response. So, you know, I like to think that we could take this as an opportunity to really be transformative in how we think about the governance of technology.
PROF. EDWARD SANTOW: That's really informative and it provides a really interesting segue and I'm going to draw on some of the questions that are starting to come through the chat here. It's sometimes said that there's a global arms race in artificial intelligence. Each country I think naturally brings its values to bear. So you described initiatives like the US AI Bill of Rights and the desire to promote democratic values and to bake those values into the way in which, you know, AI is developed and used and regulated in the US.
As Jess Wyndham has pointed out in the questions, the United States is taking a related but different approach from the EU. What do you think is the role of countries like the US and Australian Governments in cooperating in this really competitive environment, as I say, this global arms race in AI, and if you think that there is a role, what does good cooperation look like?
DR ALONDRA NELSON: So there has to be cooperation, you know, and I think, let's, you know, be positive and look at this webinar as a kind of example of that. And of course we need to, you know, a lot of where the sort of big, powerful automation, and we can talk about, you know, lots of different technologies, not only AI, but a lot of where the big, powerful technologies will come from is organisations that are multinational, multinational technology, you know, innovation companies, and the impacts of them are global, and so none of these issues really abide, you know, nation state borders, and so we really can't have our kind of cooperation, you know, be within those borders as well.
Of course there will be, you know, pieces that are very distinct to particular, you know, historical communities or particular countries. I mean, you know, a concept like the Bill of Rights is very much about, you know, harking back to the kind of founding ideals of American, US American, society and so, you know, we all will have I think those particularities.
But I think in the space of AI there are a couple of really great examples in which, you know, neither the US nor Australia nor the EU are really driving everything. You know, one of these of course is the OECD, which is a coalition of nearly 40 nations, you know, that are committed to democratic principles and are trying to work through a few initiatives focused on AI.
So our office, people from our office, have been really proud to participate in the OECD work as part of its kind of network of experts and, you know, working together with other countries, other democracies, to think about how, both in our collaboration and in the technical design of technologies and use cases, values like fairness and transparency and safety and accountability can be, you know, agreed upon and deployed. So we've been really pleased to sort of be involved in that work, including the standing up of this new framework that was launched a couple of months ago at the International Conference on AI in Work, Innovation, Productivity and Skills, which is this kind of risk system framework.
So, you know, I think these kinds of collaborations provide us opportunities to see what's working in other countries or not working, think about use cases and think about ways to collaborate where we can, even while, you know, having to abide by one's particular national laws, policies and politics.
I think, you know, another example of the collaboration in the AI space in particular is of course the Global Partnership on AI, which Australia is part of. And, you know, this is a cross-sector, you know, multinational but multi-stakeholder kind of initiative, with, you know, philanthropy and academia and industry trying to think about applied activities and research with regard to kind of AI priorities, that really builds out of this kind of larger OECD space.
So I think, you know, for people here who are kind of international lawyers or, you know, political scientists or the like, I mean, there's a lot of shifting happening right now in our multinational organisations, but it's also good to see, you know, even as things like the UN, you know, continue to try to innovate, that there are these new kind of multinational formations that are also at the same time I think helping us to think through things.
And then I would just offer one collaboration that we're doing with the UK, because it's open and people can apply for it: a grand challenge on what we're calling democracy-affirming technologies. So last fall, last winter, President Biden had a Summit for Democracy and part of that was standing up this challenge, and so right now I think the prize is up to a million dollars for, you know, innovation around developing safe and effective and equitable systems that really preserve privacy and the use of technology, so this can be everything from differential privacy to other kinds of technical or even theoretical, you know, systems that might work.
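For readers unfamiliar with the term, differential privacy, one of the techniques Dr Nelson mentions, can be sketched in a few lines. This is a toy illustration of the Laplace mechanism on a simple count query, with invented parameter values, not a production-ready implementation.

```python
# Toy sketch of differential privacy: a count query with Laplace noise.
import numpy as np

def dp_count(records, epsilon=1.0, sensitivity=1.0):
    """Return a noisy count; smaller epsilon means stronger privacy, more noise."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(records) + noise

records = list(range(1000))                 # stand-in for individual records
print(dp_count(records, epsilon=0.5))       # close to 1000, but never exact
```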
So I think, you know, there's a lot of talk more and more these days about, you know, science and technology diplomacy, and, you know, I think it's a pretty important tool for collaboration. A lot of the things that we want to do with innovation can be competitive, but also some of the big problems that we want to use technology and innovation to solve are global problems, climate change, for example, you know, really having clean and green energy, and that will require healthy competition, but also quite a lot of cooperation.
PROF. EDWARD SANTOW: That's terrific. So when we talk about cooperation, we've been focusing for a moment at that high level of cooperation, but I want to circle back to something you talked about before about people often being excluded from the room, so to speak. In a moment you'll have a sneak peek from Nick Davis, the co-founder of our new Human Technology Institute, and as Verity said, the institute applies a sociotechnical approach. At its core, what we mean by that is bringing together technical experts, people responsible for decision-making systems that use AI, and the communities affected by the use of AI, and looking really carefully at those groups and making sure that no demographic groups are being excluded. Is that the kind of approach that you would support and have you seen it being done well?
DR ALONDRA NELSON: Yeah, absolutely. I think that's brilliant and I think it's the right approach, so, as you can tell by my smile, I'm really excited to hear that that's the approach that you all are taking and look forward to following and staying in conversation with you.
You know, I think the sociotechnical approach for dynamic, evolving systems is how we have to think about policy making, about governance, about sort of ethical frameworks for how we think about new and emerging technologies. I've been really encouraged, so the US Department of Commerce has this body probably known to many of you here called the National Institute of Standards and Technology, or NIST, and NIST goes back to the 19th century, I think at least, in the United States, and it was, you know, about measures like how many bags of grain equals a pound. It was this organisation that created the kind of measurements and standards that allowed us to have commerce and allowed us to agree, you know, certainly as a nation and as a world, about weights and measures and allowed trade and all sorts of other things that we take for granted in modern society to take place. So this is, you know, a historic organisation that has dealt with, you know, I think almost binary, kind of one-to-one questions, like we measure something and we create a standard around it.
So AI has been this really, I think, wonderful challenge for NIST and they've really risen to the occasion, and right now they're in the last stages of creating what they call an AI Risk Management Framework. But for them, and this was I think really a big leap for the organisation, they really moved into the sociotechnical framework. So, you know, creating standards for technology is not just about the data and the algorithms, it's also about human and societal factors, or how AI systems are used by people in the real world and how the development of these systems can amplify, you know, biases that are historical in the data, that are societal, that are personal, and an understanding that the sort of challenges or biases or potential problems of automated systems really require that we pay attention to specific use cases, to design parameters, but also to human and social and organisational behaviour.
But it's really hard to do. I mean, you know, it's easier to say, you know, this bag filled with grain equals this many pounds or this many kilograms than it is to sort of think about how we should create the best possible, most rigorous kind of standards for technologies that are really complex and computational and often have humans in the loop.
So I think we will find that, you know, there's a quote from John Lewis, who was a civil rights leader and also a legislator in the United States who died just fairly recently, but he would say democracy is a practice, you know, that it's not a static thing, that democracy is a practice. And I think as we want sort of dynamic technologies to be democratic, we need to think about it as a process and a practice, as opposed to something that we will achieve, you know, and that we can stop.
So I think a sociotechnical approach is one that, you know, is going to be essential for kind of solving the challenges that AI might pose or be essential for allowing us to leverage the benefits that AI may offer, but also will be, you know, essential for thinking about, you know, how humans matter and how human organisations and human behaviours are always a part of the technologies we create and use.
PROF. EDWARD SANTOW: I think that's fantastic and what it also raises is what are some of the preconditions in helping people to engage with this massive technological change that is happening all around us. Again drawing on some of the questions that are coming through, one of the biggest issues is AI literacy, because you mentioned, you know, the wide variation in the US in how much citizens know about AI, and there are programs, as my UTS colleague Dilek has pointed out, such as the famous Finnish Elements of AI program, that are designed to kind of promote literacy in AI. Is that the kind of initiative that the US is supporting in your own country?
DR ALONDRA NELSON: Sure. I mean, I think part of what sits with OSTP more generally is kind of, you know, really trying to revitalise and strengthen and support sort of STEM learning and, you know, the STEM workforce and the STEM kind of, you know, sort of pathways more generally.
So, you know, it's certainly the case that we understand and appreciate that we need more fluency around technology, computer science, data science, you know, math.
But I will say that it is the responsibility of, you know, institutions like UTS, institutions like the US Federal Government, to inspire people and to provide pathways for them to sort of become more fluent, you know. And I think that we've had processes in the past that just sort of, you know, shook our fingers at people and said, "You should learn more math", or maths, as you might say, as opposed to, look at the amazing things that you can do if you want to think about, you know, this particular problem or this particular approach, or what do you care about in your society. You know, you care about climate change. You know, what do you need to know to really be able to think about that and to really address the climate crisis, and to sort of bring the kind of fluency conversation really into the kinds of lives that people want to lead, the things that they want to do, and the like. So I think there's a more expansive thing there.
That said, you know, in democracies everyone should have something to say about even sophisticated technologies like artificial intelligence and machine learning, and, you know, I think that the challenge that we face is that we don't want to create democratic societies in which there has to be, you know, like a literacy tax to be able to participate in the self-governance of your community, and it really is incumbent upon industry and upon government and upon the education sector to give people I think tools and sort of levers and anchors with which they can understand the technology at a high level without having to be a data scientist.
You know, I think the use cases are important because it's where kind of the rubber meets the road and you are dealing with how they impact people's real lives and their access to like resources and services, as I said before, but they're also important because they say to people, you know, who in some instances may not even know this is how automation is operating in your life, and I think one of the central problems of governance and of science and technology policy making today is keeping democracy thriving at a time when there are complicated systems and technologies that only experts understand at a very sophisticated level, but we all need to understand at a high level for the health of, you know, democratic societies.
So, you know, I think I'll end where we sort of began. I mean, those who experience the worst effects of algorithmic bias and discrimination are often, you know, Indigenous communities, black and brown people, folks with low income, LGBTQ communities, and these are groups that are not necessarily engaged in the design of automated systems at the earliest stages. But, you know, as a public official and as somebody who works in government, these folks and all sorts of other folks in American society, you know, their diverse concerns and experiences should inform the design of these systems and should inform the governance of these systems. There should be a participatory democracy around technology, technology assessment.
We have to build that. So I'm not saying that, you know, as if it's an easy thing to do or to suggest that it already exists, but I think we need to get to a place where you don't have to be an expert to participate in democracy or have an opinion about complex technologies. Otherwise we face, you know, a future for American society, for Australian society in which literally a smaller and smaller number of people get to make, you know, like impactful decisions about our world and that's, you know, not the world that we want to live in.
PROF. EDWARD SANTOW: Absolutely not. So maybe that's the perfect time to go from the world that we don't want to live in to the world that we're trying to build. So I'm going to hand over to my colleague Professor Nick Davis.
Verity has already introduced him, so I'm not going to reintroduce him, I'm just going to say something personal. Nick has been based overseas for the better part of the last two decades. We're incredibly lucky to have someone of his calibre move back to Australia and to be leading some of this important conversation. I mean, perhaps more than most people around the world, Nick has been right at the forefront of shaping this global conversation about what AI and other emerging technology will mean for us as humans. So I'm delighted to hand over to Nick as one of the co-founders of the institute.
PROF. NICHOLAS DAVIS: Thank you very much, Ed, and thank you so much, Dr Nelson, for joining us and for not just the insight, but the call to action about us pursuing both a process and a practice, and one that's incredibly important not just on our own behalf but on behalf of so many others who aren't privileged enough to jump on a Zoom webinar in the evening in DC, as I know you are, Alondra, thank you for that, and the morning here in Australia, as most of you dialling in are.
I just also want to recognise and mention the fact that I'm joining you from Canberra, where I'm speaking from Ngunnawal land, and I acknowledge the Indigenous leaders past, present and emerging here. And I really just want to, in addition to thanking Alondra and Ed and the team and all of you for joining us, say just a few words about the Human Technology Institute here at UTS in the hope that we can work with many of you in the future on exactly the topics and the ideas that Alondra has brought up in conversation with Ed.
I might kind of frame this by saying that this event is really a sneak peek into our philosophy and our work and is a bit of a foreshadowing of what you can expect from our projects and partnerships. But we will have a big formal launch in October with all of our fantastic partners, and I'm super excited to share with you then in much more detail our work on topics such as a model law on facial recognition technology and AI corporate governance, as well as introduce you to many of the people doing the work that you don't see here on the screen today, our amazing team behind the scenes.
But until then, I think it's worthwhile emphasising that the Human Technology Institute as we see it is really dedicated to building a future that applies human values to new technology and in that sense, this is an institute that exists for public benefit to bring the best in academia, industry, government and civil society to demonstrate really practical ways that those values and ideas that Alondra has outlined for us today can and should be really practically embedded into all the different ways that we shape technology, in particular so that we can be individually and collectively more keenly aware of and in control of how technology shapes us.
I think it's really important in webinars like this to pause for a minute and appreciate how important technology is to us as a species, and this is a kind of zoom out to the general and then really zoom in to the personal. But when you think about this general sense, human beings are technological beings. When archaeologists look for evidence about whether or not humans were present somewhere or how we lived, they look for tools, they look for tool use and evidence of those artifacts, because the way we use tools shapes how we live.
And in fact this idea that we as humans should be clear-eyed about our relationship with technology, that's a deeply personal thing for me and I think it is for many of you as well because, as Ed just mentioned, like many of you too, I had the experience of living far away from family and friends for many years during a period that actually started with sending and receiving letters by snail mail, then progressed through emailing from internet cafes through to the advent of text messaging between games of Snake, and ended with video calls every couple of days with parents and colleagues and friends around the world.
And amazingly, as the convenience and quality of those long-distance connections increased, the cost effectively fell to zero like magic across that period, and that seamless connection to digital systems is something that I think everyone on this call particularly will take for granted, despite the knowledge in the back of our heads that there is still about 45% of the world without any effective access to the internet.
But it's not an overstatement at all to say that I owe the strength of my personal relationships to people like Ed and many of you who are dialling in, and much of my professional career to the ability to leverage digital technologies in really useful ways. And yet at the same time, even for someone speaking to you from a point of privilege, a white middle-aged male living in Australia, a university professor, I can see really clearly and keenly feel how those same systems that allow for high-quality connections at distance threaten the best parts of ourselves in our day-to-day lives. Whether it's the constant struggle to disengage from technology or worrying about the content that our children and our friends are consuming in myriad ways, I think we can feel ourselves and our behaviour being carved up, parcelled out and influenced in really fundamental ways.
I'm reminded here of what AI researcher Stuart Russell says about how algorithms succeed in achieving their goals: recommender algorithms that are designed to maximise clicks and engagement don't simply uncover what we like and then serve that up to us. The way they succeed is to shape our preferences to make us more predictable, and my family tells me almost every day that being predictable in the way that maximises my engagement with my smartphone is a terrible way to live.
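As a hedged illustration of that dynamic (with invented click probabilities, not any real recommender system), a very simple bandit-style loop that exploits whatever gets clicked most will quickly concentrate its recommendations on the most "engaging" item, regardless of whether that item serves the user well.

```python
# Illustrative sketch only: an engagement-maximising loop narrows what it shows.
import random

click_prob = {"useful": 0.05, "sensational": 0.30}   # assumed user behaviour
counts = {item: 0 for item in click_prob}
clicks = {item: 0 for item in click_prob}

for _ in range(10_000):
    if random.random() < 0.1:                          # occasionally explore
        item = random.choice(list(click_prob))
    else:                                               # otherwise exploit best click rate
        item = max(click_prob, key=lambda k: clicks[k] / max(counts[k], 1))
    counts[item] += 1
    clicks[item] += random.random() < click_prob[item]

print(counts)   # the vast majority of recommendations go to the clickier item
```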
Going back to this point about being framed by incredible privilege, if you recall the stories that Ed and Alondra have told during their discussion, technology systems and algorithms are being rolled out purposefully for truly consequential decision making often on people with very little awareness of how that's occurring and with consequences that irrevocably alter people's lives for the worse.
And some of you might be thinking, coming out of this webinar and as part of the discussions and reflections that you all have: well, of course, all technology comes with good and bad, that's part of the deal. We've experienced this throughout history, where we accept a certain number of road deaths for the convenience of being able to drive and move quickly between Canberra and Sydney, as I often do. But the challenge with that view is that it overlooks three really critical points that Alondra and Ed have touched on during today's discussion. The first is that the technology we're building all this on is now a series, an infrastructure, of digital networks that are able to scale and affect millions if not billions of people in a short order of time. And Ed mentioned here in Australia the Robodebt scandal, and that's an example of how a small group of people can make reckless or poor decisions around the way a technology works and affect literally millions of people, with incredibly serious effects for thousands and tens of thousands, and I'm sure many of you know someone who was personally affected by that and was part of the class action or otherwise affected by the errors in those systems.
And in fact that was a critical premise of my work on the Fourth Industrial Revolution: the fact that once a significant portion of the world's population takes digital systems for granted, we end up with a fundamentally different set of underlying infrastructure that demands different forms of governance, which is why the work that Dr Nelson, the OSTP and others are doing around the world is so critical.
But second, as Alondra pointed out, it's really hard to know when, how and why many of these impacts are occurring, and there's a shortage of detailed stories about how communities, particularly vulnerable communities, are being impacted by emerging technologies. That's why the work of advocates such as PIAC and the Human Rights Commission, and of researchers such as Virginia Eubanks with her book Automating Inequality, is so critical here.
And third, when we're talking about human rights, fundamental rights, as Ed has put it before, it's incredibly dangerous to view the serious and negative impacts of technology as being part of some kind of balanced investment portfolio, because the bad bits of emerging technologies are always experienced by people and communities with less power. So we should be doing everything we can to minimise those outcomes, whether they come from error-prone systems, from recklessness, from maliciousness or simply from not taking the time to think about how people might be affected.
Which brings me back to the Human Technology Institute based at a leading University of Technology, but focused entirely on action and impact. As Ed has pointed out, a key point of leverage for us is in our sociotechnical approach, but we see focus in three areas as being really particularly consequential right now and the first is that there is a critical shortage of skills around AI and emerging technologies, but that critical shortage is not just technical, it's strategic, and so the Human Technology Institute is really working hard to support organisations, build engagements that go to what academic and author Emmanuel Mesthene wrote, which is that dealing with the challenges and opportunities of technologies requires us to undertake the hard work of becoming wise. I often term this as the minimum viable understanding that each of us needs to work with these new technologies safely and productively, whether you're on your own on behalf of your family or working in a large organisation.
And second, there's a huge opportunity in policy. This is the decade of tech regulation where jurisdictions around the world are updating rules around privacy, artificial intelligence, surveillance. Quantum computing is on the horizon as something that really demands serious thinking now and, as Alondra has said, this is really hard to do. It takes deep reflection, it takes technical standards, it takes international cooperation, but probably most importantly, it requires much broader stakeholder engagement and sensitivity to how these technologies are really being used. And I'm really proud that our partners and collaborators here at the Human Technology Institute involve so many great thinkers working around what it means to design technology policy well and the response that we've received already through our projects has been so gratifyingly productive and we're really grateful for our work with those partners and projects.
And third and finally, as someone who has led innovation and technology efforts across a number of organisations, yes, we need skills, yes, we need policies, but we need tools to make all of this real inside our organisations. So if you're an engineer or a developer, you're probably crying out for more and better tools to assess algorithmic bias, to help you and your development team understand and anticipate the potential misuse or inadvertent use of your products. But we also need those tools at the organisational level, governance frameworks, reporting frameworks and ways of collaborating that make all this possible.
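One very small example of the kind of tool Nick describes, sketched here with hypothetical outcome data, is a disparate impact check (sometimes called the "80% rule"): comparing positive-outcome rates across groups is one of the simplest bias audits a development team can run.

```python
# Hedged sketch of a basic bias-audit metric on hypothetical decisions.
def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs reference group."""
    def positive_rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return positive_rate(protected) / positive_rate(reference)

decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]         # 1 = approved (made-up data)
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}")       # below 0.8 is a common red flag
```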
So the Human Technology Institute, which is obviously not just Ed and me and Verity but so many more that we're excited to tell you more about in October, is really here to help pioneer, and to work with you to develop and use, emerging technologies to produce systems that are more fair, more accurate, more fit for purpose and more accountable.
And as we finish this webinar and you transition to your next meeting, it's not lost on us the fatigue and cost of being so intense in our use of technology, not to mention the implications that Alondra and Ed have talked about. So we really hope that we can support you in your journey as well, as humans and as friends, and please don't hesitate to reach out to us to engage more deeply.
Maybe with that, Verity, will you take us home?
PROF. VERITY FIRTH: Well, I think I agree with everyone that that was a fantastic webinar. Thank you so much to Dr Alondra Nelson. It was just fascinating to hear her insights and experience, and I think a wonderful soft launch of the Human Technology Institute.
So we'll be launching properly in October and I'm sure the invites will go out and hopefully we'll see you all there, but thank you again for participating today. This has been recorded, so the link will be shared with those who registered, and you can share it even further with your friends, families and networks. Thanks again.
If you are interested in hearing about future events, please contact events.socialjustice@uts.edu.au.
Find out more about the Human Technology Institute.
Democracy is a practice. And we want dynamic technologies to be democratic, so we need to think about it as a process and practice, as opposed to something that we will achieve. – Dr Alondra Nelson
Those who experience the worst effects of algorithmic bias and discrimination are not necessarily engaged in the design of automated systems. These folks and their diverse concerns and experiences should inform the design and governance of these systems. There should be a participatory democracy around technology assessment. – Dr Alondra Nelson
Speakers
Dr Alondra Nelson leads the White House Office of Science and Technology Policy and is a Deputy Assistant to President Joe Biden. As a scholar of science, technology, medicine, and social inequality, Alondra has contributed to national policy discussions on inequality and the social implications of new technologies, including artificial intelligence, big data, and human gene-editing.
Prof Edward Santow is Industry Professor – Responsible Technology at the University of Technology Sydney and Co-Director of the Human Technology Institute. Ed leads UTS's new initiative on building Australia's capability on ethical artificial intelligence. From 2016-2021, Ed was Australia's Human Rights Commissioner, where he led the Commission's work on artificial intelligence and new technology, among other areas of responsibility. His areas of expertise include human rights, technology and regulation, public law, and discrimination law.
Prof Nicholas Davis is Industry Professor – Emerging Technology at the University of Technology Sydney (UTS) and Co-Director of the Human Technology Institute. From 2015-2019, Nick was Head of Society and Innovation and a member of the Executive Committee at the World Economic Forum in Geneva, Switzerland, responsible for developing the theme of the Fourth Industrial Revolution and overseeing the development of cooperative emerging technology policy efforts around the world.