Recording: Uses & misuses of facial recognition technology
New technology can improve our lives – but there are also profound risks and threats.
This phenomenon is exemplified by the rise of facial recognition technology. For example, using this tech to unlock your smartphone is relatively low risk, but its use in policing could cause significant harm to marginalised groups.
Research has shown this technology tends to be far less accurate in identifying people with dark skin, women and people with a physical disability.
The risk of overuse is also significant, because this can result in our sliding into a society with the infrastructure for mass surveillance—a profound challenge to our right to privacy.
In this session, Aaina Agarwal, Dr Niels Wouters, Amanda Robinson and Duncan Anderson joined Edward Santow to discuss whether the potential benefits of facial recognition technology could outweigh the risks.
EDWARD SANTOW: Welcome to those of you who are joining the webinar. We're going to be starting in about 30 seconds. Hello, everyone. Thank you for joining us for today's event. All of us beaming into this webinar from Australia are doing so from First Nations land. I acknowledge the Gadigal people of the Eora Nation upon whose ancestral lands the UTS City campus now stands. I pay respect to the Elders past, present and emerging, acknowledging them as the First Nations owners and their ongoing connection to this land, waterways and culture. I particularly want to acknowledge the Gadigal people as the traditional custodians of knowledge of the land on which UTS stands.
My name is Ed Santow and I am the Industry Professor ‑ Responsible Technology at UTS where I'm leading an initiative to support Australian business and government to be leaders in responsible innovation by developing and using artificial intelligence that is powerful, effective and fair. It's my great pleasure to be joined today by a distinguished group of speakers, Aaina Agarwal, Dr Niels Wouters, Amanda Robinson and Duncan Anderson. In a moment I'll be introducing each of them more fully but I'm going to start with a bit of housekeeping.
First, today's event is being live captioned so to view the captions, you can click on the CC or closed caption button at the bottom of your screen in the Zoom control panel. We're also posting a link in the chat now which will open captions in a separate browser window if that is what you would prefer. Secondly, if you have any questions during today's event, please type them into the Q&A box which you can also find in the Zoom control panel. If you like a question that someone else has asked, that will push the question up the priority list. Please do try to keep questions relevant to the topics we're discussing today.
So I want to give a bit of background to this webinar. In my previous role as Australia's Human Rights Commissioner, I led the Human Rights and Technology project, which explored the human rights and broader social implications of artificial intelligence, or AI. We said really clearly that new technology can improve our lives. We've seen AI enable extraordinary progress in important and diverse areas from health care to service delivery but there are also profound risks and threats. That phenomenon, that idea that new technology is double‑edged, bringing opportunities and risks, is exemplified perhaps most by the rise of facial recognition technology. Many of us now take this tech for granted because we use it to unlock our smartphones and other devices, something that carries some risk but relatively low risk.
The uses of facial recognition, however, are limited only by our imagination and there are some more risky areas of facial recognition; for example, when that tech is used by the police to identify someone suspected of committing a crime. When I was Human Rights Commissioner, I had two particular concerns about this technology. The first is misuse and the second is overuse. So research has shown how facial recognition tends to be far less accurate in identifying people with dark skin, in identifying women and people with a physical disability. If you apply that to a high‑risk context, then that problem can become very serious, even catastrophic.
To return to the example I gave before, if the police wrongly identify someone as a criminal suspect, that can lead to very significant violations of human rights. But even if facial recognition never made errors, the risk of overuse is also really significant because that can result in our sliding into a society that permits mass surveillance, a profound challenge to our right to privacy.
So this is the big question I think for us: how can we encourage positive innovation that benefits our community while guarding against the risks? I am leading a research project at UTS to outline a model law on facial recognition. Our aim is to achieve that balance, to set red lines where facial recognition technology should be prohibited or subject to very strict safeguards. But also to encourage positive, safe innovation in other areas because the law should do both of those things.
So to begin our discussion today, Dr Niels Wouters will give a short demonstration of his creation, Biometric Mirror, which some of you may have already had a chance to play with. You will know, if you have had a chance to look at it already, that Biometric Mirror aims to provoke debate about the legal and ethical implications of facial recognition more specifically and perhaps AI more broadly. The Biometric Mirror application works by taking a photo of your face for psychometric analysis and then giving you an analysis of your personality, including characteristics such as weirdness and emotional instability. We think this is a provocative but hopefully a really useful introduction to some of the issues we will be discussing today.
So before I hand over to Niels, some very brief background. Niels Wouters is a world‑renowned designer, researcher, innovator and co‑creator of Biometric Mirror. Niels is a Senior Design Researcher at Paper Giant and a sought‑after expert on the societal risks and opportunities of emerging technologies. So over to you, Niels.
DR NIELS WOUTERS: Excellent. Thank you so much for that introduction, Ed, and I can only echo the points that you introduced so eloquently. I think as a technologist myself, as someone who is really interested in human‑computer interaction but also as a trained architect, I really think that innovation can only be responsible and can only be positive if at some point we include the public in those conversations. When I started looking into facial recognition technologies a couple of years ago, there was an enormous debate emerging in the Western world where some academics took it upon themselves to develop fairly controversial facial recognition systems and models and I really identified that as an opportunity to bring the discussion around the challenges and the opportunities into the public realm.
So really what we did was set out and develop our own controversial facial recognition system that we conveniently called Biometric Mirror. Now, I am going to share my screen with you and if I am not mistaken, you might have all received a link to try out the Biometric Mirror analysis yourself. If you haven't, don't be concerned. I will share the link at the end of my session as well. Again, as Ed points out, first of all this is research; second, any assumption and any analysis or conclusion that Biometric Mirror presents you with, please, please take that with a very large grain of salt.
We know very well what the data set is that feeds into our system. We also understand and acknowledge that our data set is inherently flawed in so many ways. What does Biometric Mirror do, as this web page says? It is a tool that can be used to assess your personality by simply looking at your face. Look, many of us walk down city streets ‑ or hopefully at least lately we walk down city streets ‑ and we like to look at other people's faces and very often we make immediate assumptions about who these people are. So this in itself is not a new thing. We are just automating that process. I assume that every one of us reads the consent forms that we agree to and that we sign.
Once I click Agree, you will get a second view into my home office and you will see that my face is already identified. Once you're in position and happy with how you appear on the screen ‑ I'll do a bit of a smile ‑ you can press the button at the bottom of the screen to take a photo of yourself. Definitely smiling, that's what we would all do. Biometric Mirror then takes a couple of seconds to upload your face to our facial recognition model and you then see some of these assumptions appear straightaway. Age isn't too far off ‑ about two years, if I am not mistaken ‑ but you also see that Biometric Mirror very quickly turns nasty and starts to analyse traits that, first of all, I wouldn't necessarily want shared with computers, with systems.
Secondly, I am also very conscious that a lot of these traits have nothing to do with my face. Aggressiveness ‑ apparently I am average aggressive. I don't even know what 'average aggressive' means. I am very humble. I did not know that my face could tell that. Unfortunately, I'm only 'average attractive', but again, as an academic, I can live with that assumption. What is also really interesting about Biometric Mirror is that it's not just these assumptions that it makes, but it also ties them to a speculative scenario, and this is actually an interesting one in the context of the discussion we'll be having today. I am indeed perceived to be quite aggressive and 39 years old ‑ I am not sure how that matters ‑ but imagine that this information is automatically fed to police forces to monitor my movements or to monitor some of my movements. Of course, that is very far from the desired scenario. But Biometric Mirror really is a tool to have that conversation with members of the public and take a conversation that is otherwise very technical in nature, or very easily influenced by legal conversations and policy conversations, into the public realm and make sure that every single one of us, regardless of technical proficiency, can participate in that discussion. After having run this study for the last three or four years, I can tell you everybody has an opinion about this technology and everyone feels really included in the conversations that they can have with us about where this technology should take us as a society. As I said at the start, if you haven't had a chance to try it out yourself, head to biometricmirror.com/webinar. However, if you are accessing this panel from your mobile phone, I would suggest you do it after the panel has concluded. Ed, over to you.
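For readers who want a concrete picture of the workflow Niels walks through above (capture a photo, send it to a trait-inference model, receive unverifiable "psychometric" scores, then feed them into a speculative downstream decision), here is a minimal illustrative sketch in Python. It is not the actual Biometric Mirror code: the class and function names are hypothetical, and the placeholder scores are random on purpose, to underline that such traits cannot reliably be read from a face at all.

```python
# Illustrative sketch only: not the actual Biometric Mirror implementation.
# It mimics the flow described above: capture a face image, send it to a
# (hypothetical) trait-inference model, and receive "psychometric" scores
# that could then feed a speculative downstream decision.

from dataclasses import dataclass
import random


@dataclass
class TraitAnalysis:
    age: int
    aggressiveness: float        # 0.0-1.0; no scientific basis when inferred from a face
    attractiveness: float
    emotional_instability: float


def analyse_face(image_bytes: bytes) -> TraitAnalysis:
    """Placeholder for a remote facial-analysis model.

    A real system would upload the image and return model outputs; here we
    return deterministic pseudo-random values to make the point that these
    traits are assumptions, not facts about the person.
    """
    rng = random.Random(len(image_bytes))  # seeded so the same "photo" gives the same result
    return TraitAnalysis(
        age=rng.randint(18, 80),
        aggressiveness=rng.random(),
        attractiveness=rng.random(),
        emotional_instability=rng.random(),
    )


def speculative_scenario(analysis: TraitAnalysis) -> str:
    # The kind of automated consequence Biometric Mirror asks audiences to imagine.
    if analysis.aggressiveness > 0.7:
        return "Flag this person's movements for monitoring."
    return "No action."


if __name__ == "__main__":
    photo = b"fake image bytes"
    result = analyse_face(photo)
    print(result)
    print(speculative_scenario(result))
```

The point of the sketch is the structure, not the numbers: once a score exists, it can be wired straight into a consequence, which is exactly the scenario the demonstration is designed to provoke debate about.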
EDWARD SANTOW: Thank you, Niels. Take a beat to try to process some of that information because it can be quite something, right? It gives us a bit of a window into potential future and lots to chew over there. Before we do that chewing, it is a great pleasure to introduce the other three members of the panel. First Aaina Agarwal is a business and human rights lawyer based in the United States. She works as Counsel at BNH.AI, the very model of a modern law firm, which joins lawyers with data scientists to advise clients on AI. She's also the producer and host of Indivisible, a podcast that explores AI, crypto and human rights. It's a must listen. And formerly she served as the Director of Policy at the Algorithmic Justice League with Joy Buolamwini and others, a leading not‑for‑profit organisation focused on the impact of facial recognition. Welcome, Aaina.
Secondly, we have Amanda Robinson, who is the Co‑Founder and Director of Humanitech at Australian Red Cross, a think + do tank which seeks to ensure that technology serves humanity by putting people and society at the centre. Before the Red Cross, Amanda held senior strategic roles in innovation, digital product development and marketing. Amanda's work focuses on how social innovation and frontier tech can help solve complex social problems. She's also a member of the industry advisory board for the College of Business and Law at RMIT and Chair of the Trust Alliance Pilots and Programs Working Group. Welcome, Amanda.
And last but certainly not least, we have Duncan Anderson, who is the Executive Director, Strategic Priorities and Identity at the NSW Police Force. Duncan co‑chairs the New South Wales Identity Security Council, which works across New South Wales to promote security, privacy and accessibility of identity products and services. Duncan previously held senior roles on national security and law enforcement with the Australian Federal Government, particularly PM&C, the Attorney‑General's Department and Home Affairs. While he was at Home Affairs, Duncan was responsible for the National Identity Security Strategy, including managing the National Document Verification Service and implementing the new face‑matching services. So welcome also, Duncan.
So I'm going to start with some questions. So as we saw with the demo from Niels, discussion about facial recognition tends to move pretty quickly towards some dystopian visions of the world. We're going to get there, don't worry, but before we do, I would like to invite each of you to give an example of how you see facial recognition technology being used well, either now or into the future. Aaina, as you're beaming in from the US, I might start with you to give us a quick example.
AAINA AGARWAL: Yes. So thanks for the question, Ed. I'm not quite sure that I have the positive answer that you might be looking for. I think that, as you mentioned, there is pretty low risk with the one‑to‑one security access applications, so being able to get into your phone and perhaps being able to access a building or an area that you frequent. I think the risk there is pretty low ‑ it's debatable as to whether the convenience and potential security advantages merit the overall use of the technology there, but I don't see too many risks.
But when it comes to positive applications in the broader social context, I am of the opinion that ‑ I can't really think of any. I think that the risks of having the infrastructure of a surveillance state in place are really too great to justify the use or the potential use even in very limited circumstances. So, for example, you might have some checks whereby police would be required to obtain a warrant and there would be thresholds for a level of criminality. So only in instances of very serious crimes, terrorism, child abduction, sex trafficking could facial recognition potentially be used. And that's all well and good and then obviously systems would have to be vetted and made sure that they're fit for purpose and there would also be checks of human review. However, that doesn't get around the fact that you still would require a very robust infrastructure of surveillance in place for that exception to even be there, and I think that the risks of that are really just too strong for me to play out how that could be justified. So I'll leave it there. I don't think that that was a positive answer that everyone was looking for there but that's where I'm at with how I feel about that.
EDWARD SANTOW: No, I think you have given us a good lead there. Maybe Duncan, then, I could pass the baton to you. What do you feel most positive about in terms of facial recognition?
DUNCAN ANDERSON: Thank you, Ed, and good afternoon, everyone. I would also like to start by acknowledging that I'm joining you from the lands of the Gadigal people of the Eora Nation. I think that some terminology is really important in this space, in particular the distinction between face recognition and face classification. The demonstration that Niels provided, which is really interesting and concerning at the same time, is about technology which seems to make judgments about a person's gender or ethnicity or whatever it might be, which, as I understand it, works quite differently to face recognition, which is based around seeking to determine whether two or more photos are of the same person. So I think that's an important distinction to make. Then within face recognition, there are the different use cases, but I would say that with the verification use case, where people can use facial recognition to help prove their identity when accessing online services, I can see a lot of benefits. There are some stats ‑ I won't reel them off now ‑ but identity crime continues to be a significant issue in Australia and elsewhere. It's not been helped by the pandemic, and I think the appropriate and responsible use of face verification can help people protect their information and their identities from compromise and still deliver privacy benefits.
EDWARD SANTOW: Thank you, Duncan. I think there's some really important distinctions there that you're drawing. I'll go to Amanda now. Amanda, is there a particular use case of face recognition that you feel most positive about?
AMANDA ROBINSON: Yes, thank you, Ed, and hello to everyone. I just acknowledge that I'm joining from the Wurundjeri lands of the people of the Kulin Nation here in Melbourne. So the International Committee of the Red Cross has been running a program called Trace the Face for a number of years now and has been using biometric systems in conjunction with refugee databases to better match refugees with loved ones in times of conflict. As you can imagine, it is not without its inherent risks and is managed very carefully to ensure that we can still deliver that service without biometric data if need be, but we are seeing these technologies provide significant benefit and efficiencies in terms of reuniting people who have been separated in times of conflict more quickly. So I think the opportunity there in the humanitarian sector to deliver services to people in need quickly and more efficiently is certainly there, but we do need to step into it very mindfully.
EDWARD SANTOW: Thank you, Amanda. And Niels, you're working with this technology a lot. What do you feel is a positive use case?
DR NIELS WOUTERS: Yes, I think I'm largely echoing what Aaina and Amanda have said. It is hard for me to find positive use cases, even though the work you are doing in that realm is really exciting. What is interesting is that you are combining it with other technologies, so you're not just relying on the face to make assumptions. I think that is something we always have to keep in mind. This is a technology that is fairly young. There are false positives, but I think we should also acknowledge the false negatives that often appear. When we start talking about crime, for instance, a false positive might have a pretty significant impact on an individual, and so might a false negative, where someone is not identified by a system or is not connected to a certain case that people are trying to solve. If anything, I think there are positive developments in the medical field where facial recognition ‑ or, in a broader sense, machine learning ‑ is being used, but what is really interesting is that the medical field always has a human in the loop at some point. It will never be a computer or a machine or an algorithm making a decision. It will make an assumption, but it will always ultimately be a trained professional that sees that assumption and turns it into a procedure ‑ a medical procedure, for instance, or a treatment plan. And I think that is something we can learn a lot from as well.
EDWARD SANTOW: Thank you, Niels. There are a couple of things that you touched on there which I think are really important, particularly about false negatives and false positives. When we were consulting the Australian community about AI and specifically about facial recognition, that was really important to them. People wanted to be safe and when they talk about safety, they particularly talk about accuracy but they also want it to be fair and accountable. Amanda, really a question for you. When we talk about safety, fairness and accountability, is that something that developers and users of this technology should do out of the goodness of their hearts or are there laws in place right now that require that?
AMANDA ROBINSON: There are already a range of legislative provisions when it comes to the collection and use of biometrics, and we see that through the GDPR and current Australian privacy laws, which include things around consent and notice, consideration of purpose around the collection of information, and storage of data, but I guess the question really comes down to how enforceable these provisions are. We're also increasingly seeing calls for a ban on biometric recognition technologies from people who believe that technical and legal safeguards could never fully eliminate the threat that they pose. What we do feel is that this notion of goodness of the heart doesn't tend to work out so well in the real world, even with the best of intentions. We have all experienced or read about well‑intentioned technology that has gone wrong and unintended consequences that have caused harm, particularly to vulnerable groups. So the humanitarian sector operates under a principle of "do no harm", which really puts people at the centre, and we are constantly assessing whether risks are too great and whether those risks outweigh the benefits of anything that we do, and that includes technologies. The growing concerns about the implications of new and emerging technologies on society are real and we need to address them. We are seeing, particularly through some of the work that you led, Ed, with the Human Rights Commission, the development of ethical guidelines that cut across the private, public and for‑purpose sectors, and these types of ethical frameworks, supported with proper guidance and training, can be a really valuable addition to the regulatory system.
Within the Red Cross movement globally, we have implemented different policies and processes around biometrics to help facilitate responsible use and to address data protection challenges in particular, and so we're starting to implement our own guidelines and processes to manage ourselves. I think introducing things like frameworks alongside laws and regulations is going to be really key in helping us move forward. So, best of intentions, absolutely, but we need to do more than that to ensure that we protect people and particularly the most vulnerable.
EDWARD SANTOW: Thank you, Amanda. I am going to ask you a question in a moment, Duncan, but just a reminder for everyone listening in, I am going to ask questions for about another 10 or 15 minutes. People have already started putting questions in the Q&A, which is fantastic, so feel free to continue to do that and I'll come to those in about 15 minutes. Duncan, clearly you're at the cutting edge in this area. The police have already been noted by me and a couple of others as an area of use of facial recognition that may well be pregnant with possibility but is also an area of concern. Do you think that there are extra obligations on a government body like the police to make sure that they're safe in how they use this sort of technology?
DUNCAN ANDERSON: Well, the short answer is yes, Ed. This is a really important discussion to have, I think, because building community confidence in the use of technology by police is certainly part of that ‑ an important part of the relationship the police have with the community. Amanda mentioned there are privacy laws which cover the use of personal information, including biometrics, but in New South Wales, the privacy law doesn't apply to police in this way, and that's because the Parliament has made the decision that given the nature of some police functions, where you're dealing with people who don't always cooperate, it's not always feasible to seek consent for the collection of information. But even though privacy law doesn't apply in the same way, there still is quite an established legal framework around police activities which applies to the use of facial recognition. It wasn't necessarily specifically designed for that but it has more general application.
So there's a Police Act which sets out the functions of the organisation, which are around providing policing services to prevent and detect crime and prevent injury, et cetera. That Act also sets out some values that the Police Force has to abide by, and they cover things like preserving rights and freedoms and exercising authority responsibly, and also the efficient and economical use of resources, which can come into play in this sense as well. There's other legislation. There's the Law Enforcement (Powers and Responsibilities) Act, which sets out some procedural matters, some of which are to do with, for example, how police collect photos when people are being charged with offences. There is a Law Enforcement Conduct Commission Act, which sets out various things including provisions around what is called agency maladministration, so police need to make sure that whatever is being done isn't unreasonable or unjust or improperly discriminatory, even though it might otherwise be lawful. There's anti‑discrimination law. There are other laws ‑ GIPA, I can never remember what this acronym stands for ‑ it's access to government information, which covers the explainability of decisions. There's the State Records Act, and then there's also policy around the Government's AI strategy and the ethical principles around that, and we have been doing some work in that space, on top of internal policing policies and procedures as well.
So there is an established framework there. It applies to facial recognition and other things, and, as I said, I think it is an important discussion to have about whether that is adequate and whether it might need to be looked at in future. I think we can all agree that nobody wants to see the irresponsible or inappropriate use of facial recognition, but I suppose the question I have is: is it the nature of the technology per se that means it can't be used responsibly, or is it more in the way it is being used and whether there's sufficient human involvement and oversight?
EDWARD SANTOW: That's a really crucial question. In a sense we'll come back to that in a moment when we talk in a bit more detail about police use of facial recognition. I'll come back to you, Duncan.
DUNCAN ANDERSON: I thought you might.
EDWARD SANTOW: Before we do that, I want to kind of zoom out a little bit and ask a bit more of a philosophical question of you, Aaina. Last year you wrote, I'm quoting here, "The idea of privacy is meant to provide people with a space to determine their own identities. When this space is intruded upon by algorithms that use a profile to determine what we see, it limits our cognitive autonomy to construct how we think and feel". How might biometric information specifically, which is really things like our face ‑ how might that sort of information be used to build profiles and what are the implications of this, especially in what you have seen from the United States?
AAINA AGARWAL: So that is a great question. I think that we are fortunate that we live in democracies where we aren't seeing a lot of the potential ramifications of facial recognition in the hands of government and a surveillance state play out, but I think that it's important to recognise that we don't get from zero to the CCP overnight, and that's a reference to China's government, for those who don't know. It kind of starts with civil liberties and how they're eroded through lesser applications, and through just the feeling and the sense of living in a society where there are cameras and there is surveillance and what that does. That comment is really meant to say that when you live somewhere where you feel that you're being surveilled, where you feel that your movements are being tracked and can then be used to reconstruct a sort of identity of who you are, it limits how you're able to show up and express yourself, because you don't know whether a certain pattern of movements, or somebody that you're associating with, or some conversations that you're having might be used against you. Effectively, every time you go outside and your face is captured in a public place, the state will gather points of data that can then be configured and constructed in a way that can potentially be used against you, and you don't really know on what basis that might be, based on how you are associating or identifying or even not. Maybe it's something to do with your neighbours or your family members. So I think that on a philosophical level, there is just this notion of surveillance and what it does to erode the ability for people to show up and express themselves in their lives.
So that's kind of on one level that I just wanted to say more generally. And then to speak about it more specifically, in China, the CCP does have hundreds of millions of cameras that are overseeing society and these cameras can distinguish and sort you instantly. In their case, in the Xinjiang province, that is being used for them to quickly sort and effectively commit genocide. So there are discriminatory implications when it is being used in the hands of a government that is vested in repressing a minority or minorities. So that's obviously at one extreme end but I think on the way towards that, you create a society of fear and when people live under a sense of fear and being watched and reprimanded, as I was mentioning before, they really aren't quite sure how they might show up in their own lives. So they just start to shut down and then what does that mean for human rights? What does that mean for a human life?
EDWARD SANTOW: Thank you, Aaina. I think that's a really fascinating tour of what it means to be surveilled. I think probably most of us have experienced the discomfort of being watched without any kind of consent, but the vast majority of us on this webinar, I suspect, have not experienced the worst of that form of surveillance, and I guess what you are really exploring there is how that can be incredibly chilling on a person's ability just to go through life and do the normal things, go shopping, meet with friends, all of those things that many of us are lucky to take for granted. So moving from that discussion about surveillance back to the police here in New South Wales, Duncan, I wonder if you can just answer a factual question for us to start with, which is: how is the NSW Police Force using facial recognition and other similar biometric technology right now?
DUNCAN ANDERSON: So the New South Wales Police have committed to the responsible use of facial recognition. In broad terms, it's used as an aid to human decisions about identifying people, rather than an automated decision‑making tool. So we are not using live facial recognition. As Aaina pointed out, that is in use in China and even in countries such as the UK, where they're using CCTV and facial recognition to monitor public places. That is not what we're doing in New South Wales. What we do do, though, is use facial recognition as part of investigating crimes after they have occurred ‑ what is sometimes called retrospective facial recognition. That can also be used to help identify or locate missing persons. So that's primarily about taking images, which can come from various sources, and then matching them against holdings that police would already have access to, such as photos that are collected when people are arrested and charged. In some cases the images can also come from CCTV footage; for example, if police investigate an armed robbery and there was CCTV footage of the location, the police would collect that and then try to identify the perpetrators using those images. We're also in the process of trialling some national face‑matching services which provide, under certain conditions, the ability to match against the image holdings of our government agencies to help police when they're seeking to identify people as part of investigations.
But, as I said, in all those cases, it's the machine assisting with the job of filtering and then providing assessments which are then reviewed by trained facial recognition examiners who then can make assessments about whether these two photos are in fact of the same person, and even then that information is passed to investigators. So it is a combination of automated matching and human review. There is some research done in recent years which indicates that that type of approach is actually more accurate than using either of those two methods in isolation. But even then, the combination of automated matching and human review is only used to generate leads for further investigation.
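To make the "automated matching plus human review" pattern Duncan describes more concrete, here is a minimal sketch of the retrieval step only: a probe image embedding is compared against a gallery and the system returns a short list of candidates for a trained examiner to assess. This is a generic illustration under assumed details (a hypothetical 128-dimensional embedding, a made-up similarity threshold and synthetic data), not a description of the NSW Police system.

```python
# A minimal sketch of retrospective face matching as a lead-generation step.
# The function only ranks candidates; any identity assessment is left to a
# trained human examiner. All values and names here are hypothetical.

import numpy as np


def shortlist_candidates(
    probe: np.ndarray,                 # embedding of the probe image (e.g. a CCTV still)
    gallery: dict[str, np.ndarray],    # id -> embedding of lawfully held photos
    threshold: float = 0.6,            # assumed similarity cut-off, not a real operational value
    top_k: int = 5,
) -> list[tuple[str, float]]:
    """Return at most top_k gallery entries whose cosine similarity to the
    probe exceeds the threshold, sorted best-first.

    The output is a list of leads for human review, never an identification.
    """
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(pid, cosine(probe, emb)) for pid, emb in gallery.items()]
    scored = [(pid, s) for pid, s in scored if s >= threshold]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
    # Simulate a noisy probe image of a known person.
    probe = gallery["person_42"] + rng.normal(scale=0.3, size=128)
    for pid, score in shortlist_candidates(probe, gallery):
        print(f"candidate {pid}: similarity {score:.2f} -> pass to examiner")
```

The design point is that the function's output is explicitly a set of leads rather than an identification; the decision about whether two photos show the same person sits with the human reviewer, which is the combination of automated matching and human review the panel is discussing.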
EDWARD SANTOW: I think we might have just lost Duncan there, so I'm going to jump in because I think he gave us lots to chew over. Oh, Duncan, you're back? But you have gone silent there, Duncan. I'll take that as kind of a concluded statement and we'll come back to you a little bit later for some more observations. So essentially, if I can just quickly paraphrase, Duncan was talking about a number of uses of facial recognition by the State police here in New South Wales. And I guess a crucial point that he was emphasising was that the technology is primarily used to generate leads, and at that point a human comes in and will assess the strength of that lead and may say, "Oh, well, no, this person is not who the machine thinks they are", or they will say, "Oh, yes, it probably is", and then take whatever action the individual police officer sees fit. How do you feel about that? I'll pose this question generally: do any of the panellists want to comment on that? If no‑one does, I'll just identify someone myself. Amanda, from a Humanitech perspective, did you want to make any observations?
AMANDA ROBINSON: Yes, thanks, Ed. I guess a couple of things. One is, as Duncan said, context is really important so the way and the specific use cases in which this technology is being used and in conjunction with human oversight is really important. And the other is this risk benefit and how we weigh up the risk of the misuse and overuse of these technologies versus the benefits that it can provide to society. But I guess more broadly, as we know, these systems come with inherent bias and the potential to discriminate is very real and so we have to be really careful that it doesn't further entrench marginalisation or harm on vulnerable people when we do use these technologies in these contexts.
EDWARD SANTOW: Maybe on that point I can bring you in. We're fast running out of time, but Aaina, just maybe in a couple of sentences, drawing on the US experience, do you have any observations that you think we can learn from here in Australia about police use of facial recognition along the lines of what Duncan laid out?
AAINA AGARWAL: Yes. So I think I missed the context of your question a little bit. Can you rephrase exactly what you were asking me to comment on? I want to make sure I'm responding.
EDWARD SANTOW: Of course. So Duncan laid out how the NSW Police here in Australia are using facial recognition primarily to generate leads. This person may be a particular criminal suspect and then a human police officer will go and trace that down and see whether that person is who the machine thinks that they are. That has been trialled in the US as well. Is that something that you have concerns about? Do you have any very quick reflections to say in a couple of sentences about that sort of use of facial recognition by police?
AAINA AGARWAL: Yes. So I think that one issue is that the way police are using facial recognition differs widely across the different states and jurisdictions in the US. It is a total patchwork and there's not really one reflection as to how this is being used by the police in the US, full stop. So I think it's a very locally driven thing, depending maybe even on different cities, really, and who is in charge of the different departments and the local politics there. Then I'll just make an observation, which is that the biggest concern, I think, politically in the narrative here is that there are very discriminatory patterns in policing. So certain communities and people of colour have historically just been oversurveilled and overpoliced, setting aside facial recognition, and the concern is really that these technologies are going to be used to disproportionately target and surveil those communities, which will thereby bring more people into the system, in front of the police, potentially being questioned and noted as suspects for crimes, than otherwise would be, which, of course, then runs the very likely risk of perpetuating the cycle of discrimination and injustice with respect to the effects of policing.
EDWARD SANTOW: Thanks, Aaina. I want to give you a right of response there, Duncan, before I ask one last question of Niels and then we'll go to the Q&A. So do you feel like those lessons that have come through strongly from the US, are those things you have taken on board here in the NSW Police Force and, if so, how? We've got you on mute again there, Duncan. I think we still can't hear you. I might suggest that you log out and log back in again and we'll give you a chance to make some comments before we wrap up. I want to come to you, Niels. So we have been talking a lot about identifying people using facial recognition, but, of course, as your demo of Biometric Mirror showed, the technology can at least in theory be used to extrapolate all kinds of personal information about someone, a bit like psychometric analysis. There are certainly concerns that big companies like TikTok, even Facebook, or Meta ‑ although they have made an announcement about pulling back from this ‑ that they are essentially mining people's personal biometric data and then that can be used for any number of purposes or even sold on to others. Is this something you think we should be worried about, and if so, what can we do?
DR NIELS WOUTERS: I absolutely think we should be worried about it. Even for companies that publicly announce that they are stepping away from certain technologies, I think again we should take that with a grain of salt, and we should be conscious that there are many other companies that we are probably not publicly aware of that do it. It was only fairly recently that Clearview AI, from my understanding, was actually caught out accessing public datasets of photos and doing all sorts of clever stuff with that. When it comes to the likes of TikTok and other social networks, I think absolutely everyone these days knows that your data is a pool of money and a pool of income for social media providers. Anything you do on a free social network turns into a dollar at some point. When it comes to those collecting enormous, massive data sets ‑ and again let's talk about TikTok. I am not on TikTok. My dance moves are far too bad to be on TikTok. But if we are really conscious about the gigabytes, probably terabytes, of data they can capture, very often involving a face, they can do really amazing and extremely powerful things with that. They have their own research branch that I assume is probably developing the next filter for your TikTok videos, but with that also comes the challenge of the next big thing in the field, and that is obviously deep fakes. When we started Biometric Mirror in 2018, deep fakes were a fairly new thing that very few people had heard of, but we see that technology becoming a mainstream thing and really feeding into misinformation and disinformation campaigns more and more. TikTok, young people ‑ I think that is really the next big action point: telling our young people to be extremely cautious on those social networks, to be extremely vigilant with the amount and type of data that they share, but also notifying them and telling them in any way possible that whatever the system or the mechanism or the platform feeds back to them is probably informed by something that they have shared with the platform, if that makes sense, and not to take the information always for granted.
EDWARD SANTOW: Thank you, Niels. I want to turn to some of the questions coming through and we have loads of interesting questions. I'll group a few together. Crystal Williams has a question for you, Duncan. She points to the potential of facial recognition to identify unconscious or deceased people with no other form of formal identification. Is that something that you see as beneficial? I'll link it to a less good example, or a bad example really. So Phil Wright presents a really compelling example here where he asks: what would happen if a police officer simply waves someone's phone in front of a suspect, without a warrant, to unlock their phone and then get their personal information from it? Is that the kind of thing that our laws actually protect against now?
DUNCAN ANDERSON: Can you hear me this time? OK. So on the first part of the question, around identifying deceased people, that is something that does occur. When I was at Home Affairs, I became aware of at least one case where a state police force had been checking against some of the immigration photos that Home Affairs held and was actually successful in identifying a deceased person. So I suppose that is another benefit of the technology there. We've got a question about facial recognition being used to unlock someone's device. I am not an expert on police procedural law or anything like that, but I suppose I would just say there are existing laws and procedures for that type of thing, and if something was done that was not in keeping with those laws, then it's a question of whether the information gained from that would be any use to police in any kind of investigation. It wouldn't stand up to evidentiary standards.
EDWARD SANTOW: Steven Masters asks about the impact of having a Bill of Rights. Amanda, you're in Victoria, which has a Charter of Human Rights. What do you think the impact is of having a Human Rights Act or a charter of rights applicable to government use of facial recognition? Does it help prevent some of the misuse or is it not enough?
AMANDA ROBINSON: I think it's part of a suite of things that we need, Ed, to provide the guard rails around these technologies. So I don't know that there's any one thing in isolation that's the golden ticket to making sure that these technologies are going to be safe and fit for purpose into the future. But I think alongside legislation, regulation, ethical frameworks, training, and ensuring that we're developing the next set of future leaders who are thinking about the development of technologies with humans at the core and in control, all of those things go towards ensuring that we create an environment in which we aren't doing harm, or at least we reduce the risk of doing harm, to people, to society, and particularly to those who are already experiencing vulnerability. So, yes, I think it's a positive step and part of a suite of things that we need to be thinking about to ensure these technologies are fit for purpose.
EDWARD SANTOW: Thank you, Amanda. Nerissa de Villa asks a really interesting question which I think might give me a bit of a headache. Can AI be used to oversee other AI? I feel like this is a question for Niels. In other words, can you have like an AI overseeing an application that uses AI and kind of hold it in check as it were?
DR NIELS WOUTERS: Well, talk about headaches. I am by no means an AI expert. I have dabbled in the space. I think there are a few components in that question, and one is to start thinking about general AI, which we are still far away from, luckily. It also brings us into the space of self‑regulating AIs. Not being an AI expert ‑ and perhaps someone can add more context to that consideration ‑ I would be very wary of that, for much the same reason as we talked about earlier. We shouldn't forget that these systems are ultimately created by humans, like the five of us on this panel, and we all have certain biases and certain assumptions, and if we were to develop the next big algorithm, the next big thing, we would probably implicitly or explicitly embed those in how the algorithm functions and works. I think one interesting development, though, is explainable AI, where explanations are presented whenever an AI such as facial recognition formulates a decision or formulates an assumption, so there is always a way for a human to backtrack how that decision has come about. I think that is something that is hugely promising. It is still, of course, an ongoing conversation around how explainable AI finds its way into policy.
EDWARD SANTOW: Thank you, Niels. Amy asks a really important question, which is: why do facial recognition applications tend to be less accurate in identifying, for example, people of colour or people with physical disability? Aaina, you have worked with the Algorithmic Justice League, which has done really groundbreaking research in this area. Can you tell us why that might be the case?
AAINA AGARWAL: Yes. So the easiest answer is that the systems aren't trained on representative data. There are a couple of answers, and that's one of the factors: a lot of these systems are trained disproportionately on data that represents what Joy in her research would call "pale, male faces". So they're just not as skilled at capturing the variation and nuance that presents differently in women and people of colour, and then, particularly, another groundbreaking facet of that research is considering the intersectionality. So you have lower rates of performance as to women, and then as to people of colour, but then a compounding effect for women of colour. So I think that is one of the main reasons. Then you can get into why that might be: how the data has been collected, what decisions have been made, and who has been making those decisions as to what's been captured and which types of people and faces are treated as representative of society, and those decisions have typically been biased one way. So that is also a part of it. I also think there may be other concerns as to who is designing the systems, the decisions that they're making, and how those might privilege certain people over others.
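The disparities Aaina describes were surfaced by evaluating systems per demographic subgroup and per intersection of subgroups, rather than reporting a single aggregate accuracy. The sketch below shows that disaggregated-evaluation idea in Python; the attribute names, categories and records are hypothetical, and a real audit would use a large, carefully labelled benchmark.

```python
# A sketch of disaggregated evaluation: instead of one aggregate accuracy
# figure, report performance per subgroup and for intersections of subgroups,
# which is how a compounding gap (e.g. for women of colour) becomes visible.
# Field names and data below are hypothetical.

from collections import defaultdict


def accuracy_by_group(records: list[dict], keys: tuple[str, ...]) -> dict:
    """records: each has a boolean 'correct' plus demographic attributes.
    keys: attributes to group by, e.g. ('gender',) or ('gender', 'skin_type')."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        group = tuple(r[k] for k in keys)
        totals[group] += 1
        hits[group] += int(r["correct"])
    return {group: hits[group] / totals[group] for group in totals}


if __name__ == "__main__":
    results = [
        {"gender": "male", "skin_type": "lighter", "correct": True},
        {"gender": "male", "skin_type": "darker", "correct": True},
        {"gender": "female", "skin_type": "lighter", "correct": True},
        {"gender": "female", "skin_type": "darker", "correct": False},
        # ... in practice, thousands of labelled evaluation records
    ]
    print(accuracy_by_group(results, ("gender",)))               # single-attribute view
    print(accuracy_by_group(results, ("gender", "skin_type")))   # intersectional view
```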
EDWARD SANTOW: We have two minutes left so I am going to be brutal. You each have one last sentence. What is one change that you would want to see to improve how we use or regulate facial recognition technology? I'll start with you, Niels.
DR NIELS WOUTERS: Excellent. Look, I'm all about inclusivity and co‑design. I feel we need many more opportunities for members of the public to be involved in these conversations. I think discussions around the ethics of facial recognition, they are very often and too much led by ethicists in their ivory tower. What we really need is that close connection with the members of the public. Let them understand how the technology works, what its shortcomings and opportunities are, and use that to inform policy.
EDWARD SANTOW: Thank you, Niels. Amanda, one change?
AMANDA ROBINSON: I could just say what Niels said but absolutely vulnerable groups in civil society need to be invited into the core of this work. So the challenge is how do we meaningfully and respectfully include community and in particular vulnerable people in designing and developing these technologies.
EDWARD SANTOW: Duncan, one sentence for one change.
DUNCAN ANDERSON: There is legislation that was introduced at the Commonwealth level a few years ago now. I think it would be great to see that passed to allow for responsible and cautious use of facial recognition along the lines of what we have discussed today.
EDWARD SANTOW: That's the Identity‑Matching Services Bill. And, Aaina, as our guest overseas, you have the last word on this. One sentence for one change.
AAINA AGARWAL: The US is highly politicised on many issues, but one issue for which there is strong bipartisan support is regulation of government use of facial recognition. So I think that one change would be focusing as a priority on significantly limiting, or enacting a moratorium on, government use of these technologies, on the understanding that whatever else is happening, that's really the most urgent issue and the area in which the most significant negative implications could occur.
EDWARD SANTOW: Thank you. And thank you for allowing me to be very restrictive there. We are right at time. We could go for hours longer, but I really want to thank our four panel members: Niels, Amanda, Duncan and Aaina. Thank you very much on behalf of myself, UTS, the Centre for Social Justice and Inclusion and all of the people listening in for some really interesting discussion and debate. Continue to watch this space as we do further work at UTS on facial recognition and the laws and other protections that we need in this space. So with that, I thank you all and wish you a very good rest of today.
(End of livestream)
If you are interested in hearing about future events, please contact events.socialjustice@uts.edu.au.
The risks of having the infrastructure of a surveillance state in place are simply too great to justify the use or the potential use [of FRT] even in very limited circumstances – Aaina Agarwal
Discussions around the ethics of facial recognition are very often and too much led by ethicists in their ivory tower. What we really need is that close connection with members of the public. – Dr Niels Wouters
The growing concerns about implications of new and emerging technologies on societies are real, and we need to address those. – Amanda Robinson
Building community confidence in the use of technology by police is certainly... an important part of the relationship police have with community. – Duncan Anderson
The idea that new technology is double-edged – bringing opportunities and risks – is exemplified perhaps most by the rise of facial recognition technology. – Ed Santow
Speakers
Aaina Agarwal is a business and human rights lawyer and media voice focused on the impact of disruptive technologies. She works as Counsel at BNH.AI, and is the Producer & Host of podcast Indivisible.
Dr Niels Wouters is a senior design researcher at Paper Giant. He is the co-creator of Biometric Mirror – an online tool that demonstrates facial recognition usage in psychometric analysis.
Amanda Robinson is Co-founder & Director of Humanitech at Australian Red Cross, a think + do tank, which seeks to ensure that technology serves humanity by putting people and society at the centre.
Duncan Anderson is the Executive Director, Strategic Priorities and Identity within the NSW Police Force. He co-chairs the NSW Identity Security Council which works to promote security, privacy and accessibility of identity products and services.
Edward Santow is Industry Professor – Responsible Technology at UTS. He was Australia’s Human Rights Commissioner from 2016–2021, where he led the most influential project worldwide on the human rights and social implications of AI.