How AI is changing the legal sector
The growing use of Artificial Intelligence (AI) has had a significant impact on the legal sector. It’s not only changing the way lawyers work but also creating demand for new laws. Here, we'll explore three key ways AI is impacting the field of law.
1. AI is changing the way our laws are broken
As more and more workplace decisions rely on AI, it's imperative to consider the legal implications. Professor Ed Santow, a human rights lawyer and one of the founders of the Human Technology Institute at the University of Technology Sydney (UTS), explains that one of the biggest risks with AI is its opaque decision-making process.
“When a machine makes a decision, sometimes no one is entirely sure of how it arrived at the result. In these cases, we need to ensure that any decision-makers in the process are truly accountable. If we can't understand the rationale behind a decision, we question its fairness, accuracy, and compliance with the law,” he said.
Individuals are also increasingly using generative AI applications in their work and personal lives – including when they are making important decisions or handling sensitive client information. Inputting this kind of data into generative AI systems could unintentionally breach legal or ethical obligations.
To help protect against this, Professor Santow emphasises the importance of creating responsible AI technologies that integrate human rights protections into their design, development, implementation, and oversight.
2. Impact of AI on Intellectual Property (IP) laws
In today's AI-driven world, Intellectual Property (IP) laws are more crucial than ever. Safeguarding IP allows new ideas to flourish and innovation and creativity to thrive, and it gives creators the power to control and benefit from their hard work.
AI’s content-creating abilities throw open many questions around IP law and governance. For example, who owns the content AI generates? IP laws were established for human creators, so it's not clear who should hold the rights to things like AI-generated art or music. AI's capacity to replicate existing works also makes it difficult to know whether something is truly a new creation – or whether it infringes copyright.
Another concern is AI's potential to use restricted data, which creates uncertainty about liability. These are just some of the challenges, and they are great examples of why we need updated IP laws that address the nuances of AI.
3. AI and the law landscape in Australia
Australia is behind the European Union (EU) and countries such as the USA, Canada, and China when it comes to introducing and updating laws for AI.
Australia currently lacks laws specifically for AI. Professor Santow explains that our laws are "technology neutral", meaning they apply to all types of technology – whether a basic software program or an advanced AI system built on deep neural networks. For example, when a bank is deciding whether to approve a home loan, the laws that protect against discrimination apply whether it uses traditional methods or complex AI algorithms.
He further explains that our technology neutral laws currently address only about 80% of the challenges posed by the rise of AI. As a result, new laws will be necessary for emerging areas such as self-driving cars or sophisticated AI technologies that are able to think on behalf of humans.
Recognising this need, the Australian government has begun the process of updating laws to accommodate new AI technologies.
The federal government has also committed to modernising our privacy laws and aligning them with those of leading countries around the world. This means stricter rules on when personal information can be shared, and on how long companies or governments can keep that data once an agreed transaction is complete.
Keeping up with AI
As AI rapidly evolves, our legal sector must keep up. Only then can we enjoy the benefits of AI and other technologies safely. Together, they can offer innovative solutions to business and social challenges and contribute to a better society.
Want to learn more about AI and the law? Check out our Curiosities series on YouTube. A full transcript of the episode follows below.
00:00:00:01 - 00:00:24:04
Hello curious people. I'm Professor Ed Santow. I'm a human rights lawyer and a human. I'm here to help us make sense of the rise of thinking machines. This is AI Curious.
00:00:24:06 - 00:00:52:03
I'm going to turn now to the questions from social media, from our UTS community. What is the current AI law landscape in Australia? This is a really interesting question because Australia doesn't have many, or really any, dedicated AI laws. Instead, we have what are known as technology neutral laws, and that means that the law applies to all technologies, and to no technology in particular.
00:00:52:04 - 00:01:16:21
So for example, if you are applying for a home loan, the bank might use the most old-fashioned abacus and paper to make that decision, or it might use the most sophisticated form of AI and a deep neural net. Regardless, the law continues to apply. So the bank cannot discriminate against you, for example, on the basis of your age or your race or your gender.
00:01:16:23 - 00:01:51:00
And so that's really important to make sense of how the law applies right now. When will Australia put in place laws around AI, and what will they look like? So that's very interesting because right now both the federal government and the state and territory governments are embarking on a much-needed reform process. They are looking at the areas of law where technology neutral law just isn't enough, where, for example, self-driving cars might pose new problems or new challenges that the law has to address.
00:01:51:02 - 00:02:12:21
And so those are the sorts of areas, the truly novel areas where we can see the law making major changes. The next question asks us about the rest of the world. So how will Australia's AI laws compare to other countries in the world? That's interesting because Australia's been a little bit behind the eight ball when it comes to reform in this area.
00:02:13:02 - 00:02:40:15
So the European Union and the United States, Canada, even countries in our region like China, have been perhaps a little more advanced in thinking about how they need to change their laws. And basically, there are two approaches that are starting to take shape here. You have places like the European Union that are saying, you know what, we need to regulate AI as a technology.
00:02:40:17 - 00:03:12:04
That means making laws specifically for AI. And then you have other parts of the world, places like Canada, that are a bit more like Australia. They're saying, well, on the whole, we have these technology neutral laws, so what we really need to do is regulate the outcome. In other words, make sure that whatever technology you use, whatever form of AI you use, you're treating people fairly, you're making accurate decisions, and you're able to ensure that there is accountability if something goes wrong.
00:03:12:08 - 00:03:36:19
How will the new AI laws affect the everyday person? As an everyday person, there are two ways in which AI can affect you. The first is probably the most common one, which is that you might be subject to a government decision, or a decision by a company, that is made using AI. Sometimes that decision is made entirely by a machine.
00:03:36:19 - 00:04:02:01
It's entirely automatic. Sometimes there's a human decision maker, but she or he has relied on artificial intelligence to make that decision. In that situation, there are a couple of things that you really want to be careful about. The first is that the decision was made in a way that is truly accountable. One of the biggest risks with AI is what we call black box decision making, which sounds a little bit scary.
00:04:02:01 - 00:04:23:23
It basically means that the machine is making a decision in a very opaque way, and no one knows how it came up with that. And that's a problem for us as individuals, because we never know whether the decision was fair or reasonable or accurate, or even whether it complied with the law. The second situation is where you yourself are using AI.
00:04:24:00 - 00:04:55:10
We've all probably been toying around with generative AI applications over the last few months, or maybe even years, and sometimes we're using them in our work or in our day-to-day lives, sometimes to make significant decisions. And there you've got to be really careful about staying on the right side of the law. So, for example, if you're working in a company or a government agency, you can't put sensitive client information into generative AI, because you might breach your legal or ethical obligations.
00:04:55:12 - 00:05:14:15
So this is a really great question, I really like this one: why do we need responsible AI? I love this question because there are a lot of things to be fearful about when it comes to AI, a lot of reasons why we might want to hasten slowly, but there's also a lot to be excited about.
00:05:14:15 - 00:05:43:10
And I guess what many people are saying to me is that they want to do the right thing when they use artificial intelligence, or even when they're developing AI, and they want to know how to do that. And so that's where responsible AI comes in. It's where you bake in human rights protections to make sure that when you design, develop, implement and then oversee systems that rely on artificial intelligence,
00:05:43:12 - 00:06:04:15
you do so in accordance with people's basic human rights. And that's a discipline in itself, so it's something we're all learning quite a lot about: how you bake in those human rights protections. But there's some great work being done here at UTS and also in government, such as by the National AI Centre, to give people guidance about how to do responsible AI.
00:06:04:20 - 00:06:26:07
The next question is: is it possible to create trustworthy AI, and how do we do it? So there's some amazing research, which is actually a bit dispiriting, that says that Australians in particular have some of the lowest levels of trust when it comes to the use of artificial intelligence. And so you can respond to that problem in a couple of ways.
00:06:26:08 - 00:06:47:07
You can just try and market your way out of that problem. You can have a big advertising blitz that tells people, you know, AI is completely safe, it never makes mistakes, nothing bad will ever happen to you. The danger with that is that it's not true, and people will see through the fact that it is a little bit overly optimistic.
00:06:47:09 - 00:07:13:04
And so I would say a better way of building trust is to build trustworthiness. And that's why I love the question. So something that is trustworthy is not just something where you are kind of tricking someone into believing that it's okay, but instead you're building firm foundations of trust. And so that means making sure that an AI system is safe, that it's transparent, that it's accountable, and that it's fair.
00:07:13:10 - 00:07:42:07
So we've seen over and over again how some AI systems can treat some people unfairly based on things that they can't control, like their race or their skin colour or their age or their gender. What do I need to know before using generative AI tools like ChatGPT at home and in the workplace? I'm really glad that someone asked this question, because if we'd been having this conversation less than two years ago, ChatGPT wasn't a thing.
00:07:42:08 - 00:08:07:15
Even the term generative AI was not well understood outside of the laboratory. But now people can play around with these tools. We've got students here who are doing the most amazing things, using ChatGPT to solve major problems, to communicate more effectively, especially with communities who often find it very difficult to communicate with the wider world.
00:08:07:17 - 00:08:38:20
And so when we use these tools, we need to understand a couple of things. The first is that these tools are not magic. They're very impressive. They can sometimes produce truly outstanding results, but they also sometimes produce mistakes. And I like to say that we're at the Wright brothers stage of aviation when it comes to artificial intelligence. It's really impressive today, but we've got no ability to look into the future and see how good it will become.
00:08:38:22 - 00:09:10:01
It's going to get much better. And so that means that we should be a bit realistic about what we can do at these early stages. The other piece of advice I always give people about generative AI, like ChatGPT, is to make sure that you understand that you're part of a network whenever you're using those systems. So when you put information into the chat function of ChatGPT, you're handing that information over to the organisation that runs ChatGPT.
00:09:10:03 - 00:09:47:12
And so it's really important that you don't put super sensitive information in there that you wouldn't want to tell somebody else. What are the risks when it comes to AI and my privacy, especially with my personal information? Now, that's a great question because there's a cliche now that personal information is the new oil. And what that means is, you know, we relied on oil and coal and natural gas to really power the major societal change that we saw in earlier industrial revolutions.
00:09:47:18 - 00:10:19:09
Now, the key thing that is driving the current fourth industrial revolution forward isn't oil, isn't some specific product. It's our personal information. And that's quite a profound idea. So what we need to do is make sure that when we are using our personal information, or using somebody else's personal information, when it comes to AI, we do so with proper respect for the fact that they may not wish to have their entire lives on display.
00:10:19:11 - 00:10:51:15
And we also have really important laws, like the Australian Privacy Act, that say very clearly that we have to respect others' autonomy when it comes to the collection and sharing of their personal information. One person has asked: how can I use AI technologies safely and protect my personal information? How can I protect myself and my family? And that's important because when we use AI, we often think about the end product that we're engaging with.
00:10:51:18 - 00:11:17:19
We don't think about what we are feeding it. What we find with particularly sophisticated forms of generative AI is that they work best when you can tell the machine quite a lot about you. The more it knows about you, the better service it can provide to you. But that actually creates a bit of a compromising situation, because you have to ask yourself the question:
00:11:18:00 - 00:11:41:23
To what extent do we want the organisations, sometimes from the private sector, sometimes from government, knowing all of this personal information about me? And so I think what you need to do is set your own personal balance. What is the level of personal information you feel comfortable sharing with a corporation or with the government? And to what extent do you want to hold that personal information back?
00:11:42:00 - 00:12:09:13
And so there are some real steps you can take to make sure that you're not oversharing. We all talk about that with social media: information, particularly photos and videos, that perhaps was really fun in the moment. But if you think forward in time and ask, would I be happy for my lecturer, or maybe a future employer, to see that about me, you might reach a different answer.
00:12:09:19 - 00:12:30:20
And so that's why it's really important when you are using some of these technologies to think really carefully. Do I want to have that information out there in the real world, or would I rather have that information more contained and protected? What do I need to know about privacy laws so that I can use AI legally and responsibly at work?
00:12:30:23 - 00:13:03:08
And that's interesting because our privacy laws are in a bit of flux at the moment. The federal government has committed to making some pretty major changes to modernise our privacy legislation. It's really important to note, and it's kind of embarrassing, really, to say this, but our privacy law was largely drafted before the internet was even a thing. And so while we've had little updates over the last 30 years or so, on the whole, our privacy law is pretty outdated.
00:13:03:10 - 00:13:33:00
And so what the federal government is doing is engaging in this process of reform that's really trying to bring Australian privacy law in line with some of the other leading countries and economies in the rest of the world. And that involves making sure that we have stricter protections about when you can hand over your personal information, but also about when a company or government might be able to hold on to your personal information beyond the very specific transaction that you've engaged in.
00:13:33:00 - 00:14:04:24
So, for example, if a company says we need your personal information so that we can provide you with this really important insurance product, they may need that information for literally a moment, and once that moment has passed, they can chuck out that information. In Europe, for example, they have this right to be forgotten: in other words, quite strict rules to make clear that once you've used someone's personal information, you've exhausted the reason why you should be holding it in the first place.
00:14:05:04 - 00:14:25:11
You need to chuck it out, and you can no longer rely on it to make decisions about that individual or more broadly. And so when it comes to thinking about Australian privacy law, it's really important to think, okay, these are some protections that are already in place, but at the moment those protections are a little bit like a Swiss cheese.
00:14:25:11 - 00:14:50:16
There are a lot of holes, or a lot of gaps. And so until the law is modernised, it's especially important to be vigilant about what personal information you put out there. I read today that the Securities and Exchange Commission suggests that AI could cause a nearly unavoidable financial crisis. How can regulators get the best settings on AI so that this risk is effectively managed?
00:14:50:17 - 00:15:11:17
I love this question because it recognises something quite important about our legal system, and about every legal system: people like me can dream up the best laws. We can spend hours, and honestly we do spend hours and hours and hours, debating every single word and comma on the page to get the law
00:15:11:17 - 00:15:51:16
absolutely, utterly right. But unless you have really good regulators, and unless your courts are very effective at applying and enforcing those laws, those laws are only words on a page. And so what we need to see as time goes on is our regulators becoming better at enforcing our existing laws, those technology neutral laws that apply to all technologies and none, and also better at applying the new laws that we know are coming towards us, like advanced privacy protections, new laws dealing with copyright, and so on.
00:15:51:18 - 00:15:57:00
Well, that was all of today's questions. Till next time, stay curious.