Recording: Our 21st Century Brain: Turbocharging human intelligence with AI
HTI Co-Director Professor Sally Cripps spoke at the Royal Society of NSW and Learned Academies Forum, part of a lecture series on opportunities to use our emerging understanding of the workings of the human brain to promote human wellbeing beyond the 21st Century.
How AI working together with human intelligence can enhance scientific discovery
On Thursday 2 November Professor Sally Cripps joined a number of other speakers at a series of lectures held by the Royal Society of NSW and Learned Academies Forum entitled Our 21st Century Brain.
Professor Cripps spoke during Session IV: Turbocharging human intelligence with artificial intelligence on how AI working together with human intelligence can enhance scientific discovery.
Other speakers included Professor Ian Oppermann (Moderator), NSW Government Chief Data Scientist and UTS Industry Professor; Ms Stela Solar, Director of the National AI Centre, CSIRO; and Professor Lyria Bennett Moses, Associate Dean (Research), UNSW Faculty of Law and Justice and Director of the Allens Hub for Technology, Law and Innovation, UNSW.
These are algorithms that tell us what we don't know, to rapidly advance scientific knowledge in the smartest, least time-consuming, most cost-efficient way possible.
Professor Sally Cripps
Artificial intelligence is totally different from human intelligence but very complementary. And the union of the two could lead to accelerated scientific discovery.
Professor Sally Cripps
Without further ado, I'd like to introduce our first speaker, Professor Sally Cripps, and we're going to keep to time because we want to have some questions. So please, Sally, welcome.
Professor Sally Cripps, Director of Technology, Human Technology Institute and Professor of Mathematics and Statistics, University of Technology Sydney:
"It's my very great pleasure to be here. Oh, great, I was worried there that it wasn't going to come up on the screen. Thank you very much. I'm not going to be talking too much about silicon love, but I will be talking today about debunking a few myths around AI, moving into a whirlwind tour of a very broad brush view of AI, finally wrapping up with how I think that AI can enhance that uniquely human capacity of our brains for scientific discovery and how the two working together can actually lead to some really exciting endeavors.
I just want to put this slide up in front of you. The problem with AI, I think, is its name and the media, and of course the two are related. If AI was called computational mathematics or, heaven forbid, applied statistics, it wouldn't make it into the media at all. So I'm thankful to the marketers behind the name, because it has put my own field up there in the spotlight.
But I do want to point out, and I'm going to have trouble reading because I'm reading from this screen and I forgot my glasses, that there are two very different views of AI in those first two articles. They actually appeared in the same week earlier this year. One of them, from the Daily Star, reads, 'Attack of the Psycho Chatbot,' and above it, 'We don't know what it means, but we're scared.' The other is about a human being who finally outperformed the machine that beat the human Grandmaster at Go. So a much more upbeat story, a story about human curiosity, a story about imagination. Essentially, the human worked out how the machine was going to play and devised a strategy to beat it.
Down the bottom, you see a belief held by some AI experts about the existential risk that AI represents to humanity. Now, I'm going to put my statistician's hat on here and say that that is not a random sample of AI experts; those experts were chosen precisely because they make headlines. In fact, a survey of AI experts at a variety of international conferences found that fewer than 5% of them actually agreed with that statement. So, that's my take on artificial intelligence. Having debunked the myth that it's anything like human intelligence, I want to emphasize that it is important, really important and incredibly useful, and also that it has the potential for incredible misuse.
I'm not going to pick up on the misuse, but maybe that will come up in some of the later talks on responsible AI. But for me, what is AI? Everybody asks me what AI is, and I'm going to avoid answering that question in any concrete way other than to say that, to me, it's the field of study, or the industry, that lies at the intersection of data, algorithms, and applications. I've got a few examples up there on the screen.
Having that definition, I'm now going to do something that is entirely artificial and totally imperfect, which is to categorize AI into two classes. One is primarily concerned with making good predictions; the other is primarily concerned with understanding causal pathways. The two are not mutually exclusive categories. If you nail the causal pathways, you will get good predictions. But sometimes a good prediction can be really useful and helpful without understanding the causal pathway. So the two work in a very complementary way, and I want to talk about them and how we might combine them.
First, I'm going to put up probably the most topical example, ChatGPT. That's an example of a predictive model, a predictive piece of AI. One of the wonderful things about ChatGPT is that when you're asked to give a talk, you can go on and ask it to describe itself, which is what I did, and this is what came back:
'ChatGPT is a narrow form of AI. It does natural language processing. It lacks a true understanding of the text it generates.' I paused there and thought that ChatGPT exhibits a degree of self-reflection and humility that is lacking in most human beings, so my estimation of it went up enormously. As it says there, it relies on patterns in large amounts of information. Just to give you an idea of some of the AI lingo, its objective function is fluency, not accuracy. I have an example of that here. The question, posed by Gary Marcus, was 'Why is crushed porcelain good in breast milk?' The answer came back, 'Porcelain can help balance the nutritional content of milk, providing the infant with the nutrients they need to grow and develop.' It's perfectly fluent. It sounds authoritative and plausible. It's just entirely incorrect.
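To make the "fluency, not accuracy" point concrete, here is a minimal sketch, not ChatGPT's actual training code, of the next-token objective a language model optimises; the vocabulary, probabilities, and tokens below are invented for illustration only.

```python
# Minimal sketch (not ChatGPT's training code) of a next-token objective:
# the model is rewarded for assigning high probability to the next token
# given the preceding ones. Nothing in the loss checks factual accuracy.
import numpy as np

vocab = ["porcelain", "helps", "balance", "nutrition", "is", "toxic"]  # made up

def next_token_loss(predicted_probs, target_token):
    """Cross-entropy for one prediction step: -log p(target | context)."""
    return -np.log(predicted_probs[vocab.index(target_token)])

# Hypothetical model probabilities for the next token after "crushed porcelain ...".
probs_from_model = np.array([0.05, 0.40, 0.30, 0.15, 0.05, 0.05])

print(next_token_loss(probs_from_model, "helps"))  # low loss: fluent continuation
print(next_token_loss(probs_from_model, "toxic"))  # high loss: true but "unlikely"
```

The loss only measures how probable a continuation was in the training data, which is why a fluent falsehood can score better than an awkward truth.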
The porcelain answer is entirely incorrect because, as ChatGPT says itself, it has no understanding of the world. That's where it falls over. And it's not just large language models that are in that class of predictive models; I would also put image-processing models in that predictive class, and they also lack an understanding of the world. This is a fairly standard example, a picture from an AI or machine learning course or textbook; you can find it if you search for it on the internet.
On the left, the algorithm correctly identifies the pig as a pig with a probability of 91 per cent. It's confident it's a pig. Then you add a little bit of non-random noise. I have to say, to be totally upfront and transparent, that this noise was deliberately chosen to fool the algorithm, and the same algorithm now thinks it's an airplane. That pig is now an airplane. I'm sure none of you are thinking that the pig on the right-hand side is an airplane; I'm sure you're all thinking it's a pig.
Now, why can you, as a human, do it but the machine can't? The answer is because you have hardwired in your brain a world view, a model of the world. You know what a pig looks like, and you know what an airplane looks like, and it doesn't look like that. In this case, the human brain is much better than artificial intelligence.
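For readers curious about the mechanics, below is a toy sketch of this kind of adversarial perturbation. It is my own illustration on a made-up linear classifier, not the network from the slide; real attacks such as the fast gradient sign method do the analogous thing against deep image models using the loss gradient.

```python
# Toy illustration of an adversarial perturbation: a small, deliberately chosen
# change to the input flips the classifier's label, even though the "image" is
# barely altered. The image, weights and class names are all invented here.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100)              # stand-in for a flattened photo of a pig, pixels in [0, 1]
w = rng.standard_normal(100)     # toy linear classifier: score > 0 means "pig", else "airplane"
if w @ x <= 0:                   # orient the classifier so it starts out saying "pig"
    w = -w

def predict(img):
    return "pig" if w @ img > 0 else "airplane"

print(predict(x))                        # "pig"
epsilon = 0.3                            # small per-pixel budget
x_adv = x - epsilon * np.sign(w)         # nudge every pixel against the "pig" direction
print(np.abs(x_adv - x).max())           # each pixel moved by at most 0.3
print(predict(x_adv))                    # "airplane": tiny structured noise flips the label
```

The perturbation is tiny pixel by pixel, but because it is aligned with the classifier's weights rather than random, it pushes the score across the decision boundary.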
However, very recently, about two weeks ago, there was a wonderful article in The Economist, a great magazine for those who enjoy a good read, about Banana Boy. Banana Boy is a carbonized scroll that survived the eruption of Mount Vesuvius and was discovered several centuries ago, but it could never be read. With recent improvements in sensor technology and machine learning algorithms, just two weeks ago researchers managed to decode the first word, which was 'purple.' I think that's an example of machine intelligence doing something that we as humans couldn't. We would not take the time to go through thousands of correlations between pixels, and nor do we have infrared vision. So, we could not do that.
But in general, my conclusion about predictive AI is not to dismiss it at all. I think it's enormously important and very useful. But in terms of scientific discovery, it is not a game-changer. So, what might be a game-changer? We've got all these isolated competencies in AI, but what we need is a system of AI, and we need algorithms that actually tell us what we don't know. Not algorithms that just predict or just infer causal pathways, but algorithms that can pinpoint what we don't know, so that we can embed real-time experiments.
We wrote an article in The Conversation this year about what these would look like. We have this idea of a robot that goes to explore the moon. The robot lands on the moon. It has programmed into it a current belief about what the moon looks like and, most importantly, an uncertainty attached to that belief. It goes in the direction that will reduce its uncertainty the most, gets there, and updates its belief. It then goes to the next place, reducing its uncertainty all the time.
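As a rough sketch of that explore-and-update loop, here is a minimal example, assuming (my choice, not something specified in the talk) a Gaussian-process belief over a one-dimensional "terrain" and a rule that always samples wherever the posterior is most uncertain.

```python
# Minimal sketch of uncertainty-driven exploration: keep a probabilistic belief
# (a Gaussian process) over the terrain, then repeatedly observe the location
# where that belief is most uncertain and update. All numbers are illustrative.
import numpy as np

def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def posterior(x_seen, y_seen, x_grid, noise=1e-4):
    """GP posterior mean and variance over x_grid given observations so far."""
    K = rbf(x_seen, x_seen) + noise * np.eye(len(x_seen))
    Ks = rbf(x_grid, x_seen)
    Kss = rbf(x_grid, x_grid)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_seen
    var = np.diag(Kss - Ks @ K_inv @ Ks.T)
    return mean, var

terrain = lambda x: np.sin(3 * x)          # the "true moon" the robot is mapping
x_grid = np.linspace(0, 5, 200)
x_seen = np.array([2.5])                   # landing site
y_seen = terrain(x_seen)

for step in range(5):
    mean, var = posterior(x_seen, y_seen, x_grid)
    x_next = x_grid[np.argmax(var)]        # go where the current belief is most uncertain
    x_seen = np.append(x_seen, x_next)
    y_seen = np.append(y_seen, terrain(x_next))
    print(f"step {step}: sampled x={x_next:.2f}, max uncertainty={var.max():.3f}")
```

Each observation shrinks the posterior variance, so the printed maximum uncertainty falls step by step, which is the "reducing its uncertainty all the time" behaviour described above.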
These are algorithms that tell us what we don't know, to rapidly advance scientific knowledge in the smartest, least time-consuming, most cost-efficient way possible. Of course, these things aren't autonomous. That's the bit in the middle. It's done with scientists and other collaborators, to ensure that the human is not just in the loop but very firmly at the helm.
In summary, artificial intelligence is totally different from human intelligence, but it's very complementary. The union of the two, I think, could lead to accelerated scientific discovery. Thank you. I actually don't have slides for what comes next; we're going to have an eyeballs-to-eyeballs conversation here."