Look! It writes bad poetry!
On Sunday, the US show 60 Minutes featured an interview with Google CEO Sundar Pichai on the social implications of AI. Asked whether we’re prepared for what’s coming, Pichai said, ‘On one hand I feel, no, because ... the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology's evolving, there seems to be a mismatch.’ This mismatch is known as the Collingridge dilemma after English philosopher David Collingridge, who explored it in his 1980 book, The Social Control of Technology. The dilemma is that a technology’s harms are hard to predict before it is widely adopted, yet by the time those harms become apparent, the technology is so entrenched that it is hard to control. Pichai followed up by saying, ‘On the other hand, compared to any other technology, I've seen more people worried about it earlier in its life cycle. So I feel optimistic.’
Pichai’s not alone in his optimism, but he’s probably in the minority. People have been worrying about AI – and specifically about machines becoming sentient – ever since computers were invented. A Google employee was famously let go last year for suggesting that moment had already arrived. Whether or not that moment ever comes, it’s not yet here, despite the noise and the hype around large language models such as ChatGPT. (Look! It writes bad poetry!) But calling out the hype doesn’t mean there’s nothing to worry about. There really is, just on a more prosaic level. Even though not even their creators fully understand how these models work, competition among the tech giants has seen one after another released into an unready world. Google has its own chatbot, Bard, which hallucinates just as floridly as ChatGPT. And the consequences for online misinformation are deeply concerning. As Pichai says, ‘No one … in the field has yet solved the hallucination problems. All models do have this as an issue.’
60 Minutes also interviewed Demis Hassabis, CEO of Google subsidiary DeepMind Technologies. He says that humans are ‘an infinitely adaptable species. You know, you look at today, us using all of our smartphones and other devices, and we effortlessly sort of adapt to these new technologies. And this is gonna be another one of those changes.’ It’s a limited view of adaptation, one that ignores the raft of social problems created by smartphones and social media. It seems a stretch to call our incipient attempts to address those problems ‘effortless’.
Pichai says regulation is needed: how to deal with the social consequences of AI is ‘not for a company to decide’. But when the EU’s draft AI Act was released in 2021, Google was not completely supportive. And there are doubts about whether the proposed Act deals adequately with this generation of chatbots. Now Pichai suggests we need global governance similar to international nuclear treaties. I have doubts about how that would work, but one thing is clear: Australia also needs to take regulatory steps to address this problem, and soon.
Michael Davis, CMT Research Fellow