Vulnerable but not silly
Welcome to our newsletter. This week Tamara tackles the tension between freedom of expression and curbing the spread of misinformation in Brazil, where the country’s Supreme Court has upheld a ban on X (formerly Twitter). Sacha delves into the controversial question of banning kids from social media, and Meta’s unashamed admission that it is mining our data to feed its AI machine and that we can’t stop it. Michael assesses the government’s latest Combatting Misinformation and Disinformation Bill, tabled in Parliament this week. And I’m looking at research suggesting that politically motivated, AI-generated ‘deepfakes’ may not be hitting the mark.
ACT Senator David Pocock was on the tools this past week, creating his own AI-generated deepfakes – one of Prime Minister Anthony Albanese and the other of Opposition leader Peter Dutton, each, in a rare albeit fake moment of bipartisanship, proposing a full ban on gambling ads. The Senator was making the point that the government needs to ban the use of AI-generated material ahead of the next federal poll to avoid harm to our democracy.
Putting aside the political theatrics, it’s worth asking whether deepfakes have in fact affected any of the many elections held around the world this year. Researchers at Oxford University and the University of Zurich conclude that the answer is, actually, ‘not so much’: ‘early alarmist claims about AI and elections appear to have been blown out of proportion.’
The researchers noted that generative AI was anticipated to be cataclysmic – making it easier ‘to create realistic but false or misleading content at scale, with potentially catastrophic outcomes for people’s beliefs and behaviors, the public arena of information and democracy’. But when they examined whether there had been an increase in the quantity, quality and personalisation of misinformation in nations where elections have been held – Pakistan, Bangladesh, India, Indonesia, Taiwan, Mexico, South Africa, the UK, Panama and South Korea among many more – the instances and impact were significantly lower than anticipated. You can look here at a compendium of known incidents of AI-generated election-related misinformation in the nations named above.
Of course, AI is being used to generate misinformation in electoral processes. But at this point – even in the US, where the election campaign is well underway – the research shows the level of misinformation is no greater than usual, and that where AI-generated interference has been detected, ‘these efforts have not been fruitful’. This appears to be supported by the Alan Turing Institute in the UK, which examined 112 elections held since 2023 and found that the current impact of AI on specific election results is limited, though the threats show signs of damaging the broader democratic system: ‘As of May 2024, evidence demonstrates no clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.’
All of this led the Oxford University researchers to ask why the speculation about the damage from deepfakes and other AI-generated electoral content was so far off the mark. They concluded that it comes down to the fact that humans are stubborn, and not silly. New information (in the form of disinformation) might get them thinking, but it rarely translates into behavioural change. And with the overload of information voters are presented with, AI-generated content isn’t cutting through. The researchers also noted that ‘voters seem to not only recognise excessively tailored messages but actively dislike them.’
In their view, there are other election-related threats we ought to be far more concerned about: politicians peddling lies, voter disenfranchisement and, not least, attacks on journalists.
And in completely unrelated but nonetheless intriguing news, Nine Chief Executive Mike Sneesby is stepping down and a ‘global search’ is underway for a new boss. It’s been a turbulent time at Nine, with the departure of chair Peter Costello and the exit of some 100 journalists from the newspaper arm of the organisation.
Monica Attard, CMT Co-Director