Dirty dancing
News last week that Queensland’s Liberal National Party had released an AI-generated ‘deepfake’ video of Premier Steven Miles dancing on TikTok has drawn both ridicule and concern. Miles said it marked ‘a very dangerous turning point’ and declared that Queensland Labor would not follow the LNP down the AI path. He may be left holding that flag alone, though: observers soon noted that on 4 June the federal ALP had released a similar political parody video featuring Peter Dutton.
The two videos are fairly innocuous parodies, and both are marked – in the post, not in the videos themselves – as AI-generated. But there is still reason for concern if they signal a coming normalisation of deepfakes in political advertising. Other countries have seen growing use of AI-generated content in political communications. At the relatively innocuous end, AI-generated video of dead politicians has been used to endorse current candidates in both India and Indonesia; far more troubling is the use of audio and video deepfakes of political figures to deceive voters through campaign calls or the media. This week, Elon Musk shared a video on X – in potential violation of the platform’s rules – featuring fabricated audio of Kamala Harris that is clearly a parody but also contains statements that may mislead viewers about what Harris believes or has said.
AI-generated content can be deceptive when the context of its production or communication is obscured, as we saw when images depicting the arrest of Donald Trump, created and shared online by Bellingcat co-founder Eliot Higgins, went viral. There is added risk with political content, especially during election campaigns. At the individual level, a deceptive advertisement, or a series of them, may mislead people into changing their vote. And a political environment where deepfakes are normalised may be one of endemic distrust, granting a ‘liar’s dividend’ to the unscrupulous. This is surely something we want to avoid.
The Senate is currently considering laws to criminalise the creation and sharing of sexually explicit deepfakes, but Australia is yet to take any action on other high-risk uses of AI following the Department of Industry’s discussion paper on responsible AI last year. Electoral laws are also of little help. Section 329 of the Commonwealth Electoral Act proscribes deceptive communications only where they concern the casting of a vote. South Australia and the ACT have broader prohibitions, but these are limited to paid advertising. The proposed Combatting Misinformation and Disinformation Bill may provide an avenue to address deliberately deceptive unauthorised political advertising, but it would not apply to authorised electoral advertising. In any case, since it is targeted at digital platforms, it would provide no sanction against the creators or disseminators of the deepfakes.
Despite his concern about the impact of political deepfakes on democracy, Steven Miles has ruled out stronger laws for Queensland. But federally, both major parties have indicated support. The Australian people support stronger protections too: an Australia Institute poll conducted during the Voice referendum last year found that 87 per cent of voters back such laws, and recent research from cybersecurity firm McAfee showed rising concern among Australians about the potential for deepfakes to manipulate voters. Acting on political deepfakes now would give us the opportunity to nip the problem in the bud, before our public sphere is degraded by political ads more pernicious than those featuring a little dirty dancing.
Michael Davis, CMT Research Fellow