When AI becomes A Lie
Apparently AI has ‘hallucinations’. Funny, right?
Hardly. Ethics is full of grey areas, but one of the dependable certainties is that lying, deceiving and misleading are wrong. Sure, you can lie to save a life. You can lie in the service of a collective fantasy such as a sleigh drawn by flying reindeer. But these exceptions are rare, because truth matters. Truth is how we build trust. It’s the foundation of our relationships, and our society.
Unfortunately, AI doesn’t care. It mixes bogus with bona fide, spitting out text in which truth and untruth mingle until the two are impossible to distinguish. One small example: at the CMT, PhD candidate Christopher Hall is researching ‘platform journalism’. Keen to see what generative AI might say on the topic, he asked ChatGPT to suggest three ‘reputable sources’, which it duly did.
‘The only problem was that they were all fake,’ Chris writes. One of the sources ChatGPT recommended was a Guardian article from 2015 called ‘The Rise of Platform Journalism’. There is no such article at the Guardian. There is a piece under that headline, as it happens, but it appeared in 2022. In this newsletter. Written by Chris.
Flagrant falsehoods? That brings us logically to Fox News. This week Fox took a US$787m hit for spreading false claims that the 2020 US presidential election was rigged. In a last-minute out-of-court settlement, Fox capitulated in the defamation lawsuit brought by the voting machine company, Dominion. Further lawsuits are in the works. As Dominion’s lawyers said outside the Delaware courthouse, ‘The truth matters. Lies have consequences.’ And today, acknowledging the Dominion result, Lachlan Murdoch dropped his lawsuit against Crikey, who have so far raised more than $588,000 via crowdfunding to cover legal costs. Their GoFundMe page says that any surplus funds will go to the Alliance for Journalists’ Freedom, who are campaigning for a Media Freedom Act.
For anyone who aspires to work under the tag ‘journalist’, accuracy is a core tenet of good practice, embedded in the codes of conduct of the MEAA, the ABC and the Commercial TV industry, among many others. And accuracy should be a core tenet for AI developers too. After all, we’re not really talking about ‘hallucinations’. As Carl T. Bergstrom and C. Brandon Ogbunu write, ‘When AI chatbots flood the world with false facts, confidently asserted, they’re not breaking down, glitching out, or hallucinating. No, they’re bullshitting.’ That BS needs to be called out and weeded out, no matter which form of media is spreading it. And the companies responsible must be held to account, just as Fox News was.
Sacha Molitorisz, Senior Lecturer - UTS Law