Real or fake? How to spot misinformation
With deepfakes and misinformation becoming more convincing, it’s crucial to think critically about what we see and read. Dr Marian-Andrei Rizoiu, Associate Professor in Behavioural Data Science at UTS, shares three key questions to help you assess whether the information you come across is real or fake.
Where did it come from?
Did the information come from friends, family, colleagues or the media? Always check the source. Is it reliable and trustworthy? Consider whether this source might have any biases.
Traditional news outlets follow regulations and quality standards. According to Dr Rizoiu’s research, these news sources generally provide complete and safer-to-consume information. However, keep in mind that even established news publishers can have political biases, shaped by their history or ownership. Tools like the AllSides media bias chart or Media Bias Fact Check can help assess bias and reliability.
On the other hand, unregulated sources like blogs or social media are more prone to spreading incomplete or biased information, as they don’t have the same editorial oversight and rigorous standards of journalistic integrity found in traditional media.
Can it be verified?
It might seem simple, but Google it! Cross-check the information with other credible news sources or research studies. If it stands alone, or is inconsistent with other reliable outlets, it’s likely inaccurate.
Social media often shows biased, curated versions of the news that fit specific beliefs. With 69% of US adults using social media for their news, many see cherry-picked content rather than objective, balanced coverage.
This echo chamber effect is worsened by algorithms in social media and search engines, which recommend content similar to what users have already engaged with. To break free from this information bubble, Dr Rizoiu suggests we actively seek out alternative viewpoints. Search engines like Google will surface the same story as reported by multiple sources. Even reviewing just the first page of search results, and noting where sources rank on the page, can provide useful clues. If multiple credible sources contradict the claim, consider it a red flag. For a truly objective search, consider deleting your browsing history first.
However, even using Google has its limitations. A recent study found that web searches can reinforce belief in misinformation, particularly when users rely on low-quality sources. This happens because people often search for information that aligns with their existing beliefs, not to find balanced views.
Sometimes, misinformation producers invent terms, which leads to them dominating search results for those terms. For example, the term "adrenal fatigue" was created to promote specific misinformation, and for a period, the only Google results came from sources promoting that concept.
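The cross-checking step lends itself to a simple heuristic. Here is a minimal Python sketch of the idea; the claim, sources and verdicts are all invented for illustration, standing in for the outlets and fact-checkers you would actually read yourself:

```python
# A toy version of the cross-checking heuristic described above. The
# claim, sources and verdicts are all invented; in practice you would
# read several credible outlets or fact-checkers yourself.

claim = "Drinking seawater cures dehydration"

source_verdicts = {
    "Outlet A": "contradicts",
    "Outlet B": "contradicts",
    "Fact-checker C": "contradicts",
    "Unknown blog D": "supports",
}

contradicting = sum(v == "contradicts" for v in source_verdicts.values())
supporting = sum(v == "supports" for v in source_verdicts.values())

# The heuristic: a claim that stands alone, or that multiple credible
# sources contradict, is a red flag.
if contradicting >= 2 and contradicting > supporting:
    print(f"Red flag: {contradicting} sources contradict: '{claim}'")
else:
    print("No consensus against the claim; keep checking credible outlets.")
```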
Does it sound too good (or bad) to be true?
If something feels too extreme, either positive or negative, it probably is. Sensational headlines, emotional manipulation and clickbait are designed to grab attention and draw you in. Be wary of professional-looking visuals or polished content, as these can mask misinformation.
Generative AI tools, like ChatGPT, have made it easier to create convincing content, and even harder to spot misinformation. Gone are the days when spam had obvious spelling and grammar errors. Now, everything can look flawless, weakening one of our traditional defences against misinformation.
Dr Rizoiu’s research also highlights the use of “junk science”, where legitimate research is distorted to support false claims.
Misinformation often uses seemingly credible sources to create a misleading narrative. This includes referencing reputable news articles or academic studies but removing context and nuance to distort the findings — sometimes intentionally, or simply due to a lack of understanding.
In a world where deepfakes and misinformation are increasingly difficult to spot, critical thinking is your best defence. By questioning the source, verifying information across credible outlets, and being wary of sensational claims, you can better navigate today’s overwhelming flow of information.
Want to dive deeper? Join Dr Marian-Andrei Rizoiu and a panel of experts at SXSW Sydney 2024 for an in-depth discussion on the spread and impact of deepfakes, fake news and misinformation on a global scale.
Want to learn more about the spread of misinformation in our digital world? Check out our Curiosities series on YouTube.
Curiosities
Social Curious Episode 6
The spread of misinformation in our digital world
with Associate Professor Marian-Andrei Rizoiu
00:00:00:00 - 00:00:24:07
Hello curious people. I'm Dr Marian-Andrei Rizoiu, an Associate Professor in the Behavioural Data Science Lab here at UTS, and I'm here to answer your curious questions about the spread of misinformation in our digital world. This is Social Curious.
00:00:24:09 - 00:00:48:10
Our UTS community has sent in some thought-provoking questions to tackle, so let's get started. I know that social media runs on algorithms. What does that actually mean? This is a very good question. In fact, social media is the first time we are mediating social interaction using digital technology, and digital technologies, as you know, run on algorithms.
00:00:48:12 - 00:01:11:02
There are plenty of them, shaping how we interact, how we save our data, and the type of data we work with. But one of them is by far the most important in shaping interactions online, and that is the online recommender engine. The purpose of the recommender engine is to help us navigate the digital landscape of our online social media by recommending content that we might prefer.
00:01:11:04 - 00:01:35:06
For example, you may be on your favourite video streaming platform and you don't know what to watch tonight. So what does the recommender engine do? Well, it analyses your previous views, and it also looks at how similar you are to other viewers on the platform. Maybe you have friends on the platform, so it also looks at what they watch and what they prefer to watch.
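The process Dr Rizoiu describes, comparing you to similar viewers and pooling their preferences, is essentially user-based collaborative filtering. A minimal sketch, with made-up ratings and a deliberately tiny user base, might look like this:

```python
import numpy as np

# Minimal user-based collaborative filtering with made-up ratings.
# Rows are viewers, columns are videos; 0 means "not yet watched".
ratings = np.array([
    [5, 4, 0, 1],   # you
    [4, 5, 1, 0],   # a viewer with similar tastes
    [1, 0, 5, 4],   # a viewer with different tastes
], dtype=float)

you, others = ratings[0], ratings[1:]

# Cosine similarity between you and every other viewer.
sims = others @ you / (np.linalg.norm(others, axis=1) * np.linalg.norm(you))

# Score each unwatched video as a similarity-weighted average of the
# other viewers' ratings, then recommend the highest-scoring one.
unwatched = np.where(you == 0)[0]
scores = {int(v): float(sims @ others[:, v] / sims.sum()) for v in unwatched}
best = max(scores, key=scores.get)
print(f"Recommend video {best} (predicted score {scores[best]:.2f})")
```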
00:01:35:10 - 00:01:54:14
It aggregates all of that and recommends a couple of pieces of content for you to watch, and that is how it works. Where else do we see recommender engines? Well, when you shop online, maybe buying a new gadget, when you're searching for that gadget there are always going to be recommendations of
00:01:54:14 - 00:02:28:20
'see also' or 'frequently bought together'. The same principles apply there: the recommender engine looks into what you intend to buy, then aggregates the preferences of previous buyers together with your own personal tastes and creates a personalised recommendation. Is there a formula for viral content? Now that is a great question. Before I can tell you about viral content, I have to tell you why interactions on online social media are special and how they mimic some of the things we've already seen in the offline world.
00:02:28:22 - 00:02:44:22
The way information travelled in the offline world, back in the day, was that people would tell news to their friends and families, and they would go on and tell it to their own friends. On digital media platforms, we are actually mimicking the word-of-mouth process: a digital word of mouth.
00:02:45:03 - 00:03:09:16
And what better way to explain how word of mouth actually works digitally than by using these lovely offline, real-world icons. So let's assume you're writing a Facebook post about a new hobby. The moment you post it, the people around you, your friends, will see it in their feeds.
00:03:09:17 - 00:03:31:13
They will hit the like button, or they will comment. By liking it, they push that content into the feeds of their friends, who are not directly connected to you. That leads to a second-generation spread, and their friends will spread the content to their own friends, leading to an increasing number of generations.
00:03:31:19 - 00:03:57:10
This is the digital word-of-mouth process. Now, the formula for viral content. Well, we don't really know exactly how it works, because it's a mix of two things: quality, and pure luck and timing. So yes, if the original content is of high quality, of course it will spread more, because people are more inclined to like it and therefore push it into their networks.
00:03:57:12 - 00:04:23:11
But who your friends are is also very important. If one of your friends is a very highly followed person, say an influencer, they will have thousands, maybe tens of thousands of followers, so the likelihood of the content spreading widely is going to increase. So what is the key to virality? Well, writing good content is good, but having influential friends is even better.
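The quality-plus-luck mix, and the influencer effect, can be illustrated with a toy branching-process simulation. All the probabilities and follower counts below are invented for illustration, not fitted to any real platform:

```python
import random

def simulate_cascade(quality, first_followers, generations=5):
    """Toy branching process for digital word of mouth.

    `quality` is the probability that a person who sees the post
    re-shares it; each re-share reaches a random number of new people.
    All numbers are invented for illustration.
    """
    wave = first_followers          # people exposed in the current generation
    reach = wave
    for _ in range(generations):
        shares = sum(1 for _ in range(wave) if random.random() < quality)
        wave = sum(random.randint(5, 50) for _ in range(shares))
        reach += wave
        if wave == 0:
            break
    return reach

random.seed(7)
# Same content quality; what differs is the starting audience, and luck.
for run in range(3):
    print("ordinary friends: ", simulate_cascade(quality=0.02, first_followers=150))
print("influencer friend:", simulate_cascade(quality=0.02, first_followers=20_000))
```

Runs with the same quality but ordinary friends mostly fizzle out, while the influencer start compounds across generations, matching the "good content, better friends" observation above.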
00:04:23:12 - 00:04:50:20
Why does social media feel so extreme? Well, that is actually down to the design of social media. Remember that it is a digitised approach to information exchange. Back in the day, go back a couple of decades, maybe 20 or 30 years, the people we interacted with most were the people who were physically close to us. So if you lived in a small town, that would be your family, your friends, maybe the people you bumped into at the grocery store.
00:04:50:22 - 00:05:13:17
But these days, with online social media, we can talk with people on other continents in real time, which means we get exposed to all sorts of views in real time. We also know that on social media, most people don't really post. We call these people lurkers: they tend to consume content, but they are not very vocal.
00:05:13:19 - 00:05:54:18
However, there is a very vocal minority out there, and now everyone is suddenly tuned to these vocal minorities, which may make them seem larger than they actually are. Add on top of that what we call homophilic links, meaning that we tend to structure into groups based on preferences. We surround ourselves with people who are similar to us in taste and worldview, and we create Facebook groups and WhatsApp chat groups where we tend to exchange views. In these safe environments, that sometimes leads to the proliferation of all sorts of toxic online content.
00:05:54:22 - 00:06:22:16
How do biases drive the spread of misinformation? Misinformation is defined as information which is not accurate or fact-based, but which is typically not spread on purpose. So this would be people spreading information because they truly believe it's true, when in fact it doesn't have any scientific backing. Now add on top of that the fact that we know more than 60% of the adult population actually consumes news from online social media.
00:06:22:18 - 00:06:47:16
They no longer go to the websites of the major news publishers; they consume the news from social media. But where do they spend most of their time? They spend most of their time in these homophilic bubbles, so they essentially consume the information that is posted by their friends. These bubbles not only serve you information with a particular slant, they also act essentially as filters.
00:06:47:17 - 00:07:14:02
We also call them filter bubbles. What that means is that alternative views of the information never make it into the bubble. They essentially shield the members of the group from other types of information, which means that over time these groups can slide into more extreme views and sort of disconnect from the main narrative.
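A filter bubble can be sketched in a few lines: a feed that admits only posts close to your own view, so alternative views never make it in. The opinion scores below are made up for the example:

```python
# Toy illustration of a filter bubble: a feed built only from posts by
# like-minded friends. Views are encoded as a single number in [-1, 1]
# for simplicity; all values are invented.

my_view = 0.8  # where I sit on some opinion axis

posts = [
    {"author": "friend_a", "view": 0.7},
    {"author": "friend_b", "view": 0.9},
    {"author": "stranger", "view": -0.6},  # an alternative view
    {"author": "friend_c", "view": 0.75},
]

# Homophilic filtering: only posts close to my own view make it through.
feed = [p for p in posts if abs(p["view"] - my_view) < 0.3]

print("What I actually see:", [p["author"] for p in feed])
# The -0.6 post never reaches me, so my picture of "what people think"
# is drawn entirely from inside the bubble.
```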
00:07:14:04 - 00:07:41:24
Is it possible to create an ethical algorithm for social media platforms? The answer to that question is sort of yes. We could, of course, ask the recommender engine to abide by certain principles. However, we also need to understand that any constraints we add to a recommender engine actually reduce its overall efficacy. The reason for this is that the purpose of the recommender engine is to optimise one single measure.
00:07:42:01 - 00:08:07:21
It is usually trained, using machine learning approaches, to optimise that single measure. But if we want to introduce additional ethical directives, it essentially needs to optimise multiple measures at the same time, which means that the solution it arrives at will not be the best solution on every single dimension in isolation.
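The single-measure versus multi-measure trade-off can be shown with a toy scalarisation: rank content by engagement alone, then by engagement minus a weighted "harm" penalty. All scores and names below are invented:

```python
# Toy illustration of why ethical constraints reduce the optimum on the
# original measure. Scores are made up; "harm" stands in for whatever
# an ethical directive penalises.

items = [
    {"id": "clip_1", "engagement": 0.90, "harm": 0.70},
    {"id": "clip_2", "engagement": 0.60, "harm": 0.05},
    {"id": "clip_3", "engagement": 0.75, "harm": 0.40},
]

# Single objective: pick purely by predicted engagement.
best_single = max(items, key=lambda i: i["engagement"])

# Two objectives, scalarised: engagement minus a weighted harm penalty.
lam = 0.5
best_multi = max(items, key=lambda i: i["engagement"] - lam * i["harm"])

print("engagement-only pick:", best_single["id"])   # clip_1
print("with ethical penalty:", best_multi["id"])    # clip_2
# The constrained pick scores lower on engagement alone: the trade-off
# described above.
```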
00:08:07:23 - 00:08:36:00
What does that mean? On its original task, to increase engagement and therefore increase monetisation, an ethical approach is likely to perform with lower efficacy. That means there is very little monetary incentive for the platforms to actually do it, because it reduces their financial position. How do social media algorithms make false information spread more?
00:08:36:03 - 00:08:58:15
We need to start by acknowledging that recommendation algorithms, and algorithms in general, do not prefer to spread misinformation; they are not designed to spread misinformation. In fact, they are not designed to spread any particular type of information. They are designed for one thing only, which is to increase engagement, but that means they will feed us the information we engage most with.
00:08:58:16 - 00:09:22:06
So if we tend to consume and engage more with a particular slant of information, say with misinformation, it will get noticed by the recommender engine, which in turn will start recommending more of the same because, well, it is more engaging. And that is one of the pathways through which automatic recommender engines can end up reinforcing the spread of misinformation.
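That feedback loop is easy to simulate. In this toy sketch, assumed engagement probabilities give misleading items a slight edge, and the feed is re-weighted each step toward whatever got clicked; watch the misleading share grow:

```python
import random

# Toy feedback loop: the recommender boosts whatever gets engaged with.
# If misleading posts are slightly more engaging, their share of the feed
# grows over time. All probabilities are made up for illustration.

random.seed(0)
share_misinfo = 0.5          # initial share of misleading items in the feed
p_engage = {"misinfo": 0.30, "reliable": 0.20}  # assumed engagement rates

for step in range(10):
    # Sample 1,000 impressions from the current feed mix.
    clicks = {"misinfo": 0, "reliable": 0}
    for _ in range(1000):
        kind = "misinfo" if random.random() < share_misinfo else "reliable"
        if random.random() < p_engage[kind]:
            clicks[kind] += 1
    # The engine re-weights the feed toward whatever got more engagement.
    total = clicks["misinfo"] + clicks["reliable"]
    share_misinfo = clicks["misinfo"] / total
    print(f"step {step}: misleading share of feed = {share_misinfo:.2f}")
```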
00:09:22:08 - 00:09:51:09
What steps do platforms or creators take to counteract misinformation campaigns? Well, there are a number of measures that platforms can take, and they range from hard moderation to deplatforming to soft moderation. Hard moderation essentially means removing content from the platform; it's typically reserved for content that is clearly toxic or illegal. Deplatforming is when users are suspended from the platform.
00:09:51:14 - 00:10:33:12
Again, it's reserved for people who break the terms of service or who do illegal things. So these are pretty harsh measures, reserved for when it is quite obvious that the activity is wrong. Misinformation is actually a continuum, so it is difficult to apply these strong measures to things that could be classified as humour or sarcasm. Another measure, which tends to work a bit better in this case, is what we call soft moderation. Soft moderation essentially puts a warning on top of the content saying that it contains claims that have been debunked, together with a link to the debunking.
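A minimal sketch of soft moderation might look like the following; the debunked-claims database and URL are invented for the example, and the systems platforms actually use are far more sophisticated:

```python
# Toy sketch of soft moderation: attach a warning and a debunk link to
# posts that repeat a known debunked claim. The claim database and URL
# below are invented for the example.

debunked_claims = {
    "5g towers spread viruses": "https://example.org/factcheck/5g",
}

def soft_moderate(post_text):
    """Return the post, prefixed with a warning label if it repeats a
    debunked claim. The post itself stays up, softer than removal."""
    for claim, debunk_url in debunked_claims.items():
        if claim in post_text.lower():
            return ("[Warning: this post contains claims that have been "
                    f"debunked. See {debunk_url}]\n" + post_text)
    return post_text

print(soft_moderate("Wake up!! 5G towers spread viruses!"))
```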
00:10:33:14 - 00:10:57:22
Now, all of these are what we call reactive measures: the misinformation arrives and then we do something to respond to it. However, what is known to work best is pre-emptive measures, which essentially means teaching users to use critical thinking to stem misinformation before it reaches them. You're a data scientist. What worries you most about misinformation and deepfakes?
00:10:57:24 - 00:11:27:17
My main source of worry stems from the reach of information. Remember when we discussed that, using online environments, you can get real-time access to almost any type of information out there, including the more extreme views. Now, repeated exposure to such extreme types of information will lead to changes in our perception, which, at the societal level, leads to what we call offline effects.
00:11:27:19 - 00:11:53:02
Things like losing our trust, or perceived trust, in our democratic institutions, because increasingly extreme and vocal voices online keep bombarding us with that information. It's not necessarily what the majority thinks, but we keep getting exposed to that type of information over and over. Now, deepfakes, and generative technologies in general, open up a lot of opportunities for us.
00:11:53:02 - 00:12:29:10
But they also open up challenges, because nowadays the technology is accessible to almost anyone with minimal to medium technical skills, which means that almost anyone can produce a very believable deepfake of a politician or a public figure saying outrageous things. We've already seen this repeatedly with political figures. That will only contribute to making the mistrust even worse, because now we are no longer sure not just about the information, but not even about the content that we're seeing.
00:12:29:10 - 00:13:06:22
We can't be sure that the politician actually said the things the video shows. And the final point is being exposed to outright extremist content, which even two or three decades ago would have been quite rare. But these days it's out there, and it is most worrying for the young generation, for the children and teenagers who have barely gained access to the internet and are already bombarded with toxic masculinity, extremist views and dangerous teen challenges.
00:13:07:02 - 00:13:31:05
This is all very toxic content that exists out there freely, and they can access it with just a computer in their room. That's what worries me as a data scientist. Those were all the questions for today; I hope you learned something new. Until next time, stay curious.