UTS Media Salon: The Journalist and The Scholar is an event series that unites one top international journalist or editor with a key academic for a lively discussion on a big-picture topic.
UTS Media Salon
'Crises on a global scale' with Katharine Viner and Dr Seeta Peña Gangadharan
For this first instalment in the series, join Editor-in-Chief of The Guardian Katharine Viner, and Associate Professor at the London School of Economics and Political Science Dr Seeta Peña Gangadharan, as they discuss the topic 'Crises on a global scale': fighting for change in the post-pandemic world.
Well, hello, everyone. We will get started now.
And I'd like to virtually welcome you to UTS and our first
panel in our new international online series,
the UTS Media Salon: The Journalist and the Scholar.
My name is Christine Kearney,
and I'm a journalist and lecturer in digital journalism here at the University of Technology Sydney.
We have two very accomplished guests joining us today,
Editor-in-Chief of The Guardian,
Katharine Viner, and London School of Economics and Political Science
Associate Professor Dr. Seeta Peña Gangadharan.
But before I introduce them further,
I'd like to make an acknowledgement of country.
I would like to acknowledge the Gadigal people of the Eora Nation, upon whose ancestral lands our city campus now stands.
I would also like to pay respect to the Elders, both past and present, acknowledging them as the traditional custodians of knowledge for this land.
I would further like to acknowledge the traditional custodians of the various
ancestral lands from which our other attendees join us today
and to pay respects to those elders past and present.
Now I'm most excited to properly introduce our journalist and scholar today for our inaugural panel in this series, 'Crises on a global scale': fighting for change in the post-pandemic world. But before I do, a little bit of housekeeping. I'd like to invite you all to use the hashtag UTS Media Salon today, and to please post any questions for the panelists into the Q&A box at any time as we move through the discussion. I will leave some time at the end to answer them, so please feel free to post any questions at any time. Okay.
Now I'm going to unshare my screen as I introduce our panelists properly and cue them in.
Okay. So our first panelist is more than just a
journalist.
Katharine Viner has held the position of Editor-in-Chief of The Guardian since June 2015, after joining as a writer in 1997. She was appointed Deputy Editor in 2008.
She launched the award-winning Guardian Australia in 2013 and was
also editor of Guardian US, based in New York.
Katharine gave the 2013 A.N. Smith Lecture in Journalism at the University of Melbourne, called 'The rise of the reader', discussing journalism in the age of the open web, and gave a speech on truth and reality in a hyper-connected world as part of the Oxford University Women of Achievement lecture series in 2016. She is the winner of the 2017 Diario Madrid prize for journalism, for her long read 'How technology disrupted the truth'.
And Katharine, I believe you have just returned from COP 26 in Glasgow,
and it's very early in the morning for you in London. So welcome.
And I hope you have been able to have a cup of tea.
Yeah, I was toasting you with an Australian-style flat white this morning. Essential for the proceedings to take place.
How we love a flat white here in Australia.
Our second guest today is also more than just a scholar. Dr Seeta Peña Gangadharan is Associate Professor in the Department of Media and Communications at the London School of Economics and Political Science. Her work focuses on inclusion, exclusion, and marginalization, as well as questions around democracy, social justice, and technological governance. She is also a Visiting Scholar in the School of Media Studies at The New School, an Affiliated Fellow of Yale Law School's Information Society Project, and an Affiliate Fellow of the Data & Society Research Institute. She currently co-leads two projects: Our Data Bodies, which examines the impact of data collection and data-driven technologies on marginalized communities in the US, and Justice, Equity and Technology, which explores the impacts of data-driven technologies and infrastructures on European civil society. Before joining LSE, she was a senior research fellow at New America's Open Technology Institute, addressing policies and practices related to digital inclusion, privacy, and big data. Her 2019 TED Talk is called 'Technologies of Control and Our Right of Refusal'. And Seeta, I know it's even earlier in the morning where you are joining from, in Boston, so I know you are probably needing far more than tea; very strong coffee, I presume. A very warm welcome to you too.
Thank you so much. It's just a water for me this morning.
All right. So let's get going. Katharine, we might start with you.
The very name of this panel is intended to follow up on a recent essay you wrote for The Guardian on how media can build a better world beyond COVID, where you pointed out the changes that have occurred throughout history after previous global upheavals. You quote various academics and writers suggesting the after-effects of a global shock depend on the prevailing mood of the times, and I quote you here: "positive changes do not automatically emerge from periods of crisis. You have to fight for them." So broadly speaking, could you expand on this? What's the current mood for change, and what kind of living fights, so to speak, do we have on our hands?
Well, I think there's quite a lot of fights on our hands. And, you know, I feel even from when I wrote that, back in April, May, it's got even tougher, I think. What I was trying to get at is that I don't think we should just say we're going to return to the way things were, and that should be our aspiration: the best thing out of the pandemic is we can get back to how things were. I don't think that should be our aspiration. It was pretty bad. It was pretty unequal. Are we just hoping to get back to that, but with more screens, because everyone's spent longer with screens, and with more surveillance, which I know is something Seeta will be talking about as much more expert than me? Is that the best, our aspiration, just to get back to that? There were loads of things that were wrong with society. The right often see, you know, Naomi Klein called it the shock doctrine: they see an opportunity in a crisis to take more power for themselves. I think it's important that we fight for the common good, fight for public space, and put pressure on all of these companies, from fossil fuel companies and extractors all the way down to how we tax the super rich, to try and fight for a better society. I admit it doesn't look like it's happening as much as I'd hoped, but I still think you never know how things are going to end up, and you just have to keep fighting.
Katharine, I might just follow up with you after that. In the essay you also speak of the pandemic in a positive light, and its effect of giving us fresh ideas and new energy for tackling these looming crises. And as you've just noted, you quote warnings from the likes of Arundhati Roy and Naomi Klein, who said Grenfell was a rehearsal for COVID, just as COVID is a rehearsal for climate breakdown, if we don't radically change course. So how is the climate debate, and COP26, where I know you just came from, an example or not of how we can learn from the pandemic to bring new global approaches? I know Boris Johnson said yesterday, in his ongoing football-match score on the climate emergency, that the world is now down five to two or five to three, and not five to one. So how can global solutions found for the pandemic inspire responses to other looming crises, such as climate, do you think?
It's so interesting: Boris Johnson has shown no interest in football throughout his life, and suddenly he's using these slightly bewildering metaphors. But at least he is talking about the environment in a way; he just seems to have had an epiphany. Let's see if it affects his policies, because that's where it really matters. I would never describe the pandemic as a positive thing, I would never say that, but I think out of it there was something where everybody just sort of stopped, right? Those particular few weeks in the UK, and I know more recently in some cities in Australia, the lockdowns where everything stopped, and with Australia that inability to leave the country and so on, the stopping of flying, the stopping of movement, had people thinking: well, what really matters? The fact that I can't see this person that I love, that's what matters most to me. It's not that I can't fly all around the world on vacation, or that I can't go out endlessly in sort of fossil-fuel-driven fun. And I think all of those things, and it sounds slightly naive when you say it, but I really felt it, those few weeks of silence and birdsong, and how so many people found solace in nature and really noticed nature for the first time. There was all that great stuff about birdsong actually getting louder: it wasn't just that it felt louder, it got louder, because everything else was so quiet. And I think there's stuff to learn in that deep pleasure that people got from that; it was incredibly beautiful: rewarding connections with nature, connections with each other, who we really missed, what it really felt like emotionally. And so I think we need to try and get some of that into our public policies, rather than this endless quest for more growth, more consumption, more everything.
Okay, great. And so public policy is possibly one of the looming crises.
Seeta, I might bring you in here as we talk about big-picture ideas during and coming out of the pandemic. Katharine cites in her essay academics who call this the first worldwide digitally witnessed pandemic, and indeed you're an expert in digital spheres and have championed equality in the digital world. So what are your views on technology's impact or role during the pandemic? And again, broadly speaking, how can we learn from this to help us with any looming crisis or crises that you might see?
At ten past four, quarter past four in the morning, I want to try and strike a somewhat optimistic note, but I can say very genuinely that from where I stand, and from the research that I do and the work that I do in collaboration with many communities, both in Europe and in the United States, things do really, honestly, feel quite grim. For example, here in the US I'm very much a part of the conversations around how to rein in the power of big tech. And, you know, as Jeff Bezos is going to outer space, to build on that metaphor of movement that you've alluded to, Katharine: we are seeing the most extreme wealth disparity of our time being made manifest in this journey to outer space, while many people, many populations, will be further entrenched, stuck in their places and spaces, as a result of the pandemic. And so what I have sort of arrived at is that our interactions with tech, our relationship to technology, have I think very much transformed in the darkest months of the pandemic. Partly because we were able to see the tools that many of us have come to understand as information and communication tools as fundamentally surveillance and extractive tools. Whether that surveillance is tied to your workplace monitoring your performance throughout the workday, or the use of technology more broadly; I mean, for many populations the surveillance use of technology is sort of par for the course, but for everyday citizens and everyday consumers I think there's this sense of: actually, I'm on a screen for far longer than I would have enjoyed, and what's happening on the other side of that screen? And so I feel we're at an inflection point, where for 25-odd years we've had this understanding of technology as a democratizing tool, and it has felt like in the last year and a half, almost two years now, that technology, at least these communicative technologies, have really become disciplinary technologies. And again, the sheer wealth that has accumulated to Amazon, Alphabet, Apple, Microsoft, Facebook, and the myriad companies that are lesser known but provide our computation or keep our computational infrastructure running: there's something grossly awry here. So I'm at a moment where it's still sort of exciting to build on some of the organizing and collective action that we saw over the last year and a half, almost two years, but at the same time there's certainly a lot of work to be done.
Well, Seeta, I might just follow up with you there as you speak of big tech. You do speak of it routinely: how it uses dark strategies to lure us in, and how solutions are not just about changing rules to enforce better algorithms, but about bigger solutions, perhaps rejecting what's currently on offer. I've heard you say that the internet of the 1990s is very different from today's; that you once believed in the power of the internet for greater inclusion and equality, but now you study how it enforces inequality. So how do we move forward to make sure people are getting access to good information and other digital benefits, and that companies, as you often note, are not exacerbating this cycle of disadvantage that you talk about?
I think there's two things. One, I'll come back to a point in Katharine's essay from May, which is this point about empathy. Many of us are steeped within a sort of filter-based, conglomeration-driven type of communicative environment, right? We're part of social media. So there's in that culture a sort of inbuilt insularity that happens, and a common understanding of each other has become difficult. But to a certain extent some of that melted away with the pandemic, right? This feeling that actually, not only am I going to take time to learn my neighbor's name, but my neighbor is actually experiencing the same thing as I am experiencing. And that is incredibly important, right? This idea that actually, oh yes, we do live on this planet together, and there are some things that we have to work out together. So there's that aspect. But yes, at the same time, we've sort of evolved a lobbying culture, a game of politics, that has really put technology companies in the driver's seat, to the extent that, whether it's for the development of health applications, or educational provision, or public safety, even state institutions have become more dependent on these technology companies. So what do we need to do? We need a moment of introspection, to reflect on whether the path that has been laid before us, in terms of technological development, deployment, and use, is the one that we want, or whether we need to use not just antitrust or competition tools to intervene in the power that technology companies have acquired, but all of the tools at our disposal to really reroute our understanding of, and our relationship to, technology. So that it's not this disciplinary relationship, right? I'm going to punish you because you are not in the system correctly as you move from one country to another, or as you move through your workday, et cetera. It's actually to renew that democratic sensibility that we have come to expect of our technologies.
Yeah. Katharine, I'd like to bring you back in here now, as Seeta talks about big tech, because you also noted in your essay the former Guardian editor CP Scott, who talked about a newspaper not just as a media business but as an institution, which should influence the whole community and has a moral as well as material existence. So I'm wondering, as Seeta talks about the impact of tech: what about the impact of tech on media? How do you think mainstream media can better influence this whole community, especially when competing so heavily for communities with tech companies, now including the likes of Facebook, or should I say Meta? And how can traditional or mainstream media compete with big tech, who are not as beholden to the media standards or ethics or moral values that CP Scott talked about?
CP Scott, I was about to say he was editor for a hundred years; he was actually editor for 57 years. He was an absolutely visionary editor. He wrote an essay in 1921 that is unbelievably astute when you read it now. I read him all the time, but at the moment I particularly love this line about newspapers having a moral as well as a material existence, and I would just love big tech to sit down and really think about what that would mean for them. Seeta talks about the limits of the sort of antitrust point, and I really agree with that. The focus is always on breaking up big tech, making them smaller, and that's obviously an important idea, but it's not just about the size; it's about what they do and how they behave. Whether you're talking about the unseen prejudices buried in the algorithms, all the way through to the thing that I worry about the most, which is how the business model, particularly of Facebook, drives people into extreme positions very, very quickly. There's been some incredible research about how quickly you are taken from quite mild content to quite extreme content. And, you know, we news organizations take responsibility for what we publish. Not all of them might do that to the extent we might wish, but the good ones do. And I think there's the idea of big technology realizing that what they publish, they have something to do with it; they haven't just passively provided a space. They've built some of the most profitable businesses ever seen on the planet out of those technologies, and they could have a bit less money and a bit more moral existence.
Exactly, Katharine. Just to follow up with you: I know you've talked about how technology has disrupted truth in another piece a couple of years ago, and how misinformation on social media takes influence over democracy, and of course we've had the Cambridge Analytica scandal and many others before and since. You've also pointed out how reporters have done great work during the pandemic in reporting reliable and truthful information. But I'm thinking about a quote again here from you: that the digital revolution might change how we interpret journalism's mission for the modern age. So audiences are no longer separate from journalists; we have a global one, and it lives, as you say, in polarizing spheres. So how do journalists capture this polarized audience? And we often talk about trust, building trust: how do journalists build trust with those who have lost it in institutions, including media ones, which we have seen during the pandemic? What do you think about that?
I think it is a very big challenge for journalists, partly because the sea you swim in has changed so much. In your Facebook feed, for example, you can't tell whether something is from a reputable source that is transparently funded and will correct its mistakes, or whether it's just some junk website that's trying to get clicks for cash, or whether it's from some propagandist, paid for by a state or some other actor, who is out to influence the scene for the worse. We may be angry with technology companies for that, but it also puts greater pressure on us to be trusted. Now, I do think the pandemic has driven audiences back to sites they trust. When they want valuable information about how to protect themselves from the virus, they are going to go to more reputable sites, and I think that's probably why there have been such gigantic audiences for reputable sites like The Guardian and many others, the BBC, the ABC, over the pandemic. Communities need good information, and in a pandemic misinformation is even more dangerous than it was before. It used to be that misinformation just meant that your weird cousin got completely obsessed with conspiracy theories; now it's deadly, it could result in something very serious. So misinformation has become deadly, and I think that puts greater pressure on journalists to sort of earn that trust. One of the things I often talk about is that I think journalists need to be part of the communities we report on, which historically, perhaps in the recent past anyway, we weren't. And I think print did this: if you were just publishing a newspaper once a day, or a few editions in the evening, then you would give out: here's what I know, here's what I think, you, grateful population, go read it. And you can write a letter if you like. Now it's a much more interactive experience, for better and for worse, and I do think that it's incumbent on journalists to be not just accountable, but also part of the communities they report on. And that leads to why it's so much more important for newsrooms to be much more diverse than they were in the past. If we're going to report on what's happening in communities, we should have teams that are from those communities, partly because of what you define as news. Something I find very interesting is that when you talk to different communities about what they consider newsworthy, it's really different. They all have really strong views on it, they all have a really strong sense of what is newsworthy, but it's all really different. So if you just hire a group of people from the same narrow stretch of society, you get exactly the same view of what's news. I sometimes joke that the best argument for diversity is that it makes your news coverage far less boring; there are so many more interesting things to talk about when you've got people bringing in things from all different places. I think that makes you trusted. When communities see things that are important to them not being reported, they lose trust in the institutions. And I think there's also some interesting stuff about understanding the value of journalism as distinct from activism. It's really, really important that we report things that are true, even if we don't like them. An activist might ignore facts they don't like, but journalists have to tell the whole story, in as straight a way as possible, I think.
Very true about diversity and how to build trust in audiences. Seeta, I'm wondering, as we talk about ethics and ideals in media standards: what about post-COVID digital ethics, which is something that you think about a lot? You've said, for example, that you worry about the technological dependencies created once technologies are in place, and of course we naturally think of the track-and-trace apps that you've talked about. Are we going to emerge post-COVID having normalized this kind of technology, and what is, to quote you, 'the messy path versus a more responsible path ahead'? You did mention this a little earlier. What kind of formal oversight might you recommend in terms of digital privacy, noting that even Facebook, in the last day, announced it's shutting down its facial recognition system? What do you think?
About the path ahead? Yes.
Again, at this early hour of the morning I'm going to try and be more optimistic; it's too early to be dark. But I kind of want to come back to something that Katharine was saying about the visibility of journalists and storytelling within communities, and how important that is in general. Of all of the public institutions related to information, and to a certain extent communication, the one that has probably escaped the pandemic in a stable place is the public library system, at least in the United States. The library is still a very trusted institution. People can lean on it to access the information that they need. It has a physical presence in communities, and not just a digital one, though in the United States many of the bigger library systems are circulating digital devices and so forth to extend the library. And when I think about ethics, I often think about the limitations of ethical practices. In the 1990s there was a real push, in the United States especially, to cultivate these ethical practices within newsrooms, with ombudspersons, with respect to public journalism. Those are all really important, but the actual practice of storytelling in a community, I think, needs to be recognized and elevated in a way that it hasn't been. It's really actually comforting for people to access the information that they need, and also to see themselves reflected in stories about their community, rather than be cut off or excluded. And, you know, we're living in a time where big tech has, to a certain extent, exceeded big oil and other big configurations of industry, to the extent that they have a presence in communities: they sponsor charitable events, they promise tens of thousands of new jobs in communities when they're setting up a fulfillment center, or things of that nature. And for journalists and media institutions, that's difficult to keep pace with. It's not as if I see The Guardian in my everyday existence when I'm in London, per se, around me in my day-to-day interactions. And so I'm wondering if there's something there to think about that helps us reposition the importance of information and narrative. It's not just the algorithm that matters to us; actually, it's that storytelling and that information-gathering practice. So my recommendations are not necessarily around routines in the newsroom, but around that intersection, or that touchpoint, between communities and media practitioners. Because, as we've been saying, we're in a highly polarized, insular environment in many of our communication spaces, and so thinking about the practices that get us beyond that is really important. And I'd be curious to come to understand how, for example, younger generations are interacting with the news, interacting with journalism, as the reputation of Facebook tanks. That desire to find information and to connect to one's community is inherent, I think, to our existence as humans. And so I feel like we probably have quite a bit to learn from young people, just in terms of their news habits.
Yes, and what you were saying before about the importance of libraries as institutions reminds me of a great Zadie Smith essay written a couple of years ago, which you might have read, just about libraries in the UK. I might just remind everyone: if you'd like to post any questions into our Q&A box, please do so; I can see one coming in here. We have about five to ten minutes left in the webinar, so please feel free to type any questions into the Q&A box. And just before we go to those, Katharine, I might just follow up on what Seeta was saying there at the end about trying to attract young audiences. How does The Guardian go about trying to attract those under 30, under 25, that sort of audience?
Well, we actually have a really large number of young readers, and what I found is that you'll see sites that proudly say, we are the website for young audiences, or whatever, and we always have more younger readers than they do. And I think there's a couple of reasons for that. One is because we're free. Part of our business model is that we're trying to keep good-quality journalism available for everyone, so that you don't have to pay quite a lot of money; a lot of news websites are quite expensive. We encourage people who have got the money to pay, so that other people can read it for free. So I think that's a big driver. But also, and this is back to Seeta's really interesting point, I think young people can see themselves in our reporting. We have been focusing on the environment for a long time, much longer than Boris Johnson; we were focusing on the environment when Boris Johnson was a climate denier. We focus on identity politics, we focus on inequality, we focus on a broad range of pop culture, internet culture, and so on. And we have lots of young journalists as well. So I think there's lots of reasons that young audiences find themselves in The Guardian. That's not to say, though, that I take that for granted. Something that I know to be true is that young audiences don't generally read a newspaper very often, or as a complete experience. It's not like they will read the whole of everything on The Guardian's website, or even the app, although there is an edition there where it's all nicely contained. They tend to come in and out, often from different platforms or from sharing with their friends and so on. And that means that they don't necessarily get the breadth of what we publish. I think that's not to do with The Guardian; that's to do with how the whole of the media has splintered. And I do think that's a democratic challenge as well. If you don't find things that you don't think you're interested in, and perhaps you're not interested in, but which are useful for you to know, I think that can be quite a specific challenge across the media.
Yep, absolutely. Okay, so let's go to the Q&A. We have Gordon Farah, who has asked a question here, and I think it goes back to what we were talking about in terms of misinformation and lack of trust in institutions. Gordon says many communities, certainly in Australia, have been exposed as being unable to pull together in the face of the current crisis: vaccine resistance, freedom protests that doubled as super-spreader events, health misinformation, all rooted in distrust of institutions, turbocharged by social media. So he asks: can we pull together to solve the coming climate crisis when we can't even agree on a common set of facts, as during this pandemic? I'm not sure if either panelist would like to have a go at answering that.
I mean, it's obviously a big and worrying challenge, and we'll see what comes out of COP on that. I would say, though, that one thing we do share is that we can all feel the impacts of the climate crisis. There was a particular moment this summer, the Northern Hemisphere summer, when British Columbia and the west of Canada were having, I think, 45-degree heat, and everyone knows that BC is not a particularly hot area. Vancouver was burning, and people were thinking: Vancouver is burning. Even people in the north are having to experience it; they assume it's going to be in the south, but it is going to hit them. These floods in Germany, and all these people dying in floods in Germany. And I think surely, once it starts to hit the rich in their day-to-day lives, and when I say the rich I mean the majority, I mean the north, then I do wonder if that shared experience might drive collective action. I'm trying to be optimistic there for the questioner.
Seeta, I don't know if you have anything to add to that. I don't know if you're in an optimistic mood.
I guess when I'm confronted with a really big question like this one, my answer tends to go towards: where are that change and that pulling together already happening? And to use that as an example of the political possibilities that exist. So again, I do find it really heartening that young people are so engaged in climate issues. And contrary to some of the ways it's been framed in the media, it isn't all apocalyptic ways of anticipating the future. I wouldn't call it climate pragmatism, but it is a sort of understanding, a sensibility among young people, that there is really no other option but to have coordinated action. So that's how I would respond. Is it going to be easy? By no means, and it won't happen overnight. This is generational change that we're talking about. I think Katharine mentioned this idea that consumption and growth don't equate with progress, right? This kind of industrial-era notion of human progress really needs to be undone, and that's going to take several generations. And again, it's not just young people: there are obviously indigenous communities that have been speaking this for generations, really thinking about how we understand humanity and our relationship to the land and to our planet.
I guess I'm thinking about marginalized communities, and that might take us to the next question, from Chrysanthe, who asks: spending time to regain the trust of traditionally marginalized communities takes resources. So how can journalists and newsrooms fight to create this time in a 24/7 news cycle? We might go to you first, Katharine, and I know, Seeta, you're very much in touch with marginalized communities in your projects. Katharine, do you have any thoughts on that?
Yes, I think that's just a question about where you put your resources. One of the positive things in media in the last few years is that the shift away from the advertising-driven model of funding to a reader-driven model means you're not chasing traffic, you're not desperately trying to get clicks. Instead, and don't get me wrong, you still need audiences, you need big audiences and we want big audiences, you can put your resources into what your priorities are. And I think that's just a decision that all newsrooms make: what are your priorities? If that's where your priority is, then that's what you target. And we've done that quite successfully in recent years.
I'll just add, very quickly, an example of where marginalized communities in the United States have felt, not affinity, but a sort of faith in news media, and that is around issues of facial recognition and police surveillance. Not that there isn't more work to be done, but I'm thinking of Los Angeles, where I've worked with a group called the Stop LAPD Spying Coalition. There's a lot to complain about in terms of the lack of coverage and the misrepresentation of both unhoused communities and the Skid Row area, as well as criticism of the coverage of over-policing; there's never enough contextualizing and history in a lot of the media coverage. But at the same time, it has really meant a lot to be able to insinuate the perspective of the community on Skid Row into The Guardian or The Intercept or the BBC or other news outlets: being able to connect with communities and try to represent and depict what the struggle is. And that has meant a lot. Again, that's not necessarily the end point; it's a conversation, a negotiation between journalists and affected communities, how that representation unfolds and what stories are told. But I think it's a good example of where, over a ten-year span of time, this group, the Stop LAPD Spying Coalition, has developed relationships with news media in a way that has been really productive. So, to answer the question: creating time for marginalized communities means actually taking the time to have a relationship with marginalized communities, and vice versa: for members of marginalized communities to think about where stories can be translated and shared with other audiences that can move these stories in different ways.
Thanks, Seeta. I do see all the questions pouring in now, and we will have to wrap up in a few minutes. I've actually missed a couple of questions in the chat, so I will go to Mark's question, which is for Katharine Viner: The Guardian has quite notably taken up specific positions regarding news phenomena such as the climate crisis. Do you feel this is an example for other news brands, and even individual journalists (as so many reporters work as freelancers), to follow? Does an explicitly subjective position instill more trust among audiences, like, as you said, young people, or could it contribute to public polarization? And how does the newsroom navigate this tension?
It's an interesting question, and I wouldn't characterize it as taking a subjective position. There is a figure now, I think it's 99.8% of scientists who believe that climate change was created by humans. So I wouldn't say it's a subjective position to agree with that; I would say that's an objective truth. And then to say that we think this is a danger to the future of the planet, and we will therefore commit a vast amount of our reporting resources to it, I don't think that is a crazily political position. I'd say it's completely in line with both The Guardian's history and our journalistic ethics. So I think it's important to be careful not to think that this is something it isn't. This is what all the media should be doing; it's not just because we're a progressive news organization. I think it's the biggest story in the world. So I would be cautious around characterizing it that way. And for freelancers, getting strong expertise in a singular area is a really positive thing to do.
Okay. I'm afraid we've only got time for one or two more quick questions. I might set aside the question about advice on applying for Guardian internships for another time, but thank you for that question. Seeta, we have a question here: what do we need to do about the need for big tech? I guess you could interpret that in several ways, but did you want to wrap up with what we need to do about the need for big tech, given that it's here?
Yes, it's here, and it's difficult to undo. Earlier I spoke about thinking around the edges of antitrust: it's not only that we need to break up big technology. I would argue there's a lot of room to think about how we develop public infrastructure with respect to computational resources. By that I mean: what are the guts of the computational tools that we need and use on an everyday basis, and how can we make that less concentrated? If we're talking about virtualized web servers and Amazon, let's not just have AWS to support us, but a range of options. Or let's think about developing technologies that encourage us to consume less, or technologies that themselves rely on less mining and extraction of rare earth metals, for example. And, very importantly, I think we need to think about changes in financial regulation that would enable us to curtail the ways that big technology companies are making money from money: the process of not just mergers and acquisitions, but the financialization that has allowed them to grow so large. Doing some of those things will help us arrive at a point where, say, if you're enrolled in a program in data science or computer science, when you leave university you don't just have to go and work for Alphabet or Amazon or another such company; there will be, for example, a public agency that is actively engaged in developing public computational infrastructure. So those are some of the things that I think will allow us to get beyond what is an incredibly concentrated big technology ecosystem. There are other recommendations as well, but in general I'm thinking about structural changes that would allow us to really shift this culture, so that we aren't as dependent, but also so that we are more mindful about the kinds of technologies that we're creating and using.
Well, that sounds like a whole other panel there, Seeta; really interesting thoughts. We might just wrap up with one last question: what are your thoughts on the role that fact-checkers play in cleaning up misinformation on social media platforms? I suppose it could be argued that, as we were saying before, people in certain polarizing spheres aren't even believing fact-checkers. As someone said to me, what fact-checkers are checking the fact-checkers? Which is what some people believe. So I guess maybe we could wrap up with that.
I mean, I think the whole fact-checking trend obviously came from a really great place. But there was a moment, a couple of years ago, when Trump was still in power, and he gave some horrendous speech about something, and it was overnight. I woke up and saw that The Guardian had fact-checked it, and I thought, oh, that's a good idea. Then I saw that the Washington Post had fact-checked it, and the New York Times, and the BBC, and I realized that all around the world there were all these journalists checking this one speech. And I thought: perhaps you just need one person fact-checking, and everyone else going out and finding out what Trump was doing to public lands, what Trump was doing to weaken environmental protections, and what Trump was doing on tax. Maybe the fact-checking was a bit of a diversion from finding out the stuff that really mattered. So it's important, and we need people to do it, but we don't need everybody to do it.
Seeta, I don't know if you have any last thoughts on misinformation.
I would absolutely agree. And I think we should be thinking about different kinds of institutional practices that get us to a point where truth isn't a bad word.
That's a brilliant way to finish up the panel. We see all your questions coming in, but we will have to wrap it up now. Thank you for all those questions; they've been terrific. I want to thank the panelists for addressing this big-picture discussion on how media can build a better world post-COVID. We've addressed lots of current and looming crises across domains, including climate, technological and media challenges, all in just 45 minutes. So I want to thank you both for coming. And just to wrap up: the aim of this UTS Media Salon series is to support global conversations between media professionals, academics, writers, policymakers, thinkers and others on global media issues, on topics that are both broad and big-thinking, though sometimes panel topics will be more niche. The series will be continuing into next year, and we hope to see you then. Thank you to the panelists and all of you for joining us today, and please join us again next year. Thank you to Katharine and Seeta.
In her recent essay on Covid’s enduring impact, Viner pointed out that throughout recent history, great change has emerged after events of great distress. But she warns, ‘Positive changes do not automatically emerge from periods of crisis; you have to fight for them.’ What changes need to happen?
Gangadharan has questioned digital systems, surveillance, privacy and data profiling, including whether we should expect trace-and-track apps to become a regular part of our lives. Her TED talk explored why being a good digital citizen means rejecting technological systems that mistreat us.
These two leading thinkers discuss the big picture challenges journalists and other institutions face during and after a pandemic that, as Viner puts it, ‘has given us fresh ideas and new energy’ for tackling looming crises, which are similarly ‘universal, yet affecting different people in different ways’, including climate, inequality, trust in media and institutions, technology, truth-seeking and polarisation in information ecosystems.
The event was held at 7:00pm AEDT / 8:00am GMT on November 3rd 2021.
Moderated by digital journalism lecturer and international journalism collaborations coordinator, Christine Kearney.
'Imagination, AI and News' with Mark Deuze and Anna Vissens
This panel is inspired by Prof Mark Deuze’s recent co-authored essay ‘Imagination, Algorithms and News: Developing AI Literacy for Journalism’.
Thank you for joining us at the UTS Media Salon: The Journalist and the Scholar, on the topic Imagination, AI and News.
My name is Christine Kearney and I'm a journalist and lecturer in digital journalism here at the University of Technology, Sydney,
in the School of Communication in the Faculty of Arts and Social Sciences.
The aim of the salon is to bring together and spark global conversations between media professionals and academics, writers,
policymakers, students and others on global media issues, on topics both broad and big picture or sometimes more niche.
And we are delighted today to have speaking with us Professor Mark Deuze, and lead data scientist at The Guardian Anna Vissens.
But before I introduce them further, I would like to acknowledge the Gadigal people of the Eora nation, upon
whose ancestral lands our city campus is situated. We pay respects to elders past, present and emerging.
They are the traditional custodians of knowledge for these ancestral lands.
This session is being recorded for teaching and learning purposes and will take approximately 45 minutes.
We will have a discussion for 30 minutes and then take questions from our audience for about 15 minutes.
So to the audience, please use the Q&A function, not the chat function,
but the Q&A box to ask your questions. We'll do our best to answer as many questions as possible, and you may pose the question at any time.
And if you're a social media person, whether you are still on Twitter or perhaps you are using another platform, feel free
to use #utsmediasalon. Now I'm most excited
to properly introduce our journalist and scholar for today.
Mark Deuze is a professor of media studies at the University of Amsterdam's Faculty of Humanities.
Mark has held honorary appointments at the Faculty of Journalism at Lomonosov Moscow State University, Russia, the School of Communication
here, at UTS in Sydney and the Department of Communication and Media Studies of Northumbria University in the UK.
His publications include over 100 papers in academic journals and 12 books. Before that, he worked as a journalist and academic in the United States, Germany and South Africa. He's also the bass player and a singer of Skin Flower, a punk band, which I think I've got right, Mark.
And Mark, I believe you have successfully touched down just a couple of hours ago in Perth.
I know you're making a big tour of Australia, so welcome and I hope you've been able to unpack.
Just about, thanks, Christine. Now, to Anna.
Anna Vissens is a physicist by trade, but has spent nearly all her career
in media, first as a journalist and then as a data scientist.
Anna leads a team of data scientists at the UK-based Guardian News and Media.
Previously, she worked at the BBC, where in 2007 she received an award for Best Producer in recognition of success in building
new audiences and developing an effective social media strategy. In 2015 Anna decided to change her career and become a data scientist.
Anna’s prime interest is natural language processing, but she has also worked on audience segmentation and propensity modeling.
Anna, I know it's early in the morning where you're joining us from in London, so a special thank you for joining us at this early hour. How is London this morning?
Oh, very cold. But I'm fine, thank you.
I've had my coffee, so I should be fine. Okay. Well, it's always good to get going on a cold London morning with some coffee.
So let's get going with this panel. The panel was inspired by Mark Deuze’s recent coauthored essay ‘Imagination, Algorithms and News: Developing AI Literacy for Journalism’. It highlights the importance of lifting artificial intelligence literacy in journalism, acknowledges the long history of journalism and technology's interdependence, and calls for shifting the perspective from reacting to the inevitability of AI to creating imaginative approaches with the help of AI that serve the human journalist and the public aims of journalism. This panel is just one of many discussions taking place around the globe giving an introduction to journalism and AI, notably by LSE’s Charlie Beckett, with whom Mark coauthored the paper, and his Journalism AI initiative.
So let's start with something the two of you have in common: how you got to be here today, sharing thoughts on tonight's topic, Imagination, AI and News. Mark, you've written about all sorts of media issues in your lengthy 25-year career in academia. And Anna, even though, as your bio says, you are a physicist by trade, you were a journalist before taking a leap into the world of data science several years ago. So, jumping into this space: Mark, we might start with you and then go to Anna.
Well, first of all, thanks so much for organizing this and for having me, Christine. Having me back, I would almost say. For me, the issue with AI and journalism is part of a much broader theme that I've been confronted with as a journalist, and this is Grandpa speaking: I remember when I studied to be a journalist, we had typewriters; only in the second or third year of our program did we get computers, if I remember correctly, way back in the eighties. So there is the notion of a digital transformation, broadly conceived, but beyond that, there is the way that, in the history of journalism, the profession and the industry have responded to technological changes. You see a recurring pattern: the technology is often seen as something that happens to journalism from the outside, like a comet hurtling towards earth that you now have to respond to. Which leads to some people saying, I don't want to know about it; other people saying, this is the best thing ever; and a huge block in the middle saying, I don't know. And then, fairly quickly, the response becomes: oh, the technology can now do the banal, the mundane stuff, and we can focus on what makes journalism great, the real investigations, the beautiful storytelling and those kinds of things. That was the response to the rise of radio as a mass medium in the 1930s, to the rise of commercial television in the sixties and seventies, sorry, the eighties, and then later on, in the nineties, to the introduction of the World Wide Web. So you get the same discussions repeating, and you see those same discussions now in the realm of AI as well. On the one hand, an important discussion; on the other hand, I wonder if we are consistently missing all kinds of really imaginative and creative opportunities, the kind of opportunities that Anna is pursuing in her work: really using this space for different ways of doing journalism, rather than fretting about what we lose or what we gain. So that's my turn.
Thank you very much, and thank you for having me as well. It's interesting: before I went to study at university, I had a gap year and worked in a technical institution, and we had punch cards, if anyone remembers how those looked. I don't consider myself to be too old, but it just shows how everything can change very, very quickly. And it's just amazing how even the media have changed. The Guardian used to be a very classic newspaper, but my experience of working in the newsroom there is that it's one of the most advanced digital publishers. We move very quickly and we learn how to adapt very quickly. But it can be quite frustrating, because changes sometimes happen quicker than we are able to digest.
As for my personal journey: I was always nostalgic about science; leaving it wasn't really my decision, I was mostly forced to do something other than science. When I came across this opportunity, when I started doing some analytics work, product management and project management and so on at the BBC, I realized that we had so much data that we basically didn't use at that time. It was just sitting there in very bad shape. And I thought, we have so many opportunities here, and we really need to transform the newsroom to be data-informed, at least. I'm not even talking about AI, just looking at the data and trying to understand the content that you create, your audiences, and why you have all this data. That was a huge opportunity, actually. And I have to say that we are doing much, much better now. But there are so many small newsrooms around the world which struggle even to collect this data. So when you talk about AI and all these opportunities, sometimes it's very far-fetched for many, many journalists around the world. We are very fortunate to have the means and resources to do something about it, but there is a huge divide here as well. So that's my journey.
You're so right, Anna, because of course in newsrooms, with the deadlines, there is so much pressure that we often don't have enough time to even think about it, which is of course the purpose of panels such as today's. But I'm going to be a bit wonky here and read a little bit out of your paper, Mark,
and I'm going to quote from it now. In the paper you identified, and you've been talking about this already, a huge AI knowledge deficit within the news industry, both in terms of general understanding and specialist expertise: ‘Although this deficit is being addressed, we signal a danger in that it is not changing fast enough’, Anna, you were just talking a little bit about that, ‘to reduce the risk of falling behind, exacerbating digital inequalities and increasing the real danger of journalism being captured by technology and the tech sector, rather than recognizing its history as interdependent with a range of technologies, including data, algorithms and computational thinking, and being able to creatively and ethically use machines to be better at delivering upon its public promise.’ Journalism’s public promise. So, sticking to the bigger picture here: what do you both think is the danger for journalism and newsrooms in not moving fast enough and adapting to tech changes? Of course, I'm also thinking about the rise of the Internet and the slow pace at which newsrooms adapted to online. What lessons can we learn from that? Mark, I'll start with you, since I've quoted from the paper.
Yeah, look,
a journalist or a journalism student who's listening to this, or reading something like that, would be forgiven if what they hear in these kinds of words is: does that mean that I now have to learn statistics, or that every journalist has to be good at programming, and all of that? And that is certainly not what Charlie and I are suggesting here. In fact, you were kindly referencing that I've now been in academia for 25 years, which is rather depressing and also very exciting. But if there's one thing I've learned from studying journalism and technological/digital transformation in all that time, both in what has happened during that time and looking back in the history of journalism, it is that technological change in journalism is a cultural phenomenon. So it's not about journalists becoming fantastic at some stuff with computers or software or SPSS or R or Excel, God forbid. It is about developing a discourse around it, and appreciating the different ways in which this new technological thing is happening now. Now we're talking about AI; a couple of years ago we talked about data; a couple of years before that, something else. There's always something. But just appreciate that this is a cultural thing, first and foremost, and that it reproduces certain expectations that we attribute to technology. Technology is often assumed to be better or more perfect than humans, and everything we know about algorithms and AI suggests that they are at least as flawed as we are at doing stuff. They reproduce the very inequalities and nonsense and biases that all of us have. And I find that comforting, almost. Of course we have to be critical, and those are problems to be solved, but it's comforting to be reminded that this new technology, this newfangled thing, this device, is not going to replace journalism, is not going to destroy it, is not going to make it better. It's just a thing that we are in relation with, inevitably, because we are journalists. So that, for me, is the real key behind a statement like that. And beyond that, it is about preventing knee-jerk responses to technological change, like: this is for the better, or this is for the worse, or this will free us up to do the things that we are supposed to be doing. No, you still have to do mundane, everyday things as a reporter, even if you outsource some text writing to AI programs. And we as professors, Christine, are now faced with the fact that in the future we can't really assign students to write essays anymore, because with OpenAI chat and text-writing applications they can submit perfect essays. So we have to come up with other ways of interacting with and assessing our students. And, you know, that's fun. And I hope journalists see this kind of stuff like that too.
Thanks for reminding us of that, Mark. It's a great future. Anna, what do you think? Are journalists and newsrooms adapting fast enough?
You know. It's I think that all you know, all these essays will be pretty similar.
So you you will be able to spot this pretty quickly. And we also we we probably need some sort
of another algorithm detecting this stuff for the university. So no, but it's yeah, it's hard
because we are not big tech companies. And honestly, you know, AI is expensive.
It's very tricky. It needs a lot of investment. And it's not only about money, it's
investing in skills and it's also investing in your overall
kind of, you know, big, big strategy because it's not only about hiring a couple of data scientists.
You know, it's not about that. If you really want to create proper data solutions for the newsroom, you need a lot more than just this, right? You need data engineers, you need proper tech support. But the most important thing for the newsroom, and I think this is where we really need to change the culture, is to educate our newsrooms a little bit more about AI and what's possible and what's not possible. Because honestly, we have all this hype around this generation of models, you know, 3, 4, 10, 15, whatever comes next. I also agree with Mark: it's not going to replace journalists. And we can have the whole discussion about how these models work and why not, etc. But we are really quite cautious, and we will probably never do something 100% automated.
You know, whatever we build for our newsroom, we always try to have some sort of humans in the loop
and to have these humans in the loop, we need humans, you know, journalists really understanding what's going on.
Right. And again, I'm not talking about statistics and how an algorithm works, etc. It's this very, very deep relationship between people who know their domain exactly, you know, journalism and audiences and the content, and the data scientists who try to help them by making things easier in the newsroom. We always need this very close collaboration between data science and journalism. And if we don't have this understanding between us, it's not going to work. That's why it's really a cultural change. So I absolutely, absolutely agree with Mark.
Well, let's break down AI a little bit now and how it's working in newsrooms.
But before we do that, I just want to remind everyone, you can put any questions into the Q&A box,
which you'll find down the bottom at any time. I'll ask a couple more questions and then we'll open it up
and please feel free to ask away.
So, yes, let's get to this buzzword of recent years, AI. It's been broadly used in media as an umbrella term for a range of technologies such as automated statistical data analysis, machine learning and natural language processing.
But as we were just discussing, both of you were also arguing that in some respects maybe it shouldn't be such a buzzword. Anna, I know you mentioned that you think we're living through a bit of hype at the moment with this word, and that there are some limitations for now around what we can do. Can you talk about those advances and limitations? And how is it actually working in newsrooms at the moment? You did a project last year at The Guardian, the Extracting Quotes project. What lessons did you take from that, and what's happening now in newsrooms?
Yeah, absolutely. You know, I honestly think that sometimes using this term doesn't help you with the cultural changes we are talking about, because it's very vague. And quite often what people think of when you say AI is some perfectly working machine that can make decisions for you, etc. This is not what we are doing in the newsroom, to be honest.
You know, we are doing a lot of statistics with a little bit of machine learning, sometimes a little bit of deep learning, but it's mostly about using advanced techniques in natural language processing, for example, this kind of thing. And as I mentioned already, I don't think we are there yet in terms of making all these decisions automatically with an algorithm.
Especially when we are talking about user-facing products, we obviously really want humans to check that what the algorithm is doing is okay. And actually, going back to that point, there was a very interesting example from my personal experience about the perception of AI. What I see quite often is that people expect AI to be 100% accurate. But humans also make mistakes, and machines make mistakes. It's about balancing the risks, and deciding what kind of error threshold is acceptable.
I remember that at the BBC we had this famous project called 50:50. We tried to look at the diversity of representation in our content: who we talk to, who we interview on air, etc.
At that time we didn't have any automation in this space, so what we asked the journalists to do was a very boring, mundane, manual task: basically inputting into Excel spreadsheets every single person they interviewed, or every single person who appeared in our online output. And while this was going on in the newsroom, I was at the same time building some automation. I had written an algorithm which was able to predict gender, for example, because we were looking at gender. And I was lucky, because for three months I was running my script while people were manually inputting this data. So I basically had a ground-truth dataset manually curated by journalists. And when I started looking at where the humans didn't agree with my output, with the machine's output, there were a lot of mistakes in those spreadsheets.
And I remember the realization when I went back to the journalists and we had another conversation, and I showed them: well, this is not right, you missed this name and you missed that name, etc. That's the reality. And I remember that at that point the acceptance in the newsroom was much, much wider, because people on the ground just understood what it takes, from them and from the machine, to do it right.
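The ground-truth comparison described here can be sketched in a few lines of Python. This is a toy illustration with invented names and labels, not the BBC's actual code; in the real project the predictions came from a gender-prediction script and the manual labels from journalists' spreadsheets.

```python
# Toy sketch: checking an automated classifier's output against a
# manually curated "ground truth" spreadsheet. All data is invented.

# Labels manually entered by journalists (interviewee -> gender).
manual_labels = {
    "Ada Lovelace": "female",
    "Alan Turing": "male",
    "Grace Hopper": "male",  # a typo a human made in the spreadsheet
}

# Labels produced by the automated script for the same people.
predicted_labels = {
    "Ada Lovelace": "female",
    "Alan Turing": "male",
    "Grace Hopper": "female",
}

def disagreements(manual, predicted):
    """Return cases where human and machine labels differ.

    Each disagreement is a candidate for review: it may be a model
    error, but - as in the experiment described above - it may just
    as easily be a mistake in the manually entered spreadsheet.
    """
    return {
        name: (manual[name], predicted[name])
        for name in manual
        if name in predicted and manual[name] != predicted[name]
    }

print(disagreements(manual_labels, predicted_labels))
# -> {'Grace Hopper': ('male', 'female')}
```

Reviewing the disagreement list by hand is what revealed, in the account above, that many of the "errors" were in the human spreadsheet rather than in the model.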
These kinds of experiments are super, super important, because it's also about thinking about risks and impact. I read a very interesting paper from Nordic AI, and I really like its simple way of looking at the AI projects we might work on, with risk on one axis and impact on the other. You always have to think about where your project sits: is it high risk, is it low risk? And depending on that, what kind of impact does it have, and what do you have to think about in terms of ethics and everything else? And again, we need journalists to work with us on this.
Anna, you've raised ethics, and of course you were just talking about experimentation as well. I was about to ask a question on ethics, and I can see one of our attendees has asked a similar question, so I'll just read it out and combine them.
So an attendee asks what sort of ethical concerns should we have around the rise of AI in newsrooms, considering broader
social concerns around fake news or disinformation? And how can we stop AI impacting the ethics of news?
Yes, because we know there are issues of erosion of privacy and problems with consent. I know you both have some things to say on this, and we can start with either one of you.
I can talk a little bit about how I see this. It's interesting: at The Guardian we don't collect any demographic data, so we don't use demographic data in any of our models. That was an informed decision that we made; at some point we might review it going forward. But all these issues around consent, and what kind of data we can and cannot use in our models: we take these very, very seriously.
But if you think about what else we have to consider in the newsroom, and we as a society, I just don't think we are there yet in terms of proper regulation of this space. It's very new; we are just testing the waters. And I don't think we have a proper framework for testing all these models that are out there, and scrutinizing them properly. Journalists are already doing their bit in this space: we have some very interesting initiatives where journalists scrutinize algorithms, especially when they impact people directly. But even to do that right, it's a new field for journalism, so again we need new skills, we need this really deep understanding of how it works.
And I want to see this more and more, because indeed the risks are very, very high with fake news and disinformation. You can create multiple copies of fake news content and disseminate it very, very effectively; this is basically what happened in 2016. So we have to have an answer, and we have to do our bit.
Mark, anything to add to this?
Yeah, I totally agree, of course, Anna. And I also want to recognize the work that you and your team, and so many journalists and so many news organizations, are already doing. We're not doing this panel to say journalists need to do this thing because nobody's doing it; that's certainly not what this is about. In fact, I don't think there's any news organization in the world that isn't one way or another invested in this space and trying to come up with solutions. It's just that, from my privileged perspective as an academic, I would like to see the initiatives undertaken by news organizations start from a creative and imaginative space rather than a space of fear, or of having to respond. And then to the ethical part.
Let me maybe reframe this debate, or at least take that chance a little bit: what could a news organization do ethically with AI? Well, one thing AI is really good at is content analysis. So you can run AI over your own content, your entire archive of everything you've ever done, to see where the gaps in your own coverage are. What do you overcover and what do you undercover? What kinds of voices do you allow in, and which ones do you systematically ignore? When do you interview women, and what do they get to say in your TV show or your newspaper? That is the kind of simple stuff that AI can do really, really well, and fairly quickly. And that is the beauty of AI: you run the analysis, you don't have to hire an expensive academic and pay their students to do this for you, and you can quickly identify gaps in your own coverage and address them.
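The archive analysis described here can be illustrated with a toy sketch. The articles, sections and names below are invented; a real system would extract the quoted people with named-entity recognition over the full archive.

```python
from collections import Counter

# Toy archive: each article records its section and the people quoted.
archive = [
    {"section": "politics", "quoted": ["minister_a", "minister_b"]},
    {"section": "politics", "quoted": ["minister_a"]},
    {"section": "science", "quoted": ["researcher_x"]},
    {"section": "politics", "quoted": ["minister_a", "analyst_c"]},
]

# Count how often each voice appears and how coverage is spread across
# sections; heavy skew toward a few names or topics is exactly the
# kind of gap this analysis surfaces.
voice_counts = Counter(name for article in archive for name in article["quoted"])
section_counts = Counter(article["section"] for article in archive)

print(voice_counts.most_common())  # minister_a dominates the quotes
print(section_counts)              # politics dominates the sections
```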
Take algorithmic personalization; there is much debate about it. The idea is that by knowing user data, you can offer people a customized experience with your product or service, including your news offerings. But that's where there are real problems. It assumes that the data we gather from people's clicks and time spent and so on actually says something about what they need or want, and there is no real correlation between those two things. The qualitative, anthropological, ethnographic research out there suggests that clicks are not the same as quality. So we have to be incredibly careful about using algorithmic personalization in news. So there are a couple of things we can do with AI that would, in my opinion, instantly or at least potentially improve the hard work that journalists are already doing. And then when it comes to disinformation and fake news and so on,
that's a different kind of discussion, but an interesting one: the role that AI plays in it, both in amplifying, for lack of a better term, shit, and in offering opportunities. An example would be modularjournalism.com, an AI service that breaks up journalistic stories and then inserts them tactically into ongoing discussions on different online platforms. That is an interesting use: rather than employing 50 content managers in your news organization who have to figure out what part of a story should go on TikTok, what on Twitter, what on Mastodon, and then copy and paste, you can program an AI to do that for you. And that service is already on the market. So there are some really interesting applications of AI that actually allow journalists to be more ethical in what they are already doing.
Yeah, actually, I agree. But maybe I can add a little bit of skepticism about limitations, about what kinds of limitations we might have. It was a very interesting experience for me, twice actually: once at the BBC and then at The Guardian, when we tried to segment our content from a different perspective. We had this idea of user news needs; people probably know this segmentation, like "update me", "divert me", etc. The BBC pioneered this segmentation for the World Service first, and then it moved to BBC News. When I was working with BBC News, we decided, okay, let's actually build a model which can predict the segment for each piece of content. But before that, we had to build the training dataset, which is a hugely tricky and time-consuming manual task.
So my colleague and I spent a couple of days annotating the same list of articles. We were sitting in two different newsrooms, not talking to each other. Then we checked how many times we had actually labelled the same piece of content in the same way, and we barely reached 30%. Only in 30% of cases did we label the piece of content in the same way. So there are some very subjective concepts that, when you try to automate them, to go beyond the manual and mundane tasks, just don't work.
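The 30% figure is a raw inter-annotator agreement score, which can be computed very simply. The labels below are invented, and a real project would usually also report a chance-corrected measure such as Cohen's kappa:

```python
# Two annotators label the same articles with "user need" segments.
annotator_1 = ["update me", "divert me", "educate me", "update me", "divert me"]
annotator_2 = ["update me", "educate me", "divert me", "update me", "inspire me"]

# Raw agreement: the share of items both annotators labelled identically.
matches = sum(a == b for a, b in zip(annotator_1, annotator_2))
agreement = matches / len(annotator_1)
print(f"raw agreement: {agreement:.0%}")
# -> raw agreement: 40%
```

A figure as low as the 30% reported above is a strong signal that the labelling task itself is too subjective to automate reliably.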
We had another go at The Guardian, when we were thinking about contextual advertising, etc. And again, it didn't work. We ran a couple of experiments and tests, and as soon as you add more and more data to your dataset, the agreement goes down very, very quickly. So you have to be very careful with framing your problem, because asking the right question is probably one of the main building blocks of building AI. And again, back to this cultural change and why we need journalists to understand AI: who else is going to ask these questions? We have to ask the right questions to be able to build these data solutions for the newsroom, and this is where it all plays together, very, very importantly.
Actually, I can see a question in the chat. I don't know if we want to address it.
Well, we've actually got a few questions. I might. Which one are you thinking of? Because I've got two that I was going to combine.
Whatever you think. So, one question is about how AI technologies shape the future of news and the media industries. And another question has come in, which really goes to the paper as well, about how we move from reacting to the inevitability of AI to creating imaginative approaches with AI that serve journalists and public journalism. You've already brought up some examples in the last ten minutes or so, but is there anything you want to add in terms of imaginative examples to take inspiration from? It's obviously an ongoing conversation, but how do you see it shaping the future of news? Mark, maybe?
And we will have to wrap up soon. So it may be the last question. We might get one more in.
Well, beyond mentioning a bunch of things, I do want to acknowledge the work that my coauthor, Charlie Beckett, is doing with the Journalism AI initiative at the London School of Economics. If you go to their website, you'll find a lot of reports, including a bunch of examples from around the world of news organizations either developing their own projects or partnering with tech companies or startups, doing really fun, innovative things or building specific services.
And look, AI can be on the front end, as something we can see news organizations doing, but it can also be something that works in the background. For example, because AI needs to be trained, there are numerous smaller companies around the world doing that training, and this is a huge industry, the software-as-a-service industry. That industry generally doesn't really care about journalism and is notoriously poor at using up-to-date data: they use old, massive data sets to train AI applications. So there are now companies moving into that space that are actually specialized in journalism.
Those I find really interesting applications. And the reason I mention this, which speaks to what Anna is also saying, is the realization that every single aspect of journalism is touched by AI: not just the content, not just the production, not just the visualization, but also what happens at the back, and even how you get an idea. If you get an idea for a journalistic story by scrolling on Mastodon or Twitter, you're already interacting with AI. So to think creatively about how that impacts you in your work, and to then start playing with it: that's next level. But I think that's really the kind of conversation we should be having, rather than a conversation about whether AI will make journalism obsolete.
Yeah, absolutely, I agree. And we indeed have so many use cases for this already. But I still think that, looking to the future, there will be more and more systems that just help newsrooms be more creative: spending less time on these mundane, very boring tasks and more time on creativity and finding new sources of information. And obviously investigative journalism is already benefiting a lot from some sort of automation, because all this extraction of entities and so on helps us at least flag some things very quickly for journalists. So yeah, there's huge potential there.
But I'm less optimistic, or more skeptical I would say, about all these user-facing products which really serve the purpose of explaining to people what's going on around them. I still think we need journalists to do that work properly. It will be a long time before we actually get there, if ever, to be honest. But yeah, it's an amazing space to be in right now, that's for sure.
Sure. Okay, we'll just wrap up now. I'm going to read out the three questions that have been hanging here, and I might just leave it to the panelists to choose which one you want to answer. We have: where have best-practice examples happened, which news outlets or journalists or newsrooms are integrating AI well? Anna, we know what you think about that, obviously. Another question is: what is the impact of AI in countries where press freedom is extremely limited, Russia for example? And the last question hanging is: what would be some of the most in-demand skills for journalists in the future, in terms of AI possibly taking the place of basic work like article production? So look, I'll leave it to you two to talk about whichever of those three you want.
Maybe very quickly the last one, because I spent many years in different newsrooms. And I remember that ten years ago, or even not that long ago, I had journalists in the newsroom who didn't even want to talk about numbers. As soon as you started talking about numbers, it was just: no, no, no, I'm a journalist, I don't know the numbers. Now we have journalists who can actually code in Python. And quite honestly, it's not like they went to some tech school and studied a tech subject or whatever; they just taught themselves how to use all these tools, and they realized it helps them in their everyday work. We have more and more people like this in our newsroom. And I really think that basic statistics, and I mean the basics, not Bayesian statistics or whatever, just understanding the difference between this and that and what "statistically significant" means, will already give you a huge uplift in terms of understanding what's possible and how these systems work. So that's my advice if you want to advance in this space.
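The "just the basics" statistics recommended here can be made concrete with a small sketch. Suppose (all numbers invented) two headline variants were each shown to 100 readers; a simple permutation test asks how often pure chance would produce the observed difference in click-through rate:

```python
import random

# Invented A/B test data: 1 = clicked, 0 = did not click.
variant_a = [1] * 30 + [0] * 70  # 30% click-through
variant_b = [1] * 38 + [0] * 62  # 38% click-through

observed_diff = sum(variant_b) / len(variant_b) - sum(variant_a) / len(variant_a)

# Permutation test: if the variants were really identical, how often
# would a random re-split of all outcomes give a difference at least
# this large?
random.seed(0)
pooled = variant_a + variant_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:100], pooled[100:]
    if sum(b) / 100 - sum(a) / 100 >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"difference: {observed_diff:.2f}, p-value ~ {p_value:.3f}")
```

A p-value well above 0.05 here would mean the 8-point difference could easily be chance, which is exactly the "what does statistically significant mean" intuition being recommended.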
And Mark, anything you want to say, a final word on any of those questions?
Yeah, I would like to very briefly address the question about countries where there is limited or no press freedom, like Russia. One thing we've seen since the start of the Russian invasion of Ukraine (the war had been going on for quite a long time before that) is the incredibly high numbers of people in Russia installing VPNs and going online that way to find information, like we also see happening in China. And VPNs are also AI applications in one way or another, in the way they reroute traffic and use data. Your data still gets captured, but now not by all the sites you're visiting: by the VPN provider. And that data is then also put to use, for rerouting traffic, for example. So that's an interesting example. At the same time, if you go online in Russia right now and
you use TikTok, you see a completely different world than if you use TikTok in Ukraine or Latvia or Poland. And that reminds us again that AI is clearly not neutral: it can be deployed along physical, geographic boundaries, apparently. And that is a great moment for digital literacy. It's like: oh yeah, we all use the same servers, but apparently it matters where we connect from. That hopefully contributes to a broader insight that nothing of the information we find online, nothing of the products and services we use, is neutral. Not just in terms of political bias, but algorithmic bias. And that's a great moment for literacy, and a topic that journalists should cover and are covering.
But, as Charlie and I say in the paper, one of the ways in which AI matters to journalism is exactly what Anna says: being literate about numbers, using numbers and reading numbers, is certainly an issue. But AI is also simply a topic, and we should appreciate that whether you cover elections and voting, you're covering AI; if you cover the fashion industry, you're covering AI; the music industry, everything AI touches these days. And that in itself is also a really interesting perspective that we haven't covered yet, and the question about Russia highlights that for me.
That's a great way to wrap up. We are over time, and I'm not sure if we've lost anyone, but we are still going. Thank you to our panelists, who have been very generous with their time, including Anna, of course, waking up in London, and Mark, just landing in Perth.
And I really hope we've had a good discussion. I think it's part of many global discussions happening at the moment around journalism and AI, including the Journalism AI initiative that we mentioned a couple of times. We'll be back next year with another salon, hopefully continuing to bridge discussions between academia, industry and the media in general. So thanks, everyone, and we'll see you next time.
Thanks. Thanks Anna, Thanks Christine. Thank you. Bye bye. Bye.
This discussion highlights the importance of lifting artificial intelligence literacy in journalism, acknowledges the long history of journalism and technology interdependence, and calls for shifting the perspective from reacting to the inevitability of AI, to creating imaginative approaches with the help of AI that serves the human journalist and public aims of journalism.
Watch the discussion with Mark Deuze (Media Studies, University of Amsterdam) and Anna Vissens (Lead Data Scientist at The Guardian). Moderated by Christine Kearney (Journalist, UTS lecturer, and international journalism collaborations coordinator).
This event was held via Zoom on December 5, 2022, at 7pm AEDT (Sydney, Melbourne, Canberra) / 4pm AWST (Perth) / 8am GMT (London).