We need a better definition for 'misinformation'. Here's why
Fighting misinformation is about identifying narratives which are causing harm to people, democracy and the planet - and offering alternative narratives
Buzzwords like ‘misinformation’ and ‘disinformation’ have become anchors we use (and sometimes misuse) to describe a world and an online public square that seems to be spiralling out of control.
These terms are being used to market services (guilty!) — sometimes of dubious quality (not guilty, I hope!) — and as a catch-all term for everything from lies, to opinions we don’t like, to people being mean about us on the internet.
My view, for what it’s worth, is that when we talk about ‘misinformation’ we’re often talking about something that sits in the middle of multiple challenges:
the damage caused by social media algorithms designed for profit and addiction, not safety and good information
political parties moving away from the ‘centre ground’ and towards the extremes
the increasing speed of information sharing through social media and instant messaging
centralised organisations or individuals spreading untrue information on purpose for financial or political gain
nation states spreading damaging or divisive information within other hostile countries
publics looking for increasingly radical policy solutions at all ends of the political spectrum
the accumulation of corporate power being used to damage rival businesses
decreasing public trust in institutions, partly caused by increasing inequality
stuff which isn’t true (often we focus too much time on this and not enough time on all of the above)
I find it easier to say ‘misinformation’. But while it’s easier to say, it doesn’t make it easier to fix.
Jargon is making practitioners feel ignorant — they aren’t
Too often, the terminology of ‘misinformation’ leaves people new to the subject feeling as though they don’t have the understanding required to tackle it.
I once heard someone trying to sell services in this area bizarrely say that “misinformation is the new social media”. I still have no idea what this means. I fear it means that the hype surrounding ‘misinformation’ is an opportunity for cynics to make money from it, like social media was in its infancy for the PR industry. But one thing is for sure: misinformation isn’t the new anything. In fact, it’s as old as information itself.
Quite simply, for as long as humans have been able to tell stories, they’ve been able to bend the truth for personal gain. And every time we make it easier for humans to share information, we make it easier for them to spread untrue information — whether that’s the Gutenberg printing press or Facebook.
Don’t let anyone tell you that ‘misinformation’ is new and beyond your understanding. If you’re a communications professional, you’re probably 80% of the way there already.
Some simple definitions it’s worth knowing (and then discarding)
Misinformation: false or misleading information spread without the intent to cause harm
Disinformation: false or misleading information spread with the intent to cause harm
Malinformation: accurate information spread with the intent to cause harm
Conspiracy theories: stories about the world which claim there is a shadowy cabal of elite people pulling the strings from behind the scenes
Culture wars: “things which people scream at each other about on the internet” (paraphrased from: Jon Ronson)
Cricket fans robbing banks: a more nuanced way to think about the definition of misinformation
In reality, the simple definitions above don’t quite cut it. Let me give you an imaginary example.
Let’s say you bought the same newspaper every day for a year. During that year, there are 12 front page stories about different cricket fans getting caught planning and carrying out elaborate bank robberies. People who don’t like cricket are also carrying out bank robberies, and there are cricket fans who are trying to stop the bank robberies, but the paper doesn’t report on these.
At the end of the year, you have the impression that cricket fans are disproportionately likely to carry out bank robberies. The statistics, however, show that this isn’t the case. In fact, cricket fans are less likely than others to carry out bank robberies.
Is this misinformation? Or disinformation? Or malinformation? The stories themselves were true, but together they created an untrue impression — one bolstered by political commentary in more extreme news outlets attacking cricket fans as nothing but criminals. As a result of these headlines, fewer cricket fans were offered jobs in banks — real-world harm. Would we say this is not an issue because it’s not technically misinformation?
Clearly, we need a more practical way to define misinformation.
Adversarial narratives: information which causes harm
In 2019, Global Disinformation Index published a report entitled ‘Adversarial Narratives: A new model for disinformation’. The purpose of the report was to help people moderating public discussion in online spaces (e.g. social media comment sections) to capture the nuances of these definitions.
Their argument, which I agree with, is essentially this: we shouldn’t just focus on what information is false or misleading (e.g. exaggerated statistics about cricket fans robbing banks in 2022), we should focus on what narratives cause harm to people (e.g. ‘all cricket fans are bank robbers’).
Their definition:
“Anywhere someone intentionally peddles a misleading narrative, often implicit and constructed using a mix of cherry picked elements of fact combined with fabrications, that is adversarial in nature against an at-risk group or institution, and most importantly, creates a risk of harm, they are engaging in disinformation”
Here’s why I prefer this definition as a catch-all for this challenge to the ones I walked you through earlier:
It takes us away from being the arbiters of truth and forces us to focus instead on the real-world impacts of information
It acknowledges the moral component at the heart of fighting misinformation. Fundamentally we have to make a moral decision about what we define as ‘harm’ — e.g. we can extend this definition to include things like ‘causing harm to climate action’
It condenses the overlapping definitions we explored above into one simple term, which makes sense as these different categories can exist simultaneously. Often a harmful conspiracy theory can be deliberately propagated by a mixture of true information (malinformation) and untrue information (disinformation), which then stokes online culture wars and is then spread by well meaning users without intent to cause harm (misinformation)
Usefully, GDI also developed a matrix for categorising adversarial narratives, which I have expanded on in the diagram below. It shows how different groups, nations, and individuals can engage in adversarial narratives. In my training sessions I explore differing approaches for tackling each quadrant of the matrix.
Category one, top left: Centralised organisations spreading narratives for political gain (e.g. Russian government interference in the 2016 US election)
Category two, top right: Centralised organisations spreading narratives for financial gain (e.g. disinformation for hire companies)
Category three, bottom left: Decentralised individuals spreading narratives for political gain (e.g. misogynistic trolls abusing female politicians on social media)
Category four, bottom right: Decentralised individuals spreading narratives for financial gain (e.g. crypto trading scammers on Instagram)
It’s worth noting that the distinction between political and financial gain is not always clear cut — sometimes it’s both.
To see the matrix I have repurposed here, go and read the original GDI report (which I strongly recommend if you’re interested in learning more about this).
Conclusion: Misinformation isn’t complicated — so let’s not overcomplicate it
If you take only one thing away from this week’s post let it be this: misinformation isn’t beyond your understanding.
If you are a communications professional, then you understand the importance of setting measurable objectives, and working back from those objectives — using audience insights — to develop a strategy, which, with support from colleagues, becomes a tactical delivery plan.
‘Misinformation’ is simply a narrative, containing a mixture of truth and falsehood, that can make our objectives harder to meet.
The same rules of communication still apply:
Facts work better when they’re part of a simple story
Meet people where they are before trying to bring them with you
Make your message simple and repeat it
You’ve got this! Don’t let anyone tell you otherwise.
Thank you for reading this week’s edition of Deeper Truths — the newsletter fighting for a more informed and less divided world. I offer freelance training and consultancy to help bring out your team’s preexisting knowledge and strengths to tackle misinformation.
If you’d like me to run a one hour free taster session with your team, just hit reply!