4 Comments

Interesting - I really like it! Especially regarding a re-alignment to focus on the real-world impacts of adversarial narratives; it can sometimes become too easy to miss the woods for the trees when coming up against this stuff. However (call me a stick-in-the-mud) I do think the classic definitions still have value, especially when trying to change the behaviour of certain target audiences. E.g. is this target audience mostly amplifying misinformation (classic definition) which could mean they might be more amenable to other views if challenged? Or if they are the willing and knowing propagators of disinformation, then your tactics may have to be different, etc.

Author:

Thanks for reading and thanks for the feedback Matthew. The truth is, the purpose of the post isn't necessarily to advocate for a specific definition but to challenge the existing definitions and 'start a conversation' (which is much easier...)

It's worth saying that the Global Disinformation Index developed its definition specifically for disinformation (rather than misinformation), and my view is that it can be used as a catch-all.

I suppose, if I were to attempt to 'defend' my position, I'd argue that we could decouple these two things: whether something was spread deliberately with knowledge that it is false and how likely we are to change someone's mind. I'm sure we could find many examples of instances where we could technically categorise a post as misinformation but the individual is clearly 'dug in' and we're unlikely to be able to shift them.

I do, however, see a huge amount of value in assessing the 'depth' of belief in our audience segments when thinking about our strategic response. This could be:

1) Looking for persuadable audiences who feel ambivalent about hot-button culture-war issues where disinformation and misinformation are rife

2) Assessing whether the subject matter is apolitical enough to correct it directly without the risk of making people more defensive (e.g. crypto scams, extreme weather warnings, etc.)

On point 2, I've found the UK Government Communication Service's Wall of Beliefs framework to be a fantastic model for making these kinds of decisions. It's also an area where I'd advocate for more targeted primary research among strategists (i.e. identifying damaging narratives and beliefs online and testing them with our audience to understand the breadth and depth of belief). One for a future post, perhaps.

Apologies for the ramble, but I appreciate you engaging and please keep the feedback coming.


Stefan - we are 100% on the same page. I look forward to your next post!


This is an interesting post, but unfortunately you've sunk your concept with one critical flaw: the claim that misinformation must be "adversarial in nature against an at-risk group or institution". It's a shame you felt the need to do this, because now an otherwise fairly good idea is reduced to becoming just another tool in the Social Justice movement's "oppressed/victim" game.

It's particularly surprising that you've done this now, when the subject dominating the news for the last two months (the Israel/Gaza war) shows so clearly how silly and counterproductive it is to base concepts on "oppressed" status and "victim" narratives. In that situation, since the bulk of academia considers Israel and the Jews not to be "an at-risk group or institution", then by your own definition it is not possible to spread misinformation about Jews or Israel. I find it hard to believe that you didn't consider this, which makes me suspicious about your motives...
