
Is ‘fake news’ a case of disinformation, misinformation or is it something else?


We are now just over six months away from a US presidential election, a general election in the UK is expected late this year, and by the end of 2024 half of the planet, across more than 64 countries, will have had the chance to vote. So brace yourself to hear more cries of ‘fake news’, particularly from the US Republican presidential candidate.

Whilst we should celebrate this widespread suffrage, it also offers an unprecedented chance for bad and opportunistic actors to make mischief, using increasingly sophisticated means to sway voters and destabilise society. And it is not just democracy at risk: companies and individuals are increasingly on the receiving end of a growing number of bad actors peddling fabricated information. As Aldous Huxley said, “An unexciting truth may be eclipsed by a thrilling falsehood”, but do falsehoods matter and, if so, what can corporates do to mitigate the reputational risk?

First, we should distinguish between the three most commonly used terms for a ‘falsehood’. Fake news gained widespread prominence during the 2016 US presidential election, when Donald Trump used it to cast doubt on negative media coverage of him. However, the term itself was first used in the 1890s, when sensationalist media reports were common, and it has its roots in propaganda, a tactic deployed as far back as human records allow. Fake news can include misinformation, which is content that is incorrect but not intentionally created to deceive. In contrast, disinformation is false information which is deliberately spread to deceive people. This issue matters because the global economic cost of disinformation is estimated at $78 billion, and it is only likely to increase.

Western governments and their security services have stated that they are mitigating the risk of disinformation affecting forthcoming elections, but are corporates also taking the threat seriously enough? Since the start of the pandemic, and then the war in Ukraine, there has been widespread use of synthetic content, shared and amplified via social media platforms. Over the last year or so, Meta has rolled out AI updates across Facebook and Reels, Twitter/X has loosened its moderation guidelines and TikTok has become one of the fastest-growing platforms ever, reflecting the growth in short-form video content.

The development of generative AI has been rapid since the public launch of ChatGPT in November 2022, followed by GPT-4 in March 2023. Advances in generative AI mean that large language models (LLMs) can already create text that is indistinguishable from human-written text, and there have also been major advances in the creation of deepfake photos, audio and videos. On top of this, the cost of training large language models is falling drastically: while it originally cost millions of dollars to train GPT-3 in 2020, in 2022 MosaicML was able to train a model from scratch to reach GPT-3-level performance for less than $500,000.

This reduction in the cost and time needed to produce and disseminate information is already leading to a more crowded information environment. This, coupled with the widespread use of AI, is making corporates much more vulnerable to information threats, in which even disgruntled employees or small activist groups can mount targeted, sophisticated and effective disinformation campaigns.

These bad actors (or bored students!) can now use AI to analyse data more quickly and generate hyper-targeted, personalised information and narratives. As a result, disinformation is likely to shift from a one-size-fits-all approach to more personalised narratives, which are much harder to combat and rebut. For example, disinformation campaigns can target investors to drive down a stock price for the benefit of short-sellers. Other examples include false claims that a food manufacturer has suffered disease or poisoning in its supply chain, resulting in a loss of consumer confidence.

The reputational risk from disinformation is significant. Customers can lose faith in a product or in a business as a whole, and investors may take flight, driving abnormal trading activity and increased volatility, on top of the brand damage caused by false media and social media content. As we know, it takes years to build a well-trusted brand, and that work (and value) can be destroyed in minutes.

Policymakers are taking action to moderate content through measures such as the EU’s Digital Services Act, Australia’s Online Safety Act and the UK’s Online Safety Act, amongst others. But what steps should companies be taking to address the risk of disinformation? At its simplest, there are three: proactive educational campaigns, reactive rebuttal of false claims and, finally, legal action against defamatory statements.

“Countering disinformation requires a deep understanding of the nature of the information threat and subject characteristics in order to prepare an effective response. A challenge companies face is early detection, to mitigate the chances of something going viral, as well as choosing the optimal response from many options, ranging from prebunking to adjacent messaging,” says Sahil Shah, Founder and Managing Director of Say No to Disinfo.

Unless pre-emptive action is taken, the repetition of false claims increases belief in those claims, a phenomenon known as the illusory truth effect. When media sources, political elites, or celebrities repeat misinformation, their influence and repetition can perpetuate false beliefs.

The first step is to identify potential reputational vulnerabilities and assess which threats are likely to have the greatest organisational impact. Our digital and AI team works with specialists to ensure that there are tools (often AI-based) and procedures in place to identify key stakeholders, their digital footprints and where they are most active online. We then convert the vulnerability analysis into words and phrases to drive alert management. This content, and the subsequent alerts, are synthesised and analysed to identify coordinated disinformation campaigns, as in the simple sketch below.
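To make the alert-management step concrete, here is a minimal, hypothetical sketch of how vulnerability keywords might be matched against social posts and checked for coordinated amplification. The watch phrases, post structure and thresholds are illustrative assumptions for this article, not a description of any specific monitoring tool.

```python
# Illustrative sketch only: keyword-based alerting with a crude coordination check.
# All names, phrases and thresholds below are hypothetical assumptions.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

# Words and phrases derived from the vulnerability analysis (assumed examples).
WATCH_PHRASES = ["supply chain contamination", "product recall", "ceo fraud"]


@dataclass
class Post:
    author: str
    text: str
    timestamp: datetime


def matches_watchlist(post: Post) -> list[str]:
    """Return the watch phrases that appear in a post (case-insensitive)."""
    text = post.text.lower()
    return [phrase for phrase in WATCH_PHRASES if phrase in text]


def flag_coordination(posts: list[Post], window_minutes: int = 60, min_authors: int = 5) -> dict:
    """Flag phrases pushed by many distinct accounts within a short window,
    a simple proxy for coordinated amplification."""
    hits = defaultdict(list)  # phrase -> list of (author, timestamp)
    for post in posts:
        for phrase in matches_watchlist(post):
            hits[phrase].append((post.author, post.timestamp))

    flagged = {}
    window = timedelta(minutes=window_minutes)
    for phrase, events in hits.items():
        events.sort(key=lambda event: event[1])
        for i, (_, start) in enumerate(events):
            authors = {author for author, ts in events[i:] if ts - start <= window}
            if len(authors) >= min_authors:
                flagged[phrase] = sorted(authors)
                break
    return flagged


if __name__ == "__main__":
    now = datetime(2024, 5, 1, 12, 0)
    sample = [
        Post(f"account_{i}", "Breaking: supply chain contamination at AcmeFoods!",
             now + timedelta(minutes=i))
        for i in range(6)
    ]
    print(flag_coordination(sample))  # flags the phrase and the accounts pushing it
```

In practice, the production tools referred to above go much further (language models for near-duplicate detection, network analysis of account behaviour, and so on); the sketch simply shows the shape of turning vulnerability analysis into machine-readable alerts.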

Companies also need to be alive to the fact that disinformation campaigns can include real-time engagement through conversational AI and chatbots, which automate interaction with targeted individuals, drawing on large volumes of data, machine learning and natural language processing to imitate human conversation, recognising speech and text input and generating responses.