Communicating with investors in a ‘post-truth’ world: The risks and considerations caused by generative AI
Effective investor engagement hinges on trust, built on the core principles of transparency, consistency and delivery. While these principles are important day-to-day in establishing a robust valuation for your shares, deep trust and understanding become ever more business-critical when preparing for a transformational event, such as a transaction or a fundraising, or when defending yourself from a hostile takeover or activism threat.
This has become increasingly complex to achieve over the last decade, with the digital age forcing IROs to build multi-channel investor communications programmes that take a broad range of global stakeholders and issues into consideration at the same time. Now, with AI representing one of the fastest-moving technological changes we have ever seen, we are on the cusp of what is widely being termed the ‘post-truth age’, signalling the start of a brave new world of investor communications. So, what are the risks and strategies that business leaders need to consider?
Firstly, generative AI is set to turbocharge the spread of misinformation and disinformation; by some estimates, AI-generated content could soon account for the vast majority of new information on the internet. This context is certain to create an environment in which a plethora of new corporate risks and reputational weak spots can thrive.
Industry experts have noted the tendency of large language models to be "confidently incorrect", with a predisposition to assert falsehoods as fact. These so-called hallucinations have the potential to cause real harm, particularly when a company is preparing for a transformational event. It is at these moments that clarity on your investment case, an understanding of what you stand for and trust in your management team sit at the top of the priority list. Moreover, with ESG requirements to navigate, incorrect association with the wrong types of companies, governments or practices could call your integrity into question and limit your investment horizons.
A recent search for SEC Newgate on one of the large language models inaccurately listed our business as representing a range of firms, an error which could significantly hinder our B Corp certification process and our relationships with deep green groups. While proof and explanations can be provided, correcting the record takes time, and it remains to be seen what impact this will have on productivity. Ultimately, unless carefully planned for and rapidly detected, a misinformed article relating to a business’s core market, for example, has the potential to diminish a company’s valuation. This is particularly true for listed companies, where reputational issues or inconsistent narratives can have an immediate impact on market capitalisation.
Forensic monitoring has long been a key fixture of the communications landscape. Now, it needs to evolve to include tools which can immediately identify and evidence AI-generated misinformation, ensuring that leaders can set the record straight with the market as quickly as possible and limit the repercussions.
Another consideration is that generative AI helps those who deliberately seek to mislead. AI can create convincing, compelling copy, imagery and video which can be highly damaging, and cyber-criminals now have new tools in their toolbox which can be rapidly deployed. Crisis planning therefore needs to build in a new array of scenarios, and companies defending themselves from activism need to plan how they will respond to AI-fuelled skullduggery and be equipped to convincingly combat any AI-fuelled claims.
There are also a couple of business-as-usual considerations for leaders and IROs of listed companies. The first of these is around the discoverability of your company. With chatbots set to be integrated into search engines, it will be important for listed companies to ensure that they are included in online publications with high provenance, so that they are well placed to be found should a retail investor ask ChatGPT ‘what are the most attractive energy businesses to invest in?’, for example. Proactive communications and a strong digital presence will be even more important to ensure you are found in the right places. The second is that many newswires are already using AI to develop articles relating to company announcements and results, meaning that there is likely a winning formula for presenting financial information.
The final consideration is around the use of large language models in developing content which has commercial sensitivity. Companies, and those that advise them, run a huge risk of breaking FCA rules or feeding the spread of misinformation if they use chatbots to develop market- and commercially sensitive communications. This is a reputational risk in its own right, and guidelines should therefore be written for employees to ensure that AI is being used in the right way within the business.
In summary, those businesses which use AI in the right way, and which understand how to prepare for and manage the reputational risks it creates, are likely to be rewarded in future with a stronger valuation than those that don’t. It is now more critical than ever to understand these complex AI channels in order to ensure you are well placed to maintain the clarity and understanding of your business you have achieved to date.