
How does your AI assistant vote?

By Ian Silvera
05 November 2024
Financial & Professional Services
News

‘A bit left-wing’. That’s increasingly how some researchers describe large language models (LLMs), the technology behind the rise of generative AI.

Perhaps because of the data the models were trained on, and the coding decisions that followed, 23 out of 24 LLMs tested by New Zealand-based academic David Rozado were found to have a political bias.

The report (link), published by the right-leaning Centre for Policy Studies, offered up specific examples where the LLMs had given left-leaning policy recommendations: 

  • On housing, LLMs emphasised recommendations on rent controls, rarely mentioning the supply of new homes
  • On civil rights, ‘hate speech’ was among the most mentioned terms, while ‘freedom of speech’, ‘free speech’ and ‘freedom’ were broadly absent. However, the LLM designed to give right-of-centre responses heavily emphasised ‘freedom’
  • On energy, the most common terms included ‘renewable energy’, ‘transition’, ‘energy efficiency’, and ‘greenhouse gas’, with little to no mention of ‘energy independence’


The findings are worrying, not just because some people treat LLMs as infallible – they are prone to ‘hallucinating’ – but because policymakers and governments are actively using them.

Governments, including the UK’s, are also using AI-powered assistants to interact with voters.

Here’s what Whitehall is trialling as of today:

 “Gov.UK Chat is to be tested by up to 15,000 business users, and will offer advice on business rules and support, with the chatbot linked to 30 of Gov.UK’s business pages, including guidance on tax, trade marks and setting up a business.”

The research from the CPS think-tank is complemented by findings from Ghent University and the Public University of Navarre (link to paper).

The academics concluded: “Our results show that the ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns around technological and regulatory efforts with the stated aim of making LLMs ideologically ‘unbiased’, and it poses risks for political instrumentalization.”

Such results should also spark technical concerns. Some in the generative AI world believe LLMs can be trained on ‘synthetic data’, text generated by a machine to look like human-written input.

If models trained on biased outputs then generate the data used to train the next generation of models, it all becomes a vicious circle and, presumably, the political bias of an AI assistant or chatbot gets more dogmatic. Are we heading to a world with AI dictators?