Politics

The dawn of the AI election

Around the world, more people will go to the polls in 2024 than have ever voted before. They will do so in the wake of an artificial intelligence revolution that we are still struggling to comprehend. Put the two together and it could be a recipe for disaster.

4 January 2024

FDR was the first president to master the new medium of radio. Kennedy was famously the first president of the TV age. Trump made Twitter his own. By the end of the year we may well have the first AI president. With politicians also gearing up for elections in Indonesia, India, South Africa and—depending on the caprices of our constitution—Britain, we are about to find out what this new technology means for our politics, as AI tools smash into a political system still struggling to work out how to understand—let alone regulate—them.

This means that the UK is now in danger of losing control of its democratic destiny. Our democracy has not yet fully adjusted to the nature of campaigning on algorithmically driven social media platforms, and the advent of so-called “generative AI”, which can produce convincing human-like text and compelling fake imagery in a matter of seconds, will turbocharge that threat. The parameters set to govern these AI tools will be shaped in Silicon Valley and enforced in Washington—maybe Brussels. Without a programme of non-partisan UK government action, a simple question arises: can we protect our own elections?

Like moths, politicians flutter towards the bright light of attention, and attention is now found online. According to Ofcom research, for those under the age of 44 the internet eclipses TV as the leading source of news. Advertisers now spend most of their budgets digitally, and during the 2019 general election campaign the spending of the political parties mirrored this. 

On one level, digital media has fragmented attention. There are fewer and fewer “water cooler” moments, when the nation comes together to discuss a common media event. These have been replaced by millions of unique interactions across different platforms, as users delve ever deeper into algorithmically curated worlds. But this belies the underlying concentration of activity, and hence power, in the hands of half a dozen US (and one Chinese) platforms: Google (which owns YouTube), Meta (Facebook, WhatsApp, Instagram), Amazon, Netflix, Apple, Microsoft and now TikTok. 

Traditional publishers who hope to survive must rely on these companies to send traffic their way and must play the games required. Screams about the travesties of social media polluting our news ecosystem are—at least partially—the sound of the old media gatekeepers losing power. 

Elections are now fought on these digital platforms. On foreign soil. Literally, as that’s where many of the server farms are. Critically, the relationship of these platforms to the UK is purely commercial. The regulators which they really care about are in Washington or, increasingly, Brussels. For TikTok (likely the breakthrough platform of the 2024 electoral cycle), add Beijing.

It is important to understand that 2024 will be radically different even from 2019. According to Collins Dictionary, the word of 2023 was “AI”. The launch of ChatGPT, the fastest-growing consumer application in history, has thrust generative AI into the limelight. Such programmes have already amassed billions of interactions and spawned many competitors, and their capabilities continue to increase dramatically. They represent a fundamentally new model for the dissemination of information through society.

That said, AI in politics is not entirely new. In the 2015 UK electoral cycle, machine learning was used to label voter lists with a score suggesting how likely people were to vote for a given party. Debate over such practices intensified in the wake of the 2016 EU referendum campaign. One major row focused on the data analytics firm Cambridge Analytica, which found itself at the centre of a furore over the alleged use of personal Facebook data in political ad targeting. The firm was ultimately found not to have been involved in the Brexit vote, but the information commissioner warned of “systemic vulnerabilities in our democratic systems” from technology applied to voter data. 
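To make concrete what the 2015-style “scoring” of a voter list involves, here is a toy sketch of a propensity model. It is illustrative only: the data is synthetic and the column names, label and model choice are assumptions, not a description of any party’s actual system.

```python
# Illustrative sketch only: a toy propensity model of the kind used to
# score voter files. All data and column names here are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
voters = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "turnout_2019": rng.integers(0, 2, n),  # voted last time?
    "homeowner": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
})
# Synthetic label: did the voter back "Party A" in a canvass return?
logit = 0.03 * (voters["age"] - 50) + 0.8 * voters["homeowner"] - 0.5 * voters["urban"]
voters["backs_party_a"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    voters.drop(columns="backs_party_a"), voters["backs_party_a"], random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The output a campaign actually uses: a 0-to-1 support score per voter,
# which decides who gets a leaflet, a phone call or a targeted advert.
scores = model.predict_proba(X_test)[:, 1]
print(scores[:5].round(2))
```

The point is how mundane the machinery is: a spreadsheet of voter attributes and a standard statistical model are enough to rank an entire constituency by persuadability.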

The new wave of tools performs a different role. At the heart of generative AI’s functionality is the ability to make a strong argument, something usually considered a politician’s core skill. Generative AI enables fast, cheap and infinitely scalable content production. These are magic words for campaign managers, who have relatively small budgets and mere weeks to reach millions of voters (many of whom tune out from broad-brush methods of political communication).

One way to think about potential use cases for AI is: “what would you do with 1,000 interns?” To many companies, that may sound like a provocative question. To campaigns, traditionally reliant on interns and volunteers, that will sound like a huge opportunity.  

There are many potential use cases for AI in political campaigns. Start with faster media operations: a platform speech can be turned into 100 different video adverts and 100 tailored press releases in a fraction of the time such a task would historically have taken—if anyone would previously have embarked on it at all. But that’s just the tip of the iceberg: the speech could have been rinsed through virtual focus groups generating fast feedback on ideas from representative artificial personalities. This is already happening to some extent in the corporate world, and the ability to test potentially thousands of (themselves AI-generated) lines and ideas overnight will be attractive in the heat of a campaign. 
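To see how little machinery this mass-tailoring requires, here is a minimal sketch of the press-release step, assuming access to a hosted language model via the openai Python client. The model name, file name and audience segments are placeholders, not a description of any campaign’s actual pipeline.

```python
# Sketch of mass-tailoring one speech into segment-specific press releases.
# Assumes the openai client library and an API key in the environment;
# the model name and segments below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

speech = open("platform_speech.txt").read()
segments = ["rural pensioners", "first-time buyers", "small-business owners"]

for segment in segments:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write short press releases for a political campaign."},
            {"role": "user",
             "content": f"Rewrite the key points of this speech as a 150-word "
                        f"press release aimed at {segment}:\n\n{speech}"},
        ],
    )
    print(f"--- {segment} ---")
    print(response.choices[0].message.content)
```

Swap three segments for a hundred and the loop still runs in minutes; that is the whole economic argument for AI in the campaign press office.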

Chatbots could be deployed which allow for instantaneous interventions in social media channels wherever political topics are being discussed. These might be personified as “The Party”, disembodied politicians, on-brand avatars or—if regulation really fails—just pop up as “concerned citizens”. 

Meanwhile, the holy grail of personalised communications at scale will potentially offer communication in a “bubble of one”. While this poses risks—of increased polarisation and contradictory promises being made to any number of individuals—it may also offer opportunity. One reason for low voter turnout is that citizens feel that politicians do not talk about the topics that matter to them or in a style that resonates. AI could change that. We are some way from the more exotic of these scenarios—but such use cases will arrive, and sooner than you think. 

Last year, in what was doubtless for him a more innocent time (before he was fired and then almost immediately reinstated), Sam Altman, CEO of OpenAI (the company behind ChatGPT), conducted a global tour aiming to worry opinion formers about the existential risks posed by AI while simultaneously reassuring political leaders that it was safe in his hands. I was in the meeting room with 30 other nerds at UCL as he described his concern about the potential impact of ChatGPT on politics, based on just how good it may become at personalised persuasion—content that could be deployed at scale so that everyone would get a unique set of messages. I was pleased to find he was worried: he should be.

Spending on US elections runs to many billions of dollars and this time powerful AI tools will be at the top of the shopping list. But is this going to happen in the UK?

Rishi Sunak is by all accounts fascinated and excited by the technology and, following the AI Safety Summit at Bletchley Park in November, presumably has the world’s AI leaders on speed dial. The Labour party has been known to hire in leading US consultants, who would today be steeped in the excitement around AI’s possibilities. And the Liberal Democrats’ relative poverty means that they have digital innovation in their DNA: they can’t afford not to. As people say, do the math.

The impact will be to privilege capital over activist labour in the new electoral age. While many AI programmes are available to all, the ability to tie them to targeting data, build or buy in the most advanced optimisation tools, and to acquire advertising slots on digital platforms is all dependent on hard cash. If the next UK general election is fought in a cold and wet November, when getting volunteers out to knock on doors may be a challenge, then the advantage to those who can afford to deploy this technology will be even greater. Strengthening central organisations rather than local constituency campaigners is rarely healthy for a democracy. 

If things weren’t challenging enough, all is not well with this technology. The Cambridge Dictionary’s word of 2023 was “hallucinate”. Generative AI is a good pattern predictor, but it will privilege a pattern over the truth—truth being a concept of which it has no grasp. This means that things simply get made up, or “hallucinated”. Anyone using the tools risks creating misinformation. If, as seems plausible, over 90 per cent of digital content ends up AI-generated in the next few years, then the risks are obvious.

And that is before the bad actors get to work. The most high-profile (and therefore possibly the most guarded against) use case is deepfakes, where an image, video or audio recording has been altered or fabricated so convincingly that it appears someone has done or said something they did not do or say. One such forgery has already been unleashed on Keir Starmer: an artificially generated audio clip purportedly captured him abusing party staff.

There are several potential consequences arising from these digital distortions. First is the immediate campaign impact if they are deployed in the form of a direct political assault. Second, deepfakes allow for plausible deniability: they provide latitude for those accused of wrongdoing to claim that any evidence is fabricated. Perhaps less understood at this stage is the opportunity they provide to direct the focus of political attention. The point of the infamous Brexit bus, after all, was to trigger discussion on the topic of UK funding of the EU—the more the exact £350m figure was disputed, the more salient the principle became that we were sending money to Brussels.

Other malicious uses might include turbocharged bot farms, or the deliberate generation of fake scientific evidence (potentially thousands of papers’ worth) to misdirect debate on issues like global warming. Aggressive phishing attacks on campaigns, using, for example, AI voice-cloning software to impersonate a member of the party leadership in calls to campaign HQ, exacerbate the cyber-hacking risk that already plagues elections.

And if misinformation and disinformation were not enough, what about Californication—the way US west coast assumptions and ideologies are laced into the new technology? Generative AI reflects the data that it was trained on and the methods used to train it. The core data is the historic internet, heavily tilted towards US content, and the training processes are governed by California-based teams. This training is becoming an increasingly political act. Just as important as the assumptions and worldviews of those who initially guided these projects will be the reaction from those with opposing ideas. Elon Musk appears determined that the Grok system, built partly on privileged access to real-time data from his X (formerly Twitter) platform, should be more “free speech” oriented than OpenAI’s ChatGPT. US culture war mores risk infusing everything as use of these tools proliferates.

Meanwhile, the power imbalance between the tech platforms and national regulation grows.

Anyone who has seen a governing party up close as it negotiates the terms of a possible prime ministerial appearance at a TV debate knows who sets the rules of the game in the analogue world. Think of Boris Johnson’s brazen unwillingness to submit to an Andrew Neil interview in 2019. The power dynamics in the digital world are very different—just look at the body language of Sunak’s interview with Musk at Bletchley Park in November. 

It is not by chance that the digital platforms that control the distribution of and access to online information are also seeking to dominate the next generation of AI. If nothing else, the computing resources required are beyond the capacity of most national governments, including that of the UK. Those in positions of leadership at these firms are aware that they need to demonstrate responsibility.

Google and Adobe are both working towards the watermarking of AI-generated content. The jury is still out on these types of countermeasures. Detectors for spotting AI-generated text are deeply fallible: easily fooled and prone to making false accusations. In 2018, Facebook put in place an editorial Oversight Board in the hope of sharing responsibility for controversial decisions like the de-platforming of President Trump, and this could be one model for reckoning with the new moderation challenges thrown up by powerful artificial intelligence. Microsoft has recently announced a suite of perfectly respectable and responsible measures to protect election integrity in the new technological age.

However, these approaches are coming from overseas tech companies essentially offering to mark their own homework. The UK should establish proper supervision of the industry at home. The government’s approach to regulating AI is largely to leave it to sectoral regulators. There is a logic to this; but only if the regulators have the powers, personnel and resources to supervise the tech firms properly.

Watching Ofcom twist and turn on the issue of GB News and political balance, one wonders how the media regulator will approach the coming election. The Electoral Commission is, at the time of writing, appointing a new chief executive. It is to be hoped that they will have an interest, and expertise, in this area. The elections regulator itself unfortunately lacks the full suite of powers needed—and the government has recently downgraded its independence. The Information Commissioner’s Office has muscle but is focused on data privacy. A framework for inter-regulator co-operation has been set up; whether it will be agile enough to respond at the speed the political cycle requires is questionable.

Creating a clear set of guidelines for the political use of AI in 2024 would not be as hard as one might think. These should include clear rules on transparency (both for the voter receiving AI-generated messages and for the regulators monitoring the campaigns); the forceful and proactive application of GDPR; humans kept in the loop wherever AI is deployed; an expectation that parties deploying AI will be cognisant of issues around bias; and internal party processes to check for AI hallucination, with clear leadership responsibility for deployment. The UK has the talent to develop an approach like this—and the institutions to provide oversight and challenge.

Effectively handing regulation over to overseas commercial players is especially dangerous in this year of elections. Their attention will not be on the UK but elsewhere: oversight teams at the platforms will be stretched thinly and probably exhausted by the year’s end. 

And not all elections are created equal. Hanging over everything is the coming Trumpfest in the US. He is now overwhelmingly likely to be the Republican candidate. Digital platforms will be aware of his propensity for playing fast and loose with the truth, but for reasons both of democratic propriety (respect for a presidential candidate), and self-interested fear of his potential presidential revenge, they will want to avoid censoring his campaign, barring the most egregious behaviour. Platforms will likely shape their standards around him. These will need to be broadly global for reasons of clarity and efficiency—and so risk being driven towards the lowest common denominator. 

The wilder ranks of Remainers have questioned whether the Brexit referendum result was delivered by a Russian influence operation. There is limited evidence of that being a real driver (spoiler alert: we did it to ourselves), but the potential for questioning or subverting a legitimate UK election only mounts when the referees are so weak and the technology so strong. 

In the event of, for example, a hung parliament, think what scope there will be for mischief. Who then holds the line? The new King, facing his first constitutional crisis? If this is a risk for the UK—with one of the world’s most sophisticated AI communities and a mature political culture—then what might it mean for everyone else as we enter this year of elections?

This is a novel moment. For hundreds of years our electoral system has—despite some blips—become progressively more transparent and responsive. New technologies—from radio to TV to the internet—have opened politicians up to more scrutiny. AI risks putting this in reverse: hiding our leaders behind ever more personalised and targeted messaging. Even worse, the UK will no longer in practice set the terms of its own elections. This is partly path-dependency: the end result of a failure to sustain an independent digital ecosystem. But it is also the result of political choices made, or not made.

There is nothing inevitable about any of this, but unless we make different choices—setting clear guidelines, empowering our umpires while we still have time—the next election risks amounting to an abrogation of sovereignty that no nation should be willing to endure.