Politics

Stress-test future policies by modelling conspiracy theories before they take hold

Governments “conspiracy test” proposals before launching them, highlighting which elements of a policy could ignite false narratives

December 08, 2022

Disinformation is already a major force in politics, and it now looks certain to be an enduring one. Research shows it is extremely difficult to identify and debunk, and harder still to persuade people of its falsehood, while government efforts to tackle fake news often prove counterproductive, arriving too late to be effective.

But there are glimmers of hope. One of these is the practice of “pre-bunking”, which aims to prevent disinformation taking hold by warning audiences about likely false narratives—debunking them before they’re encountered organically. 

This naturally raises the question: how will we predict the conspiracy narratives that will take hold in future? The answer could lie in the emerging capabilities of AI.

AI models can already create new works of art from existing ones and generate convincingly original text. This cutting-edge technology could be deployed to simulate the paranoid corners of Reddit and Twitter. If it works, it would put us ahead of the curve in our pre-bunking efforts, anticipating likely conspiracy theories before they take shape.

Of course, even after modelling millions of conspiratorial rabbit holes, we may find that predicting which narratives will resonate remains challenging. Pre-bunking itself appears far less effective where conspiracies chime with already-held political beliefs.

Field experiments also suggest that conspiracies will always carry weight with a small population of highly susceptible users.

This suggests that, in addition to predicting how false narratives will take hold, we need approaches to policy development that price in disinformation at the design stage.

I propose a radical step: that governments “conspiracy test” proposals before launching them. By exposing a draft policy to the same conspiracy-generating AI, a “conspiracy impact assessment” could identify which elements of the policy, or of its communication, are likely to ignite false narratives. The test should not serve as a veto, but as a way of spotting the details on which bad actors are most likely to hang conspiracies.
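
To make the idea concrete, here is a minimal sketch of what a first pass at such a test might look like. It assumes access to a general-purpose language model API; the OpenAI client, the model name, the prompt wording and the example policy are all illustrative assumptions, not a description of any existing government tool.

```python
"""
Illustrative sketch of a "conspiracy impact assessment".
All specifics here (client, model, prompt, example policy)
are hypothetical stand-ins, not an established method.
"""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical policy summary to be stress-tested before launch.
POLICY_SUMMARY = """
A national digital ID scheme: citizens receive a free smartcard,
rollout begins with health services, and private providers can
verify identity via a government API.
"""

PROMPT = (
    "You are red-teaming a draft government policy before launch. "
    "Role-play the most conspiratorial corners of social media and "
    "list the five false narratives most likely to emerge about the "
    "policy below. For each narrative, name the specific policy "
    "detail it would hang on.\n\n"
    f"Policy:\n{POLICY_SUMMARY}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice of model
    messages=[{"role": "user", "content": PROMPT}],
    temperature=1.0,  # favour varied, speculative outputs
)

# The raw "conspiracy impact assessment": likely narratives,
# each tied to the policy element that could feed it.
print(response.choices[0].message.content)
```

In practice, output like this would be one input among many, reviewed by policy and communications teams to decide which details need clearer explanation, or pre-bunking, at launch.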

Like engineers testing a car in a wind tunnel, honing it for the real-world forces it will encounter, governments should start using AI to test their policies for the gusts and torsion of dangerous disinformation. 




This article first appeared in Minister for the future, a special report produced in association with Nesta.