Illustration by Vincent Kilbride

Want to stop harmful tech? Just say no

Why tinker away with problematic digital tools when we could build something entirely new?
July 19, 2023

In 2020, the American Institute of Architects banned its members from designing spaces for execution or prolonged solitary confinement. The ban was the result of a campaign launched in 2003 by Raphael Sperry, president of Architects/Designers/Planners for Social Responsibility. Sperry and his colleagues argued that “the quality of design for a prison or jail is of secondary importance when the people inside are unjustly incarcerated in the first place.” Given a wealth of evidence that the American criminal justice system disproportionately impacts people of colour, as documented in books such as Michelle Alexander’s The New Jim Crow, there is a strong argument that refusing to participate in the construction of US prisons altogether is the ethical choice.

The argument for refusal faces at least one compelling counterargument: harm reduction. If America is going to keep imprisoning a significant percentage of its population, shouldn’t architects work to reduce the harm inflicted by prison designs? If enlightened architects refuse to design facilities for solitary confinement, won’t those facilities inevitably be designed by people less concerned about human rights?

Sperry’s refusal argument ultimately won out with regard to execution and prolonged solitary confinement (the latter of which is so psychologically damaging that UN officials consider it a form of torture). Against practices this extreme, those advocating harm reduction faced insurmountable challenges.

I was introduced to the tension between the strategies of harm reduction and refusal in a different field—artificial intelligence—by a doctoral student at MIT, Chelsea Barabas. A scholar of technology and justice, she was working with a team that hoped to improve the algorithms recommending whether a criminal defendant should be released from pre-trial detention.

But when she talked with community organisers, Barabas was persuaded that this work risked perpetuating a greater injustice. By being part of the exercise in modelling a defendant’s behaviour, the team would be complicit in the problematic way that society characterises marginalised individuals in custody, complicit even in the wider harm inflicted by the American prison system itself. Instead of mapping “down” in this way, the team inverted its perspective and modelled the decision-making of the judges, creating a tool that measured the risk that a judge would unlawfully deprive someone of liberty.
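A rough sketch may help convey what that inversion looks like in practice. Everything below is hypothetical (the feature names, the label and the model are placeholders, not the team’s actual system), but it shows the key move: the target variable becomes the judge’s decision rather than the defendant’s predicted behaviour.

```python
# Hypothetical sketch of "flipping" the prediction target. None of these
# variables come from Barabas's team; they are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

cases = pd.DataFrame({
    # Features describing the case as it appears before the judge
    "charge_severity": [1, 3, 2, 1],
    "prosecutor_requested_detention": [0, 1, 1, 0],
    # A conventional risk tool would predict something like
    # "defendant_failed_to_appear"; here the (hypothetical) label instead
    # records whether the resulting detention was later found unlawful.
    "detention_found_unlawful": [0, 1, 0, 1],
})

features = cases[["charge_severity", "prosecutor_requested_detention"]]
target = cases["detention_found_unlawful"]   # the judge's decision is the outcome

model = LogisticRegression().fit(features, target)
# Estimated risk that a given detention decision is unlawful
print(model.predict_proba(features)[:, 1])
```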

Barabas writes that “refusal is a beginning that starts with an end.” In other words, for architects or technology experts to engage in constructive refusal, it’s not enough for them to resist building new carceral structures; that decision needs to be a first step towards addressing what’s wrong with incarceration in America more widely. Sharlyn Grace, a lawyer and community organiser working in Illinois, explains: “The goal of abolition… requires that we do as much as possible to shrink the system now, even while we don’t yet have the power to completely end it.” This is the counter to the harm-reduction argument: even if we make less harmful prisons or less oppressive algorithms, we are still working to perpetuate, not shrink, unjust systems.

The field of AI is starting to wrestle with the dynamics of resistance, harm reduction and refusal. But, in the process, it has become tangled up in the question of how to weigh immediate harms against theoretical long-term threats.

There are real, current problems with AI systems that might be sufficient reason to rethink their deployment. Research demonstrates that facial recognition systems perform less accurately at identifying women and people of colour than at identifying white men, unless carefully tuned to compensate for biases in training data. These systems, liable to throw up “false positive” matches, have already been misused to arrest innocent people, including Robert Williams of Farmington Hills, Michigan, a black man misidentified from blurry CCTV footage by a $5.5m facial recognition system. Accused of stealing watches from a high-end Detroit store, Williams was held in custody for 30 hours before police admitted that the computer had got it wrong.
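A small sketch may make concrete what “less accurately” means here: audits typically compare false-positive rates group by group, since a false positive is exactly the error that led to Williams’s arrest. The records, group names and numbers below are invented purely to show the calculation, not drawn from any real benchmark.

```python
# Toy per-group false-positive audit. All records are invented; a real audit
# would run this over a large labelled benchmark dataset.
from collections import defaultdict

# Each record: (demographic_group, ground_truth_match, system_said_match)
results = [
    ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", False, True),   # a false positive: no true match, system flagged one
    ("group_b", True, True),
    # ... many more records in a real audit
]

counts = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
for group, is_match, predicted_match in results:
    if not is_match:                      # only true non-matches can produce false positives
        counts[group]["negatives"] += 1
        if predicted_match:
            counts[group]["false_pos"] += 1

for group, c in counts.items():
    rate = c["false_pos"] / c["negatives"] if c["negatives"] else 0.0
    print(f"{group}: false-positive rate = {rate:.0%}")
```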

A 2018 book by academic and journalist Virginia Eubanks, Automating Inequality, examines three algorithmic decision-making systems employed by public authorities in the US that have caused unintentional—though predictable—harm. One system, which matched unhoused people in Los Angeles to housing, encouraged them to document their dependency on illegal drugs, since doing so increased their eligibility for subsidised shelter. Perversely, that information was also easily accessible to law enforcement, raising concerns that it could be used to arrest them. Even then, the system could count the outcome as a success on its own terms, because its methodology classed incarceration as “secure housing”.

Against this background, consider the open letter recently released by the Future of Life Institute, a non-profit campaign group advised by Elon Musk, which called for a pause of at least six months in AI development, until concerns about the existential risks posed by AI can be addressed. Meanwhile, the Center for AI Safety has released, to great fanfare, a statement of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The two statements—signed by thousands of researchers, as well as luminaries such as Bill Gates and Sam Altman of OpenAI, the creator of ChatGPT—have prompted a flood of breathless op-eds worried about the future of humanity in an AI age.


Given the drama of these warnings, the proposed solutions seem like weak tea. The recommended six-month pause shows a surprising sense of optimism about humanity’s ability to quickly address existential dilemmas through policy and regulation, given our lack of progress in regulating our way out of climate change. Indeed, AI researcher Eliezer Yudkowsky, who has been prophesying superhuman killer AIs since the early 2000s, recently proposed tracking the sales of GPUs (the number-crunching processors used to build powerful AI systems). He also recommended that governments “be willing to destroy a rogue datacenter by airstrike”. His position could not be clearer, as his editorial in Time magazine was titled: “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down”.

There’s reason to be sceptical about at least some calls for an AI pause. On a basic level, these proposals can be a form of marketing hype: our technologies are so powerful that they deserve as much attention as pandemics and nuclear war, as the Center for AI Safety literally proposes. Future of Life’s letter warns primarily of distant-future threats—the transformation of work as millions of jobs are obviated by AI systems, and the AI extinction scenario—and raises only one near-future concern: political disinformation generated by AI tools.

But AI is already harming humans. And it’s disproportionately harming people of colour, welfare recipients and the unhoused. To imagine AI inflicting harm on the privileged requires a leap of faith in the power of technology. Understanding existing harm to the vulnerable does not—though it requires the sort of careful investigative reporting done by Eubanks and others.

In the past few years, more than 20 US cities and states have banned or restricted the use of facial recognition software by police (though some have subsequently reversed their decision). A troubling analysis of the use of live facial recognition by police in Britain has prompted calls for a similar ban. One response to these bans is to try to improve the accuracy of facial identification and reduce the racial bias in these algorithms, as a way of minimising the harm. But refusal may be a more appropriate response, at least in cases where deployment of surveillance systems means poor and marginalised people are more likely than the wealthy to be penalised.

With attention on AI growing thanks to systems such as ChatGPT, it’s a good time to have a robust public debate about the use of these technologies. But a responsible conversation should focus less on future scenarios of robot hyperintelligence and more on the actual, real-world harms that can be inflicted when we outsource decisions about human lives to machines. Let’s spend less time worrying about Skynet and more time considering how we might shrink systems that harm the most vulnerable in society.