
AC Grayling: it is humans, not driverless cars, that are the problem

The philosophy of autonomous vehicles

March 03, 2016
Brian Torcellini, leader of Google's driving operations team, poses for photos next to a self-driving car at a Google office in Mountain View, California. ©Jeff Chiu/AP/Press Association Images

On 14th February, one of Google's driverless cars crashed. Up to that point, its cars had driven more than 1.4 million miles without being responsible—or even partially responsible—for an accident. But last month, in Mountain View, California, one of the tech giant's autonomous Lexus SUVs scraped the side of a bus while travelling at two miles per hour. It had been trying to merge into the bus's lane in order to avoid sandbags on the road ahead.



The crash raises many philosophical questions, as does the prospect of driverless cars more generally. Will we necessarily encounter problems when introducing a strictly rule-governed, logical machine into a human domain? Or should we look at such crashes as mere lessons to be learnt from, not signs of a deep-lying incompatibility? I picked the brains of AC Grayling, philosopher and Master of the New College of the Humanities, who shared his thoughts on these questions in light of the recent crash.

AC Grayling: We are already quite used to automatic vehicles like the Docklands Light Railway in London and driverless underground trains. We have already got automatic systems in place... What is new is having a robot car finding its way in amongst traffic and pedestrians. Because we're at a relatively early stage of development it's inevitable that there will be glitches, even serious ones, but we will learn, I suppose.



I don't imagine that it's beyond the wit of our electronic engineers and computer whizzes to create systems which would probably, on the whole and on average, be safer than human beings are. Everyone worries about two things: A) that human fallibility would be built into these systems, and B) that automatic systems, being insentient, unreflective, and without any sense of compassion or concern for human beings, are somehow malevolent. But they aren't. They're neutral, and if properly designed, constructed and managed they should work OK, and probably be safer than human systems.

Alex Dean: OK, so it's not that the technological and human domains are necessarily incompatible, you think. Rather, it's a contingent fact that the technology isn't yet good enough for these cars to avoid crashing? Do you think the technology may one day get there?

That’s my thought, yes. I think we could get to a stage where there are far fewer crashes with robot cars than with human ones.

Google's cars have done well over a million miles of test runs, and then one of them crashes at two miles per hour (as happened in this latest incident) and there's a fuss. Do you think we irrationally hold these driverless cars to a higher standard than we otherwise would, just because they're alien and strange?



Yes we do. You're dead right about that. It's sheer prejudice, really, and anxiety, because there isn't a human being driving. The fact that they're safer than if there were a human driving doesn't seem to enter the picture. It's the facelessness of them, the idea of the monstrous automaticity of it, that worries people, and completely irrationally.

What do you make of the idea that driverless cars will always be irrational because the humans who designed them are?

No... The idea of these automatic systems is that what they embody, in fact, is our best rationality... We will manage rather as we do in [the rest of] society: if you think about institutions such as the procedure that a court has to follow, it's designed to get us away from the arbitrariness of whether or not you had a quarrel with your wife this morning, or whether you're feeling down in the dumps and so on, to get away from the influence of the non-rational aspects of ourselves. And what we're doing with these systems is building in our very best rationality, so we shouldn't be prejudiced against them; on the contrary, we should be welcoming them because they promise to be better on average—kill fewer people, have fewer accidents, and make less of a mess—than we ourselves do.

I wonder if there's a risk that human-driven cars will crash into driverless cars. Humans are not used to sharing the road with vehicles that behave the way Google's cars do; they are used to other cars dithering at junctions, for example.

I think you're right about that. The problem will come from the human-driven cars, not the driverless cars. But people are quick adapters. There's that wonderful joke, isn't there, about the man who tells his friends in the pub that he is going to take his car on holiday to France, and his friend replies, "Well, you've got to remember that they drive on the other side of the road there." A couple of days later he reappears in the pub absolutely ashen-faced, and his friends say, "What's the matter?" He says, "Well, you told me about driving on the other side of the road. I've been practising, and it's incredibly dangerous!"

People get on the wrong side of the road in France and occasionally mistakes happen, but you very quickly adapt, and I expect actually that it would be very similar to that.

A well-known philosophical dilemma is the “trolley problem” developed by Philippa Foot. It runs: "Imagine a trolley is out of control. You can do nothing, and it will hit and kill several people, or you can divert it, killing one person. What should you do?" Does this philosophical dilemma apply to the decisions a driverless car may have to make when faced with crashing into people?

No, I don't think so. I think in the case of a driverless car the calculation can be a very straightforward, simple utilitarian one: it will always kill the one rather than the six, it will always kill the dog rather than the human. It will be programmed to make those sorts of choices. You see, the trolley problem is really only a problem for human beings, for the following reason: the one person might be your mum, so you'll choose to kill the six rather than her.

Or in a version of the trolley problem where, instead of pulling a lever, you have to push a fat man off a bridge [to land in front of the trolley, thus stopping it], the fact that you have to make contact with the person who's going to be killed in order to save the six will inhibit people far more than if it's simply a case of pulling the lever. All those non-rational aspects of what goes into a calculation of that kind are very human.

But a driverless car will not have those difficulties, because as far as it is concerned, in a situation of risk you just go for the simplest and most defensible calculation, which would be: if you have to choose between killing one person and killing a number of people, you kill the one.
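Taken at face value, the rule Grayling describes is simple enough to write down. The short Python sketch below is purely illustrative: the Option class, its fields and the harm estimates are hypothetical stand-ins, not a description of how Google's cars (or any real vehicle) actually decide.

```python
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    humans_harmed: int   # people expected to be hit if this option is taken
    animals_harmed: int  # animals expected to be hit if this option is taken

def least_harm(options: list[Option]) -> Option:
    # Minimise harm to humans first, then to animals: "always kill the one
    # rather than the six, the dog rather than the human".
    return min(options, key=lambda o: (o.humans_harmed, o.animals_harmed))

swerve = Option("swerve towards the one", humans_harmed=1, animals_harmed=0)
stay = Option("carry straight on", humans_harmed=6, animals_harmed=0)
print(least_harm([swerve, stay]).label)  # -> "swerve towards the one"
```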

Is there an issue with driverless cars when it comes to driving conventions in different countries? Will they have to be programmed differently for countries in which drivers behave differently?



The simplest solution is to programme these cars for the worst possible scenario: New Delhi, or Rome… Then the car is maximally cautious and safe and collision-averse, so it can go anywhere in the world; it would be extremely plodding and irritating in England, but it would be very safe in New Delhi.

Alternatively, and this is more technically elaborate but wouldn't really be all that difficult for the programmers of the cars, you select which country you're in, or the car itself might even do so by recognising its GPS position, so it knows it's in Rome and behaves accordingly. In each case, what the programmer would have to do is say: given the prevailing traffic conditions and the habits of human drivers in these countries, what is the safest and least risky way we could set this up?
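In outline, the country-profile idea might look something like the following sketch. The country codes, parameters and the notion of reverse-geocoding the GPS fix are all illustrative assumptions, not a description of any real vehicle's software.

```python
# All country codes and parameter values below are made up for illustration.
WORST_CASE_PROFILE = {"follow_gap_s": 2.5, "assertive_merging": False}  # plodding but safe anywhere

COUNTRY_PROFILES = {
    "GB": {"follow_gap_s": 2.0, "assertive_merging": False},
    "IT": {"follow_gap_s": 1.2, "assertive_merging": True},
    "IN": {"follow_gap_s": 1.0, "assertive_merging": True},
}

def select_profile(country_code: str | None) -> dict:
    """Return the driving profile for the detected country, falling back to
    the maximally cautious settings when the location is unknown."""
    return COUNTRY_PROFILES.get(country_code, WORST_CASE_PROFILE)

# The country code might come from reverse-geocoding the car's GPS fix.
print(select_profile("IT"))   # Rome-style settings
print(select_profile(None))   # worst-case fallback
```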

If in certain countries a driver has to force their way out of a junction in order to proceed, will driverless cars have to be programmed to drive “dangerously” in this way? If not, wouldn't they be very slow?

Other driverless cars would recognise your dilemma and let you in… You would programme the driverless cars to be conscious of the traffic environment, so if there's a great stream of cars crossing a T-junction and a queue of cars waiting to get in, the driverless car will slow down and flash its lights at its counterpart in the junction, and it will just proceed ahead into the queue.

OK, so it might be a problem in the transitional period, but not beyond that?

Yes. It is the humans who would be problematic!

What would happen if there were a fatal accident involving a driverless car tomorrow? Would the whole experiment be written off?

There would be a huge knee-jerk reaction, there would be an outcry, and it would set things back a little bit. But there is also the kind of discussion that we've been having, which says: think of the positives here, think of the fact that we learn from this and it's less likely to happen as a result, and eventually we're going to get something that is really optimal from the safety point of view. I personally think this kind of technology's time has come. And what we're dealing with now are simply the non-rational responses to it.
