
Brian Torcellini, leader of Google’s driving operations team, poses for photos next to a self-driving car at a Google office in Mountain View, California. ©Jeff Chiu/AP/Press Association Images
On 14th February, one of Google’s driverless cars crashed. Up to that point, its cars had driven more than 1.4 million miles without being responsible, or even partially responsible, for an accident. But last month, in Mountain View, California, one of the tech giant’s autonomous Lexus SUVs scraped the side of a bus while travelling at two miles per hour. It had been trying to merge into the bus’s lane in order to avoid sandbags on the road ahead.
The crash raises many philosophical questions, as does the prospect of driverless cars more generally. Will we necessarily encounter problems when introducing a strictly rule-governed, logical machine into a human domain? Or should we look at such crashes as mere lessons to be learnt from, not signifiers of a deep-lying compatibility issue? I picked the brains of AC Grayling, philosopher and Master of the New College of the Humanities, who shared his thoughts on this issue and others in light of the recent crash.
AC Grayling: We are already quite used to automatic vehicles like the Docklands Light Railway in London and driverless underground trains. We have already got automatic systems in place… What is new is having a robot car finding its way in amongst traffic and pedestrians. Because we’re at the relatively early stages of development, it’s inevitable that there will be glitches, even serious ones, but we will learn, I suppose.
I don’t imagine that it’s beyond the wit of our electronic engineers and computer whizzes to create systems which would probably, on the whole and on average, be safer than human beings are. Everyone worries that, A) human fallibility would be built into these systems and, B) automatic systems, which are insentient, unreflective, and have no sense of compassion or concern for human beings, are somehow malevolent. But they aren’t. They’re neutral, and if properly designed, constructed and managed they should work OK, and probably be safer than human systems.
Alex Dean: OK, it’s not that the technological and human domains are necessarily incompatible, you think. Rather, it’s a contingent…