On 14th February, one of Google’s driverless cars crashed. Up to that point, its cars had driven more than 1.4 million miles without being responsible, even partially, for an accident. But last month, in Mountain View, California, one of the tech giant’s autonomous Lexus SUVs scraped the side of a bus while travelling at two miles per hour. It had been trying to merge into the bus’s lane to avoid sandbags on the road ahead.
The crash raises many philosophical questions, as does the prospect of driverless cars more generally. Will we inevitably encounter problems when introducing a strictly rule-governed, logical machine into a human domain? Or should we treat such crashes as lessons to be learnt from, not signs of a deeper compatibility issue? I picked the brains of AC Grayling, philosopher and Master of the New College of the Humanities, who shared his thoughts on these questions in light of the recent crash.
AC Grayling: We are already quite used to automatic vehicles like the Docklands Light Railway in London and driverless underground trains. We have already got automatic systems in place… What is new is having a robot car finding its way amongst traffic and pedestrians. Because we’re at a relatively early stage of development, it’s inevitable that there will be glitches, even serious ones, but we will learn, I suppose.