Aggregating the views of 2,778 top AI researchers, a survey released in January puts “the chance of unaided machines outperforming humans in every possible task” at 10 per cent by 2027 and 50 per cent by 2047—if science continues unabated. This stunning forecast underscores urgent questions about how best to regulate AI-driven job automation and manage its destabilising effects. But there is a different, more philosophical question worth asking: suppose we could magically achieve a seamless transition to the hyper-automated future that AI seems to promise, avoiding all harm and disruption in the process—is this a world we should want to live in? Is there value in humans performing the tasks that make societies function, many of which may soon be done better and more efficiently by machines?
There are plenty of reasons to worry about a world dominated by AI, but often overlooked is the dangerous way it confuses us about the unique nature and value of human intelligence. This confusion is not only a problem in some hypothetical, dystopian future. It has begun to trip us up here and now. Philosophers, neuroscientists and other experts disagree about how to characterise the distinguishing features of human intelligence, but most see a fundamental difference in kind between it and its artificial counterpart. Take the wonder of so-called “machine learning” that is ChatGPT. By identifying ever more complex patterns in vast sets of data, it is becoming increasingly adept at producing statistically probable outputs in response to human cues, thereby seeming to mimic human thought and language. This is not at all what happens when humans think or come up with intelligent explanations of the world.
As Noam Chomsky and co-authors put it in an opinion essay in the New York Times last year, machine-learning programs “are stuck in a prehuman or nonhuman phase of cognitive evolution”. Human intelligence is not just about description and prediction based on existing data. It is about explanation. This requires understanding why something is the case, which, in turn, involves us thinking counterfactually about what could and could not possibly be the case. In exercising our intelligence, we seek answers to questions that we ourselves have posed, questions that originate in the things we care about. None of this is true of AI in its present form.
There is another vital distinction between artificial and human intelligence that is just plain common sense. Although we may be awed by the human ingenuity that built AI, we value AI itself for the impressive outputs it generates and the speed at which it does so. The processes by which it accomplishes these things will remain objects of ignorance and indifference for most people.
The contrast with human intelligence could not be starker. Unlike AI, human intelligence is brought about and expressed through human learning, something we rightly value quite apart from the outputs it eventually allows us to exploit. Put differently, it is not just the possession of knowledge that is valuable to us; rather, its hard-earned acquisition—through learning—is what makes us the free and civilisation-creating beings that we are.
The luminaries of the Enlightenment understood this well, which is why they put such a premium on education. Kant and Mill emphasised that the acquisition of knowledge involves developing a capacity to think for ourselves. This, in turn, is integral to human liberation in the broadest sense: a dynamic process of becoming that cannot be separated from the moral development of individuals and the advancement of society as a whole.
Consider the virtues that human learning instils: not just curiosity and a respect for knowledge, but also the tolerance we need to cope with frustration and uncertainty. Through learning we become skilled at argument, able to assess evidence, draw reasonable conclusions, critically examine our prejudices and collaborate with those who think differently than we do. We open ourselves to new and unforeseen possibilities, we test ideas and come to appreciate all that we do not yet know. For these reasons and more, learning is both humbling and ennobling. It is the source of human character.
A world dominated by AI, on the other hand, tempts us to take a merely instrumental view of intelligence, cut off from learning and its value—a world in which human intelligence will easily come to seem obsolete. After all, why bother with the painstaking process of figuring something out when AI gives us a much faster route to an answer? A society devoted to finding ever more efficient shortcuts to knowledge is one that will fundamentally enfeeble our capacity to learn, heralding danger of every imaginable kind. The gravest dystopian scenario we face isn’t rogue AI, nor intelligent machines stealing our jobs, but a society that lacks the virtues only human learning can impart.
Each month Sasha Mudd will offer a philosophical view on current events.