It is both a threat and an opportunity
March 1, 2017
Developments in artificial intelligence are raising many fears about the future. While we are still a long way from general artificial intelligence in the style of sci-fi novels, the power of techniques such as machine learning and deep learning is enabling computers to replicate many “human” tasks in ways they never could before.
Data and algorithms are already behind many decisions that have a real impact on people’s lives. From insurance to health to the news that we see on social media feeds, predictive models and machine learning are influencing and controlling our access to services and information today.
As data volumes and computing power continue to increase at high speed, the power of AI to make decisions and provide intelligent prompts is only going to get greater.
AI techniques will also have a major impact on jobs. In the profession of accountancy, we are well aware of the potential for machine learning to spread across a wide range of tasks done by accountants and finance professionals. Of course, accountancy has always been subject to technological pressures and accountants have generally adopted new technologies to improve the way they work. But new waves of AI could be game changing in this regard, taking over large amounts of expert decision-making and moving into areas which we have always considered to be dominated by “human judgement.”
So, is this a threat or an opportunity? Of course it is both. Technology is neither inherently good nor bad; it depends on how we use it. And we do have choices here—on how we integrate considerations about control, ethics or equality into algorithmic decisions, for example.
How, for instance, do we trade off the highest levels of accuracy against appropriate levels of oversight and governance? We don’t know exactly how the most sophisticated models, such as deep learning, actually work, so we need to think about whether this really matters. In some cases, we just want the best answer and don’t really care how it was arrived at. But there are other times when we need to be able to explain decisions to ensure trust and accountability. In these cases, we might need to focus on better understanding of the algorithmic process rather than on optimal decisions.
We need to understand better when algorithms are entrenching systemic bias and when they are overcoming human prejudices. We also need to think about the risks of personifying bots, and of seeing and trusting them as if they were human. They are not human—they are tools, not agents—and we need to retain our critical faculties around them.
So we need to encourage debate across society to make sure that we use these technologies in ways that are ultimately beneficial and that take account of many kinds of concerns. This means the technology industry must recognise the longer-term impacts of these developments and accept some responsibility for the wider social consequences of its work. It needs policy-makers to get on top of the agenda and start to think very seriously about the impact of AI across all policy areas, from skills to healthcare to regulatory frameworks. We cannot confine debate to traditional technology policy areas.
It also needs business and citizens more broadly to become more engaged in the discussions and recognise their stakes in defining the future. We have the opportunity to redesign many aspects of our world through very powerful technologies. We must focus on doing it in ways that we trust and that enhance rather than damage lives.