Artificial intelligence must avoid a backlash
Barely a month seems to pass without a technology company announcing yet another breakthrough in artificial intelligence. Like many of those garnering the most attention, Alphabet subsidiary DeepMind’s latest success — in diagnosing eye disease more accurately than doctors — represented a triumph of AI over human capabilities. Yet the kind of opportunities presented by this pioneering work are at risk of being lost due to increasing concerns, both real and unwarranted, about the disruption they may cause. Less hyperbolic and more effective public engagement is needed.
Prescient and agile regulation has a role to play in managing the tension between enabling innovation to improve lives and safeguarding the public interest. Getting this right will require much better communication by the scientific community and regulatory authorities. Busy as they are, governments should dedicate more resources to engaging the public on the advantages that artificial intelligence will bring.
Jim Al-Khalili, incoming president of the British Science Association, has warned about the risks of failing to do so. Prof Al-Khalili, speaking ahead of the British Science Festival in Hull this week, foresaw a potential public backlash, similar to that provoked by genetically modified crops. Leaving the public behind risks provoking a similar rejection of the technology.
Many of AI’s effects will be positive. Disease diagnosis will become faster and more reliable, allowing people to live longer, healthier lives. Driverless vehicles, and smarter public transport and public space design, will improve urban quality of life. Simultaneous translation software will increase opportunities for global collaboration by removing language barriers.
Given that some of these innovations will touch the fabric of our lives, sentiment may run high. Failure to communicate the advantages could produce widespread public hostility and over-regulation. This could stymie technological progress, some of the fruits of which could deliver great economic benefits — McKinsey estimates that AI could add $13tn a year to the global economy by 2030.
The complexity of the science underlying artificial intelligence makes it difficult to explain clearly. It is nevertheless incumbent on governments to insist that the companies designing the software articulate how it works before it enters widespread use.
Unlike foods made from genetically modified crops, which people can choose not to buy, AI will be harder, and in some cases impossible, to opt out of. It is the ultimate “invisible hand”, one that is already at work in almost every aspect of our lives.
Prof Al-Khalili is not a lone voice in the wilderness. OpenAI and the Partnership on AI are among several initiatives seeking to raise public awareness. This year the UK government set up an AI council to champion responsible adoption of the technology. To date, these efforts have had little impact outside the tech community. Those involved would do well to look at how the health research community is tackling public engagement. Almost all of the £1bn in annual investment and research grants offered by the Wellcome Trust comes with an expectation that public engagement be integrated into a project’s strategy.
Educating the public about the sector, let alone regulating it, is a huge endeavour. But it is commensurate with the magnitude of the impact of AI. It is too soon to tell what the full consequences of artificial intelligence’s role in decision-making will be. It is not premature to start preparing for them.