Artificial intelligence risks GM-style public backlash, experts warn

Researchers say social, ethical and political concerns are mounting and greater oversight is urgently needed

The emerging field of artificial intelligence (AI) risks provoking a public backlash as it increasingly falls into private hands, threatens people’s jobs, and operates without effective oversight or regulatory control, leading experts in the technology warn.

At the start of a new Guardian series on AI, experts in the field highlight the huge potential for the technology, which is already speeding up scientific and medical research, making cities run more smoothly, and making businesses more efficient.

But for all the promise of an AI revolution, there are mounting social, ethical and political concerns about the technology being developed without sufficient oversight from regulators, legislators and governments. Researchers told the Guardian that:

  • The benefits of AI might be lost to a GM-style backlash.
  • A brain drain to the private sector is harming universities.
  • Expertise and wealth are being concentrated in a handful of firms.
  • The field has a huge diversity problem.

In October, Dame Wendy Hall, professor of computer science at Southampton University, co-chaired an independent review of the British AI industry. The report found that AI had the potential to add £630bn to the economy by 2035. But to reap the rewards, the technology must benefit society, she said.

“AI will affect every aspect of our infrastructure and we have to make sure that it benefits us,” she said. “We have to think about all the issues. When machines can learn and do things for themselves, what are the dangers for us as a society? It’s important because the nations that grasp the issues will be the winners in the next industrial revolution.”

Today, responsibility for developing safe and ethical AI lies almost exclusively with the companies that build these systems. There are no testing standards, no requirement for AIs to explain their decisions, and no organisation equipped to monitor and investigate any bad decisions or accidents that happen.

“We need to have strong independent organisations, along with dedicated experts and well-informed researchers, that can act as watchdogs and hold the major firms accountable to high standards,” said Kate Crawford, co-director of the AI Now Institute at New York University. “These systems are becoming the new infrastructure. It is crucial that they are both safe and fair.”

Many modern AIs learn to make decisions by being trained on massive datasets. But if the data itself contains biases, these can be inherited and repeated by the AI.
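This inheritance of bias can be sketched with a deliberately simplified toy model. The data and labels below are invented for illustration only; real systems are vastly more complex, but the underlying failure mode is the same: a model that memorises skewed training statistics reproduces them as predictions.

```python
from collections import Counter, defaultdict

# Toy "training data": (occupation, gender) pairs with a skewed distribution.
# These pairs are invented for illustration, not drawn from any real dataset.
training_data = [
    ("cook", "woman"), ("cook", "woman"), ("cook", "woman"), ("cook", "man"),
    ("engineer", "man"), ("engineer", "man"), ("engineer", "man"), ("engineer", "woman"),
]

# A minimal "model": for each occupation, memorise the gender counts seen in training.
counts = defaultdict(Counter)
for occupation, gender in training_data:
    counts[occupation][gender] += 1

def predict(occupation):
    # Returns the most frequent gender for this occupation in the training data,
    # regardless of who actually appears in any new image or sentence.
    return counts[occupation].most_common(1)[0][0]

print(predict("cook"))      # the skew in the data becomes the model's answer
print(predict("engineer"))
```

Because the training sample over-represents one pairing, the model's prediction for any new "cook" is fixed by the skew, which is the pattern researchers found in real language and image-recognition systems.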

Earlier this year, an AI system used to interpret language was found to display gender and racial biases. Another, used for image recognition, categorised cooks as women, even when handed images of balding men. A host of others, including tools used in policing and prisoner risk assessment, have been shown to discriminate against black people.

The industry’s serious diversity problem is partly to blame for AIs that discriminate against women and minorities. At Google and Facebook, four in five technical hires are men. The white male dominance of the field has led to health apps that only cater for male bodies, photo services that labelled black people as gorillas, and voice recognition systems that did not detect women’s voices. “Software should be designed by a diverse workforce, not your average white male, because we’re all going to be users,” said Hall.

Poorly tested or implemented AIs are another concern. Last year, a driver in the US died when the autopilot on his Tesla Model S failed to see a truck crossing the highway. An investigation into the fatal crash by the US National Transportation Safety Board criticised Tesla for releasing an autopilot system that lacked sufficient safeguards. The company’s CEO, Elon Musk, is one of the most vocal advocates of AI safety and regulation.

Yet more concerns exist over the use of AI-powered systems to manipulate people, with serious questions now being asked about uses of social media in the run-up to Britain’s EU referendum and the 2016 US election. “There’s a technology arms race going on to see who can influence voters,” said Toby Walsh, professor of artificial intelligence at the University of New South Wales and author of a recent book on AI called Android Dreams.

“We have rules on the limits of what you can spend to influence people to vote in particular ways, and I think we’re going to have to have limits on how much technology you can use to influence people.”

Even at a smaller scale, manipulation could create problems. “On a day to day basis our lives are being, to some extent, manipulated by AI solutions,” said Sir Mark Walport, the government’s former chief scientist, who now leads UK Research and Innovation, the country’s new super-research council. “There comes a point at which, if organisations behave in a manner that upsets large swaths of the public, it could cause a backlash.”

Leading AI researchers have expressed similar concerns to the House of Lords AI committee, which is holding an inquiry into the economic, ethical and social implications of artificial intelligence. Evidence submitted by Imperial College London, one of the major universities for AI research, warns that insufficient regulation of the technology “could lead to societal backlash, not dissimilar to that seen with genetically modified food, should serious accidents occur or processes become out of control”.

Scientists at University College London share the concern about an anti-GM-style backlash, telling peers in their evidence: “If a number of AI examples developed badly, there could be considerable public backlash, as happened with genetically modified organisms.”

But the greatest impact on society may come from AIs that work well, scientists told the Guardian. The Bank of England’s chief economist has warned that 15m UK jobs could be automated by 2035, meaning large-scale retraining will be needed to avoid a sharp spike in unemployment. The short-term disruption could spark civil unrest, according to Maja Pantic, professor of affective and behavioural computing at Imperial, as could rising inequality driven by AI profits flowing to a handful of multinational companies.

Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, said that although technology often benefited society, it did not always do so equitably. “Recent technological advances have been leading to a lot more concentration of wealth,” he said. “I certainly do worry about the effects of AI technologies on wealth concentration and inequality, and how to make the benefits more inclusive.”

The explosion of AI research in industry has driven intense demand for qualified scientists. At British universities, PhD students and postdoctoral researchers are courted by tech firms offering salaries two to five times those paid in academia. While some institutions are coping with the hiring frenzy, others are not. In departments where demand is most intense, senior academics fear they will lose a generation of talent who would traditionally drive research and teach future students.

According to Pantic, the best talent from academia is being sucked up by four major AI firms: Apple, Amazon, Google and Facebook. She said the situation could lead to 90% of innovation being controlled by these companies, shifting the balance of power from states to a handful of multinationals.

Walport, who oversees almost all public funding of UK science research, is cautious about regulating AI for fear of hampering research. Instead he believes AI tools should be carefully monitored once they are put to use so that any problems can be picked up early.

“If you don’t continuously monitor, you’re in danger of missing things when they go wrong,” he said. “In the new world, we should surely be working towards continuous, real-time monitoring so that one can see if anything untoward or unpredictable is happening as early as possible.”

That might be part of the answer, according to Robert Fisher, professor of computer vision at Edinburgh University. “In theory companies are supposed to have liability, but we’re in a grey area where they could say their product worked just fine, but it was the cloud that made a mistake, or the telecoms provider, and they all disagree as to who is liable,” he said.

“We are clearly in brand new territory. AI allows us to leverage our intellectual power, so we try to do more ambitious things,” he said. “And that means we can have more wide-ranging disasters.”

Source: The Guardian
