Will artificial intelligence make humans more like gorillas?
Updated 14:05, 13-Jun-2019
By Simon Morris

When researchers at the Oxford Internet Institute studied bots that edit the online encyclopedia Wikipedia, they discovered that many of them kept overwriting each other's corrections, sometimes for years on end.

The bots wasted more effort than snarky human editors, who drop their disagreements more readily.

Bots have been around almost since the birth of the world wide web in the early 1990s. They are one of its simplest life forms, its single-celled creatures, so their relentless tussles in the digital undergrowth may be just an amusing sideshow.

But at the other end of the scale, artificial intelligence (AI), fed on astronomical accumulations of data and exploding computing power, is evolving to the point where, we are warned, it could pose an existential threat.

The Oxford philosopher Nick Bostrom, for instance, suggests we could one day end up like gorillas, whose survival or extinction depends on decisions made by a more powerful intelligence – humanity – rather than on their own actions.

For us, that would only happen if AI crosses a critical threshold from being very good at specific narrow tasks, such as playing chess, to acquiring general, multipurpose intelligence like Homo sapiens, which could then evolve extremely rapidly – possibly within days, believes Bostrom – into a "superintelligence."

But Bostrom's fears are not universally shared.

Professor Amitai Etzioni from George Washington University in the U.S. believes they should be set aside in favor of the benefits of AI in the here and now.

"People who raise all sorts of end-of-the-world doom scenarios about the machines coming to dominate us and outsmart us… they would delay, for this movie-like fantasy Hollywood kind of stuff, the enormous number of lives AI could save now just by making cars partly autonomous."

"Would I want to delay for one day introducing autonomous brakes because someone says that one day these machines will come to dominate me? Hell, no."

However, the story of the feuding bots shows the law of unintended consequences can operate even at a relatively simple level. The potential for unintended and undesirable outcomes grows as complexity increases.

So how do we ensure AI serves us, rather than the other way round?

How, for instance, do we ensure it doesn't embed bias against certain races, disadvantage women, or throw millions out of work with no thought for the social consequences?

One answer may be to build ethics into AI design and regulation now, rather than waiting to see what technologies emerge.

Professor Mark Coeckelbergh, a moral philosopher at the University of Vienna, says transparency, explainability and accountability are essential.

"I think it’s already a problem for us now because even classic AI systems, people don't understand them.

"For example, a judge takes a decision based on AI that's not as smart as we think, but still based on a database of information about previous decisions. I think people have the right to know what’s happening."

Explaining the reasoning behind decisions is likely to become harder as the different branches of AI advance. "Deep learning," for instance, doesn't follow a pre-programmed decision tree that a person can trace.

"It works with different layers that simulate neurons," says Coeckelbergh.

"That makes it not possible to say how the machine makes each step, so there's only the outcome. That's a problem if the human then has to explain what’s happened, why the decision is taken."

There are problems with regulating AI, though. First, the implications of AI are global, but regulation generally isn't. As Professor Martin Rees, one of the UK's most eminent scientists, points out, if something can be done in AI, someone somewhere will do it.

China and the OECD have this year published sets of principles underpinning their approaches to AI, while the European Commission has published guidelines. However, all that is still a long way from global governance of a global issue.

Second, AI has the potential to achieve prodigious breakthroughs in science and transformative advances in productivity. As Etzioni fears, any over-regulation would risk stifling that.

Cecilia Bonefeld-Dahl, director general of Digital Europe, which represents 36,000 digital businesses, is keen to make sure the right balance is struck.

She believes, for instance, that the European Union's General Data Protection Regulation (GDPR), introduced in 2018, slowed things down because businesses were not sufficiently involved early on.

Digital regulation policy in general has to be much more agile, she told a public meeting in Berlin.

"Talk to the unions, talk to the companies, make sure you speak to the smallest companies to see what is the reaction so you... are already on the way, you change the mindset and the readiness to receive it so you don't lose momentum."

The European Commission's guidelines on AI ethics are designed to ensure AI produced in Europe is "trustworthy."

European policymakers believe that, in the long term, trustworthy AI will not only avert the machine domination Nick Bostrom fears but also give the EU a competitive advantage in a world that may have limitless quantities of data but a short supply of trust.

As Professor Coeckelbergh says: "There are machine-learning systems that are not transparent in terms of how they come to a decision and it's also technology that's particularly difficult to understand for the wider public, so I think if we are going, as citizens, as consumers, to be affected by these technologies, I think it's important to know what they're doing with us – and especially for people to take the responsibility of letting us know that they are using it because often we don’t know that they are using it."

(Top image via VCG)