
Researchers aim to identify AI's decision-making process

2017-06-08 19:10 GMT+8
Editor Wang Xueying

If an artificial intelligence (AI) robot's decision-making process could be fully explained, would we trust it more?

With a 6.5 million US dollar grant from the Defense Advanced Research Projects Agency (DARPA), eight computer science professors from Oregon State University (OSU) have been tasked with developing a framework to explain how AI robots "think", OSU researchers said.

"Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust and having an explanation capability is one important way of building trust," said one of the team members.

Potential dangers arise because some developers don't even fully understand the AI system they've developed. /VCG Photo

AI has exploded in recent years. More and more autonomous systems can now perceive, learn, decide and act on their own, thanks to the successful development of deep neural networks, a branch of AI technology.

The problem, however, is that despite extensive study, humans still don't fully understand how an AI computer program learns independently.
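That opacity is easy to demonstrate even at a toy scale. The sketch below (illustrative only, not the OSU system or any DARPA tool) trains a tiny neural network in pure Python on the XOR task, which a single linear unit cannot solve. The network's behavior ends up encoded in a handful of learned weights; inspecting those numbers does not reveal any human-readable rule for why a given input produces a given output, which is the explainability gap the researchers are targeting.

```python
import math
import random

random.seed(0)  # fixed seed so the run is repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: (inputs, target)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 3  # hidden units
# Randomly initialized weights: input->hidden and hidden->output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One forward pass; returns hidden activations and the output."""
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

lr = 1.0
initial = total_loss()

# Plain stochastic gradient descent with hand-derived backprop.
for _ in range(2000):
    for x, t in DATA:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)          # gradient at the output
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final = total_loss()
# The learned weights now solve the task, but nothing in these raw
# numbers explains the decision in human terms.
print("loss:", round(initial, 4), "->", round(final, 4))
print("hidden weights:", w1)
```

The network demonstrably learns (the loss drops), yet the only artifact of that learning is a table of floating-point weights. Scaled up to the millions of parameters in a real deep network, this is why even a system's own developers may struggle to say why it made a particular decision.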

Some people point out that potential dangers arise from depending on an AI system which may not even be fully understood by its developers. This is why the OSU research team is trying to find out more about AI's decision-making process. 

More than "What is the meaning of life?", a pressing question these days is "How does AI think?" /VCG Photo

According to the OSU researchers, the team requires expertise in a number of research fields. Besides researchers in AI and machine learning, experts in computer vision, human-computer interaction, natural language processing and programming languages are also participating in the study.

To begin developing the system, the team will use real-time strategy games, like StarCraft, a staple of competitive electronic gaming, to train AI players that will explain how their decisions differ from those of humans.

Later, they will test the system in other fields, including robotics and unmanned aerial vehicles, said Alan Fern, principal investigator for the grant and associate director of the Collaborative Robotics and Intelligent Systems Institute, recently established at the OSU College of Engineering. He added that the research is crucial to the advancement of autonomous and semi-autonomous intelligent systems.

Source: Xinhua
