In a technology experience zone at the 2019 Summer Davos in Dalian, a system that uses artificial intelligence (AI) to analyze a person's personality attracted a crowd.
The system uses face-scanning technology to assess a person's attributes, including their level of kindness, attractiveness and responsibility.
According to a project member, the system was developed to get people thinking about the ethics of AI, including AI bias. The ethics of AI, a global concern, was also raised during this year's Summer Davos.
An AI-powered system at the 2019 Summer Davos that uses face-scanning technology to analyze a person's personality. /CGTN Photo
Eddan Katz, project lead on AI at the World Economic Forum, told CGTN that AI bias needs to be thought through "earlier rather than later."
He said the data AI tools receive is essential to their evolution.
"That's where it learns from; that's where it gains its strength," he said. "Making sure that there is a diversity in the inputs into the process is actually core to a well-functioning algorithm."
"There is a window now in which we can affect these norms to make sure that diversity is actually a fundamental value going into how you develop this technology," Katz told CGTN.
He said if we don't take action soon, some of the existing bias can be baked into AI systems and persist much longer than it otherwise would.
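To illustrate the point about bias being "baked in," here is a minimal sketch, not drawn from the article, of how a model trained on skewed historical decisions can reproduce that skew for new cases. The scenario, data and function names are hypothetical and purely illustrative.

```python
# A toy example of bias being "baked into" a model: a classifier trained on
# skewed historical approval decisions reproduces that skew on new applicants.
import random

random.seed(0)

def make_historical_records(n=10_000):
    """Simulate past decisions in which group 'B' was approved less often
    than group 'A' at the same skill level (the biased input data)."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()                      # 0.0 .. 1.0, same distribution for both groups
        bias_penalty = 0.2 if group == "B" else 0.0  # historical human bias against group B
        approved = skill - bias_penalty > 0.5
        records.append((group, skill, approved))
    return records

def train(records):
    """'Learn' a per-group approval threshold from the historical data."""
    thresholds = {}
    for group in ("A", "B"):
        approved_skills = [s for g, s, ok in records if g == group and ok]
        # The lowest skill that was ever approved becomes the learned cutoff.
        thresholds[group] = min(approved_skills)
    return thresholds

def predict(thresholds, group, skill):
    return skill >= thresholds[group]

model = train(make_historical_records())

# Two applicants with identical skill are treated differently,
# because the bias in the inputs survived training.
print(model)                     # group B's learned cutoff is roughly 0.2 higher
print(predict(model, "A", 0.6))  # True
print(predict(model, "B", 0.6))  # False
```

Nothing in the toy "training" step is malicious; the disparity comes entirely from the skewed inputs, which is why Katz stresses diversity in the data going into these systems.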
Eddan Katz, project lead on AI at the World Economic Forum, answers questions from CGTN. /CGTN Photo
On the question of whether regulations should be set to monitor AI, Katz said there is room for regulation in some contexts.
"But it has to do with reasons for which we would have laws anyhow, including data protection, including intellectual property, accountability. Those are regulations we would have regardless of whether or not we have AI," he said, adding stakeholders need to make sure that regulation doesn't lead before the development of the technology.
"The thing that most concerning when we start thinking about AI is the fact that decisions are being made more remote from human beings," said Katz. "We are not comfortable with, yet, how decisions will be made on our behalf in an automated way."
(Video by CGTN's Qi Jianqiang)