Lagging behind in AI? Here's one of the EU's fightback plans
Updated 11:31, 10-Apr-2019
By Pan Zhaoyi

An industry report published last month by the McKinsey Global Institute revealed the European Union's emerging weakness in the digital era, signaling to top European officials the urgency of bridging the gap.

The report shows that without faster and more comprehensive engagement in AI, Europe risks falling further behind the United States and China, the global leaders in the field.

Moreover, the field of competitors in the AI race is expanding, with countries including Canada, Japan and South Korea making strides.

At a time when new digital technologies such as artificial intelligence (AI) are increasingly being adopted, the EU urgently needs to roll out its fightback plans for a successful digital transformation.

VCG Photo

Pilot ethical rules to boost AI development

Over the years, AI has been applied across a wide range of sectors, from healthcare, energy consumption and car safety to climate change, financial risk management and cybersecurity threat detection.

The frontier technology brings us not only benefits but also new challenges and concerns.

In the medical industry, for example, AI is getting increasingly sophisticated at doing what humans do, but more efficiently, accurately and cheaply.

According to the American Cancer Society, a high proportion of mammograms yield false results, meaning one in two healthy women may at some point be told they have cancer. AI now enables the review and translation of mammograms 30 times faster with 99 percent accuracy, reducing the need for unnecessary biopsies.

But what happens in the remaining one percent of cases? If a misdiagnosis leads to a medical accident, who will be held accountable for the outcome -- the technology provider, the hospital or the physician?

Moreover, the wave of AI-powered autonomous driving has become irresistible. By 2025, the market for partially self-driving vehicles is expected to reach 36 billion U.S. dollars, according to data from Statista.

Similar concerns plague the auto industry. In the fatal Tesla crash of 2016, the car's owner was reportedly killed while the vehicle was running on Autopilot. Who should be blamed -- the owner or the manufacturer?

These are just two cases in point when it comes to AI ethics; the list goes on and on. They are forcing industry practitioners, academics and policymakers across the world to confront the issue.

The EU's top officials have been caught between supporters and opponents of cutting-edge technology development for years. But they clearly believe AI is the future. The problem is how to build a human-centric AI that people can trust.

The EU gave Google 90 days to end 'illegal' practices surrounding its Android operating system or face further fines, after slapping a record 4.34 billion euro (five billion U.S. dollars) anti-trust penalty on the US tech giant. /VCG Photo

EU's plan 

On Monday, the European Commission unveiled ethics guidelines under its AI strategy of April 2018, aiming to boost trust in AI while addressing some of these concerns.

The guidelines can be boiled down to "seven key requirements" for trustworthy AI:

- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.

- Transparency: The traceability of AI systems should be ensured.

- Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

- Societal and environmental well-being: AI systems should be used to enhance positive social change and improve sustainability and ecological responsibility.

- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

This summer, the Commission will launch a pilot phase involving stakeholders from different sectors and gather feedback, then evaluate the outcome in early 2020 to decide its next steps.

Photo from Google website

Last week, the farce of Google dissolving its week-old AI ethics board once again attracted public attention.

The Advanced Technology External Advisory Council, set up to oversee the company's AI development and address some of the most complex challenges arising under Google's AI Principles, such as facial recognition and fairness in machine learning, was disbanded after dissension over some members' controversial ideological stances.

According to Engadget, the Heritage Foundation, whose president Kay Coles James sat on the council, has a long history of climate change denial and anti-immigrant sentiment, and James herself has espoused those views and been vocally anti-trans and anti-equality.