Turtle or rifle? Google AI tricked by MIT students
By Guo Meiping
["china"]
From facial recognition technology on smartphones to self-driving systems in vehicles, artificial intelligence (AI) has permeated many areas of human life. But are these smart systems reliable?
A Massachusetts Institute of Technology (MIT)-based team called Labsix has fooled Google’s AI InceptionV3 image classifier with an “adversarial object” – a 3D-printed turtle that the system identified as a rifle.
The team tricked Google’s InceptionV3 into classifying a tabby cat as guacamole. /Photo via Labsix
“Adversarial objects” can take the form of both 2D pictures and 3D models. By carrying a specially crafted pattern, they can trick AI systems into making incorrect classifications.
In a video published by Labsix, a normal 3D-printed turtle was consistently classified correctly by InceptionV3. However, the system was fooled by the “adversarial” turtle model and misclassified it as a rifle from every angle.
“We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation,” the team said on its website on Tuesday.
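The approach the team describes resembles what researchers often call an “expectation over transformation” attack: a perturbation is optimized so the classifier still outputs the attacker’s chosen label after the image is randomly rotated, shifted, or blurred. The sketch below is a rough, hypothetical illustration of that idea in PyTorch, not Labsix’s actual code; the target class index, perturbation bound, and transformation choice are all assumptions for demonstration.

```python
# Hypothetical sketch of a transformation-robust targeted attack (not Labsix's code).
import torch
import torch.nn.functional as F
from torchvision import models, transforms

model = models.inception_v3(weights="DEFAULT").eval()
target_class = 413                           # assumed target label index for illustration

image = torch.rand(1, 3, 299, 299)           # stand-in for a photo of the object
delta = torch.zeros_like(image, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.01)

def random_transform(x):
    # crude stand-in for sampling blur/rotation/zoom/translation
    angle = torch.empty(1).uniform_(-30.0, 30.0).item()
    return transforms.functional.rotate(x, angle)

for step in range(200):
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(8):                       # average the loss over sampled transformations
        adv = torch.clamp(image + delta, 0, 1)
        logits = model(random_transform(adv))
        loss = loss + F.cross_entropy(logits, torch.tensor([target_class]))
    (loss / 8).backward()
    optimizer.step()
    delta.data.clamp_(-0.05, 0.05)           # keep the perturbation visually small
```

Because the loss is averaged over many random transformations during optimization, the resulting perturbation tends to fool the classifier from many viewpoints rather than only one.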
Besides the turtle, the team also printed a 3D baseball that was identified by InceptionV3 as an espresso at every angle. /Photo via Labsix
“Our process works for arbitrary 3D models – not just turtles!” the team added.
Besides the turtle, Labsix also printed a 3D baseball that was identified by InceptionV3 as an espresso at every angle.
The result raises concerns, especially for AI programs that rely on classification. Imagine someone tricking your smartphone into unlocking itself or making payments simply by presenting adversarial objects.