With a 3D printed turtle or baseball, say MIT researchers, you have the tools you need to spoof a neural network in the physical world.
A group of MIT researchers has developed a method to reliably and consistently trick a neural network into misidentifying an object. And they did it using a 3D printed turtle.
What’s the point of such a thing? One application of Artificial Intelligence that’s growing in importance is recognizing things. Whether it’s detecting a face in a crowd or guiding a self-driving car through traffic, AI is doing the heavy lifting.
So how easy is it to fool an AI into “seeing” something it isn’t? Pretty easy, it turns out.
The technique employed is an evolution of something called an “adversarial image”: a picture containing specific patterns designed to fool the AI. What the image actually depicts isn’t significant; what matters is the pattern it contains.
This pattern can be overlaid as an almost invisible layer onto an existing picture, so that a tabby cat passes for a bowl of guacamole.
The trouble is, these adversarial images don’t always work as intended. Alterations like cropping, enlarging or tilting can weaken the effect, so that the AI sees through the tissue of lies and makes a correct identification.
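To give a feel for how such patterns are found, here is a heavily simplified sketch in plain NumPy. It is not the researchers’ actual method: it uses a toy two-class linear model standing in for a deep network, and the classic “fast gradient sign” idea of nudging every pixel slightly in the direction that raises the wrong class’s score, growing the nudge until the prediction flips.

```python
import numpy as np

# Toy stand-in for an image classifier: a linear model over a 100-"pixel" image.
# (Real attacks target deep networks; the gradient logic is the same idea.)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 100))   # score weights for 2 classes
x = rng.normal(size=100)        # the original "image"

def predict(v):
    return int(np.argmax(W @ v))

orig = predict(x)
target = 1 - orig               # the class we want the model to see instead

# Gradient of (target score - original score) with respect to the input pixels:
grad = W[target] - W[orig]

# Fast-gradient-sign step: push every pixel a small, uniform amount the
# "wrong" way, increasing the step size until the prediction flips.
eps, x_adv = 0.0, x.copy()
while predict(x_adv) == orig:
    eps += 0.05
    x_adv = x + eps * np.sign(grad)

print(f"prediction flipped from {orig} to {predict(x_adv)} "
      f"with per-pixel change {eps:.2f}")
```

The key point is that the perturbation is computed for one exact input; resample that input (crop it, enlarge it, photograph it from an angle) and the carefully tuned pattern no longer lines up with the gradient, which is why naive adversarial images are so fragile.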
The boffins at MIT set themselves the challenge of creating an adversarial image that would fool an AI every time. In the event, they succeeded admirably.
Not only did the team devise an algorithm that reliably generates adversarial examples; it also works for both two-dimensional images and 3D printed objects. These images and physical objects trick an AI regardless of the angle from which the object is viewed.
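That robustness comes from optimizing the perturbation over a whole distribution of transformations rather than a single view, an approach the researchers call Expectation Over Transformation (EOT). The following is a minimal sketch of that idea, again with a toy linear model in NumPy; random per-pixel scaling stands in for real rotations, crops and lighting changes, but the structure (average the attack gradient over sampled transformations) is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 100))   # toy 2-class linear "classifier"
x = rng.normal(size=100)        # the original "image" / object texture

def predict(v):
    return int(np.argmax(W @ v))

def transform(v):
    # Stand-in for viewpoint and lighting changes: random per-pixel scaling.
    return v * rng.uniform(0.5, 1.5, size=v.shape)

orig = predict(x)
target = 1 - orig
delta = np.zeros(100)           # the adversarial perturbation we optimize

# EOT idea: at each step, average the attack gradient over many sampled
# transformations, so the perturbation works across the whole distribution
# of views rather than for one exact image.
for _ in range(40):
    g = np.zeros(100)
    for _ in range(10):
        t = rng.uniform(0.5, 1.5, size=100)   # one sampled transformation
        # Gradient of (target - original) score of t*(x+delta) w.r.t. delta:
        g += t * (W[target] - W[orig])
    delta += 0.05 * np.sign(g / 10)

# Check how often the perturbed "object" fools the model across random views.
hits = sum(predict(transform(x + delta)) == target for _ in range(100))
print(f"fooled the model on {hits}/100 random views")
```

In the linear toy the math is almost trivial, but the design choice carries over: because the perturbation is trained against the expectation over transformations, no single crop, rotation or lighting change is enough to break it, which is what lets a printed object keep fooling the network from any angle.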
Most significantly, they fooled Google’s Inception v3 AI into thinking that a 3D printed turtle was a rifle. They also 3D printed a baseball that image recognition software thinks is an espresso. A baseball is less harmful than a gun, but the principle remains the same. Read the full paper on their results here.
These findings matter because Google isn’t the only AI susceptible to an adversarial image; it’s a problem that could afflict all neural networks. By figuring out how people can fool these systems — and demonstrating that it can be done relatively easily — the researchers are making AI recognition systems safer and more accurate.
Via: Popular Mechanics
License: The text of "Neural Network Tricked into Thinking 3D Printed Turtle is a Rifle" by All3DP is licensed under a Creative Commons Attribution 4.0 International License.