The video shows a pair of hands rotating a 3D-printed turtle in front of an image classification system, and the results are what one might expect: terrapin, mud turtle, loggerhead. But then a new turtle with a different texture is presented. This time the results are more surprising: the algorithm consistently classifies the image as a rifle, not a turtle.
This demonstration was part of an experiment conducted last year by MIT's Computer Science and Artificial Intelligence Lab. Anish Athalye, a PhD candidate at MIT and author of a paper based on this research, said at the time that these results have concerning implications for the technology underlying many of the advancements being made in AI.
“Our robust adversarial examples, of which the rifle/turtle is one demonstration, show that adversarial examples can be made to reliably fool ML models in the physical world, contributing to evidence indicating that real-world machine learning systems may be at risk,” Athalye told GCN.
And this is not the only example of this kind of adversarial AI being demonstrated in a laboratory setting.
In 2016, researchers at Carnegie Mellon University showed how a pair of specially designed glasses could trick facial recognition systems. More recently, Google researchers created a sticker that could trick image recognition systems into classifying almost any picture containing the sticker as a toaster.
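All of these attacks exploit the same underlying weakness: small, carefully chosen changes to an input can push a learned model toward a wildly wrong answer. The sketch below, assuming a hypothetical PyTorch image classifier, shows the fast gradient sign method, one of the simplest ways to craft such an adversarial example; the physical-world demonstrations above rely on more robust variants of the same basic idea.

```python
# A minimal sketch of the fast gradient sign method (FGSM). The model, image
# tensor and label are placeholders for a real classifier and input; this is
# not the technique used in the turtle demo, just the simplest illustration
# of how a tiny, targeted perturbation can flip a model's answer.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.01):
    """Return a perturbed copy of `image` nudged toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                       # classifier's raw scores
    loss = F.cross_entropy(logits, true_label)  # how wrong the model is on the true label
    loss.backward()                             # gradient of the loss w.r.t. each pixel
    # Step every pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()     # stay inside the valid pixel range
```

To fool a camera from many angles and lighting conditions, as in the turtle demonstration, attackers refine this idea so the perturbation survives real-world transformations rather than working on a single static image.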
These examples all raise a serious question: How can developers secure applications that rely on machine learning or other techniques that learn from -- and make decisions based on -- historical data? Multiple experts told GCN there are not many best practices at this point for securing AI, but it is now the focus of significant research.
Lynne Parker, the assistant director of artificial intelligence in the White House Office of Science and Technology Policy, spoke about these challenges and said they are a particular concern for leaders at the Department of Defense.
“They want to have typically provable systems -- control systems -- but … the current approach to proving that systems are accurate does not apply to systems that learn,” Parker told GCN.
Anton Chuvakin, a security and risk management analyst at Gartner, echoed this assessment, saying AI systems lack the transparency that makes it possible to trust their output.
“How do I know the system has not been corrupted? How do I know it has not been affected by an attacker?” Chuvakin asked. “There is no real clear best practice” for confirming why a system is providing a particular response.
These concerns are top of mind for organizations beginning to put AI systems in place. A recent Deloitte survey of 1,100 IT executives currently working with the technology found security to be respondents' top concern.
Jeff Loucks, an author of the report and the executive director of Deloitte’s Center for Technology, Media and Telecommunications, said this concern can be broken down into several specific areas.
The survey found these concerns are causing some executives to hit pause on their AI projects, some of which may have started as pilots that did not incorporate cybersecurity from the start.
Government has been hesitant to adopt AI because of these concerns, Parker said. Securing AI is one research area the Pentagon’s recently created Joint Artificial Intelligence Center plans to look at, she said, to “get at the issue of how do we leverage the AI advancements that have been made across all of DOD” and turn them into useful applications.
“This is a problem,” she said. “It is a research challenge because of the nature of how these AI systems work.”
The problem isn’t just making sure an algorithm can tell a turtle from a gun. Many AI systems are built on proprietary or sensitive training data, and an attacker’s ability to reconstruct an algorithm’s underlying data and logic is a big concern and challenge in the current AI space, according to Josh Elliot, the director of artificial intelligence at Booz Allen Hamilton.
“If you think about the ability to reverse engineer an algorithm … you’re effectively stealing that application and you’re displacing the competitive advantage that company [that developed it] may have in the marketplace,” Elliot said.
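A rough sketch of the threat Elliot describes, using made-up stand-ins for the proprietary model and the attacker's query data rather than any real system, might look like this: an attacker who can only see a model's predictions trains a surrogate that mimics it.

```python
# Illustrative model-stealing sketch: the "victim" model, its private training
# data and the attacker's queries are all synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The proprietary model: the attacker can query it but cannot inspect it.
X_private = rng.normal(size=(1000, 5))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = RandomForestClassifier().fit(X_private, y_private)

# The attacker sends queries and collects the model's answers...
X_queries = rng.normal(size=(2000, 5))
stolen_labels = victim.predict(X_queries)

# ...then trains a surrogate that reproduces the proprietary model's behavior.
surrogate = LogisticRegression().fit(X_queries, stolen_labels)
agreement = (surrogate.predict(X_queries) == stolen_labels).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of queries")
```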
Elliot’s colleague, Greg McCullough, who leads the development of AI-powered cybersecurity tools for Booz Allen Hamilton, said the company has used similar reverse-engineering techniques to understand the domain generation algorithms that sophisticated hackers use. DGAs churn out a steady stream of new domains for malware to communicate through once it is inside a network, changing domains quickly enough to slip past traditional roadblocks like firewalls.
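As an illustration only, and not any particular malware family's scheme, a toy DGA seeded with the current date might look like the sketch below; the malware and its operator can each run the same code and rendezvous at the same throwaway domains without ever hard-coding a blockable address.

```python
# Toy domain generation algorithm: a shared seed (here, today's date) drives a
# deterministic stream of disposable domain names. Real DGAs are far more
# elaborate, but the principle is the same.
import hashlib
from datetime import date

def generate_domains(seed: str, count: int = 5, length: int = 12):
    domains = []
    state = seed.encode()
    for _ in range(count):
        state = hashlib.sha256(state).digest()                    # advance the generator
        name = "".join(chr(ord("a") + b % 26) for b in state[:length])
        domains.append(name + ".com")
    return domains

print(generate_domains(date.today().isoformat()))
# Different every day, but identical for anyone who knows the seed.
```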
McCullough said Booz Allen has had success using convolutional neural networks trained on domains generated by DGAs to predict what other generated domains might look like, in effect reverse engineering the DGA.
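A minimal sketch of that kind of character-level convolutional classifier, built here with Keras and trained on a toy handful of placeholder domains rather than Booz Allen's actual model or data, might look like this:

```python
# Character-level CNN that scores how likely a domain name is machine-generated.
# Layer sizes and the four-domain "dataset" are illustrative placeholders.
import numpy as np
import tensorflow as tf

MAX_LEN = 40                                            # pad or trim domains to a fixed length
VOCAB = "abcdefghijklmnopqrstuvwxyz0123456789-."
CHAR_TO_ID = {c: i + 1 for i, c in enumerate(VOCAB)}    # 0 is reserved for padding

def encode(domain: str) -> np.ndarray:
    """Turn a domain name into a fixed-length sequence of character ids."""
    ids = [CHAR_TO_ID.get(c, 0) for c in domain.lower()[:MAX_LEN]]
    return np.array(ids + [0] * (MAX_LEN - len(ids)))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(len(VOCAB) + 1, 32),      # learn a vector per character
    tf.keras.layers.Conv1D(64, 4, activation="relu"),   # scan 4-character windows
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability the domain is DGA-generated
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy training data: 1 = DGA-like gibberish, 0 = benign. A real system would
# train on large labeled feeds of known-bad and known-good domains.
domains = ["qmzkxwvtrbph.com", "xjqzvplwnd.net", "gcn.com", "mit.edu"]
labels = np.array([1, 1, 0, 0])
x = np.stack([encode(d) for d in domains])
model.fit(x, labels, epochs=5, verbose=0)
print(model.predict(x))   # scores near 1 suggest a machine-generated domain
```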
But the same techniques used to strengthen defenses can also be turned around by bad actors -- for example, training an AI application to find the kinds of attacks a target’s defenses don’t catch, he said.
“Nation-states have invested heavily in this area over the last few years, and now it's been commoditized and it's really causing quite a problem for every commercial organization that we’ve seen so far,” Elliot said.