Malware controlled by artificial intelligence could create more convincing spam, avoid security detection, and better adapt itself to each target, says a new report from Malwarebytes.
Artificial intelligence (AI) is already playing a role in combating malware and other threats. Through machine learning, security software can now do more than match files against a database of known malware samples; it can also detect future versions and similar variants of the same malware. But what if the very AI that helps organizations fight these threats were co-opted by cybercriminals? What if malware became smarter, tougher, and nearly undetectable through AI? Those are the questions explored in a report released Wednesday by Malwarebytes.
Entitled When Artificial Intelligence Goes Awry: Separating Science Fiction from Fact, the report surveys the current landscape of AI and security software and envisions a near future in which AI-enhanced malware poses a far more potent threat. Though AI plays a beneficial role in malware detection, it comes with its own weaknesses. For example, Malwarebytes explains, AI-based detection needs constant tuning so it catches as much actual malware as possible without triggering too many false positives. But it's this very weakness that malware authors can exploit.
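To make that trade-off concrete, here is a minimal, purely illustrative sketch (not from the report) of a toy machine-learning detector trained on synthetic data: raising or lowering the decision threshold trades detection rate against false positives. All feature values, class names, and thresholds are invented for the example.

```python
# Toy sketch (illustrative only, not any vendor's actual tooling): an ML
# detector's decision threshold trades detection rate against false positives.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "file features": benign samples cluster near 0, malicious near 1.
benign = rng.normal(0.0, 1.0, size=(1000, 5))
malicious = rng.normal(1.0, 1.0, size=(1000, 5))
X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability a sample is "malware"

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    detection_rate = flagged[y_test == 1].mean()       # malware actually caught
    false_positive_rate = flagged[y_test == 0].mean()  # benign files flagged
    print(f"threshold={threshold}: detection={detection_rate:.2f}, "
          f"false positives={false_positive_rate:.2f}")
```

Lowering the threshold catches more malware but flags more clean files; raising it does the reverse, which is why the tuning never stops.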
By determining what AI-powered security software looks for when trying to identify malware, cybercriminals can adapt their payloads to more easily avoid detection. Malware writers could even dirty their samples to trick the AI into flagging legitimate files as malware, thus triggering a lot of false positives.
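The "dirtying samples" idea can be illustrated the same way. The sketch below, again built on invented synthetic data and not drawn from the report, shows that when mislabeled benign examples slip into a detector's training set, the model starts scoring clean benign files as more suspicious, and false positives climb.

```python
# Hedged, purely synthetic sketch of the "dirtied training data" concern.
# No real malware features appear here; every value is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training data: benign files cluster near (0, 0), malware near (1.5, 1.5).
benign = rng.normal(0.0, 1.0, size=(1000, 2))
malware = rng.normal(1.5, 1.0, size=(1000, 2))
X = np.vstack([benign, malware])
y_clean = np.array([0] * 1000 + [1] * 1000)

# "Dirty" the training set: relabel 25% of the benign samples as malware.
y_poisoned = y_clean.copy()
y_poisoned[rng.choice(1000, size=250, replace=False)] = 1

holdout_benign = rng.normal(0.0, 1.0, size=(2000, 2))  # clean files never seen in training

for name, labels in (("clean training labels", y_clean),
                     ("poisoned training labels", y_poisoned)):
    model = LogisticRegression().fit(X, labels)
    scores = model.predict_proba(holdout_benign)[:, 1]
    print(f"{name}: mean malware score on clean files = {scores.mean():.2f}, "
          f"flagged as malware = {(scores >= 0.5).mean():.1%}")
```

With the poisoned labels, the average "malware" score assigned to clean files rises, dragging the false-positive rate up with it.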
Cybercriminals and bad actors could exploit AI in other ways without having to build it into their own malware. Using machine learning, they could solve CAPTCHAs to slip past this common bot-detection check. They could use AI to scan social media and pick out the right people to target with spear phishing campaigns. And they could craft more convincing spam tailored to each potential victim.
Though no examples of AI-enabled malware are currently floating about in the wild, the report envisions several ways in which such malware could pose a threat in the near future.
Worms could evade detection by learning what tripped them up in the past and steering clear of that behavior. For example, if AI determines that the code behind a worm is what triggered its detection, the worm's authors could simply rewrite that code. If its behavior was what gave it away, they could introduce a degree of randomness to slip past pattern-matching rules.
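A deliberately naive, defender-side example shows why pattern matching alone is brittle: a scanner that looks for one exact byte sequence flags the original sample but misses a copy with trivial filler inserted. The signature and strings below are placeholders, not anything taken from real malware or a real scanner.

```python
# Naive illustration of brittle exact-match signatures. Placeholder data only.
SIGNATURE = b"EXAMPLE-MARKER-123"

def naive_scan(data: bytes) -> bool:
    """Flag the data only if it contains the exact signature bytes."""
    return SIGNATURE in data

original = b"header " + SIGNATURE + b" trailer"
padded = b"header " + b"EXAMPLE-" + b"##" + b"MARKER-123" + b" trailer"

print(naive_scan(original))  # True: the exact byte sequence is present
print(naive_scan(padded))    # False: two filler bytes break the exact match
```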
Trojans could create new versions of themselves to trick detection methods. The Swizzor trojan, as mentioned in the report, already employs a similar strategy by continually downloading new versions of malicious code.
IBM created its DeepLocker attack tool specifically to show how existing AI technologies could enhance malware attacks. DeepLocker relies on AI-powered stealth: it masquerades as video conferencing software and lies dormant until it identifies its intended victim through facial recognition, voice recognition, and other attributes, and only then springs its attack. Cybercriminals could outfit their own malware with similar intelligence to infect systems without being detected.
Another area already being widely exploited is social engineering. Fake news has created havoc around the world, blamed for influencing the 2016 US election, swaying the UK's decision to leave the European Union, and triggering deadly mob violence in India. As discussed in a 2018 Malwarebytes blog post, deepfakes are fake videos of real people created by using AI to match someone's mouth and face to words spoken by someone else. In one scenario from the report, a deepfake could produce a video message of your boss mouthing someone else's words, instructing you to wire cash to an account for a business trip. This type of technology could lead to highly convincing spear phishing attacks.
In short, AI in the hands of cybercriminals could pave the way for malware that's harder to detect, more precisely targeted attacks, more convincing spam and phishing, more effective propagation methods, and more persuasive fake news.
How far away are we from all this? Malwarebytes expects threats that turn AI against AI-based defenses to surface within the next one to three years, though only in limited ways at first. The question now is: What can we do to guard against this type of threat?
Being proactive is one piece of advice shared in the report. Consumers, organizations, and employees must all understand the threat landscape and prepare themselves for AI-enabled malware.
Governments need to act to defend themselves against the weaponization of AI. Most governments are not yet prepared for this, says the report. Only a couple of countries, including the US, have established some type of AI strategy in this regard. But a more robust plan with standards and regulation must be developed.
Security vendors must also look to artificial intelligence and machine learning to protect their own products by shoring up holes cybercriminals can exploit.
"Fast-forward 10 years, however, and if we're not proactive, we may be left in the dust," argues the report. "Best to develop this technology responsibly, with a 360-degree view on how it can be used for both good and evil, than to let it steamroll over us and move beyond our reach."