AI-Powered Cyberthreats Coming Our Way
Researchers at IBM recently developed a piece of malicious software (malware) called “DeepLocker” as a proof of concept to raise awareness that AI-powered cyberthreats are coming our way.
What Is DeepLocker?
DeepLocker is malware that uses the infamous WannaCry as its hidden payload. WannaCry locked more than 300,000 computers in over 150 countries in less than 24 hours on May 12, 2017, and demanded ransom payments from victims to unlock the computers.
DeepLocker hides the notorious WannaCry in a seemingly innocent video conferencing app to evade antivirus and malware scanners. The app behaves like normal video conferencing software until it detects its target, at which point it unleashes the hidden cyberweapon.
IBM researchers trained the AI model embedded in DeepLocker to recognize the face of a target individual and to use that recognition as the trigger condition that unlocks WannaCry. The target’s face, in effect, serves as the preprogrammed key.
Once the target sits in front of the computer and uses the malicious video conferencing app, the camera feeds the app the target’s face, and WannaCry is secretly executed, locking the victim’s computer and demanding a ransom payment to unlock it.
DeepLocker is also designed so that other malware, not just WannaCry, can be embedded in it. Different AI triggers, including voice recognition, geolocation and system-level features, can likewise be built into this IBM proof-of-concept malware.
Marc Ph. Stoecklin, Principal Research Scientist and Manager of the Cognitive Cybersecurity Intelligence (CCSI) group at the IBM T.J. Watson Research Center, said in a blog post that DeepLocker is similar to a sniper attack, in marked contrast to the “spray and pray” approach of traditional malware.
Stoecklin added that DeepLocker is good at evasion because it provides three layers of attack concealment. “That is, given a DeepLocker AI model alone, it is extremely difficult for malware analysts to figure out what class of target it is looking for,” Stoecklin said. “Is it after people’s faces or some other visual clues? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload?”
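The core of this concealment, as IBM has described it publicly, is that the payload decryption key is never stored in the binary; it is derived from the recognition model’s output, so an analyst who lacks the right trigger input cannot recover the payload. A minimal sketch of that idea follows. The label string, helper names and toy XOR cipher here are illustrative stand-ins, not DeepLocker’s actual code:

```python
import hashlib

def derive_key(model_output_label: str) -> bytes:
    # The key is derived from the recognizer's output, never stored,
    # so static analysis of the binary reveals nothing about the target.
    return hashlib.sha256(model_output_label.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only (applying it twice
    # with the same key restores the original bytes).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"payload"
key = derive_key("target_face_embedding_id_42")  # hypothetical trigger output
locked = xor_cipher(secret, key)

# Only the correct trigger condition reproduces the key:
assert xor_cipher(locked, key) == secret
assert xor_cipher(locked, derive_key("someone_else")) != secret
```

Because a cryptographic hash cannot feasibly be inverted, inspecting the locked bytes tells an analyst nothing about which face, voice or location would unlock them, which is precisely the evasion property Stoecklin describes.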
There’s no evidence yet that a class of malware similar to DeepLocker is out in the wild, though it would not surprise the security community if this type of malware were already being deployed. The likelihood of AI-powered malware appearing in the wild is high because payloads like WannaCry, the malware DeepLocker conceals, are publicly available. WannaCry, together with other spying tools believed to have been created by the US National Security Agency (NSA), was leaked to the public more than a year ago. AI models, including facial and voice recognition, are also publicly available.
Trustwave recently released an open-source tool called “Social Mapper”, which uses facial recognition to match social media profiles across a number of different sites on a large scale.
This tool automates the process of searching for names and pictures of individuals on popular social media sites such as LinkedIn, Facebook, Twitter, Google+, Instagram, VKontakte, Weibo and Douban. After scanning these sites, Social Mapper outputs a report with links to the targets’ profile pages as well as photos of the targets.
Trustwave’s Jacob Wilkins said that Social Mapper is meant for penetration testers and red teamers. "Once social mapper has finished running and you've collected the reports, what you do then is only limited by your imagination …,” Wilkins said.
For a target list of 1,000 individuals, Wilkins said a scan can take more than 15 hours and consume a large amount of bandwidth.
Getting Ready for AI-Powered Cyberthreats
Even as cybercriminals learn to use AI to their advantage and weaponize it, cybersecurity professionals are leveraging the power of artificial intelligence for defense.
One such approach is IBM’s proof-of-concept malware itself, built on the belief that, as in the medical field, examining the virus is necessary to create the vaccine.
AI-powered cyberthreats present a new challenge to cybersecurity professionals. According to IBM’s Stoecklin, AI-powered cyberthreats are characterized by increased evasiveness against rule-based security tools as AI can learn the rules and evade them. AI allows new scales and speeds of acting autonomously and adaptively, Stoecklin added.
To fight against AI-powered threats, Stoecklin said that cybersecurity professionals should focus on new defensive approaches, including AI-based tooling.
There are existing AI tools that cybersecurity professionals can rely on. One example is Imperva’s Attack Analytics, which uses artificial intelligence to automatically group, consolidate and analyze thousands of web application firewall (WAF) security alerts across different environments, whether on-premises, in the cloud or hybrid.
Imperva’s Attack Analytics identifies the most critical security alerts, providing security teams a faster way to respond to critical threats.
A survey conducted by Imperva at the recent RSA security conference found that cybersecurity analysts receive more than 1 million security alerts a day. Artificial intelligence tools like Imperva’s Attack Analytics reduce the time-consuming work of identifying and prioritizing security alerts from days or weeks to mere minutes.
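The grouping-and-prioritizing step can be illustrated with a toy sketch. The alert fields, rule names and ranking heuristic below are hypothetical, not Imperva’s actual pipeline: raw alerts are consolidated into incidents keyed by rule and source, then ranked so analysts triage the largest cluster first.

```python
from collections import defaultdict

# Hypothetical alert records; real WAF alerts carry many more fields.
alerts = [
    {"rule": "SQLi", "source": "10.0.0.5", "target": "/login"},
    {"rule": "SQLi", "source": "10.0.0.5", "target": "/search"},
    {"rule": "XSS",  "source": "10.0.0.9", "target": "/profile"},
    {"rule": "SQLi", "source": "10.0.0.5", "target": "/login"},
]

def group_alerts(alerts):
    """Consolidate raw alerts into incidents keyed by (rule, source)."""
    incidents = defaultdict(list)
    for a in alerts:
        incidents[(a["rule"], a["source"])].append(a)
    # Rank incidents by alert volume so the noisiest campaign surfaces first.
    return sorted(incidents.items(), key=lambda kv: len(kv[1]), reverse=True)

for (rule, source), group in group_alerts(alerts):
    print(f"{rule} from {source}: {len(group)} alerts")
```

Even this naive consolidation turns four raw alerts into two incidents; production tools apply far richer clustering, but the payoff is the same: analysts review a handful of ranked incidents instead of a flood of individual alerts.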
Fighting cyberthreats is becoming more and more difficult. You don’t have to do it alone. Contact our expert team today and protect your data.
Steve E. Driz, I.S.P., ITCP