What Can Be Done Against AI-Based Malware?

Artificial intelligence presents both opportunities and challenges, according to the Center for a New American Security (CNAS), one of America’s top defense and foreign policy think tanks. AI has made it possible for our devices and applications to better understand the world around them. But hackers can use that same technology to develop smart malware that singles out its targets from millions of users and preys on them.

IBM researchers have created DeepLocker, a proof-of-concept project that shows the destructive powers of AI-powered malware.

Most traditional malware is designed to perform its damaging functions on every device it finds its way into. This approach is suitable when the attackers’ goal is to inflict maximum damage, but it is not effective when malicious actors want to hit a specific target.

In such cases, they have to “spray and pray,” as Marc Stoecklin, a cybersecurity scientist at IBM Research, puts it: infect a large number of devices and hope the intended target is among them.

In contrast, AI-powered malware such as DeepLocker can use publicly available technology to hide from security tools while spreading across thousands of computers. DeepLocker only executes its malicious payload when it detects its intended target through AI techniques, such as facial or voice recognition.

“This AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected,” Stoecklin says. “But, unlike nation-state malware, it is feasible in the civilian and commercial realms.”

DeepLocker uses deep learning to perform tasks that were previously impossible with traditional software structures. That same approach also makes it very difficult for contemporary endpoint security solutions to detect malware that uses deep learning.

Antivirus tools are designed to detect malware by looking for specific signatures in its binary files or in the commands it executes. Deep learning models, by contrast, are black boxes: it is hard to make sense of their inner workings or reverse-engineer them to figure out how they work.

To demonstrate the danger of AI-powered malware, the researchers at IBM armed DeepLocker with the popular ransomware WannaCry and integrated it into an innocent-looking video-conferencing application. The malware remained undetected by analysis tools.

Hackers can use AI to help their malware evade detection for weeks, months, or even years, making the chances of infection and success skyrocket.

DeepLocker’s AI has been trained to look for the face of a specific person. For all users except the target, the application works perfectly fine. But as soon as the intended victim shows their face to the webcam, DeepLocker unleashes the wrath of WannaCry on the user’s computer and starts to encrypt all the files on the hard drive, as reported by dailydot.com.
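The mechanics of such an AI-gated trigger can be illustrated with a short sketch. The Python example below is a hypothetical, simplified illustration of a classifier-gated payload, not DeepLocker’s actual implementation: a key is derived from a face-recognition embedding, and the payload can be unlocked only when that key matches a precomputed hash of the intended target. The derive_key and check_trigger names, the quantization scheme, and the placeholder hash are all assumptions made for illustration.

```python
import hashlib

# Placeholder value: in a real attack the hash would be precomputed offline
# from the target's face, so neither the key nor the target's identity ever
# appears in the binary.
TARGET_KEY_HASH = hashlib.sha256(b"target-key-placeholder").digest()

def derive_key(embedding):
    """Quantize a face-recognition embedding into a stable byte string and hash it."""
    quantized = bytes(int(round(x * 16)) % 256 for x in embedding)
    return hashlib.sha256(quantized).digest()

def check_trigger(embedding):
    """Return the derived key only if it matches the target; otherwise stay dormant."""
    key = derive_key(embedding)
    if hashlib.sha256(key).digest() == TARGET_KEY_HASH:
        return key   # key would unlock the encrypted payload
    return None      # benign path: nothing observable happens
```

Because the trigger condition is buried inside the model’s learned weights and the key is never stored in the code, static analysis of the binary reveals nothing about who the target is or what the payload does.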

What can be done against this threat? Current security tools are not fit to fight AI-powered malware, and new technologies and measures are needed to protect ourselves. “The security community should focus on monitoring and analyzing how apps are behaving across user devices, and flagging when a new app is taking unexpected actions such as using excessive network bandwidth, disk access, accessing sensitive files, or attempting to circumvent security features,” says Stoecklin.
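As a rough illustration of the kind of behavioral monitoring Stoecklin describes, the sketch below samples per-process disk writes and open files and flags anything that exceeds simple thresholds. The thresholds, the list of “sensitive” directories, and the use of the third-party psutil library are assumptions for illustration, not a production detection tool.

```python
import psutil  # third-party library, assumed installed: pip install psutil

WRITE_LIMIT_BYTES = 500 * 1024 * 1024   # assumed threshold: 500 MB written to disk
SENSITIVE_DIRS = ("/etc", "/home")      # assumed sensitive locations

def flag_suspicious_processes():
    """Return (pid, name) pairs for processes showing unexpected behavior."""
    flagged = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            io = proc.io_counters()      # disk I/O counters (not available on every platform)
            files = proc.open_files()
        except (psutil.Error, AttributeError):
            continue                     # skip processes we cannot inspect
        heavy_writes = io.write_bytes > WRITE_LIMIT_BYTES
        touches_sensitive = any(f.path.startswith(SENSITIVE_DIRS) for f in files)
        if heavy_writes or touches_sensitive:
            flagged.append((proc.info["pid"], proc.info["name"]))
    return flagged

if __name__ == "__main__":
    for pid, name in flag_suspicious_processes():
        print(f"unexpected behavior: pid={pid} name={name}")
```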

We can also leverage AI to detect and block AI-based attacks. Just as malware can use AI to learn common patterns of behavior in security tools and circumvent them, security solutions can employ AI to learn the normal behavior of apps and flag unexpected activity.
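One hedged sketch of that defensive use of AI: train an off-the-shelf anomaly detector, such as scikit-learn’s IsolationForest, on a baseline of normal per-app behavior and flag deviations. The feature columns and numbers below are invented for illustration; scikit-learn and NumPy are assumed to be installed.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: [network MB/hour, disk writes MB/hour, sensitive-file accesses/hour]
# Baseline rows represent an app's normal behavior (invented example values).
baseline = np.array([
    [12.0, 5.0, 0],
    [15.0, 4.0, 0],
    [10.0, 6.0, 1],
    [14.0, 5.5, 0],
    [11.0, 4.5, 0],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# New observations: the second row (huge disk writes, many sensitive-file
# accesses) is far from the baseline and is expected to be flagged (-1),
# which is the kind of ransomware-like deviation a defender would review.
observations = np.array([
    [13.0,   5.0,  0],
    [16.0, 900.0, 40],
])
print(model.predict(observations))   # 1 = looks normal, -1 = flag for review
```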