The AI Race Over Cybersecurity and Cybercrime




Not long after the public was introduced to artificial intelligence models like ChatGPT, scammers launched programs like FraudGPT to help criminals craft easily tailored cyberattacks.

Companies are now embracing cyber defenses based on generative AI in the hope of outpacing attackers' use of similar tools. But experts warn that more effort is needed to safeguard the data and algorithms behind generative AI models, lest the models themselves fall victim to cyberattacks.

In a recently released IBM survey, 84% of corporate executives who responded said they would “prioritize generative AI security solutions over conventional ones” for cybersecurity purposes. IBM said it is developing cybersecurity solutions based on generative AI models to “improve the speed, accuracy, and efficacy of threat detection and response capabilities and drastically increase the productivity of security teams.”

According to Techxplore, cybersecurity firm Darktrace is deploying custom-built generative AI models for cybersecurity purposes, using AI to predict potential attacks, and designing proprietary self-learning AI models that observe and understand the behavior of the environment that they’re deployed within. The system maps the activities of individuals, peer groups, and outliers, and is then able to detect deviations from normal and provide a context for such deviations, allowing security experts to act.
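The behavioral approach described above can be illustrated with a simple statistical baseline. This is a minimal sketch in Python, not Darktrace's proprietary self-learning method: it assumes each entity's normal activity can be summarized by the mean and standard deviation of a single metric (here, a hypothetical daily outbound data volume), and flags observations that deviate sharply from that baseline.

```python
import statistics

def build_baseline(history):
    """Summarize an entity's historical activity as mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from baseline."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical example: daily outbound data volume (MB) for one user
history = [120, 130, 125, 118, 135, 128, 122]
mean, stdev = build_baseline(history)

print(is_anomalous(124, mean, stdev))  # within normal range
print(is_anomalous(900, mean, stdev))  # large spike, flagged as a deviation
```

In a real deployment, the baseline would cover many behavioral signals per individual and peer group, and a flagged deviation would be surfaced with context (which entity, which metric, how far from normal) so that security analysts can investigate, as the article describes.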

Jose-Marie Griffiths, president of Dakota State University, who previously served on the congressional National Security Commission on Artificial Intelligence, says that in addition to detecting anomalies and aiding investigations of cyberattacks, AI tools could prove useful in analyzing malware to determine the origins of attackers: "Reverse engineering a malware to identify who sent it, what was the intent, is one area where we haven't seen a lot of use of AI tools, but we could potentially see quite a bit of work, and that's an area we are interested in."

Griffiths also warned that while the use of generative AI models to improve cybersecurity is gaining momentum, security experts must also pay attention to safeguarding the generative AI models themselves because attackers could attempt to break into the models and their underlying data.