Cybersecurity Experts Use AI to Detect Breaches


With data breaches and privacy violations on the rise, and cyber threats growing in frequency and sophistication, cybersecurity analysts are working around the clock to sift through massive amounts of data, monitor potential security incidents, and prevent the next attack. Sifting through these vast streams of information is difficult for a person, but too much data has never been a problem for artificial intelligence, which is why many experts are looking to AI-based solutions to bolster cybersecurity strategies and ease the strain on analysts.

A team of experts from the Networking and Cybersecurity Division of USC’s Information Sciences Institute, led by Stephen Schwab, envisions symbiotic teams of humans and AIs collaborating to improve security, with AI assisting analysts and improving their overall performance in these high-stakes environments.

David Balenson, associate director of the Networking and Cybersecurity Division, emphasizes that automation is critical in alleviating the burden on cybersecurity analysts. “SOCs [security operation centers] are flooded with alerts that analysts have to analyze rapidly in real time, and decide which are symptoms of a real incident. That’s where AI and automation come into play, spotting trends or patterns of alerts that could be potential incidents.”
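The pattern-spotting Balenson describes maps naturally onto unsupervised anomaly detection. The Python sketch below is a minimal illustration of that idea, not the ISI team’s actual tooling: it trains an Isolation Forest on a synthetic baseline of alert statistics and flags time windows that deviate from it. The feature set (alerts per minute, distinct source IPs, failed logins) and the contamination setting are assumptions made for the example.

```python
# Minimal sketch: flag unusual alert patterns with an unsupervised model.
# The features (alerts/min, distinct source IPs, failed logins) are
# illustrative; a real SOC pipeline would engineer far richer ones.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" traffic: [alerts/min, distinct source IPs, failed logins]
baseline = rng.normal(loc=[20, 5, 2], scale=[4, 1, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Two new time windows: one routine, one burst of alerts from many sources
incoming = np.array([
    [22, 5, 1],    # looks routine
    [95, 40, 30],  # looks like a potential incident
])

for features, verdict in zip(incoming, model.predict(incoming)):
    label = "ESCALATE" if verdict == -1 else "routine"
    print(f"{features} -> {label}")
```

The point of the sketch is the division of labor: the model compresses the alert flood down to a handful of escalations, and the analyst spends their time on those.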

However, according to Techxplore, one of the main challenges in integrating AI into cybersecurity operations is the lack of transparency and explainability in many AI systems. Schwab explains: “Machine learning (ML) is useful for monitoring networks and end-systems where human analysts are fatigued. Yet they are a black box—they can throw off alerts that may seem inexplicable. This is where explainability comes in, as the human analyst has to trust that the ML system is operating within reason.”

The proposed solution is to build “explainers” that present the system’s actions in words the analyst can understand. For example, users type a PIN code when authenticating to a system, but different people punch in the digits in different rhythms, which the AI might flag even when the code was entered correctly. While these suspicious patterns might not reflect actual security breaches, the AI still factors them into its assessment, enabling the human analyst to make a more informed decision.
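The article does not show how such an explainer would be built; the following Python sketch is a hypothetical rendering of the PIN-entry example. It compares a login’s inter-keystroke timings against the user’s stored rhythm profile and, instead of emitting an opaque anomaly score, produces a sentence the analyst can read. The function name, profile format, and the three-standard-deviation threshold are all assumptions.

```python
# Hypothetical sketch of an "explainer" for the PIN-entry example: compare a
# login's inter-keystroke timings against the user's stored rhythm profile,
# then emit a plain-language note instead of an opaque anomaly score.
from statistics import mean, stdev

def explain_pin_entry(profile_ms: list[list[float]], attempt_ms: list[float]) -> str:
    """profile_ms: inter-key gaps from prior logins (one list per login, in ms).
    attempt_ms: gaps for the current login; length is PIN length minus one."""
    notes = []
    for i, gap in enumerate(attempt_ms):
        history = [login[i] for login in profile_ms]
        mu, sigma = mean(history), stdev(history)
        z = (gap - mu) / sigma if sigma else 0.0
        if abs(z) > 3:  # illustrative threshold, not from the article
            notes.append(
                f"gap {i + 1} was {gap:.0f} ms vs. a typical {mu:.0f} ms "
                f"({z:+.1f} standard deviations)"
            )
    if not notes:
        return "Typing rhythm matches this user's history; no anomaly flagged."
    return "Correct PIN, but unusual rhythm: " + "; ".join(notes)

# Example: a user who normally types evenly suddenly pauses before one digit.
history = [[180, 190, 185], [175, 195, 180], [185, 188, 182], [178, 192, 184]]
print(explain_pin_entry(history, [181, 650, 183]))
```

Framed this way, the output gives the analyst exactly what Schwab calls for: a reason to trust, or override, the machine’s judgment rather than a bare alert.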

This information was provided by Techxplore.