Software vulnerabilities are frequently published on the web, for example on the US National Vulnerability Database — the official register of security vulnerabilities tracked by the National Institute of Standards and Technology (NIST). However, there is a growing need to rate the severity of each threat. A new solution under development extracts software vulnerability data from text on the web, specifically Twitter. It reads millions of tweets for mentions of software security vulnerabilities and then, using a machine-learning algorithm, assesses how much of a threat each one represents based on how it is described.
Researchers at Ohio State University, the security company FireEye, and research firm Leidos found that Twitter can not only predict the majority of security flaws that will show up days later on the National Vulnerability Database, but that natural language processing can also roughly predict which of those vulnerabilities will be given a “high” or “critical” severity rating, with better than 80% accuracy.
Alan Ritter, an Ohio State professor who worked on the research, argues that its real advancement is in accurately ranking the severity of vulnerabilities based on an automated analysis of human language, according to wired.com.
The developers hope that someday the system could serve as a powerful aggregator of fresh information for systems administrators trying to keep their systems protected, or at the very least a component in commercial vulnerability data feeds — weighted for importance.
The researchers began by taking a subset of 6,000 tweets they’d identified as discussing security vulnerabilities. They showed them to a collection of Amazon Mechanical Turk (an Amazon crowdsourcing marketplace service) workers who labeled them with human-generated rankings of severity, filtering out the results from any outliers who drastically disagreed with other readers.
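The article does not describe exactly how the researchers filtered out disagreeing annotators, but the step can be sketched as follows: compute a consensus severity score per tweet, then drop any worker whose labels drift too far from that consensus on average. All names, scores, and the `max_dev` threshold below are hypothetical illustrations, not the team's actual method.

```python
from statistics import mean

# Hypothetical crowd labels: worker -> {tweet_id: severity score on a 1-5 scale}.
labels = {
    "w1": {"t1": 5, "t2": 2, "t3": 4},
    "w2": {"t1": 4, "t2": 2, "t3": 5},
    "w3": {"t1": 1, "t2": 5, "t3": 1},  # drastically disagrees with the others
}

def consensus(labels):
    """Mean severity per tweet across all workers who labeled it."""
    tweets = {t for scores in labels.values() for t in scores}
    return {t: mean(s[t] for s in labels.values() if t in s) for t in tweets}

def filter_outliers(labels, max_dev=1.5):
    """Keep only workers whose average distance from the consensus is small."""
    cons = consensus(labels)
    kept = {}
    for worker, scores in labels.items():
        dev = mean(abs(v - cons[t]) for t, v in scores.items())
        if dev <= max_dev:
            kept[worker] = scores
    return kept

kept = filter_outliers(labels)
```

With the toy data above, worker `w3` sits far from the consensus on every tweet and is dropped, while `w1` and `w2` survive the filter.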
Then the researchers used those labeled tweets as training data for a machine learning engine and tested its predictions. Looking five days ahead of a vulnerability’s inclusion in the National Vulnerability Database, they could predict the severity of the 100 most severe vulnerabilities, as measured against the NVD’s own severity ranking, with 78 percent accuracy.
The team stresses that their tool might be best used as a component in a broader feed of vulnerability data curated by a human being.
But given the accelerating pace of vulnerability discovery and the growing sea of social media chatter about them, Ritter suggests it might be an increasingly important tool to find the signal in the noise.