Enhanced Cybersecurity for Machine Learning Systems


Machine learning (ML) is used in a diverse array of applications – from high-efficiency manufacturing and large-scale data analysis to self-driving vehicles and beyond. However, if misapplied, misused, or subverted, ML holds the potential for great harm. Too little attention is paid to the cyber vulnerabilities inherent in ML platforms – particularly the risk that these systems can be altered, corrupted, or deceived.

For example, imperceptible pixel-level perturbations could cause an AI system to misclassify or mislabel an image, and subtle modifications to real-world objects could confuse AI perception systems.
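To make the pixel-level attack concrete, here is a minimal sketch in the spirit of the fast gradient sign method (Goodfellow et al.), applied to a toy linear classifier. The weights, input, and perturbation size are hypothetical; real attacks target deep networks, but the principle is the same: nudge every input feature by a tiny amount in the direction that most increases the loss.

```python
import numpy as np

# Toy linear classifier: label = 1 if w @ x > 0, else 0.
# (Hypothetical weights and input, for illustration only.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def predict(x):
    return 1 if w @ x > 0 else 0

# An input the model scores slightly negative (predicted label 0).
x = -0.01 * w / np.abs(w).mean()

# Adversarial perturbation: for a linear model, the gradient of the
# score w.r.t. the input is simply w, so stepping each feature by
# eps * sign(w) maximally increases the score for a fixed L-inf budget.
eps = 0.05
x_adv = x + eps * np.sign(w)

print(predict(x), predict(x_adv))  # the tiny perturbation flips the label
```

Each feature moves by only 0.05, yet the per-feature changes add up across all 64 dimensions and flip the prediction – the same accumulation effect that lets visually imperceptible image perturbations fool deep classifiers.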

The US Defense Advanced Research Projects Agency (DARPA) is developing a new generation of defenses to thwart attempts to deceive machine learning algorithms.

Intel and the Georgia Institute of Technology (Georgia Tech) were selected to lead the DARPA program team GARD: Guaranteeing Artificial Intelligence (AI) Robustness against Deception. Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning models.


Current defense efforts are designed to protect against specific, pre-defined adversarial attacks, but remain vulnerable when tested outside their design parameters. GARD intends to approach ML defense differently – by developing broad-based defenses that address the many possible attacks in a given scenario that could cause an ML model to misclassify or misinterpret data.

According to businesswire.com, the goal of the GARD program is to establish theoretical ML system foundations that will not only identify system vulnerabilities and characterize properties to enhance system robustness, but also promote the creation of effective defenses. Through these program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their effectiveness.

In the first phase, Intel and Georgia Tech are enhancing object detection technologies through spatial, temporal and semantic coherence for both still images and videos. Intel is committed to driving AI and ML innovation and believes that working with skilled security researchers across the globe is a crucial part of addressing potential security vulnerabilities.
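The temporal-coherence idea can be illustrated with a toy check: in video, an object's label should be stable across adjacent frames, so a detection that flickers (e.g. a stop sign briefly read as a speed-limit sign) is suspicious. This is only a sketch of the principle, not the GARD approach; the frame labels and window size below are hypothetical.

```python
def flag_incoherent(labels, window=3):
    """Return frame indices whose label disagrees with the majority
    label in a sliding window around that frame (a crude temporal
    coherence check; window size is an assumption)."""
    flagged = []
    for i, lab in enumerate(labels):
        lo, hi = max(0, i - window), min(len(labels), i + window + 1)
        neighborhood = labels[lo:hi]
        majority = max(set(neighborhood), key=neighborhood.count)
        if lab != majority:
            flagged.append(i)
    return flagged

# Hypothetical per-frame detections: one anomalous frame amid stable ones.
frames = ["stop"] * 5 + ["speed_limit"] + ["stop"] * 5
print(flag_incoherent(frames))  # flags the lone inconsistent frame, index 5
```

Analogous consistency checks can apply across space (neighboring image regions) and semantics (a detected object should fit its scene), which is the intuition behind combining spatial, temporal, and semantic coherence.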

Attend i-HLS’ InnoTech Expo in Tel Aviv – Israel’s largest innovation, HLS, and cyber technologies expo – on November 18-19, 2020 at Expo Tel Aviv, Pavilion 2.

For details and registration