At a time when the development and deployment of AI are evolving rapidly, experts are exploring how quantum computing could be used to protect AI from its vulnerabilities.
Machine learning is a field of artificial intelligence in which computer models become proficient at various tasks by consuming large amounts of data, instead of a human explicitly programming their expertise. These algorithms are not given hand-written rules; they learn from examples, similar to how a child learns.
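To make "learning from examples" concrete, here is a minimal sketch in Python. The dataset and model are illustrative choices of ours, not anything from the article: a small neural network is shown labeled images of handwritten digits and infers the mapping on its own.

```python
# A minimal illustration of "learning from examples" rather than
# hand-written rules: a classifier is shown labeled digit images and
# infers the pixel-to-label mapping itself. (Dataset and model choice
# are illustrative assumptions, not from the article.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# No rule for "what a 7 looks like" is programmed in; the model
# adjusts its internal weights from the training examples alone.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```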
But despite their popularity and innovation, machine learning-based frameworks are highly vulnerable to malicious tampering with their data, which can cause them to fail in surprising ways.
An example is image-classifying models, which can often be fooled by adding subtle perturbations to their input images. This raises questions about the safety of deploying machine learning neural networks in potentially life-threatening situations: in a self-driving car, for instance, a simple piece of graffiti on a stop sign could confuse the system into driving through an intersection.
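One standard way such perturbations are crafted is the fast gradient sign method (FGSM), which nudges each pixel slightly in the direction that most increases the model's loss. The sketch below is a generic illustration of that technique, not the specific attack discussed in the article; the model and image are stand-ins.

```python
# A sketch of the fast gradient sign method (FGSM), a classic way
# adversarial perturbations are crafted. Model and data here are
# illustrative stand-ins, not from the article.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged in the direction that most
    increases the classification loss, by at most `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient:
    # tiny, often imperceptible changes that can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative stand-in: an untrained linear "classifier" on 28x28 images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)  # one fake grayscale image
label = torch.tensor([3])         # its (pretend) true class
adv = fgsm_perturb(model, image, label)
print("max pixel change:", (adv - image).abs().max().item())
```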
According to Techxplore, recent advances in quantum computing have generated great excitement about the prospect of enhancing machine learning with quantum computers. It is believed that quantum machine learning models can learn certain types of data drastically faster than any model designed for current, or "classical," computers.
However, it is still unclear how widespread these speedups will be and how useful quantum machine learning will be in practice. The reason is that although quantum computers are expected to efficiently learn a wider class of models than their classical counterparts, there’s no guarantee these new models will be useful for most machine-learning tasks people are actually interested in (like medical classification problems or generative AI systems).
These challenges motivated a team of researchers from the University of Melbourne to consider what other benefits quantum computing could bring to machine learning tasks, beyond the usual goals of improving efficiency or accuracy.
The team suggests that quantum machine learning models may be better protected against adversarial attacks mounted with classical computers, which work by identifying and exploiting the features a machine learning model relies on.
Since the features used by generic quantum machine learning models are inaccessible to classical computers, they are invisible to a malicious actor armed only with classical computing resources.
While this is encouraging, quantum machine learning still faces significant challenges. Chief among them is the massive capability gap separating classical and quantum computing hardware: today's quantum computers are limited in size and suffer from error rates high enough to prevent long calculations.
Nevertheless, if these engineering challenges can be solved, the unique capabilities of large-scale quantum computers could provide surprising benefits across many important fields.