Artificial intelligence systems are increasingly valuable, and increasingly targeted. A new study from North Carolina State University presents the first working defense mechanism capable of protecting AI models from cryptanalytic attacks, in which hackers mathematically extract the parameters that define how an AI system operates.
Unlike conventional hacking, which exploits software vulnerabilities, cryptanalytic attacks work like digital espionage through observation. By feeding an AI model various inputs and analyzing its outputs, attackers can reverse-engineer its internal parameters — effectively cloning the system without needing direct access to its code or servers. For companies, defense agencies, and research institutions that rely on proprietary algorithms, this poses a serious security risk.
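To make the idea concrete, here is a minimal, hypothetical sketch of parameter extraction from a query-only interface. It is not the attack studied in the paper: it recovers the weights of a single linear layer by probing it with chosen inputs and differencing the outputs, whereas real attacks on deep networks are far more involved. All names and values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical "black-box" model: one linear layer whose parameters the
# attacker wants to recover. In practice the attacker never sees these
# variables, only the query interface below.
rng = np.random.default_rng(0)
W_secret = rng.normal(size=(3, 5))   # 3 outputs, 5 inputs
b_secret = rng.normal(size=3)

def query(x):
    """The attacker's only access: feed an input, observe the output."""
    return W_secret @ x + b_secret

# Extraction by differencing: probe each input coordinate and measure how
# the outputs change. For a linear map this recovers the weights exactly;
# the principle - learning parameters purely from input/output behaviour -
# is what cryptanalytic attacks exploit at much larger scale.
dim_in = 5
baseline = query(np.zeros(dim_in))          # reveals the bias term
W_recovered = np.column_stack([
    query(np.eye(dim_in)[i]) - baseline     # column i of the weight matrix
    for i in range(dim_in)
])

print(np.allclose(W_recovered, W_secret))   # True
print(np.allclose(baseline, b_secret))      # True
```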
According to TechXplore, the new approach focuses on the architecture of neural networks — the layered structures that allow AI systems to process information. The researchers discovered that these attacks rely on differences between neurons within a layer. The more varied those neurons are, the easier it is to identify the mathematical relationships that define the network.
To block that process, the team developed a method of training neural networks so that neurons in the same layer behave more similarly. This creates what they call a “barrier of similarity” that prevents attackers from isolating distinct neuron patterns, while leaving the model’s normal function largely intact. In testing, models retrained with this defense showed less than a 1% change in accuracy but became resistant to extraction attempts that had previously succeeded within hours. The researchers also introduced a framework for measuring how resistant a model is to such attacks, allowing organizations to assess their systems’ vulnerability without running lengthy simulations.
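As a rough illustration of the similarity idea (the paper's actual training procedure and resistance metric are not reproduced here), one could add a penalty during training that pulls the weight vectors of neurons in the same layer toward one another, and track layer-wise neuron diversity as a crude proxy for how much signal an extraction attack has to work with. The function names and the penalty form below are assumptions for the sketch.

```python
import numpy as np

def neuron_diversity(W):
    """Average pairwise distance between neuron weight vectors (rows of W)
    in one layer - a crude, hypothetical proxy for how distinguishable the
    neurons are to an extraction attack."""
    diffs = W[:, None, :] - W[None, :, :]
    return np.mean(np.linalg.norm(diffs, axis=-1))

def similarity_penalty(W, strength=0.1):
    """Hypothetical regularisation term: penalise how far each neuron's
    weights drift from the layer mean, nudging neurons in the same layer
    toward similar behaviour during training."""
    return strength * np.sum((W - W.mean(axis=0, keepdims=True)) ** 2)

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 128))   # one hidden layer: 64 neurons, 128 inputs each

print(f"diversity: {neuron_diversity(W):.2f}")
print(f"penalty:   {similarity_penalty(W):.2f}")

# During training the penalty would simply be added to the task loss,
#   loss = task_loss + similarity_penalty(W)
# so optimization trades a small amount of accuracy for more uniform neurons.
```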
For defense and national security, the implications are significant. AI increasingly underpins mission-critical systems — from intelligence analysis and autonomous platforms to cybersecurity operations. Protecting those algorithms from being copied or reverse-engineered is essential for maintaining strategic advantage and safeguarding sensitive data.
While this new defense is still in the research stage, it highlights a growing recognition that artificial intelligence needs its own form of cybersecurity. As AI systems become integral to critical infrastructure, finance, defense, and everyday services, protecting their core algorithms will be as important as safeguarding the data they use. The work signals the beginning of a new era in digital protection — one where securing the intelligence behind the machine is just as vital as securing the networks that run it.
The research was published here.