Neural networks have revolutionized many fields by mastering specific tasks. However, a significant challenge known as “catastrophic forgetting” arises when these models must learn new information: while they can successfully adapt to new assignments, they often lose the ability to perform their original tasks. This issue complicates the continual learning necessary for applications like self-driving cars, which require constant updates without retraining from scratch.
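To make the failure mode concrete, here is a minimal sketch in PyTorch (the framework, the synthetic tasks, and all names are illustrative assumptions of mine, not from the research): a small network is trained on one task and then on a second, conflicting task, after which its accuracy on the first task typically collapses.

```python
# Minimal sketch of catastrophic forgetting on synthetic data.
# Everything here (tasks, sizes, hyperparameters) is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Each task defines a different linear decision boundary,
    # so learning one boundary conflicts with the other.
    x = torch.randn(512, 20)
    y = (x[:, 0] + shift * x[:, 1] > 0).long()
    return x, y

def accuracy(model, x, y):
    return (model(x).argmax(1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

task_a, task_b = make_task(shift=1.0), make_task(shift=-1.0)

for x, y in (task_a, task_b):   # train on task A, then on task B
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# After sequential training, task A accuracy typically drops sharply:
print("task A:", accuracy(model, *task_a))
print("task B:", accuracy(model, *task_b))
```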
Drawing inspiration from the remarkable flexibility of biological brains, researchers at the California Institute of Technology have developed an innovative algorithm that allows neural networks to update continuously with new data while retaining previously learned information. Named the functionally invariant path (FIP) algorithm, this breakthrough could enhance a variety of applications.
According to TechXplore, the FIP algorithm emerged from the lab of Matt Thomson, an assistant professor of computational biology and a Heritage Medical Research Institute Investigator. Thomson, alongside former graduate student Guru Raghavan, Ph.D., was motivated by the work of Research Professor Carlos Lois, who studies how birds can rewire their brains to regain singing abilities after injury. They sought to replicate this adaptive capability in artificial neural networks.
The FIP algorithm draws on differential geometry, the branch of mathematics that studies curved spaces: rather than overwriting a trained network's weights, it steers them along paths in weight space over which the network's behavior on previously learned data stays unchanged. This approach not only addresses the issue of catastrophic forgetting but also enhances the overall functionality of these models.
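The published method is more sophisticated, but the core idea can be sketched simply: take update steps that reduce the loss on the new task while penalizing any drift in the network's outputs on data it already handles. The code below is my own simplified approximation in PyTorch, not the authors' algorithm; the function name, the stored-memory scheme, and the hyperparameter `lam` are all assumptions.

```python
# Hedged sketch of the functionally-invariant-path idea: learn new data
# while keeping outputs on old inputs (nearly) unchanged. This is an
# approximation for illustration, NOT the authors' published algorithm.
import torch
import torch.nn as nn

def fip_style_step(model, opt, loss_fn, x_new, y_new, x_old, old_out, lam=10.0):
    """One update: reduce new-task loss while penalizing any change in the
    model's outputs on remembered old inputs. `lam` (assumed hyperparameter)
    trades plasticity on the new task against stability on the old one."""
    opt.zero_grad()
    new_loss = loss_fn(model(x_new), y_new)
    drift = ((model(x_old) - old_out) ** 2).mean()  # function-space distance
    (new_loss + lam * drift).backward()
    opt.step()

# Usage sketch (shapes and data are illustrative assumptions):
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x_old = torch.randn(256, 20)        # inputs representative of the old task
old_out = model(x_old).detach()     # snapshot of the network's current behavior
x_new, y_new = torch.randn(256, 20), torch.randint(0, 2, (256,))

for _ in range(100):
    fip_style_step(model, opt, loss_fn, x_new, y_new, x_old, old_out)
```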
In 2022, Raghavan and Thomson co-founded a company called Yurts to further develop the FIP algorithm and scale machine learning systems across various industries. The research has been published in Nature Machine Intelligence, highlighting the collaborative efforts of co-authors, including graduate students Surya Narayanan Hari and Shichen Rex Liu, and international collaborator Bahey Tharwat from Alexandria University.
This groundbreaking work represents a significant leap forward in making artificial intelligence more adaptive and robust, potentially transforming how machines learn and evolve over time.