Traditional artificial intelligence (AI) systems are built on artificial neural networks loosely modeled on the human brain's neurons. However, these networks require vast amounts of computational power and energy to process data, making them unsustainable for widespread use. One promising alternative is the spiking neural network, which mimics how biological neurons work by communicating through short voltage pulses known as "spikes." These networks are far more energy-efficient, but they have historically been difficult to train effectively.
Recent research from the University of Bonn, detailed on TechXplore, offers new hope for overcoming these training challenges. In their study, published in Physical Review Letters, the team demonstrated a novel approach to training spiking neural networks, opening the door to more efficient AI systems. Unlike conventional artificial neurons, which continuously exchange information, spiking neurons communicate intermittently, firing only when needed. This allows spiking networks to use significantly less energy, much as our brain does.
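The intermittent, all-or-nothing firing described above can be sketched with a classic leaky integrate-and-fire model. This is a minimal illustration, not the model from the Bonn study; the parameter values are arbitrary choices for the example.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, n_steps=50):
    """Leaky integrate-and-fire sketch: the membrane potential
    integrates its input, leaks over time, and emits a spike (1)
    only when it crosses the threshold, then resets."""
    v = 0.0
    spikes = []
    for _ in range(n_steps):
        v = leak * v + input_current  # integrate with leak
        if v >= threshold:
            spikes.append(1)  # fire a spike
            v = 0.0           # reset after firing
        else:
            spikes.append(0)  # stay silent
    return spikes

train = lif_neuron(0.3)
print(sum(train), "spikes in", len(train), "steps")  # 12 spikes in 50 steps
```

The output is mostly zeros: the neuron is silent except at the few moments it fires, which is the source of the energy savings.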
Training a neural network involves adjusting the connections between neurons so the system learns to perform tasks, such as distinguishing between objects. In traditional networks, these adjustments happen gradually. Spiking neurons, however, don't allow this fine-tuning, because a spike is either present or absent, with no intermediate values. This made conventional training methods, such as gradient descent, difficult to apply to spiking networks.
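The difficulty can be made concrete: a spike is effectively a step function of the membrane potential, and a step function has zero gradient almost everywhere. This toy example (my own illustration, not from the paper) shows that a numerical gradient of the spike output with respect to a weight gives no learning signal.

```python
def spike(v, threshold=1.0):
    # A spike is all-or-nothing: a step function of the membrane potential.
    return 1.0 if v >= threshold else 0.0

# Numerical gradient of the spike output w.r.t. a weight w:
# away from the threshold it is exactly zero, so gradient
# descent gets no information about how to adjust w.
w, x, eps = 0.5, 1.0, 1e-4
grad = (spike((w + eps) * x) - spike((w - eps) * x)) / (2 * eps)
print(grad)  # 0.0 -- no usable gradient
```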
The breakthrough from the University of Bonn lies in recognizing that while spikes themselves can't be modified gradually, their timing can. By adjusting when spikes occur rather than whether they occur, the researchers could fine-tune the network's behavior to improve its performance. This insight makes traditional gradient-based training methods applicable to spiking networks.
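The key point is that spike *timing*, unlike spike presence, varies smoothly with the weights. A hypothetical illustration (not the paper's derivation): for a non-leaky integrate-and-fire neuron driven by a constant current I = w·x, the membrane reaches threshold θ at t = θ / (w·x), a differentiable function of w, so gradients can flow through spike times.

```python
# Toy setup: constant input current I = w * x, threshold theta.
theta, x = 1.0, 2.0

def spike_time(w):
    # Time for the membrane potential to ramp up to threshold.
    return theta / (w * x)

# The spike time changes smoothly with the weight, so its
# gradient is well-defined -- numerical and analytic agree.
w, eps = 0.5, 1e-6
numeric = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
analytic = -theta / (w**2 * x)
print(numeric, analytic)  # both approx. -2.0
```

Increasing the weight makes the neuron fire earlier (the gradient is negative), and that earlier-or-later shift is exactly the quantity gradient descent can now optimize.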
The researchers successfully trained a spiking neural network to distinguish handwritten digits, demonstrating the method's potential. Next, they aim to apply the technique to more complex tasks, such as speech recognition. This approach could pave the way for more energy-efficient and scalable AI applications in the future.