
How can increasingly complex neural network models be trained faster and with fewer resources? Some researchers suggest investing in a new branch of artificial intelligence called 'analog deep learning', which promises faster processing with far less energy consumption. Researchers have developed a network of analog artificial "neurons" and "synapses" that can perform calculations much like a digital neural network, built by repeating arrays of programmable resistors in intricate layers. This network can then be trained on complex AI tasks such as image recognition and natural language processing.

There are two main reasons why analog deep learning is faster and more efficient than its digital counterpart. The first is that computations are carried out in memory, so massive amounts of data do not have to be repeatedly transported between memory and a processor. The second is that analog processors operate in parallel. Analog machine learning becomes possible with a processor that varies the electrical conductivity of protonic resistors. Learning in the human brain occurs through the strengthening and weakening of synapses, the connections between neurons. Since their inception, deep neural networks have employed this analogy, with training procedures adjusting the network's weights.
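To make the in-memory computation concrete, here is a minimal, purely illustrative Python simulation of a resistor crossbar. The function name `crossbar_output` and the example conductance values are invented for this sketch and are not taken from the research: the point is only that when network weights are stored as conductances, applying input voltages makes each column current equal a weighted sum (by Ohm's and Kirchhoff's laws), so the dot product is computed where the weights live, with all columns evaluated in parallel.

```python
def crossbar_output(voltages, conductances):
    """Column currents of a resistor crossbar (one analog 'layer').

    Each weight is stored as a conductance G[i][j] (in siemens).
    Applying voltage V[i] to row i produces, at column j, the current
    I[j] = sum_i V[i] * G[i][j] -- a matrix-vector product that emerges
    as a physical measurement rather than a sequence of processor steps.
    """
    n_cols = len(conductances[0])
    return [
        sum(v * row[j] for v, row in zip(voltages, conductances))
        for j in range(n_cols)
    ]


# Hypothetical 3x2 array: three input lines, two output lines.
G = [[0.5, 0.1],
     [0.2, 0.4],
     [0.3, 0.3]]
V = [1.0, 0.5, -1.0]

print(crossbar_output(V, G))  # each entry is one synapse-weighted sum
```

In real analog hardware, training would mean physically reprogramming the conductances (the "synapse strengths") instead of updating numbers in memory; this digital simulation only mimics the readout step.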

Additionally, analog deep learning can endure extremely powerful pulsed electric fields. Because the protons do not damage the material, the resistor can run for millions of cycles without failing, making it a million times faster. Moreover, it functions efficiently at ambient temperature, making it suitable for integration into computing devices.

Ready to dive into the world of futuristic technology? Attend INNOTECH 2022, the international convention and exhibition for cyber, HLS and innovation at Expo Tel Aviv, on November 2nd - 3rd.

Interested in sponsoring / a display booth at the 2022 INNOTECH exhibition? Click here for details!