New learning algorithm should significantly expand the possible applications of AI

TU Graz computer scientists Robert Legenstein and Wolfgang Maass (from left) are working on energy-efficient AI systems and are inspired by the functioning of the human brain. © Lunghammer - TU Graz www.lunghammer.at

The high energy consumption of the learning process in artificial neural networks is one of the biggest hurdles to the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.

Although the human brain has the computing power of a supercomputer, it needs only about 20 watts, roughly a millionth of the energy a supercomputer consumes.

One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons – but, to save energy, only as often as absolutely necessary.
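By way of illustration, the following minimal Python sketch shows a leaky integrate-and-fire neuron, a common abstraction of such spiking units; the time constant, threshold and input values are illustrative assumptions and are not taken from the published model.

```python
import numpy as np

def lif_neuron(inputs, tau=20.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrates its input current and
    emits a binary spike only when the membrane potential crosses the
    threshold, then resets. Most time steps produce no spike at all."""
    decay = np.exp(-dt / tau)      # membrane leak per time step
    v = 0.0                        # membrane potential
    spikes = []
    for current in inputs:
        v = decay * v + current    # leaky integration of the input
        if v >= threshold:         # fire only when absolutely necessary
            spikes.append(1)
            v = 0.0                # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A weak, noisy input current yields only occasional spikes (sparse output).
rng = np.random.default_rng(0)
out = lif_neuron(rng.uniform(0.0, 0.15, size=200))
print("spikes emitted:", out.sum(), "out of", out.size, "time steps")
```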

Event-based information processing

A working group led by the two TU Graz computer scientists Wolfgang Maass and Robert Legenstein has adopted this principle in developing the new machine learning algorithm e-prop (short for eligibility propagation).

Researchers at the Institute of Theoretical Computer Science, which is also involved in the Human Brain Project, a European flagship research initiative, use spikes in their model for communication between neurons in an artificial neural network.

The neurons only fire spikes when they are needed for information processing in the network. Learning is a particular challenge for such sparsely active networks, since longer observation is required to determine which neuron connections improve network performance.

Previous methods achieved too little learning success or required enormous storage space. E-prop solves this problem with a decentralized method copied from the brain, in which each neuron documents in a so-called e-trace (eligibility trace) when its connections were used. The method is roughly as powerful as the best and most elaborate learning methods known to date. Details have now been published in the scientific journal Nature Communications.
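The eligibility-trace idea can be sketched in a few lines of Python. The class below is an illustrative simplification, not the exact update rule from the paper: every connection keeps a local, decaying e-trace of its recent use, and the weights are changed by combining the stored traces with a learning signal whenever feedback becomes available. The decay factor, learning rate and form of the learning signal are assumptions made for the example.

```python
import numpy as np

class EligibilityTraceSynapses:
    """Illustrative e-trace bookkeeping: each connection keeps a local,
    decaying record of how recently its presynaptic neuron contributed
    to the activity of the postsynaptic neuron."""

    def __init__(self, n_pre, n_post, decay=0.9, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, size=(n_pre, n_post))   # weights
        self.e_trace = np.zeros((n_pre, n_post))               # e-traces
        self.decay, self.lr = decay, lr

    def step(self, pre_spikes, post_activity):
        """Update the e-traces online, using only locally available quantities."""
        coactivity = np.outer(pre_spikes, post_activity)  # jointly active pairs
        self.e_trace = self.decay * self.e_trace + coactivity

    def learn(self, learning_signal):
        """Combine the stored e-traces with a per-neuron learning signal,
        without replaying the network's full activity history."""
        self.w += self.lr * self.e_trace * learning_signal  # broadcasts over rows
```

The crucial point is that both `step` and `learn` touch only quantities that live at the synapse itself, which is what makes the scheme decentralized.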

Online instead of offline

Many of the machine learning techniques currently in use store all network activities centrally and offline in order to trace, every few steps, how the connections were used during the calculations.

However, this requires constant data transfer between memory and processors, one of the main reasons for the excessive energy consumption of current AI implementations. E-prop, on the other hand, works completely online and does not require separate memory even in real operation, which makes learning much more energy efficient.
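To make the memory argument concrete, the self-contained toy loop below (shapes, constants and the placeholder error signal are assumptions) processes an arbitrarily long input stream while keeping only one fixed-size trace per connection; an offline method that replays the complete activity history would instead have to store values for every single time step.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post, T = 50, 10, 10_000          # toy sizes; T can grow freely

w = rng.normal(0.0, 0.1, size=(n_pre, n_post))
e_trace = np.zeros((n_pre, n_post))        # fixed-size state, reused every step

for t in range(T):
    pre = (rng.random(n_pre) < 0.05).astype(float)   # sparse input spikes
    post = w.T @ pre                                  # toy postsynaptic activity
    e_trace = 0.9 * e_trace + np.outer(pre, post)     # online e-trace update

    if (t + 1) % 100 == 0:                            # feedback signal arrives
        learning_signal = rng.normal(size=n_post)     # placeholder error signal
        w += 1e-3 * e_trace * learning_signal         # local, online weight change

# The learning state is independent of the stream length T; an offline replay
# of the history would need on the order of T * n_pre stored values instead.
print("values kept for learning:", e_trace.size, "vs. history size:", T * n_pre)
```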

Driving force for neuromorphic hardware

Maass and Legenstein hope that e-prop will drive the development of a new generation of mobile computing systems that no longer need to be programmed, but instead learn according to the model of the human brain and thus adapt to constantly changing requirements.

The goal is for these computing systems to no longer learn exclusively, and energy-intensively, via a cloud, but to integrate the greater part of their learning capability efficiently into mobile hardware components and thus save energy.

First steps to bring e-prop into practical applications have already been taken. For example, the TU Graz team is working within the Human Brain Project together with the Advanced Processor Technologies Research Group (APT) of the University of Manchester to integrate e-prop into the neuromorphic SpiNNaker system developed there. At the same time, TU Graz is working with researchers from the semiconductor manufacturer Intel to integrate the algorithm into the next version of Intel's neuromorphic chip Loihi.

This research work is anchored in the Fields of Expertise “Human and biotechnology” and “Information, Communication & Computing”, two of the five Fields of Expertise of TU Graz.

TU Graz | Institute of Theoretical Computer Science
Wolfgang MAASS
Em.Univ.-Prof. Dipl.-Ing. Dr.rer.nat.
Phone: +43 316 873 5822
Mobile: 0699 8845 3149
maass@igi.tugraz.at

Robert LEGENSTEIN
Univ.-Prof. Dipl.-Ing. Dr.techn.
Phone: +43 316 873 5824
robert.legenstein@igi.tugraz.at

Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass: "A solution to the learning dilemma for recurrent networks of spiking neurons", Nature Communications (2020).
DOI: 10.1038/s41467-020-17236-y

https://www.tugraz.at/institutes/igi/home/ (TU Graz | Institute of Theoretical Computer Science)
http://apt.cs.manchester.ac.uk/projects/SpiNNaker/ (SpiNNaker system)
https://www.humanbrainproject.eu/en/ (Human Brain Project)

Media Contact

Mag. Christoph Pelzl, MSc, Technische Universität Graz
