Analogue Deep Learning Offers Faster AI Computation With Lower Energy Consumption, MIT Researchers Say

Programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy required.


Highlights
  • Analogue deep learning promises faster computation with less energy
  • The findings of the research were published in the journal 'Science'
  • The devices could run 1 million times faster than the human brain

The amount of time, effort, and money needed to train ever-more-complex neural network models is soaring as researchers push the limits of machine learning. Analogue deep learning, a new branch of artificial intelligence, promises faster computation with less energy consumption.

The findings of the research were published in the journal 'Science'. Programmable resistors are the key building blocks in analogue deep learning, just as transistors are the core elements of digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analogue artificial "neurons" and "synapses" that execute computations just like a digital neural network.

This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.
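As a rough illustration of the idea, a resistor array naturally computes a weighted sum: with weights stored as conductances, applying input voltages yields output currents given by Ohm's and Kirchhoff's laws. The following NumPy sketch is purely illustrative, not the researchers' hardware or code, and all values in it are assumptions:

import numpy as np

# Toy sketch of an analogue crossbar layer (illustrative only):
# each programmable resistor stores a weight as a conductance G[i, j].
# Driving the rows with input voltages V produces column currents
# I = G^T @ V by Ohm's and Kirchhoff's laws -- a full matrix-vector
# multiply carried out in one physical step.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
G = rng.uniform(1e-6, 1e-5, size=(n_inputs, n_outputs))  # conductances (siemens, assumed range)
V = rng.uniform(0.0, 0.5, size=n_inputs)                 # input voltages (volts, assumed range)

I = G.T @ V  # column currents: the layer's pre-activations
print("output currents (A):", I)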

A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analogue synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.
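The million-fold figure follows directly from the timescales involved, assuming biological synapses operate on the order of milliseconds while the devices operate in the nanosecond regime:

# Back-of-the-envelope check of the "1 million times faster" claim.
# The ~1 ms biological timescale is a textbook approximation, assumed here.
synapse_time = 1e-3  # seconds, typical biological synapse (assumed)
device_time = 1e-9   # seconds, nanosecond operation regime
print(f"speed-up: {synapse_time / device_time:.0e}x")  # -> 1e+06x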

Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike the materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has made it possible to fabricate devices at the nanometer scale and could pave the way for their integration into commercial computing hardware for deep-learning applications.

"With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages," said senior author Jesus A. del Alamo, the Donner Professor in MIT's Department of Electrical Engineering and Computer Science (EECS). "This work has really put these devices at a point where they now look really promising for future applications."

"The working mechanism of the device is the electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field and push these ionic devices to the nanosecond operation regime," explained senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

"The action potential in biological cells rises and falls with a timescale of milliseconds since the voltage difference of about 0.1 volts is constrained by the stability of water," said senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering, "Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices."

These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

"Once you have an analogue processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft," added lead author and MIT postdoc Murat Onen.

