New Physics-Based Model Sheds Light on How Deep Neural Networks Learn Features

Spring-block physics offers fresh insights into how deep neural networks learn features layer by layer.


Photo Credit: Nature (2025)

A spring-block system model helps explain how deep neural networks separate data.

Highlights
  • Physics-based analogy offers new insight into how DNNs learn complex features
  • Spring-block mechanics closely mirror data separation in neural networks
  • A model may help improve AI training speed, accuracy, and generalization

Researchers examined how DNNs learn features, rather than simply memorise them, as training progresses. To do so, they turned to a mechanical analogy: spring-and-friction machines, chains of blocks connected by springs sliding over a rough surface. In this picture, the spacing between blocks reflects how much each layer compresses and separates the data, the surface friction stands in for how strong the network's nonlinearity is, and noise acts like a lubricant that helps the separation spread evenly across the layers.

Spring-Block Physics Offers New Blueprint for Smarter Neural Network Training

According to a Physical Review Letters report, the researchers observed parallels between how DNNs learn features and how spring-block chains behave. Spring-and-friction machines are mechanical devices scientists use to study how springs and friction interact. In the analogy, the gap between neighbouring blocks corresponds to how much a layer compresses and separates the data, the surface friction reflects how nonlinear the network is, and noise plays the role of a lubricant, smoothing out the separation of data across the different layers.
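To make the analogy concrete, here is a minimal sketch of a spring-block chain with friction and random kicks. The constants, update rule, and chain length are illustrative assumptions, not values from the paper; the point is only how springs, friction, and noise interact.

```python
# Toy spring-block chain: blocks stand in for layers, the gaps between them
# for how much each layer separates the data, friction for nonlinearity,
# and random kicks for training noise. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_blocks = 8          # "layers" in the chain
k_spring = 1.0        # spring stiffness coupling neighbouring blocks
friction = 0.6        # static-friction threshold, standing in for nonlinearity
noise = 0.2           # vibration amplitude, standing in for training noise
pull = 1.0            # force dragging the last block, i.e. the training signal

x = np.zeros(n_blocks)  # block positions

for step in range(2000):
    # Net spring force on each block from its neighbours, plus the pull on the end.
    gaps = x[1:] - x[:-1]
    force = np.zeros(n_blocks)
    force[:-1] += k_spring * gaps
    force[1:]  -= k_spring * gaps
    force[-1]  += pull
    # Random kicks play the role of noise "lubricating" the chain.
    force += noise * rng.normal(size=n_blocks)
    # A block only slips once the force on it exceeds static friction.
    moving = np.abs(force) > friction
    x[moving] += 0.01 * (force[moving] - friction * np.sign(force[moving]))

# With enough noise the gaps tend to even out, echoing the idea that each
# "layer" ends up contributing a similar share of the total separation.
print(np.round(np.diff(x), 3))
```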

In a well-tuned neural network, the way successive layers pull apart data points from different classes follows what Ivan Dokmanić, one of the paper's authors, calls a "law of data separation". Using the spring-block picture, the team also obtained data-separation curves that indicate how well a network trained on one dataset can sort fresh data it has never seen before.
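For readers curious what such a data-separation curve might look like in code, the sketch below traces a simple separation score (between-class versus within-class spread) through a stack of toy layers. The metric, the random tanh layers, and the data are stand-ins for illustration, not the paper's exact definitions.

```python
# Hedged sketch of tracing a data-separation curve layer by layer.
import numpy as np

rng = np.random.default_rng(1)

def separation(features, labels):
    """Ratio of between-class spread to within-class spread (larger = better separated)."""
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between = sum(np.sum((features[labels == c].mean(axis=0) - overall_mean) ** 2)
                  for c in classes)
    within = sum(np.sum((features[labels == c] - features[labels == c].mean(axis=0)) ** 2)
                 for c in classes)
    return between / (within + 1e-12)

# Toy two-class data pushed through a stack of random tanh layers.
X = np.vstack([rng.normal(0.0, 1.0, (100, 16)), rng.normal(0.8, 1.0, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

h = X
curve = [separation(h, y)]
for _ in range(6):                      # six toy hidden "layers"
    W = rng.normal(0, 1 / np.sqrt(h.shape[1]), (h.shape[1], 16))
    h = np.tanh(h @ W)
    curve.append(separation(h, y))

# In a trained network, the law of data separation describes a steady gain in
# this score from layer to layer; this untrained toy only shows how such a
# curve is traced, and flat or shrinking steps would flag under-used layers.
print([round(v, 4) for v in curve])
```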

Much like stress maps used in structural engineering, this approach could reveal which layers of a network are under-utilised and which are over-utilised.

The study takes a phenomenological approach, of the kind common in physics, to describe how deep networks learn, with the aim of training them faster and helping them generalise better. Even when the models are messy and nonlinear, this could lead to novel ways of training neural networks.

 
