It’s easy to think that machine learning is a completely digital phenomenon, made possible by computers and algorithms that can mimic brain-like behaviors. But the first computing machines were analog, and now a small but growing body of research is showing that mechanical systems are capable of learning, too. Physicists at the University of Michigan have provided the latest entry into that field of work.
The U-M team of Shuaifeng Li and Xiaoming Mao devised an algorithm that provides a mathematical framework for how learning works in lattices called mechanical neural networks.
“We’re seeing that materials can learn tasks by themselves and do computation,” Li said.
The researchers have shown how that algorithm can be used to “train” materials to solve problems, such as identifying different species of iris plants. One day, these materials could create structures capable of solving even more advanced problems—such as airplane wings that optimize their shape for different wind conditions—without humans or computers stepping in to help.
That future is a ways off, but insights from U-M’s new research could also provide more immediate inspiration for researchers outside the field, said Li, a postdoctoral researcher.
The algorithm is based on an approach called backpropagation, which has been used to enable learning in both digital and optical systems. Because of the algorithm’s apparent indifference to how information is carried, it could also help open new avenues of exploration into how living systems learn, the researchers said.
“We’re seeing the success of backpropagation theory in many physical systems,” Li said. “I think this might also help biologists understand how biological neural networks in humans and other species work.”
Li and Mao, a professor in the U-M Department of Physics, published their new study in the journal Nature Communications.
MNNs 101
The idea of using physical objects in computation has been around for decades. But the focus on mechanical neural networks is newer, with interest growing alongside other recent advances in artificial intelligence.
Most of those advances—and certainly the most visible ones—have been in the realm of computer technology. Hundreds of millions of people are turning to AI-powered chatbots, such as ChatGPT, every week for help writing emails, planning vacations and more.
These AI assistants are based on artificial neural networks. Although their workings are complex and largely hidden from view, they provide a useful analogy to understand mechanical neural networks, Li said.
When using a chatbot, a user types an input command or question, which is interpreted by a neural network algorithm running on a computer network with oodles of processing power. Based on what that system has learned from being exposed to vast amounts of data, it generates a response, or output, that pops up on the user’s screen.
A mechanical neural network, or MNN, has the same basic elements. For Li and Mao’s study, the input was a weight affixed to a material, which acts as the processing system. The output was how the material changed its shape due to the weight acting on it.
“The force is the input information and the material itself is like the processor, and the deformation of the materials is the output or response,” Li said.
For this study, the “processor” materials were rubbery 3D-printed lattices, made of tiny triangles that combine into larger trapezoids. The materials learn by adjusting the stiffness or flexibility of specific segments within that lattice.
To realize their futuristic applications—like the airplane wings that tune their properties on the fly—MNNs will need to be able to adjust those segments on their own. Materials that can do that are being researched, but you can’t yet order them from a catalog.
So Li modeled this behavior by printing out new versions of a processor with a thicker or thinner segment to get the desired response. The main contribution of Li and Mao’s work is the algorithm that instructs a material on how to adapt those segments.
How to train your MNN
Although the mathematics behind the backpropagation theory is complex, the idea itself is intuitive, Li said.
To kick off the process, you need to know what your input is and how you want the system to respond. You then apply the input and see how the actual response differs from what’s desired. The network then takes that difference and uses it to inform how it changes itself to get closer to the desired output over subsequent iterations.
Mathematically, the difference between the real output and the desired output corresponds to an expression called the loss function. It’s by applying a mathematical operator known as a gradient to that loss function that the network learns how to change.
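The loop described above can be sketched in software as plain gradient descent on a scalar loss. The model below is a deliberately simplified stand-in, not the paper's lattice: a single hypothetical stiffness parameter `k` whose displacement response is assumed to scale as force divided by stiffness.

```python
# Minimal sketch of the train-by-gradient loop described above.
# Hypothetical model: one segment stiffness k, displacement = force / k.
# This is an illustration of the idea, not the authors' actual setup.

def response(k, force):
    """Toy response: displacement proportional to force / stiffness."""
    return force / k

def train(target, force=1.0, k=2.0, lr=5.0, steps=500):
    for _ in range(steps):
        out = response(k, force)
        # loss: squared difference between real and desired output
        loss = (out - target) ** 2
        # gradient of the loss with respect to k (d out / dk = -force / k^2)
        grad = 2 * (out - target) * (-force / k**2)
        # step the stiffness opposite the gradient to shrink the loss
        k -= lr * grad
    return k, response(k, force)

k, out = train(target=0.25)
print(out)  # displacement converges toward the desired value 0.25
```

In the physical system, the point of the paper is that the material itself reveals this gradient through its deformation, rather than a computer evaluating the derivative symbolically as done here.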
Li showed that if you know what to look for, his MNNs provide that information.
“It can show you the gradient automatically,” Li said, adding that he had some help from cameras and computer code in this study. “It’s really convenient and it’s really efficient.”
Consider the case where a lattice is composed entirely of segments with equal thickness and rigidity. If you hang a weight from a central node—the point where segments meet—its neighboring nodes on the left and right would move down the same amount because of the system’s symmetry.
But suppose, instead, you wanted to create a lattice that gave you not just an asymmetric response, but the most asymmetric response. That is, you wanted a network that maximizes the difference in movement between a node to the weight’s left and one to its right.
Li and Mao used their algorithm and a simple experimental setup to create the lattice that gives that solution. (Another similarity to biology is that the approach only cares about what nearby connections are doing, similar to how neurons operate, Li said.)
Taking it a step further, the researchers also provided large datasets of input forces, akin to what’s done in machine learning on computers, to train their MNNs.
In one example of this, different input forces corresponded to different sizes of petals and leaves on iris plants, which are defining features that help differentiate between species. Li could then present a plant of unknown species to the trained lattice and it could correctly sort it.
And Li is already working to build up the complexity of the system and the problems it can solve using MNNs that carry sound waves.
“We can encode so much more information into the input,” Li said. “With sound waves, you have the amplitude, the frequency and the phase that can encode data.”
At the same time, the U-M team is also studying broader classes of networks in materials, including polymers and nanoparticle assemblies. With these, they can create new systems where they can apply their algorithm and work toward achieving fully autonomous learning machines.
More information: Training all-mechanical neural networks for task learning through in situ backpropagation, Nature Communications (2024). DOI: 10.1038/s41467-024-54849-z

Provided by University of Michigan

Citation: Not so simple machines: Cracking the code for materials that can learn (2024, December 9), retrieved 9 December 2024 from https://phys.org/news/2024-12-simple-machines-code-materials.html