    Cracking the code for materials that can learn

By Admin | December 9, 2024


Obtaining the gradient of what’s known as the loss function is an essential step in the backpropagation algorithm developed by University of Michigan researchers to train a material. The researchers showed the gradient could be found experimentally using a 3D-printed model of a mechanical neural network. Credit: S. Li and X. Mao, Nature Communications 2024, DOI: 10.1038/s41467-024-54849-z

It’s easy to think of machine learning as a completely digital phenomenon, made possible by computers and algorithms that can mimic brain-like behaviors. But the first machines were analog, and now a small but growing body of research is showing that mechanical systems are capable of learning, too. Physicists at the University of Michigan have provided the latest entry in that field of work.

    The U-M team of Shuaifeng Li and Xiaoming Mao devised an algorithm that provides a mathematical framework for how learning works in lattices called mechanical neural networks.

    “We’re seeing that materials can learn tasks by themselves and do computation,” Li said.

    The researchers have shown how that algorithm can be used to “train” materials to solve problems, such as identifying different species of iris plants. One day, these materials could create structures capable of solving even more advanced problems—such as airplane wings that optimize their shape for different wind conditions—without humans or computers stepping in to help.

    That future is a ways off, but insights from U-M’s new research could also provide more immediate inspiration for researchers outside the field, said Li, a postdoctoral researcher.

    The algorithm is based on an approach called backpropagation, which has been used to enable learning in both digital and optical systems. Because of the algorithm’s apparent indifference to how information is carried, it could also help open new avenues of exploration into how living systems learn, the researchers said.

    “We’re seeing the success of backpropagation theory in many physical systems,” Li said. “I think this might also help biologists understand how biological neural networks in humans and other species work.”

    Li and Mao, a professor in the U-M Department of Physics, published their new study in the journal Nature Communications.

    MNNs 101

    The idea of using physical objects in computation has been around for decades. But the focus on mechanical neural networks is newer, with interest growing alongside other recent advances in artificial intelligence.

    Most of those advances—and certainly the most visible ones—have been in the realm of computer technology. Hundreds of millions of people are turning to AI-powered chatbots, such as ChatGPT, every week for help writing emails, planning vacations and more.

    These AI assistants are based on artificial neural networks. Although their workings are complex and largely hidden from view, they provide a useful analogy to understand mechanical neural networks, Li said.

    When using a chatbot, a user types an input command or question, which is interpreted by a neural network algorithm running on a computer network with oodles of processing power. Based on what that system has learned from being exposed to vast amounts of data, it generates a response, or output, that pops up on the user’s screen.

    A mechanical neural network, or MNN, has the same basic elements. For Li and Mao’s study, the input was a weight affixed to a material, which acts as the processing system. The output was how the material changed its shape due to the weight acting on it.

    “The force is the input information and the material itself is like the processor, and the deformation of the materials is the output or response,” Li said.

For this study, the “processor” materials were rubbery 3D-printed lattices made of tiny triangles that combine into larger trapezoids. The materials learn by adjusting the stiffness or flexibility of specific segments within that lattice.
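For a linear-elastic lattice, this input-processor-output picture can be sketched in a few lines: the segment stiffnesses define a stiffness matrix K, and the deformation u under an applied force f solves K u = f. The following is a generic toy model, not the authors’ actual lattice:

```python
import numpy as np

# Toy "material": two nodes joined to fixed walls and to each other by
# three springs whose stiffnesses (the trainable segments) are k[0..2].
def deformation(k, f):
    """Solve K u = f for the node displacements u (the material's output)."""
    K = np.array([
        [k[0] + k[1], -k[1]],
        [-k[1], k[1] + k[2]],
    ])
    return np.linalg.solve(K, f)

f = np.array([1.0, 0.0])                       # input: a weight on node 0
u = deformation(np.array([1.0, 1.0, 1.0]), f)  # uniform stiffness
# The loaded node moves more than its neighbor: u ≈ [0.667, 0.333]
```

Changing any entry of k changes how the same force deforms the material, which is exactly the knob the training algorithm turns.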

    University of Michigan researchers showed how mechanical neural networks—the black lattices of triangles—could be trained to solve problems. Credit: Shuaifeng Li

    To realize their futuristic applications—like the airplane wings that tune their properties on the fly—MNNs will need to be able to adjust those segments on their own. Materials that can do that are being researched, but you can’t yet order them from a catalog.

    So Li modeled this behavior by printing out new versions of a processor with a thicker or thinner segment to get the desired response. The main contribution of Li and Mao’s work is the algorithm that instructs a material on how to adapt those segments.


    How to train your MNN

    Although the mathematics behind the backpropagation theory is complex, the idea itself is intuitive, Li said.

    To kick off the process, you need to know what your input is and how you want the system to respond. You then apply the input and see how the actual response differs from what’s desired. The network then takes that difference and uses it to inform how it changes itself to get closer to the desired output over subsequent iterations.

    Mathematically, the difference between the real output and the desired output corresponds to an expression called the loss function. It’s by applying a mathematical operator known as a gradient to that loss function that the network learns how to change.
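In conventional machine-learning terms, that loop is gradient descent on the loss. Here is a minimal numerical sketch, not the paper’s in-situ experimental procedure: the gradient is estimated by finite differences on a hypothetical linear response, whereas in the experiment the material reveals the gradient directly.

```python
import numpy as np

def loss(output, desired):
    """Squared difference between the real and the desired response."""
    return np.sum((output - desired) ** 2)

def train_step(params, forward, x, desired, lr=0.1, eps=1e-6):
    """One gradient-descent step; the gradient of the loss is estimated
    by finite differences over the trainable parameters."""
    base = loss(forward(params, x), desired)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        p = params.copy()
        p[i] += eps
        grad[i] = (loss(forward(p, x), desired) - base) / eps
    return params - lr * grad

# Hypothetical linear "material": response = params * input force
forward = lambda p, x: p * x
x, desired = np.array([1.0, 2.0]), np.array([2.0, 2.0])
params = np.array([0.5, 0.5])
for _ in range(200):
    params = train_step(params, forward, x, desired)
# params converges toward [2.0, 1.0], where the response matches desired
```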

    Li showed that if you know what to look for, his MNNs provide that information.

    “It can show you the gradient automatically,” Li said, adding that he had some help from cameras and computer code in this study. “It’s really convenient and it’s really efficient.”

    Consider the case where a lattice is composed entirely of segments with equal thickness and rigidity. If you hang a weight from a central node—the point where segments meet—its neighboring nodes on the left and right would move down the same amount because of the system’s symmetry.

    But suppose, instead, you wanted to create a lattice that gave you not just an asymmetric response, but the most asymmetric response. That is, you wanted to create a network that gives the maximum difference in the movement between a node to the weight’s left and a node to its right.

    Li and Mao used their algorithm and a simple experimental setup to create the lattice that gives that solution. (Another similarity to biology is that the approach only cares about what nearby connections are doing, similar to how neurons operate, Li said.)
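A toy version of that optimization can be run on the same kind of hypothetical two-node spring lattice: start symmetric, then do gradient ascent on the left-right displacement difference, with the stiffnesses bounded the way a printable material would be. This is an illustration of the objective, not the authors’ setup.

```python
import numpy as np

def displacements(k):
    """Node displacements of a toy two-node spring lattice under equal
    loads (a hypothetical stand-in for the printed lattice)."""
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1], k[1] + k[2]]])
    return np.linalg.solve(K, np.array([1.0, 1.0]))

def asymmetry(k):
    u = displacements(k)
    return u[0] - u[1]   # left-node minus right-node movement

k = np.array([1.0, 1.0, 1.0])  # symmetric start: asymmetry is zero
eps, lr = 1e-6, 0.05
for _ in range(100):
    grad = np.array([(asymmetry(k + eps * np.eye(3)[i]) - asymmetry(k)) / eps
                     for i in range(3)])
    k = np.clip(k + lr * grad, 0.2, 5.0)  # ascent, with bounded stiffness
# Training softens the left wall spring and stiffens the right one,
# so the left node ends up moving much more than the right.
```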

    Taking it a step further, the researchers also provided large datasets of input forces, akin to what’s done in machine learning on computers, to train their MNNs.

In one example of this, different input forces corresponded to different sizes of petals and sepals on iris plants, which are defining features that help differentiate between species. Li could then present a plant of unknown species to the trained lattice, and the lattice could correctly sort it.

    And Li is already working to build up the complexity of the system and the problems it can solve using MNNs that carry sound waves.

    “We can encode so much more information into the input,” Li said. “With sound waves, you have the amplitude, the frequency and the phase that can encode data.”

    At the same time, the U-M team is also studying broader classes of networks in materials, including polymers and nanoparticle assemblies. With these, they can create new systems where they can apply their algorithm and work toward achieving fully autonomous learning machines.

    More information:
    Training all-mechanical neural networks for task learning through in situ backpropagation, Nature Communications (2024). DOI: 10.1038/s41467-024-54849-z

    Provided by
    University of Michigan


    Citation:
    Not so simple machines: Cracking the code for materials that can learn (2024, December 9)
    retrieved 9 December 2024
    from https://phys.org/news/2024-12-simple-machines-code-materials.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

