    The brain may learn about the world the same way some computational models do


    To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

    How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

    A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

    The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.

    “The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”

    Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research, and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.

    Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

    Modeling the physical world

    Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, and so on. The resulting models work well, but this type of training requires a great deal of human-labeled data.

    To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to one another, with no external labels provided.

    “This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. “A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”
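
    As a rough illustration of the idea, the sketch below implements a minimal InfoNCE-style contrastive loss in NumPy. The embeddings, batch size, and temperature are hypothetical stand-ins, not the objective actually used in the studies: each item is simply pulled toward its own augmented view and pushed away from every other item in the batch.

```python
import numpy as np

def infonce_loss(anchors, positives, temperature=0.1):
    """Minimal contrastive (InfoNCE-style) loss over a batch of embeddings.

    anchors, positives: (batch, dim) arrays holding two views of the same
    items. Each anchor is pushed toward its own positive and away from
    every other item in the batch, with no external labels needed.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = a @ p.T / temperature     # (batch, batch) similarity matrix
    idx = np.arange(len(a))            # matching pairs sit on the diagonal

    # cross-entropy over each row, evaluated at the diagonal entry
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()

# toy usage: random "embeddings" and a slightly perturbed copy as the second view
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))
print(infonce_loss(x, x + 0.01 * rng.normal(size=(8, 32))))
```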

    These types of models, also called neural networks, consist of thousands or millions of processing units connected to one another. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

    As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.
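
    As a concrete (and heavily simplified) picture of what measuring unit activity means, the sketch below runs a tiny random two-layer network and records its hidden-unit activations for a batch of stimuli; the architecture and sizes are made up for illustration, not taken from the studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# a tiny two-layer network with random weights standing in for a trained model
W1 = rng.normal(scale=0.1, size=(64, 128))   # input -> hidden connection strengths
W2 = rng.normal(scale=0.1, size=(128, 10))   # hidden -> output connection strengths

def forward(x, record):
    """Run the network on one stimulus and log the hidden-unit activity."""
    h = np.maximum(0, x @ W1)   # hidden "firing rates" (ReLU units)
    record.append(h)
    return h @ W2

# present a batch of stimuli and collect one activity vector per stimulus
stimuli = rng.normal(size=(100, 64))
activity = []
for s in stimuli:
    forward(s, activity)

# shape (stimuli, units): analogous to a trials-by-neurons table of firing rates
activity = np.stack(activity)
print(activity.shape)
```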

    In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.

    “For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. “Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”
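
    Very loosely, the training signal described above amounts to predicting what comes next in natural video. The sketch below fits a hypothetical linear predictor over made-up frame embeddings; the actual models are far larger and learn the video encoder as well, so treat this purely as a cartoon of the objective.

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in frame embeddings for one video clip, shape (time, dim);
# in the real work these would come from a learned encoder over raw video
frames = np.cumsum(rng.normal(size=(50, 16)), axis=0)

# hypothetical linear future-predictor: next embedding from the current one
past, future = frames[:-1], frames[1:]
W, *_ = np.linalg.lstsq(past, future, rcond=None)   # fit by least squares

pred = past @ W
mse = np.mean((pred - future) ** 2)   # the prediction error is the training signal
print(f"future-prediction MSE: {mse:.4f}")
```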

    Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, where a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before hitting the paddle, so the player has to estimate its trajectory in order to hit the ball.
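
    To make the task concrete, here is a toy rendering of a Mental-Pong-style trial with hypothetical geometry and units: the ball follows a straight path, is visible only for the first part of the trial, and the hidden remainder has to be extrapolated.

```python
import numpy as np

def mental_pong_trial(x0, y0, vx, vy, steps=20, visible_fraction=0.6):
    """Simulate one toy Mental-Pong trial.

    Returns the full ball trajectory and a mask marking the steps where
    the ball is visible; after that it must be mentally simulated.
    """
    t = np.arange(steps)
    trajectory = np.stack([x0 + vx * t, y0 + vy * t], axis=1)
    visible = t < int(visible_fraction * steps)
    return trajectory, visible

trajectory, visible = mental_pong_trial(x0=0.0, y0=0.5, vx=0.05, vy=0.02)

# extrapolate from the last two visible positions, as a stand-in for the
# model's (or the brain's) internal simulation of the hidden ball
last_two = trajectory[visible][-2:]
velocity_estimate = last_two[1] - last_two[0]
steps_hidden = len(trajectory) - visible.sum()
predicted_final = last_two[1] + velocity_estimate * steps_hidden
print(predicted_final, trajectory[-1])   # matches here because the motion is linear
```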

    The researchers found that the model was able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which had been shown in a previous study by Rajalingham and Jazayeri to simulate its trajectory — a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game — specifically, in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has been able to match the biological data as closely as this one, the researchers say.

    “There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”

    Navigating the world

    The study led by Khona, Schaeffer, and Fiete focused on a type of specialized neurons known as grid cells. These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.

    While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.
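
    A back-of-the-envelope example of why overlapping lattices are so efficient (using hypothetical 1D periods; real grid modules are two-dimensional and noisy): cells that repeat with periods 3 and 5 jointly distinguish 15 positions, because the pair of phases only repeats every lcm(3, 5) = 15 steps. With more modules, the number of distinguishable positions grows roughly as the product of the periods, which is how relatively few cells can cover a large space.

```python
from math import lcm

# hypothetical 1D "grid modules": each fires as a function of position modulo its period
periods = [3, 5]

def grid_code(position):
    """Phase of the animal's position within each module's lattice."""
    return tuple(position % p for p in periods)

codes = [grid_code(x) for x in range(20)]
print(codes[:6])                         # phases cycle: (0, 0), (1, 1), (2, 2), (0, 3), ...
print(len(set(codes[:lcm(*periods)])))   # 15 distinct codes before the pattern repeats
```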

    In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.

    Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model to both perform this same path integration task and represent space efficiently while doing so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on whether they were similar or different — nearby positions generated similar codes, but farther positions generated more different codes.
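
    A heavily simplified version of that training signal, with a hypothetical encoder and trajectory generator rather than the authors’ code: integrate a random velocity sequence to get positions, then score how well the codes of nearby positions agree while the codes of distant positions differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# random 2D velocity sequence, integrated into a trajectory (the path-integration input)
velocities = rng.normal(scale=0.1, size=(256, 2))
positions = np.cumsum(velocities, axis=0)

# a hypothetical (untrained) encoder standing in for the self-supervised network
W = rng.normal(scale=0.5, size=(2, 32))
codes = np.tanh(positions @ W)

def contrastive_spatial_loss(positions, codes, radius=0.5):
    """Lower when nearby positions get similar codes and distant ones do not."""
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    sims = codes @ codes.T / codes.shape[1]   # mean-product similarity in [-1, 1]
    near = (dists < radius) & (dists > 0)     # nearby pairs, excluding self-pairs
    far = dists >= radius
    return sims[far].mean() - sims[near].mean()

print(contrastive_spatial_loss(positions, codes))
```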

    “It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. “We’re taking that same idea but applying it to spatial trajectories.”

    Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.

    “What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”

    The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.
