    Solving a machine-learning mystery | Ztoog


    Large language models like OpenAI’s GPT-3 are massive neural networks that can generate human-like text, from poetry to programming code. Trained using troves of internet data, these machine-learning models take a small bit of input text and then predict the text that is likely to come next.

    But that’s not all these models can do. Researchers are exploring a curious phenomenon known as in-context learning, in which a large language model learns to accomplish a task after seeing only a few examples, even though it wasn’t trained for that task. For instance, someone could feed the model several example sentences and their sentiments (positive or negative), then prompt it with a new sentence, and the model can give the correct sentiment.
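    As a concrete illustration, here is a minimal sketch of such a few-shot prompt. The example sentences, labels, and prompt format are invented for illustration; they show the kind of input described above, not the paper's exact setup.

```python
# Sketch of a few-shot sentiment prompt for in-context learning.
# All sentences, labels, and the "Review:/Sentiment:" format are hypothetical.
examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this blender.", "negative"),
    ("The concert exceeded every expectation.", "positive"),
]
query = "The service at the restaurant was painfully slow."

prompt = ""
for sentence, label in examples:
    prompt += f"Review: {sentence}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)
# A large language model given this prompt is expected to continue with
# "negative", even though its weights were never updated for sentiment analysis.
```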

    Typically, a machine-learning model like GPT-3 would need to be retrained with new data for this new task. During this training process, the model updates its parameters as it processes new information to learn the task. But with in-context learning, the model’s parameters aren’t updated, so it seems like the model learns a new task without learning anything at all.

    Scientists from MIT, Google Research, and Stanford University are striving to unravel this mystery. They studied models that are very similar to large language models to see how they can learn without updating parameters.

    The researchers’ theoretical results show that these massive neural network models are capable of containing smaller, simpler linear models buried inside them. The large model could then implement a simple learning algorithm to train this smaller linear model to complete a new task, using only information already contained within the larger model. Its parameters remain fixed.
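    The flavor of computation described here can be sketched outside any transformer: given only the (input, output) pairs supplied in a context, a simple learning algorithm such as ordinary least squares can recover a small linear model. The code below is a standalone illustration of that idea on synthetic data; it is not the paper's implementation.

```python
import numpy as np

# Sketch: the kind of "smaller linear model" a simple learning algorithm could
# fit using only the examples supplied in the context. Synthetic data only.
rng = np.random.default_rng(0)

w_true = rng.normal(size=4)            # hidden linear rule behind the context examples
X_context = rng.normal(size=(8, 4))    # 8 in-context input examples
y_context = X_context @ w_true         # their labels

# Ordinary least squares on the context alone recovers the linear rule.
w_hat, *_ = np.linalg.lstsq(X_context, y_context, rcond=None)

x_query = rng.normal(size=4)
print("prediction:", x_query @ w_hat, "target:", x_query @ w_true)
```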

    An important step toward understanding the mechanisms behind in-context learning, this research opens the door to more exploration around the learning algorithms these large models can implement, says Ekin Akyürek, a computer science graduate student and lead author of a paper exploring this phenomenon. With a better understanding of in-context learning, researchers could enable models to complete new tasks without the need for costly retraining.

    “Usually, if you want to fine-tune these models, you need to collect domain-specific data and do some complex engineering. But now we can just feed it an input, five examples, and it accomplishes what we want. So, in-context learning is an unreasonably efficient learning phenomenon that needs to be understood,” Akyürek says.

    Joining Akyürek on the paper are Dale Schuurmans, a research scientist at Google Brain and professor of computing science at the University of Alberta; as well as senior authors Jacob Andreas, the X Consortium Assistant Professor in the MIT Department of Electrical Engineering and Computer Science and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Tengyu Ma, an assistant professor of computer science and statistics at Stanford; and Danny Zhou, principal scientist and research director at Google Brain. The research will be presented at the International Conference on Learning Representations.

    A model within a model

    In the machine-learning research community, many scientists have come to believe that large language models can perform in-context learning because of how they are trained, Akyürek says.

    For instance, GPT-3 has hundreds of billions of parameters and was trained by reading huge swaths of text on the internet, from Wikipedia articles to Reddit posts. So, when someone shows the model examples of a new task, it has likely already seen something very similar because its training dataset included text from billions of websites. It repeats patterns it has seen during training, rather than learning to perform new tasks.

    Akyürek hypothesized that in-context learners aren’t just matching previously seen patterns, but instead are actually learning to perform new tasks. He and others had experimented by giving these models prompts using synthetic data, which they could not have seen anywhere before, and found that the models could still learn from just a few examples. Akyürek and his colleagues thought that perhaps these neural network models have smaller machine-learning models inside them that the models can train to complete a new task.

    “That could explain almost all of the learning phenomena that we have seen with these large models,” he says.

    To test this hypothesis, the researchers used a neural network model called a transformer, which has the same architecture as GPT-3 but had been specifically trained for in-context learning.
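    In this line of work, training a transformer specifically for in-context learning typically means feeding it sequences of (input, output) pairs drawn from a fresh random linear function each time, with a held-out query at the end. The sketch below shows that style of data generation; the dimensions, sequence length, and distributions are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def sample_in_context_sequence(n_examples=10, dim=4, seed=None):
    """Sketch of one in-context regression prompt: (x, y) pairs generated by a
    random linear function, plus a query input whose label the model must
    predict. Sizes and distributions are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)                     # a fresh task for every sequence
    xs = rng.normal(size=(n_examples + 1, dim))  # last x is the query
    ys = xs @ w
    context = list(zip(xs[:-1], ys[:-1]))        # pairs shown to the model
    return context, xs[-1], ys[-1]               # query input and held-out target

context, x_query, y_query = sample_in_context_sequence(seed=0)
print(len(context), "in-context pairs; target for the query:", y_query)
```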

    By exploring this transformer’s architecture, they theoretically proved that it can write a linear model within its hidden states. A neural network is composed of many layers of interconnected nodes that process data. The hidden states are the layers between the input and output layers.

    Their mathematical evaluations show that this linear model is written somewhere in the earliest layers of the transformer. The transformer can then update the linear model by implementing simple learning algorithms.

    In essence, the model simulates and trains a smaller version of itself.
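    One reading of "simulates and trains a smaller version of itself" is that the forward pass behaves as if each early layer applies one step of a simple learner, such as gradient descent, to an implicit linear model, while the transformer's own weights stay frozen. The sketch below mimics that interpretation directly in NumPy; it illustrates the claimed behavior rather than reproducing the paper's derivation.

```python
import numpy as np

# Sketch: a few gradient-descent steps on an implicit linear model, the kind of
# update the paper argues a transformer layer can emulate while its own
# parameters stay frozen. Illustrative only.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))        # in-context inputs
y = X @ rng.normal(size=4)         # their labels

w = np.zeros(4)          # the implicit linear model "written" in the hidden state
lr = 0.1
for layer in range(5):   # pretend each iteration is one transformer layer
    grad = X.T @ (X @ w - y) / len(X)   # squared-error gradient on the context
    w = w - lr * grad                   # the hidden linear model improves...
    # ...while nothing analogous to the transformer's weights changes here.

print("context error after 5 simulated layers:", float(np.mean((X @ w - y) ** 2)))
```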

    Probing hidden layers

    The researchers explored this hypothesis using probing experiments, where they looked in the transformer’s hidden layers to try to recover a certain quantity.

    “In this case, we tried to recover the actual solution to the linear model, and we could show that the parameter is written in the hidden states. This means the linear model is in there somewhere,” he says.
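    A probing experiment of this kind can be sketched generically: collect hidden states, then fit a simple readout (for example, ridge regression) that tries to predict the target quantity, here a linear model's weights, from those states. In the sketch below the "hidden states" are random stand-ins rather than real transformer activations, so it shows the probing recipe, not the paper's actual probe.

```python
import numpy as np

# Generic probing sketch: can a linear readout recover a target quantity
# (here, a linear model's weight vector) from hidden states? The hidden
# states below are synthetic stand-ins, not real transformer activations.
rng = np.random.default_rng(2)
n_prompts, hidden_dim, w_dim = 200, 64, 4

hidden_states = rng.normal(size=(n_prompts, hidden_dim))   # one vector per prompt
readout_true = rng.normal(size=(hidden_dim, w_dim))
target_weights = hidden_states @ readout_true              # assume the info is linearly encoded

# Ridge-regression probe, closed form: (H^T H + lambda I)^-1 H^T W.
lam = 1e-3
probe = np.linalg.solve(
    hidden_states.T @ hidden_states + lam * np.eye(hidden_dim),
    hidden_states.T @ target_weights,
)
recovered = hidden_states @ probe
print("probe reconstruction error:", float(np.mean((recovered - target_weights) ** 2)))
```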

    Building off this theoretical work, the researchers may be able to enable a transformer to perform in-context learning by adding just two layers to the neural network. There are still many technical details to work out before that would be possible, Akyürek cautions, but it could help engineers create models that can complete new tasks without the need for retraining with new data.

    “The paper sheds light on one of the most remarkable properties of modern large language models — their ability to learn from data given in their inputs, without explicit training. Using the simplified case of linear regression, the authors show theoretically how models can implement standard learning algorithms while reading their input, and empirically which learning algorithms best match their observed behavior,” says Mike Lewis, a research scientist at Facebook AI Research who was not involved with this work. “These results are a stepping stone to understanding how models can learn more complex tasks, and will help researchers design better training methods for language models to further improve their performance.”

    Moving forward, Akyürek plans to continue exploring in-context learning with functions that are more complex than the linear models they studied in this work. They could also apply these experiments to large language models to see whether their behaviors are also described by simple learning algorithms. In addition, he wants to dig deeper into the types of pretraining data that can enable in-context learning.

    “With this work, people can now visualize how these models can learn from exemplars. So, my hope is that it changes some people’s views about in-context learning,” Akyürek says. “These models are not as dumb as people think. They don’t just memorize these tasks. They can learn new tasks, and we have shown how that can be done.”
