    How to Precisely Predict Your AI Model’s Performance Before Training Begins? This AI Paper from China Proposes Data Mixing Laws


In large language models (LLMs), the pretraining data landscape is a rich blend of diverse sources. It spans common English and less common languages, casual conversation and scholarly text, and even extends to modalities such as images and speech. Within this mix, the data interact in complex ways: sometimes aligning well, sometimes diverging, and occasionally conflicting. The challenge lies in tuning the proportions of this mixture to leverage the strengths of each domain while minimizing potential conflicts, so that the resulting models gain enhanced capabilities.

Although the ideal training data mixture remains elusive, most current practice tunes the mixture through heuristics, upsampling some proportion of high-quality or underrepresented data without disclosing the concrete criteria in detail. Whether these data strategies are effective is hard to predict before the training run finishes. But advances in scaling laws show that model losses on a given evaluation set are quantitatively predictable across a wide range of variables, which suggests an exciting prospect: if the same principle applies to mixture proportions, the performance of the resulting model could be estimated before training even begins.

Researchers from Fudan University and Shanghai AI Laboratory introduce data mixing laws and a prediction pipeline, which address the problem of accurately predicting the validation loss for a mixture of training domains under a fixed model size and amount of training data. The researchers first conducted a pilot study on domain losses under two-domain mixtures, to see whether model losses can be predicted as a function of the data mixture. They trained 70M and 160M language models on mixtures of the GitHub and Pile-CC subsets of the Pile dataset, with five different mixture proportions for GitHub. All models were trained with a batch size of 1M tokens for 30k steps, i.e., 30B tokens.
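The fitting step of such a pilot study can be sketched as follows, assuming (as one simple candidate) an exponential functional form L(r) = c + k·exp(t·r) for validation loss as a function of the GitHub proportion r. The loss values below are invented for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical validation losses from five pilot runs, indexed by the GitHub
# proportion r in a two-domain (GitHub + Pile-CC) training mixture.
r = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
loss = np.array([2.10, 1.95, 1.78, 1.66, 1.60])  # invented numbers

# Candidate mixing law L(r) = c + k * exp(t * r); fit c, k, t to the runs.
def mixing_law(r, c, k, t):
    return c + k * np.exp(t * r)

(c, k, t), _ = curve_fit(mixing_law, r, loss, p0=(1.5, 0.8, -2.0), maxfev=20000)

# Predict the loss at an untried proportion before launching a full run.
predicted = mixing_law(0.6, c, k, t)
```

Once fitted on a handful of pilot runs, the law can be queried at any proportion, which is what makes tuning the mixture cheap compared with training a model per candidate mix.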

The paper addresses several challenges in optimizing data mixtures: (a) the discovery that model performance is quantitatively predictable with respect to the data mixture, summarized into a functional relationship, namely the data mixing laws; (b) a pipeline that predicts the performance of large-scale training under different mixture proportions using only experiments on small models with little training data, via nested scaling laws of training steps, model sizes, and data mixing; and (c) experimental verification of the reliability of the data mixing laws and the prediction pipeline, demonstrating their effectiveness in optimizing model performance, balancing model capabilities, and guiding the design of data schedules.
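The "nested" part of point (b) can be sketched as follows. One stage fits a scaling law in training steps on early checkpoints of a small run and extrapolates to the target budget; a second stage repeats the fit across model sizes (70M–410M); the extrapolated per-mixture losses then feed the data mixing law. The power-law form L(S) = E + C·S^(−p) and all numbers below are illustrative assumptions, not the paper's fitted coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

# One stage of the nested pipeline: a scaling law in training steps,
# L(S) = E + C * S**(-p), fitted on early checkpoints and extrapolated.
def power_law(S, E, C, p):
    return E + C * S ** (-p)

steps = np.array([5e3, 1e4, 2e4, 3e4])       # logged checkpoints
losses = np.array([2.80, 2.47, 2.27, 2.20])  # invented losses

(E, C, p), _ = curve_fit(power_law, steps, losses, p0=(2.0, 500.0, 0.8), maxfev=20000)

# Extrapolate to a 100k-step budget without actually running it. Stage two
# would repeat this fit over model sizes, and the resulting per-mixture
# losses would then be fitted with the data mixing law.
loss_at_100k = power_law(1e5, E, C, p)
```

The nesting matters because the mixing law is only fitted at scales that are cheap to run; the step and size laws carry each small-scale measurement up to the target scale first.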

To develop the loss-prediction pipeline, the researchers trained models on mixtures of RedPajama and validated against the validation set of the Pile. A series of 70M, 160M, 305M, and 410M models were trained for 30B tokens to fit the scaling laws of training steps and model sizes. Remarkably, the model trained on the optimized mixture reaches the performance of one trained on the default mixture using only 73% of the steps, and ultimately surpasses the default mixture's performance, which requires 48% more steps to match, underscoring the pipeline's effectiveness in mixture optimization.
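The final optimization step, choosing the mixture that minimizes predicted validation loss, reduces to a search over the proportion simplex once the per-domain laws are fitted. The two laws below are hypothetical: a code-domain loss that improves as the GitHub proportion r grows and a web-domain loss that degrades, with overall validation loss taken as their weighted average.

```python
import numpy as np

# Hypothetical fitted per-domain mixing laws (coefficients invented).
def code_loss(r):   # improves as the code proportion r grows
    return 1.2 + 0.9 * np.exp(-2.0 * r)

def web_loss(r):    # degrades as r grows
    return 1.5 + 0.4 * np.exp(1.5 * r)

# Overall validation loss is a weighted average of the domain losses; a grid
# search over the simplex finds the proportion minimizing predicted loss.
rs = np.linspace(0.0, 1.0, 1001)
total = 0.5 * code_loss(rs) + 0.5 * web_loss(rs)
r_star = rs[np.argmin(total)]   # proportion to use for the full-scale run
```

Because the two domains pull in opposite directions, the predicted optimum lies strictly inside the simplex rather than at either pure-domain corner, which is exactly the kind of trade-off a heuristic upsampling rule cannot quantify.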

In conclusion, this paper introduces data mixing laws and a prediction pipeline that accurately predict the validation loss for a mixture of training domains under a fixed model size and amount of training data. The nested use of scaling laws for training steps, model sizes, and data mixtures makes these predictions using only small-scale experiments, enabling the reuse of existing experiments and reducing computation costs. The study should further facilitate quantitative research and theoretical analysis as attention to data engineering grows.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter. Join our Telegram Channel, Discord Channel, and LinkedIn Group.

If you like our work, you will love our newsletter.

Don't forget to join our 39k+ ML SubReddit.


Sajjad Ansari is a final-year undergraduate at IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI, with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.


🐝 Join the fastest-growing AI research newsletter, read by researchers from Google, NVIDIA, Meta, Stanford, MIT, Microsoft, and many others…

