Ztoog
AI
    This Paper Introduces AQLM: A Machine Learning Algorithm that Helps in the Extreme Compression of Large Language Models via Additive Quantization


In the rapidly advancing field of artificial intelligence, running large language models (LLMs) efficiently on consumer-level hardware remains a significant technical challenge, rooted in the inherent trade-off between model size and computational efficiency. Compression techniques, including direct and multi-codebook quantization (MCQ), have offered partial solutions for reducing the memory requirements of these enormous models. However, these approaches often compromise model performance, leaving room for innovation in extreme model compression.

A new method called Additive Quantization for Language Models (AQLM), developed by researchers from HSE University, Yandex Research, Skoltech, IST Austria, and NeuralMagic, attacks this trade-off directly by reducing the bit count per model parameter to a remarkably low range of 2 to 3 bits. The method adopts and refines additive quantization, a technique previously confined to information retrieval, for the specific challenges of LLM compression.
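The core idea of additive quantization can be sketched in a few lines: each group of weights is approximated by the sum of one entry from each of M small codebooks, so only the M index choices need to be stored, costing M·log2(K) bits per group. The toy NumPy sketch below uses greedy residual assignment with random codebooks purely for illustration; AQLM itself learns the codebooks, and the group size and codebook dimensions here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def additive_quantize(w, codebooks):
    """Greedily pick one entry per codebook so that their sum approximates w.

    w         : (d,) weight group to compress
    codebooks : list of M arrays, each of shape (K, d)
    returns   : chosen indices (M,) and the reconstruction (d,)
    """
    indices = []
    recon = np.zeros_like(w)
    for cb in codebooks:
        residual = w - recon
        # pick the codebook entry closest to the current residual
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(idx)
        recon = recon + cb[idx]
    return np.array(indices), recon

rng = np.random.default_rng(0)
d, M, K = 8, 2, 256                    # group size, codebooks, entries per codebook
codebooks = [rng.normal(size=(K, d)) for _ in range(M)]
w = rng.normal(size=d)

idx, recon = additive_quantize(w, codebooks)
bits_per_weight = M * np.log2(K) / d   # index storage only, ignoring codebook overhead
print(bits_per_weight)                 # 2.0 bits per weight in this configuration
```

With two 256-entry codebooks over groups of eight weights, index storage works out to exactly 2 bits per weight, which is the regime AQLM targets; the paper's contribution is in how the codebooks and assignments are learned, not in this storage arithmetic.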

AQLM distinguishes itself by preserving, and in some cases improving, the accuracy of compressed models, particularly under extreme compression. It achieves this through a two-pronged approach: learned additive quantization of weight matrices that adapts to input variability, and joint optimization of codebook parameters across layer blocks. This dual strategy places AQLM at the forefront of LLM compression techniques.

One of AQLM's standout features is its practical applicability across hardware platforms. The researchers provide implementations demonstrating the method's effectiveness on both GPU and CPU architectures, ensuring its utility in real-world applications. This practicality is backed by a detailed evaluation against contemporary compression techniques, in which AQLM consistently outperforms its competitors, especially in extreme-compression settings, as measured by model perplexity and accuracy on zero-shot tasks.
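Perplexity, the headline metric here, is simply the exponential of the average negative log-likelihood the model assigns to held-out tokens; a compressed model that keeps perplexity close to the original's has preserved its language-modeling quality. A minimal sketch, with hypothetical per-token log-probabilities standing in for real model outputs:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(-mean log-probability) over the evaluation tokens."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# hypothetical per-token log-probs from an original and a compressed model
original = [-2.1, -0.7, -1.5, -0.9]
compressed = [-2.2, -0.8, -1.6, -1.0]

print(perplexity(original), perplexity(compressed))
```

Lower is better: a model that were perfectly certain of every token (log-probability 0) would score a perplexity of exactly 1, and any drop in assigned probability pushes the score up.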

A comparative analysis against other leading compression methodologies underlines AQLM's distinctive position. Whereas competing approaches typically trade model size against accuracy, AQLM maintains or improves performance across a spectrum of metrics. The advantage is most pronounced in the extreme-compression regime, where AQLM sets new benchmarks in efficiency and effectiveness, a result of combining learned additive quantization with joint codebook optimization.

In conclusion, AQLM is a notable step forward in the quest for efficient LLM compression. By reducing model size without sacrificing accuracy, it paves the way for deploying advanced AI capabilities on a broader range of devices. Its adaptation of additive quantization to LLMs and its practical implementations on diverse hardware, validated through rigorous evaluation, mark a significant advance in making AI more accessible.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.



Muhammad Athar Ganaie, a consulting intern at MarktechPost, is a proponent of efficient deep learning, with a focus on sparse training. He is pursuing an M.Sc. in Electrical Engineering with a specialization in Software Engineering, blending advanced technical knowledge with practical applications. His current endeavor is his thesis, "Improving Efficiency in Deep Reinforcement Learning," and his work sits at the intersection of sparse training in DNNs and deep reinforcement learning.

