    Meet AutoGPTQ: An Easy-to-Use LLMs Quantization Package with User-Friendly APIs based on GPTQ Algorithm


Researchers from Hugging Face have introduced an innovative solution to address the challenges posed by the resource-intensive demands of training and deploying large language models (LLMs). Their newly integrated AutoGPTQ library in the Transformers ecosystem allows users to quantize and run LLMs using the GPTQ algorithm.
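
As a first taste of the integration, the sketch below loads a pre-quantized GPTQ checkpoint through the standard Transformers API. The repository name is illustrative, and `transformers`, `optimum`, and `auto-gptq` are assumed to be installed on a CUDA machine.

```python
# Minimal sketch: load and run an already-quantized GPTQ model from the Hub.
# The checkpoint name is an example; any GPTQ repository on the Hub works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-GPTQ"  # illustrative GPTQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Quantization lets large models", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```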

In natural language processing, LLMs have transformed a variety of domains through their ability to understand and generate human-like text. However, the computational requirements for training and deploying these models have posed significant obstacles. To tackle this, the researchers integrated the GPTQ algorithm, a quantization technique, into the AutoGPTQ library. This advance allows users to execute models at reduced bit precision – 8, 4, 3, or even 2 bits – while maintaining negligible accuracy degradation and inference speed comparable to fp16 baselines, especially for small batch sizes.
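
A minimal sketch of selecting a bit width through the Transformers integration follows; the model id and the `"c4"` calibration dataset are illustrative choices, not fixed requirements.

```python
# Sketch: quantize a model on load with a chosen bit width (8, 4, 3, or 2).
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # a small model keeps calibration quick
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,  # runs GPTQ calibration during loading
)
```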

GPTQ, categorized as a Post-Training Quantization (PTQ) method, optimizes the trade-off between memory efficiency and computational speed. It adopts a hybrid quantization scheme in which model weights are quantized as int4 while activations are retained in float16. Weights are dynamically dequantized during inference, and the actual computation is performed in float16. This approach yields memory savings through fused kernel-based dequantization and potential speedups through reduced data communication time.
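
To make that data flow concrete, here is a deliberately naive, non-fused sketch: integer weight codes with a per-group scale and zero-point are dequantized to float16 just before the matrix multiply, which itself runs in float16. The real kernels fuse these steps into one pass; all names here are illustrative.

```python
# Illustrative (unfused) version of the hybrid int4-weight / fp16-activation scheme.
import torch

def dequantize(q, scale, zero):
    # q holds int4 codes in [0, 15]; scale and zero are per-group parameters
    return (q.to(torch.float16) - zero) * scale

def gptq_linear(x, q_weight, scale, zero):
    w = dequantize(q_weight, scale, zero)  # int4 codes -> float16 weights
    return x.to(torch.float16) @ w.T       # the actual compute stays in float16

q_w = torch.randint(0, 16, (8, 16), dtype=torch.uint8)  # toy codes, one group
scale = torch.tensor(0.02, dtype=torch.float16)
zero = torch.tensor(8.0, dtype=torch.float16)
y = gptq_linear(torch.randn(2, 16), q_w, scale, zero)    # shape (2, 8), float16
```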

The researchers tackled the challenge of layer-wise compression in GPTQ by leveraging the Optimal Brain Quantization (OBQ) framework. They developed optimizations that streamline the quantization algorithm while maintaining model accuracy. Compared to traditional PTQ methods, GPTQ demonstrated impressive improvements in quantization efficiency, reducing the time required to quantize large models.
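
The layer-wise problem GPTQ inherits from this line of work can be stated compactly: for each linear layer with weight matrix W and calibration inputs X, find quantized weights Ŵ that minimize the output reconstruction error. This formulation comes from the GPTQ paper rather than the AutoGPTQ code itself:

```latex
\hat{W} = \arg\min_{\hat{W}} \; \lVert W X - \hat{W} X \rVert_2^2
```

GPTQ solves this approximately one weight column at a time, using second-order information to redistribute the rounding error onto the not-yet-quantized weights, which is what makes it far faster than exact OBQ on large models.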

Integration with the AutoGPTQ library simplifies the quantization process, letting users easily apply GPTQ to a variety of transformer architectures. With native support in the Transformers library, users can quantize models without complex setups. Notably, quantized models remain serializable and shareable on platforms like the Hugging Face Hub, opening avenues for broader access and collaboration.
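
As a sketch of that serialization path (assuming `model` and `tokenizer` come from a quantization step like the one above, plus an authenticated Hub session), saving and sharing looks like this; the repository id is hypothetical:

```python
# Sketch: persist the quantized model locally, then share it on the Hub.
model.save_pretrained("opt-125m-gptq")        # writes quantized weights + config
tokenizer.save_pretrained("opt-125m-gptq")

model.push_to_hub("my-username/opt-125m-gptq")      # hypothetical repo id
tokenizer.push_to_hub("my-username/opt-125m-gptq")
```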

The integration also extends to the Text-Generation-Inference (TGI) library, enabling GPTQ models to be deployed efficiently in production environments. Users can harness dynamic batching and other advanced features alongside GPTQ for optimal resource utilization.
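
On the client side, a deployed TGI endpoint can be queried like any other; the sketch below uses `InferenceClient` from `huggingface_hub`, with an assumed local endpoint URL and a server presumed to have been launched with GPTQ quantization enabled.

```python
# Sketch: query a running TGI server that is serving a GPTQ model.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # assumed local TGI endpoint
response = client.text_generation(
    "Explain GPTQ quantization in one sentence.",
    max_new_tokens=64,
)
print(response)
```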

While the AutoGPTQ integration offers significant benefits, the researchers acknowledge room for further improvement. They highlight the potential for enhancing kernel implementations and for exploring quantization techniques that cover both weights and activations. The integration currently focuses on decoder-only or encoder-only architectures, which limits its applicability to certain models.

In conclusion, the integration of the AutoGPTQ library into Transformers by Hugging Face addresses the resource-intensive challenges of LLM training and deployment. By introducing GPTQ quantization, the researchers offer an efficient solution that optimizes memory consumption and inference speed. The integration's broad coverage and user-friendly interface represent a step toward democratizing access to quantized LLMs across different GPU architectures. As this field continues to evolve, the collaborative efforts of researchers in the machine-learning community hold promise for further advancements and innovations.


Check out the Paper, GitHub and Reference Article. All credit for this research goes to the researchers on this project. Also, don't forget to join our 29k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

LLMs just got faster and lighter with 🤗 Transformers x AutoGPTQ!

You can now load your models from @huggingface with GPTQ quantization. Enjoy faster inference speed and lower memory usage than currently supported quantization schemes 🚀

Blogpost: https://t.co/vizRr9Ssxa

— Marc Sun (@_marcsun) August 23, 2023


Niharika is a Technical Consulting Intern at Marktechpost. She is a third-year undergraduate, currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in Machine Learning, Data Science, and AI, and an avid reader of the latest developments in these fields.


