    Mixed-input matrix multiplication performance optimizations – Google Research Blog


    Posted by Manish Gupta, Staff Software Engineer, Google Research

    AI-driven technologies are weaving themselves into the fabric of our daily routines, with the potential to enhance our access to information and boost our overall productivity. The backbone of these applications lies in large language models (LLMs). LLMs are memory-intensive and typically require specialized hardware accelerators to efficiently deliver tens of exaflops of computing power. This blog post shows how we can start addressing the computational challenges by utilizing memory more effectively.

    The bulk of an LLM’s memory and compute are consumed by weights in matrix multiplication operations. Using narrower data types reduces memory consumption. For example, storing weights in the 8-bit integer (i.e., U8 or S8) data type reduces the memory footprint by 4× relative to single-precision (F32) and 2× relative to half-precision (F16) or bfloat16 (BF16). Furthermore, previous work has shown that running matrix multiplications with weights in S8 and input in F16 (preserving the higher precision of the user input) is an effective method for increasing LLM efficiency with acceptable trade-offs in accuracy. This technique is known as weight-only quantization and requires an efficient implementation of matrix multiplication with mixed inputs, e.g., half-precision input multiplied with 8-bit integer. Hardware accelerators, including GPUs, support a fixed set of data types, and thus, mixed-input matrix multiplication requires software transformations to map to the hardware operations.

    To that end, in this blog we focus on mapping mixed-input matrix multiplication onto the NVIDIA Ampere architecture. We present software techniques addressing data type conversion and layout conformance to map mixed-input matrix multiplication efficiently onto hardware-supported data types and layouts. Our results show that the overhead of additional work in software is minimal and enables performance close to the peak hardware capabilities. The software techniques described here are released in the open-source NVIDIA/CUTLASS repository.

    Memory footprint for a 175B parameter LLM model with various data type formats.
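
    As a quick back-of-the-envelope check of the ratios above, the snippet below (illustrative only, not from the post) computes the weight storage of a 175B-parameter model at each element width.

        #include <cstdio>

        // Weight storage for a 175B-parameter model at different element widths
        // (weights only; activations and optimizer state are ignored).
        int main() {
          const double params = 175e9;
          const double gib = 1024.0 * 1024.0 * 1024.0;
          std::printf("F32  (4 B/weight): %.0f GiB\n", params * 4 / gib);  // ~652 GiB
          std::printf("F16/BF16 (2 B):    %.0f GiB\n", params * 2 / gib);  // ~326 GiB
          std::printf("U8/S8 (1 B):       %.0f GiB\n", params * 1 / gib);  // ~163 GiB
          return 0;
        }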

    The matrix-multiply-accumulate operation

    Modern AI hardware accelerators such as Google’s TPU and NVIDIA’s GPU multiply matrices natively in the hardware by targeting Tensor Cores, which are specialized processing elements that accelerate matrix operations, particularly for AI workloads. In this blog, we focus on NVIDIA Ampere Tensor Cores, which provide the matrix-multiply-accumulate (mma) operation. For the rest of the blog, any reference to mma means Ampere Tensor Cores. The supported data types, shapes, and data layouts of the two input matrices (called operands) for the mma operation are fixed in hardware. This means that matrix multiplications with various data types and larger shapes are implemented in software by tiling the problem onto hardware-supported data types, shapes, and layouts.

    The Tensor Core mma operation is defined by specifying two input matrices (e.g., A & B, shown below) to produce a result matrix, C. The mma operation natively supports mixed-precision. Mixed-precision Tensor Cores allow mixing the input (A and B) data type with the result (C) data type. In contrast, mixed-input matrix multiplication involves mixing the input data types, and it is not supported by the hardware, so it has to be implemented in software.

    Tensor Core operation of M-by-N-by-K on input matrix A of M-by-K and matrix B of K-by-N produces output matrix C of M-by-N.
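
    For concreteness, the sketch below shows what issuing one such Tensor Core operation looks like from CUDA: a warp-wide mma.sync instruction (here the m16n8k16 variant with F16 inputs and an F32 accumulator, targeting sm_80) issued through inline PTX. The fragment-to-thread mapping and the loads that fill the fragments are omitted; this is a minimal illustration, not the CUTLASS implementation.

        // Each of the 32 threads in the warp holds its fragment of A (4x 32-bit
        // registers = 8 halves), B (2 registers = 4 halves), and the F32
        // accumulator C (4 registers), laid out as specified in the PTX ISA.
        __device__ void mma_m16n8k16_f16f16_f32(const unsigned a[4],
                                                const unsigned b[2],
                                                float c[4]) {
          asm volatile(
              "mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32 "
              "{%0,%1,%2,%3}, {%4,%5,%6,%7}, {%8,%9}, {%10,%11,%12,%13};\n"
              : "=f"(c[0]), "=f"(c[1]), "=f"(c[2]), "=f"(c[3])
              : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]),
                "r"(b[0]), "r"(b[1]),
                "f"(c[0]), "f"(c[1]), "f"(c[2]), "f"(c[3]));
        }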

    Challenges of mixed-input matrix multiplication

    To simplify the discussion, we restrict ourselves to a particular example of mixed-input matrix multiplication: F16 for user input and U8 for the model weights (written as F16 * U8). The techniques described here work for various combinations of mixed-input data types.

    A GPU programmer can access a hierarchy of memory, including global memory, shared memory, and registers, which are arranged in order of decreasing capacity but increasing speed. NVIDIA Ampere Tensor Core mma operations consume input matrices from registers. Furthermore, input and output matrices are required to conform to a layout of data within a group of 32 threads known as a warp. The supported data type and layout within a warp are fixed for an mma operation, so to implement mixed-input multiplication efficiently, it is necessary to solve the challenges of data type conversion and layout conformance in software.

    Data type conversion

    The mma operation requires two input matrices with the same data type. Thus, mixed-input matrix multiplication, where one of the operands is stored in U8 in global memory and the other in F16, requires a data type conversion from U8 to F16. The conversion brings both operands to F16, mapping the mixed-input matrix multiplication to hardware-supported mixed-precision Tensor Cores. Given the large number of weights, there are many such conversion operations, and our techniques show how to reduce their latency and improve performance.

    Layout conformance

    The mma operation also requires the layout of the two input matrices, within the registers of a warp, to be conformant with the hardware specification. The layout for the input matrix B of U8 data type in mixed-input matrix multiplication (F16 * U8) needs to conform with the converted F16 data type. This is called layout conformance and needs to be achieved in software.

    The figure below shows an mma operation consuming matrix A and matrix B from registers to produce matrix C in registers, distributed across one warp. The thread T0 is highlighted and zoomed in to show that the weight matrix B goes through data type conversion and needs layout conformance to be able to map to the hardware-supported Tensor Core operation.

    Software techniques addressing challenges

    A typical data type conversion involves a sequence of operations on 32-bit registers, shown below. Each rectangular block represents a register and the adjoining text lists the operations. The entire sequence shows the conversion from 4xU8 to 2x(2xF16). The sequence involves roughly 10 operations.
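
    For comparison, a naive version of this conversion is sketched below (an illustration, not the exact sequence from the figure): it extracts each U8 value from the packed 32-bit register and converts it individually, and the per-element extracts plus integer-to-floating-point conversions are what push the straightforward approach to roughly ten operations.

        #include <cuda_fp16.h>

        // Naive U8 -> F16 conversion: unpack each byte and convert it on its own.
        __device__ void convert_4xu8_to_4xf16_naive(unsigned packed, __half out[4]) {
          #pragma unroll
          for (int i = 0; i < 4; ++i) {
            unsigned byte = (packed >> (8 * i)) & 0xFF;  // extract one U8 value
            out[i] = __uint2half_rn(byte);               // scalar int-to-F16 conversion
          }
        }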

    There are many ways of achieving layout conformance. Two of the existing solutions are:

    1. Narrower bitwidth shared memory loads: In this approach, threads issue narrow bitwidth memory loads moving the U8 data from shared memory to registers. This results in two 32-bit registers, with each register containing 2xF16 values (shown above for matrix B’s thread T0). The narrower shared memory load achieves layout conformance directly into registers without needing any shuffles; however, it does not utilize the full shared memory bandwidth.
    2. Pre-processing in global memory: An alternative strategy involves rearranging the data within global memory (one level above shared memory in the memory hierarchy), allowing wider shared memory loads. This approach maximizes shared memory bandwidth utilization and ensures that the data is loaded in a conformant layout directly in the registers. Although the rearrangement process can be executed offline prior to LLM deployment, ensuring no impact on application performance, it introduces an additional, non-trivial hardware-specific pre-processing step that requires an extra program to rearrange the data. NVIDIA/FasterTransformer adopts this method to effectively address layout conformance challenges.

    Optimized software techniques

    To further optimize and reduce the overhead of data type conversion and layout conformance, we have implemented FastNumericArrayConvertor and FragmentShuffler, respectively.

    FastNumericArrayConvertor operates on 4xU8 in 32-bit registers without unpacking individual 1xU8 values. Furthermore, it uses less expensive arithmetic operations, which reduces the number of instructions and increases the speed of the conversion.

    The conversion sequence for U8-to-F16 is shown below. The operations use packed 32b registers, avoiding explicit unpacking and packing. FastNumericArrayConvertor uses byte permutation to rearrange the bytes of 4xU8 into two registers. Additionally, FastNumericArrayConvertor does not use expensive integer-to-floating-point conversion instructions and employs vectorized operations to obtain the packed results in two 32-bit registers containing 2x(2xF16) values. The FastNumericArrayConvertor for U8-to-F16 uses approximately six operations, a 1.6× reduction relative to the approach shown above.

    FastNumericArrayConvertor uses permute bytes and packed arithmetic, reducing the number of instructions in the data type conversion.
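
    The snippet below sketches the underlying idea in CUDA intrinsics; it is an illustration, not the actual FastNumericArrayConvertor source. Each byte x is permuted into the low byte of the F16 bit pattern 0x6400 (which encodes 1024.0), so each 16-bit lane holds the value 1024 + x, and a packed half2 subtraction of 1024 then yields x in F16 without any integer-to-floating-point conversion instructions.

        #include <cuda_fp16.h>

        // Packed U8 -> F16 conversion using byte permutes and half2 arithmetic.
        __device__ void convert_4xu8_to_4xf16_fast(unsigned packed, __half2 out[2]) {
          // Build two registers of the form 0x64XX64YY with byte permutes.
          unsigned lo = __byte_perm(packed, 0x6464, 0x4140);  // bytes 0 and 1
          unsigned hi = __byte_perm(packed, 0x6464, 0x4342);  // bytes 2 and 3
          const __half2 bias = __halves2half2(__ushort_as_half(0x6400),
                                              __ushort_as_half(0x6400));
          // Packed F16 subtraction removes the 1024 bias from both lanes at once.
          out[0] = __hsub2(*reinterpret_cast<__half2*>(&lo), bias);
          out[1] = __hsub2(*reinterpret_cast<__half2*>(&hi), bias);
        }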

    FragmentShuffler handles the layout conformance by shuffling data in a way that allows the use of wider bitwidth load operations, increasing shared memory bandwidth utilization and reducing the total number of operations.

    The NVIDIA Ampere architecture provides a load matrix instruction (ldmatrix). ldmatrix is a warp-level operation, where the 32 threads of a warp move data from shared memory to registers in the shape and layout that mma matrices A and B consume. The use of ldmatrix reduces the number of load instructions and increases memory bandwidth utilization. Since the ldmatrix instruction moves U8 data to registers, the layout after the load conforms with the U8*U8 mma operation, and not with the F16*F16 mma operation. We implemented FragmentShuffler to rearrange the data within registers using shuffle (shfl.sync) operations to achieve the layout conformance.
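
    The sketch below shows the two building blocks in isolation (a hypothetical helper, not FragmentShuffler itself): an ldmatrix load of one 8x8 fragment from shared memory followed by a shfl.sync exchange that moves the 32-bit fragment between lanes; the actual lane-and-offset mapping FragmentShuffler applies to reach F16*F16 conformance is more involved.

        // Warp-cooperative load of one 8x8 matrix of 16-bit elements, then a
        // register exchange with another lane of the same warp.
        __device__ unsigned load_and_exchange(const void* smem_ptr, int src_lane) {
          unsigned frag;
          unsigned smem_addr =
              static_cast<unsigned>(__cvta_generic_to_shared(smem_ptr));
          asm volatile("ldmatrix.sync.aligned.m8n8.x1.shared.b16 {%0}, [%1];\n"
                       : "=r"(frag)
                       : "r"(smem_addr));
          // shfl.sync: pull the fragment held by src_lane into this lane.
          return __shfl_sync(0xffffffffu, frag, src_lane);
        }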

    The most important contribution of this work is achieving layout conformance through register shuffles, avoiding offline pre-processing in global memory or narrower bitwidth shared memory loads. Furthermore, we provide implementations for FastNumericArrayConvertor covering data type conversion from U8-to-F16, S8-to-F16, U8-to-BF16, and S8-to-BF16.

    Performance results

    We measured the performance of eight mixed-input variants of our method (shown below in blue and red; varying the data types of matrix A and B) and two mixed-precision data types (shown in green) on an NVIDIA A100 SXM chip. The performance results are shown in FLOPS (higher is better). Notably, the first eight matrix multiplications require additional operations relative to the last two, because the mixed-precision variants directly target hardware-accelerated Tensor Core operations and do not need data type conversion and layout conformance. Even so, our approach demonstrates mixed-input matrix multiplication performance only slightly below or on par with mixed-precision.

    Mixed-input matrix multiplication performance on an NVIDIA A100 40GB SXM4 chip for a compute-bound matrix problem shape m=3456, n=4096, k=2048.

    Acknowledgements

    We would like to mention several people who have contributed through technical brainstorming and improving the blog post, including Quentin Colombet, Jacques Pienaar, Allie Culp, Calin Cascaval, Ashish Gondimalla, Matt Walsh, Marek Kolodziej, and Aman Bhatia. We would like to thank our NVIDIA partners Rawn Henry, Pradeep Ramani, Vijay Thakkar, Haicheng Wu, Andrew Kerr, Matthew Nicely, and Vartika Singh.
