Ztoog
AI
    A New AI Research from Apple and Equall AI Uncovers Redundancies in Transformer Architecture: How Streamlining the Feed Forward Network Boosts Efficiency and Accuracy


The Transformer architecture, which has recently become ubiquitous, is now the standard approach for Natural Language Processing (NLP) tasks, notably Machine Translation (MT). The architecture displays impressive scaling behavior: adding more model parameters yields better performance across a wide range of NLP tasks, an observation validated by numerous studies. Although Transformers excel at scaling, there is a parallel push to make these models more practical and deployable in the real world, which means tackling latency, memory use, and disk space.

Researchers have been actively investigating techniques to address these issues, including component pruning, parameter sharing, and dimensionality reduction. The widely used Transformer architecture comprises several essential components, two of the most important being the Feed Forward Network (FFN) and Attention.

1. Attention: The attention mechanism lets the model capture relationships and dependencies between words in a sentence, regardless of their positions. It helps the model identify which parts of the input text are most relevant to the word it is currently processing; understanding the context and connections between words in a sentence depends on it.
2. Feed Forward Network (FFN): The FFN is responsible for non-linearly transforming each input token independently. By applying the same mathematical operations to each word's representation, it adds complexity and expressiveness to the model's understanding of each word (a minimal sketch follows this list).
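
As a rough illustration of the second component, here is a minimal PyTorch sketch of a position-wise FFN as it typically appears in a Transformer layer. The dimensions (d_model = 512, d_ff = 2048) and the ReLU activation are common defaults assumed for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class PositionwiseFFN(nn.Module):
    """Position-wise feed-forward network: transforms each token independently."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)  # expand to the hidden dimension
        self.w2 = nn.Linear(d_ff, d_model)  # project back to the model dimension
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); the same weights apply at every position
        return self.w2(self.act(self.w1(x)))
```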

In recent research, a team of researchers focused on the role of the FFN within the Transformer architecture. They found that the FFN exhibits a high degree of redundancy despite being a large component of the model that consumes a significant fraction of its parameters, and that they could cut the model's parameter count without significantly compromising accuracy. They achieved this by removing the FFN from the decoder layers and instead using a single FFN shared across the encoder layers.

1. Decoder layers: In a standard Transformer model, each encoder and decoder layer has its own FFN. The researchers removed the FFN from the decoder layers.
2. Encoder layers: They used a single FFN shared by all encoder layers rather than an individual FFN per encoder layer (see the sketch after this list).
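
Below is a minimal sketch of this modification, reusing the PositionwiseFFN module from the earlier sketch: one FFN instance is shared by every encoder layer, and the decoder layers keep self- and cross-attention but drop the FFN sublayer entirely. The pre-norm layout, dimensions, and module names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SharedFFNEncoder(nn.Module):
    """Encoder stack in which every layer reuses one shared FFN instance."""

    def __init__(self, n_layers: int = 6, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.attn_norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_layers))
        self.ffn_norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_layers))
        self.shared_ffn = PositionwiseFFN(d_model)  # single FFN, reused at every layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for attn, n1, n2 in zip(self.attns, self.attn_norms, self.ffn_norms):
            h = n1(x)
            a, _ = attn(h, h, h)
            x = x + a                       # self-attention sublayer
            x = x + self.shared_ffn(n2(x))  # FFN sublayer: same weights in every layer
        return x


class NoFFNDecoderLayer(nn.Module):
    """Decoder layer that keeps self- and cross-attention but drops the FFN."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        a, _ = self.self_attn(h, h, h)      # causal mask omitted for brevity
        x = x + a
        h = self.norm2(x)
        a, _ = self.cross_attn(h, memory, memory)
        return x + a                        # no FFN sublayer here
```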

The researchers report the following benefits of this approach:

1. Parameter reduction: Deleting and sharing the FFN components drastically reduced the model's parameter count (a rough accounting follows this list).
2. Modest accuracy loss: The model's accuracy dropped only slightly despite the removal of a large fraction of its parameters, showing that the encoder's many FFNs and the decoder's FFNs carry a degree of functional redundancy.
3. Scaling back up: They expanded the hidden dimension of the shared FFN to restore the architecture to its previous size while maintaining or even improving model performance. Compared to the original large-scale Transformer model, this yielded considerable gains in accuracy and in model processing speed, i.e., latency.
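
To make the first point concrete, here is a back-of-the-envelope parameter count under assumed Transformer-base dimensions (d_model = 512, d_ff = 2048, six encoder and six decoder layers); the figures are illustrative, not taken from the paper.

```python
d_model, d_ff = 512, 2048          # assumed Transformer-base dimensions
enc_layers = dec_layers = 6

# Each FFN holds two weight matrices (biases ignored for simplicity).
ffn_params = 2 * d_model * d_ff    # ~2.1M parameters per FFN

baseline = (enc_layers + dec_layers) * ffn_params  # one FFN per layer
shared = 1 * ffn_params            # one FFN for the whole encoder, none in the decoder

print(f"baseline FFN parameters: {baseline:,}")           # 25,165,824
print(f"shared-FFN parameters:   {shared:,}")             # 2,097,152
print(f"parameters saved:        {baseline - shared:,}")  # 23,068,672
```

Under these assumptions, the freed budget is what the "scaling back up" step reinvests by widening the shared FFN's hidden dimension.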

In conclusion, this research shows that the Feed Forward Network in the Transformer design, particularly in the decoder layers, can be streamlined and shared without significantly affecting model performance. This not only reduces the model's computational load but also improves its efficiency and applicability across diverse NLP applications.


Check out the Paper. All credit for this research goes to the researchers on this project. Also, don't forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.


Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. She is a Data Science enthusiast with strong analytical and critical thinking skills, along with a keen interest in acquiring new skills, leading teams, and managing work in an organized manner.


