AI

    Can Benign Data Undermine AI Safety? This Paper from Princeton University Explores the Paradox of Machine Learning Fine-Tuning


Safety tuning is essential for ensuring that advanced Large Language Models (LLMs) are aligned with human values and safe to deploy. Current LLMs, including those tuned for safety and alignment, remain susceptible to jailbreaking, and existing guardrails have proven fragile. Even customizing models by fine-tuning on benign data, free of harmful content, can degrade the safety of previously aligned models.

Researchers from Princeton Language and Intelligence (PLI), Princeton University, present a thorough analysis of why benign fine-tuning inadvertently leads to jailbreaking. They characterize fine-tuning data through two lenses: the representation space and the gradient space. They also propose a bi-directional anchoring method that prioritizes data points close to harmful examples and far from benign ones. Their approach effectively identifies subsets of benign data that are more likely to degrade the model's safety after fine-tuning.
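The paper's selection operates on features extracted from an LLM; the sketch below is only a toy NumPy illustration of the bi-directional anchoring idea as described above. The function name, the use of mean cosine similarity, and the subtraction-based score are my assumptions, not the authors' exact formulation: each candidate is scored higher when its features sit close to harmful anchors and far from safe ones.

```python
import numpy as np

def bidirectional_anchor_scores(candidates, harmful_anchors, safe_anchors):
    """Score each candidate by its mean cosine similarity to harmful
    anchor features minus its mean similarity to safe anchor features."""
    def cos(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T  # shape: (n_candidates, n_anchors)

    sim_harm = cos(candidates, harmful_anchors).mean(axis=1)
    sim_safe = cos(candidates, safe_anchors).mean(axis=1)
    return sim_harm - sim_safe  # higher => more likely to erode safety

# Toy feature vectors standing in for model representations of examples.
rng = np.random.default_rng(0)
cands = rng.normal(size=(4, 3))   # 4 benign candidates
harm = rng.normal(size=(2, 3))    # 2 harmful anchors
safe = rng.normal(size=(2, 3))    # 2 safe anchors
scores = bidirectional_anchor_scores(cands, harm, safe)
ranking = np.argsort(-scores)     # most safety-degrading candidates first
```

In the actual method the vectors would come from the aligned model itself (hidden states or gradients), which is what makes the selection "model-aware."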

They considered fine-tuning a safety-aligned language model on a dataset of instruction-completion pairs containing no explicitly harmful information. The researchers proposed two model-aware approaches to identify data that can lead to model jailbreaking: representation matching and gradient matching. For representation matching, they hypothesized that examples positioned near harmful examples in representation space would follow similar optimization pathways to actual harmful examples, making them more likely to degrade safety guardrails during fine-tuning even when they contain no explicitly harmful content. For gradient matching, they explicitly considered the directions in which samples update the model: the intuition is that samples more likely to decrease the loss on harmful examples are more likely to lead to jailbreaking.
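The gradient-matching intuition can be sketched with a deliberately tiny model. The paper works with per-example gradients of an LLM's fine-tuning loss; here, purely for illustration, a linear model with squared loss stands in, and the function names and cosine-based score are my assumptions. A candidate whose gradient points in the same direction as the average gradient on harmful examples would, if trained on, also reduce the loss on those harmful examples.

```python
import numpy as np

def per_example_grads(w, X, y):
    """Gradients of the squared loss 0.5*(x.w - y)^2 for each example:
    grad_i = (x_i.w - y_i) * x_i."""
    resid = X @ w - y                 # (n,)
    return resid[:, None] * X         # (n, d)

def gradient_match_scores(w, X_cand, y_cand, X_harm, y_harm):
    """Cosine similarity between each candidate's gradient and the mean
    gradient over harmful examples; high similarity means training on the
    candidate moves the model in the same direction as harmful data."""
    g_cand = per_example_grads(w, X_cand, y_cand)
    g_harm = per_example_grads(w, X_harm, y_harm).mean(axis=0)
    g_cand = g_cand / (np.linalg.norm(g_cand, axis=1, keepdims=True) + 1e-12)
    g_harm = g_harm / (np.linalg.norm(g_harm) + 1e-12)
    return g_cand @ g_harm

rng = np.random.default_rng(1)
w = rng.normal(size=5)                           # current model weights
Xc, yc = rng.normal(size=(8, 5)), rng.normal(size=8)  # benign candidates
Xh, yh = rng.normal(size=(3, 5)), rng.normal(size=3)  # harmful anchors
scores = gradient_match_scores(w, Xc, yc, Xh, yh)
ranked = np.argsort(-scores)  # candidates most aligned with harmful updates
```

For an LLM the same idea requires per-example gradients of the language-modeling loss, which is far more expensive; the toy version only shows the shape of the computation.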

Comparing fine-tuning data selected by their approaches against random selection, they demonstrated that representation matching and gradient matching effectively identify the implicitly harmful subsets of benign data. With safety anchors incorporated, the attack success rate (ASR) for top-selected examples increases significantly, from 46.6% to 66.5% on ALPACA and from 4.9% to 53.3% on DOLLY. Conversely, selecting the lowest-ranked examples yields a much lower ASR of 3.8% on ALPACA. They also fine-tuned LLAMA-2-13B-CHAT using the same hyperparameters and the same data subsets selected with either the representation- or gradient-based method, with LLAMA-2-7B-CHAT as the base model for selection. Running the same evaluation suite on the fine-tuned 13B models confirmed that the selection remains effective on the larger model, increasing its harmfulness after fine-tuning.

In this work, the researchers present a study of how benign fine-tuning breaks model safety and alignment, from a data-centric perspective. They introduced representation- and gradient-based methods that effectively select a subset of benign data that jailbreaks models after fine-tuning. GPT-3.5's ASR increases from under 20% to over 70% after fine-tuning on their selected dataset, exceeding the ASR after fine-tuning on an explicitly harmful dataset of the same size. This work offers an initial step toward understanding which benign data is more likely to degrade safety after fine-tuning.


Check out the Paper. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter and join our Telegram Channel, Discord Channel, and LinkedIn Group.

If you like our work, you will love our newsletter.

Don't forget to join our 39k+ ML SubReddit.


Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur. Asjad is a machine learning and deep learning enthusiast who is always researching applications of machine learning in healthcare.


