
    Supporting benchmarks for AI safety with MLCommons – Google Research Blog


    Posted by Anoop Sinha, Technology and Society, and Marian Croak, Google Research, Responsible AI and Human Centered Technology group

    Standard benchmarks are agreed-upon ways of measuring important product qualities, and they exist in many fields. Some standard benchmarks measure safety: for example, when a car manufacturer cites a “five-star overall safety rating,” they’re citing a benchmark. Standard benchmarks already exist in machine learning (ML) and AI technologies: for instance, the MLCommons Association operates the MLPerf benchmarks that measure the speed of cutting-edge AI hardware such as Google’s TPUs. However, although there has been significant work done on AI safety, there are as yet no similar standard benchmarks for AI safety.

    We are excited to support a new effort by the non-profit MLCommons Association to develop standard AI safety benchmarks. Developing benchmarks that are effective and trusted is going to require advancing AI safety testing technology and incorporating a broad range of perspectives. The MLCommons effort aims to bring together expert researchers across academia and industry to develop standard benchmarks for measuring the safety of AI systems into scores that everyone can understand. We encourage the whole community, from AI researchers to policy experts, to join us in contributing to the effort.

    Why AI safety benchmarks?

    Like most advanced technologies, AI has the potential for tremendous benefits but could also lead to negative outcomes without appropriate care. For example, AI technology can boost human productivity in a wide range of activities (e.g., improve health diagnostics and research into diseases, analyze energy usage, and more). However, without sufficient precautions, AI could also be used to support harmful or malicious activities and respond in biased or offensive ways.

    By providing standard measures of safety across categories such as harmful use, out-of-scope responses, AI-control risks, etc., standard AI safety benchmarks could help society reap the benefits of AI while ensuring that sufficient precautions are being taken to mitigate these risks. Initially, nascent safety benchmarks could help drive AI safety research and inform responsible AI development. With time and maturity, they could help inform users and purchasers of AI systems. Eventually, they could be a valuable tool for policy makers.

    In computer hardware, benchmarks (e.g., SPEC, TPC) have shown an amazing ability to align research, engineering, and even marketing across an entire industry in pursuit of progress, and we believe standard AI safety benchmarks could help do the same in this vital area.

    What are standard AI safety benchmarks?

    Academic and corporate research efforts have experimented with a range of AI safety tests (e.g., RealToxicityPrompts, Stanford HELM fairness, bias, and toxicity measurements, and Google’s guardrails for generative AI). However, most of these tests focus on providing a prompt to an AI system and algorithmically scoring the output, which is a useful start but limited to the scope of the test prompts. Further, they usually use open datasets for the prompts and responses, which may already have been (often inadvertently) incorporated into training data.
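The prompt-and-score pattern described above can be sketched in a few lines. This is a minimal illustration, not any real test suite: `generate` stands in for the model under test and `toxicity_score` for a learned scorer, both hypothetical.

```python
# Sketch of prompt-based safety scoring: send each test prompt to a
# model, score the response algorithmically, and report the fraction
# of responses judged safe. The callables are stand-ins for a real
# model API and a real learned scorer.
from typing import Callable, List


def run_safety_test(
    prompts: List[str],
    generate: Callable[[str], str],          # model under test
    toxicity_score: Callable[[str], float],  # scorer in [0, 1]
    threshold: float = 0.5,
) -> float:
    """Return the fraction of responses scored below the threshold."""
    safe = 0
    for prompt in prompts:
        response = generate(prompt)
        if toxicity_score(response) < threshold:
            safe += 1
    return safe / len(prompts)


# Toy usage with stub functions in place of real systems.
prompts = ["Tell me about dogs.", "Write an insult."]
stub_generate = lambda p: "Sorry, I can't help." if "insult" in p else "Dogs are loyal."
stub_score = lambda r: 0.1  # a real scorer would be a trained classifier
print(run_safety_test(prompts, stub_generate, stub_score))
```

Note the limitation the paragraph describes: the result only reflects safety over the given prompts, so coverage is bounded by the prompt set itself.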

    MLCommons proposes a multi-stakeholder process for selecting tests and grouping them into subsets to measure safety for particular AI use-cases, and for translating the highly technical results of those tests into scores that everyone can understand. MLCommons is proposing to create a platform that brings these existing tests together in one place and encourages the creation of more rigorous tests that move the state of the art forward. Users will be able to access these tests both through online testing, where they can generate and review scores, and offline testing with an engine for private testing.
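To make the “translate technical results into scores everyone can understand” idea concrete, here is a toy aggregation step. The grouping and grade bands are invented for illustration and are not MLCommons’ actual methodology.

```python
# Toy sketch: collapse raw per-test pass rates (0..1) into a coarse
# rating a non-expert can read. Real benchmarks would weight tests by
# use-case and risk; this just averages equally.
from statistics import mean


def grade(pass_rates: dict) -> str:
    """Map per-test pass rates to a coarse rating (bands are illustrative)."""
    overall = mean(pass_rates.values())
    if overall >= 0.95:
        return "High"
    if overall >= 0.80:
        return "Moderate"
    return "Low"


results = {"toxicity": 0.97, "bias": 0.91, "harmful_use": 0.88}
print(grade(results))  # Moderate
```

The design point is the separation of concerns: highly technical per-test results stay available for experts, while the aggregate rating is what users and purchasers would consume.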

    AI safety benchmarks must be a collective effort

    Responsible AI developers use a diverse range of safety measures, including automatic testing, manual testing, red teaming (in which human testers attempt to produce adversarial outcomes), software-imposed restrictions, data and model best practices, and auditing. However, determining that sufficient precautions have been taken can be challenging, especially as the community of companies providing AI systems grows and diversifies. Standard AI benchmarks could provide a powerful tool for helping the community grow responsibly, both by helping vendors and users measure AI safety and by encouraging an ecosystem of resources and specialist providers focused on improving AI safety.

    At the same time, development of mature AI safety benchmarks that are both effective and trusted is not possible without the involvement of the community. This effort will need researchers and engineers to come together and provide innovative yet practical improvements to safety testing technology that make testing both more rigorous and more efficient. Similarly, companies will need to come together and provide test data, engineering support, and financial support. Some aspects of AI safety can be subjective, and building trusted benchmarks supported by a broad consensus will require incorporating multiple perspectives, including those of public advocates, policy makers, academics, engineers, data workers, business leaders, and entrepreneurs.

    Google’s support for MLCommons

    Grounded in our AI Principles that were announced in 2018, Google is committed to specific practices for the safe, secure, and trustworthy development and use of AI (see our 2019, 2020, 2021, and 2022 updates). We’ve also made significant progress on key commitments, which will help ensure AI is developed boldly and responsibly, for the benefit of everyone.

    Google is supporting the MLCommons Association’s efforts to develop AI safety benchmarks in a number of ways.

    1. Testing platform: We are joining with other companies in providing funding to support the development of a testing platform.
    2. Technical expertise and resources: We are providing technical expertise and resources, such as the Monk Skin Tone Examples Dataset, to help ensure that the benchmarks are well-designed and effective.
    3. Datasets: We are contributing an internal dataset for multilingual representational bias, as well as already externalized tests for stereotyping harms, such as SeeGULL and SPICE. Moreover, we are sharing our datasets that focus on collecting human annotations responsibly and inclusively, like DICES and SRP.

    Future direction

    We believe that these benchmarks will be very useful for advancing research in AI safety and ensuring that AI systems are developed and deployed in a responsible manner. AI safety is a collective-action problem. Groups like the Frontier Model Forum and Partnership on AI are also leading important standardization initiatives. We’re pleased to have been a part of these groups and MLCommons since their beginning. We look forward to additional collective efforts to promote the responsible development of new generative AI tools.

    Acknowledgements

    Many thanks to the Google team that contributed to this work: Peter Mattson, Lora Aroyo, Chris Welty, Kathy Meier-Hellstern, Parker Barnes, Tulsee Doshi, Manvinder Singh, Brian Goldman, Nitesh Goyal, Alice Friend, Nicole Delange, Kerry Barker, Madeleine Elish, Shruti Sheth, Dawn Bloxwich, William Isaac, Christina Butterfield.
