    A Roadmap for Regulating AI Programs


Globally, policymakers are debating governance approaches to regulate automated systems, especially in response to growing anxiety about unethical uses of generative AI technologies such as ChatGPT and DALL-E. Legislators and regulators are understandably concerned with balancing the need to limit the most serious consequences of AI systems without stifling innovation with onerous government regulations. Fortunately, there is no need to start from scratch and reinvent the wheel.

As explained in the IEEE-USA article "How Should We Regulate AI?," the IEEE 1012 Standard for System, Software, and Hardware Verification and Validation already offers a road map for focusing regulation and other risk management activities.

Introduced in 1988, IEEE 1012 has a long history of practical use in critical environments. The standard applies to all software and hardware systems, including those based on emerging generative AI technologies. IEEE 1012 is used to verify and validate many critical systems, including medical tools, the U.S. Department of Defense's weapons systems, and NASA's manned space vehicles.

In discussions of AI risk management and regulation, many approaches are being considered. Some are based on specific technologies or application areas, while others consider the size of the company or its user base. There are approaches that either include low-risk systems in the same category as high-risk systems or leave gaps where regulations would not apply. Thus, it is understandable why a growing number of proposals for government regulation of AI systems are creating confusion.

Determining risk levels

IEEE 1012 focuses risk management resources on the systems with the most risk, regardless of other factors. It does so by determining risk as a function of both the severity of consequences and their likelihood of occurring, and then it assigns the most intense levels of risk management to the highest-risk systems. The standard can distinguish, for example, between a facial recognition system used to unlock a cell phone (where the worst consequence would be relatively mild) and a facial recognition system used to identify suspects in a criminal justice application (where the worst consequence could be severe).

IEEE 1012 presents a specific set of activities for the verification and validation (V&V) of any system, software, or hardware. The standard maps four levels of likelihood (reasonable, probable, occasional, infrequent) and four levels of consequence (catastrophic, critical, marginal, negligible) onto a set of four integrity levels (see Table 1). The intensity and depth of the activities vary based on where the system falls along the range of integrity levels (from 1 to 4). Systems at integrity level 1 have the lowest risks and the lightest V&V. Systems at integrity level 4 could have catastrophic consequences and warrant substantial risk management throughout the life of the system. Policymakers can follow a similar process to target regulatory requirements to AI applications with the most risk.

    Table 1: IEEE 1012 Standard’s Map of Integrity Levels Onto a Combination of Consequence and Likelihood Levels

Likelihood of occurrence of an operating state that contributes to the error (decreasing order of likelihood)

    Error consequence    Reasonable    Probable    Occasional    Infrequent
    Catastrophic         4             4           4 or 3        3
    Critical             4             4 or 3      3             2 or 1
    Marginal             3             3 or 2      2 or 1        1
    Negligible           2             2 or 1      1             1

As one might expect, the highest integrity level, 4, appears in the upper-left corner of the table, corresponding to high consequence and high likelihood. Similarly, the lowest integrity level, 1, appears in the lower-right corner. IEEE 1012 includes some overlaps between the integrity levels to allow for individual interpretations of acceptable risk, depending on the application. For example, the cell corresponding to occasional likelihood of catastrophic consequences can map onto integrity level 3 or 4.
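Table 1 can be read as a simple lookup from a consequence/likelihood pair to one or two candidate integrity levels. The minimal Python sketch below is illustrative only and is not part of IEEE 1012; the function name assign_integrity_level and the choice to return overlapping cells such as "4 or 3" as a tuple of both values are assumptions made here for clarity.

    # Illustrative encoding of Table 1 (not part of IEEE 1012 itself).
    # Overlapping cells such as "4 or 3" return both values, leaving the
    # final choice to the adopter's interpretation of acceptable risk.
    INTEGRITY_TABLE = {
        "catastrophic": {"reasonable": (4,), "probable": (4,),    "occasional": (4, 3), "infrequent": (3,)},
        "critical":     {"reasonable": (4,), "probable": (4, 3),  "occasional": (3,),   "infrequent": (2, 1)},
        "marginal":     {"reasonable": (3,), "probable": (3, 2),  "occasional": (2, 1), "infrequent": (1,)},
        "negligible":   {"reasonable": (2,), "probable": (2, 1),  "occasional": (1,),   "infrequent": (1,)},
    }

    def assign_integrity_level(consequence: str, likelihood: str) -> tuple[int, ...]:
        """Return the candidate integrity level(s) for a consequence/likelihood pair."""
        return INTEGRITY_TABLE[consequence.lower()][likelihood.lower()]

    # Example: a system judged "critical" in consequence with "occasional"
    # likelihood maps to integrity level 3.
    print(assign_integrity_level("critical", "occasional"))  # (3,)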

Policymakers can customize any aspect of the matrix shown in Table 1. Most significantly, they could change the required actions assigned to each risk tier. IEEE 1012 focuses specifically on V&V activities.

Policymakers can and should consider including some of those activities for risk management purposes, but they also have a wider range of possible intervention options available to them, including education; requirements for disclosure, documentation, and oversight; prohibitions; and penalties.


When considering the activities to assign to each integrity level, one commonsense place to begin is by assigning activities to the highest integrity level, where there is the most risk, and then proceeding to reduce the intensity of those activities as appropriate for lower levels. Policymakers should ask themselves whether voluntary compliance with risk management best practices such as the NIST AI Risk Management Framework is sufficient for the highest-risk systems. If not, they could specify a tier of required action for the highest-risk systems, as identified by the consequence levels and likelihood levels discussed earlier. They can specify such requirements for the highest tier of systems without concern that they will inadvertently introduce barriers for all AI systems, even low-risk internal systems.
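To make the tiering idea concrete, a regulator could attach a heavier, cumulative set of obligations to each successively higher integrity level. The sketch below is hypothetical: the specific obligations listed are illustrative assumptions, not requirements drawn from IEEE 1012 or any existing regulation.

    # Hypothetical tiering of regulatory obligations by integrity level.
    # The obligations listed here are illustrative assumptions only.
    REQUIREMENTS_BY_INTEGRITY_LEVEL = {
        1: ["voluntary adherence to a risk framework such as the NIST AI RMF"],
        2: ["documented risk assessment", "basic disclosure to users"],
        3: ["mandatory V&V activities", "public documentation", "notification of the regulator"],
        4: ["independent third-party V&V review", "pre-deployment approval", "ongoing oversight"],
    }

    def obligations(level: int) -> list[str]:
        """Cumulative obligations: a system at level N also meets all lower tiers."""
        return [item for lvl in range(1, level + 1) for item in REQUIREMENTS_BY_INTEGRITY_LEVEL[lvl]]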

That is an effective way to balance concern for public welfare and management of severe risks with the desire not to stifle innovation.

A time-tested process

IEEE 1012 recognizes that managing risk effectively means requiring action throughout the life cycle of the system, not merely focusing on the final operation of a deployed system. Similarly, policymakers need not be restricted to placing requirements on the final deployment of a system. They can require actions throughout the entire process of considering, developing, and deploying a system.

IEEE 1012 also recognizes that independent review is critical to the reliability and integrity of results and to the management of risk. When the developers of a system are the same people who evaluate its integrity and safety, they have difficulty thinking out of the box about problems that remain. They also have a vested interest in a positive outcome. A proven way to improve outcomes is to require independent review of risk management activities.

IEEE 1012 further tackles the question of what truly constitutes independent review, defining three crucial aspects: technical independence, managerial independence, and financial independence.

IEEE 1012 is a time-tested, broadly accepted, and universally applicable process for ensuring that the right product is correctly built for its intended use. The standard offers both wise guidance and practical strategies for policymakers seeking to navigate confusing debates about how to regulate new AI systems. IEEE 1012 can be adopted as is for V&V of software systems, including new systems based on emerging generative AI technologies. The standard can also serve as a high-level framework, allowing policymakers to modify the details of consequence levels, likelihood levels, integrity levels, and requirements to better suit their own regulatory intent.
