    Technology

    Pause AI? – O’Reilly

    It’s hard to ignore the discussion around the Open Letter arguing for a pause in the development of advanced AI systems. Are they dangerous? Will they destroy humanity? Will they condemn all but a few of us to boring, impoverished lives? If these are indeed the risks we face, pausing AI development for six months is certainly a weak and ineffective preventive.

    It’s easier to ignore the voices arguing for the responsible use of AI. Using AI responsibly requires AI to be transparent, fair, and, where possible, explainable. Using AI responsibly means auditing the outputs of AI systems to ensure that they’re fair; it means documenting the behavior of AI models and training data sets so that users know how the data was collected and what biases are inherent in it. It means monitoring systems after they’re deployed, updating and tuning them as needed, because any model will eventually grow “stale” and start performing badly. It means designing systems that augment and liberate human capabilities rather than replacing them. It means understanding that humans are accountable for the results of AI systems; “that’s what the computer did” doesn’t cut it.
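    To make “auditing the outputs” slightly more concrete, here is a minimal sketch of a post-deployment check that compares positive-prediction rates across groups. The file name, column names, and the four-fifths threshold are assumptions for illustration, not anything prescribed in this piece.

```python
# A minimal sketch of an output audit, assuming a hypothetical predictions log
# with one row per decision, a group label, and a binary outcome column.
# Column names and the 80% disparity heuristic are illustrative assumptions.
import pandas as pd

def audit_positive_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.DataFrame:
    """Compare the rate of positive predictions across groups."""
    rates = df.groupby(group_col)[pred_col].mean().rename("positive_rate")
    report = rates.to_frame()
    # Each group's rate relative to the best-served group (disparate-impact ratio).
    report["ratio_to_max"] = report["positive_rate"] / report["positive_rate"].max()
    return report

if __name__ == "__main__":
    preds = pd.read_csv("model_predictions.csv")  # hypothetical audit log
    report = audit_positive_rates(preds, group_col="group", pred_col="approved")
    print(report)
    flagged = report[report["ratio_to_max"] < 0.8]  # common "four-fifths" heuristic
    if not flagged.empty:
        print("Groups below the 80% parity threshold:", list(flagged.index))
```

    A check like this is only one slice of an audit; the point is that it runs continuously against the deployed system, not once at release.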




    The most common way to look at this gap is to frame it around the difference between current and long-term problems. That’s certainly correct; the “pause” letter comes from the “Future of Life Institute,” which is much more concerned about establishing colonies on Mars or turning the planet into a pile of paper clips than it is with redlining in real estate or setting bail in criminal cases.

    But there’s a more important way to look at the problem, and that’s to realize that we already know how to solve most of those long-term issues. Those solutions all center on paying attention to the short-term problems of justice and fairness. AI systems that are designed to incorporate human values aren’t going to doom humans to unfulfilling lives in favor of a machine. They aren’t going to marginalize human thought or initiative. AI systems that incorporate human values aren’t going to decide to turn the world into paper clips; frankly, I can’t imagine any “intelligent” system deciding that was a good idea. They might refuse to design weapons for biological warfare. And, should we ever be able to get humans to Mars, they will help us build colonies that are fair and just, not colonies dominated by a wealthy kleptocracy, like the ones described in so many of Ursula K. Le Guin’s novels.

    Another part of the solution is to take accountability and redress seriously. When a model makes a mistake, there has to be some form of human accountability. When someone is jailed on the basis of incorrect face recognition, there needs to be a rapid process for detecting the error, releasing the victim, correcting their criminal record, and applying appropriate penalties to those responsible for the model. These penalties should be large enough that they can’t be written off as the cost of doing business. How is that different from a human who makes an incorrect ID? A human isn’t sold to a police department by a for-profit company. “The computer said so” isn’t an adequate response; and if recognizing that means it isn’t economical to develop certain kinds of applications, then perhaps those applications shouldn’t be developed. I’m horrified by articles reporting that police use face recognition systems with false positive rates over 90%; and though those reports are five years old, I take little comfort in the possibility that the state of the art has improved. I take even less comfort in the propensity of the people responsible for these systems to defend their use, even in the face of astounding error rates.

    Avoiding bias, prejudice, and hate speech is another important goal that can be addressed now. But this goal won’t be achieved by somehow purging training data of bias; the result would be systems that make decisions on data that doesn’t reflect any reality. We need to acknowledge that both our reality and our history are flawed and biased. It will be far more valuable to use AI to detect and correct bias, to train it to make fair decisions in the face of biased data, and to audit its results. Such a system would need to be transparent, so that humans can audit and evaluate its results. Its training data and its design must both be well documented and available to the public. Datasheets for Datasets and Model Cards for Model Reporting, by Timnit Gebru, Margaret Mitchell, and others, are a starting point, but only a starting point. We will have to go much farther to accurately document a model’s behavior.
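    As a rough illustration of what that documentation can look like, here is a minimal sketch of a structured model-card record, loosely in the spirit of Model Cards for Model Reporting. The field names and example values are assumptions for this sketch, not a schema defined by that paper or by this article.

```python
# A minimal, illustrative model-card record. Fields and example values are
# hypothetical; real cards would be far more detailed and disaggregated.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data: str                    # provenance and collection method
    known_biases: list[str]               # documented gaps and skews in the data
    evaluation_metrics: dict[str, float]  # reported per group where possible
    last_reviewed: str

card = ModelCard(
    model_name="loan-screening-v3",       # hypothetical example
    intended_use="Pre-screening of loan applications for human review",
    out_of_scope_uses=["automated denial without human review"],
    training_data="Branch applications 2015-2022, collected from internal records",
    known_biases=["urban applicants overrepresented", "age field often missing"],
    evaluation_metrics={"accuracy": 0.91, "fpr_group_a": 0.07, "fpr_group_b": 0.12},
    last_reviewed="2023-04-01",
)

# Publishing the card alongside the model keeps the documentation auditable.
print(json.dumps(asdict(card), indent=2))
```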

    Building unbiased systems in the face of prejudiced and biased data will only be possible if women and minorities of many kinds, who are so often excluded from software development projects, participate. But building unbiased systems is only a start. People also need to work on countermeasures against AI systems that are designed to attack human rights, and on imagining new kinds of technology and infrastructure to support human well-being. Both of these projects, countermeasures and new infrastructure, will almost certainly involve designing and building new kinds of AI systems.

    I’m suspicious of a rush to regulation, regardless of which side argues for it. I don’t oppose regulation in principle. But you have to be very careful what you wish for. Looking at the legislative bodies in the US, I see very little possibility that regulation would result in anything positive. At best, we’d get meaningless grandstanding. The worst is all too likely: we’d get laws and regulations that institute performative cruelty against women, racial and ethnic minorities, and LGBTQ people. Do we want to see AI systems that aren’t allowed to discuss slavery because it offends White people? That kind of regulation is already impacting many school districts, and it’s naive to think that it won’t influence AI.

    I’m also suspicious of the motives behind the “Pause” letter. Is it to give certain bad actors time to build an “anti-woke” AI that’s a playground for misogyny and other forms of hatred? Is it an attempt to whip up hysteria that diverts attention from basic issues of justice and fairness? Is it, as danah boyd argues, that tech leaders are afraid they will become the new underclass, subject to the AI overlords they created?

    I can’t answer those questions, though I fear the consequences of an “AI Pause” would be worse than the disease. As danah writes, “obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.” Or, as Brian Behlendorf writes about AI leaders cautioning us to fear AI1:

    Being Cassandra is fun and can lead to clicks …. But if they actually feel regret? Among other things they can do, they can make a donation to, help promote, volunteer for, or write code for:

    A “Pause” won’t do anything except help bad actors catch up or get ahead. There is only one way to build an AI that we can live with in some unspecified long-term future, and that’s to build an AI that is fair and just today: an AI that deals with real problems and damages that are incurred by real people, not imagined ones.


    Footnotes

    1. Private email
