    Pause AI? – O’Reilly


    It’s hard to ignore the discussion around the Open Letter arguing for a pause in the development of advanced AI systems. Are they dangerous? Will they destroy humanity? Will they condemn all but a few of us to boring, impoverished lives? If these are indeed the dangers we face, pausing AI development for six months is certainly a weak and ineffective preventive.

    It’s easier to ignore the voices arguing for the responsible use of AI. Using AI responsibly requires AI to be transparent, fair, and where possible, explainable. Using AI means auditing the outputs of AI systems to ensure that they’re fair; it means documenting the behaviors of AI models and training data sets so that users know how the data was collected and what biases are inherent in that data. It means monitoring systems after they’re deployed, updating and tuning them as needed because any model will eventually grow “stale” and start performing badly. It means designing systems that augment and liberate human capabilities, rather than replacing them. It means understanding that humans are accountable for the results of AI systems; “that’s what the computer did” doesn’t cut it.
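As a minimal sketch of what auditing a system’s outputs for fairness could look like in practice, the snippet below compares approval rates across demographic groups, one common fairness check (demographic parity). The group labels and decisions are invented for illustration; a real audit would use the deployed model’s logged decisions.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Compute the approval rate per group and the largest gap between groups.

    decisions: iterable of (group_label, approved: bool) pairs.
    Returns (rates_by_group, max_gap).
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical audit log: (group, model decision)
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates, gap = demographic_parity_gap(audit)
# A large gap flags the model for human review before it stays in production.
```

A check like this is only one slice of an audit; the point is that “auditing outputs” is a concrete, automatable activity, not just a slogan.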




    The most common way to look at this gap is to frame it around the difference between current and long-term problems. That’s certainly correct; the “pause” letter comes from the “Future of Life Institute,” which is much more concerned about establishing colonies on Mars or turning the planet into a pile of paper clips than it is with redlining in real estate or setting bail in criminal cases.

    But there’s a more important way to look at the problem, and that’s to realize that we already know how to solve most of those long-term problems. Those solutions all center on paying attention to the short-term issues of justice and fairness. AI systems that are designed to incorporate human values aren’t going to doom humans to unfulfilling lives in favor of a machine. They aren’t going to marginalize human thought or initiative. AI systems that incorporate human values aren’t going to decide to turn the world into paper clips; frankly, I can’t imagine any “intelligent” system determining that was a good idea. They might refuse to design weapons for biological warfare. And, should we ever be able to get humans to Mars, they will help us build colonies that are fair and just, not colonies dominated by a wealthy kleptocracy, like the ones described in so many of Ursula Le Guin’s novels.

    Another part of the solution is to take accountability and redress seriously. When a model makes a mistake, there has to be some form of human accountability. When someone is jailed on the basis of incorrect face recognition, there needs to be a rapid process for detecting the error, releasing the victim, correcting their criminal record, and applying appropriate penalties to those responsible for the model. Those penalties should be large enough that they can’t be written off as the cost of doing business. How is that different from a human who makes an incorrect ID? A human isn’t sold to a police department by a for-profit company. “The computer said so” isn’t an adequate response; and if recognizing that means it isn’t economical to develop some kinds of applications, then perhaps those applications shouldn’t be developed. I’m horrified by articles reporting that police use face detection systems with false positive rates over 90%; and although those reports are five years old, I take little comfort in the possibility that the state of the art has improved. I take even less comfort in the propensity of the humans responsible for those systems to defend their use, even in the face of astounding error rates.
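To make the error-rate claim concrete: figures like “over 90%” in this context describe the share of the system’s reported matches that turn out to be wrong. A tiny illustrative calculation, with invented counts:

```python
def false_match_share(total_alerts, correct_alerts):
    """Fraction of the system's match alerts that were wrong:
    (wrong alerts) / (all alerts)."""
    return (total_alerts - correct_alerts) / total_alerts

# Invented illustration: 100 match alerts, only 8 of them correct.
share = false_match_share(total_alerts=100, correct_alerts=8)  # 0.92
```

At that rate, the overwhelming majority of people flagged by the system are innocent, which is why a rapid correction-and-redress process matters.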

    Avoiding bias, prejudice, and hate speech is another important goal that can be addressed now. But this goal won’t be achieved by somehow purging training data of bias; the result would be systems that make decisions on data that doesn’t reflect any reality. We need to acknowledge that both our reality and our history are flawed and biased. It will be far more valuable to use AI to detect and correct bias, to train it to make fair decisions in the face of biased data, and to audit its results. Such a system would need to be transparent, so that humans can audit and evaluate its results. Its training data and its design must both be well documented and available to the public. Datasheets for Datasets and Model Cards for Model Reporting, by Timnit Gebru, Margaret Mitchell, and others, are a starting point, but only a starting point. We will have to go much farther to accurately document a model’s behavior.
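Model Cards propose publishing a structured record of what a model is for, what it was trained on, and how it behaves across subgroups. A minimal sketch of such a record follows; the fields shown are a small, assumed subset of the proposal, and the example values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """An illustrative, minimal model card: a public record of a model's
    purpose, data provenance, and disaggregated performance."""
    model_name: str
    intended_use: str
    training_data: str  # provenance and known biases of the data
    out_of_scope_uses: list = field(default_factory=list)
    # Metric scores broken out per subgroup, so auditors can see
    # where the model underperforms.
    subgroup_metrics: dict = field(default_factory=dict)

card = ModelCard(
    model_name="loan-screener-v2",
    intended_use="Pre-screening consumer loan applications for human review",
    training_data="2015-2020 application records; under-represents rural applicants",
    out_of_scope_uses=["final approval decisions without human review"],
    subgroup_metrics={"urban": {"accuracy": 0.91}, "rural": {"accuracy": 0.78}},
)
```

Even this toy version makes the documentation auditable: the subgroup gap and the data’s known biases are stated in public, machine-readable form rather than buried in a lab notebook.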

    Building unbiased systems in the face of prejudiced and biased data will only be possible if women and minorities of many kinds, who are so often excluded from software development projects, participate. But building unbiased systems is only a start. People also need to work on countermeasures against AI systems that are designed to attack human rights, and on imagining new kinds of technology and infrastructure to support human well-being. Both of these projects, countermeasures and new infrastructure, will almost certainly involve designing and building new kinds of AI systems.

    I’m suspicious of a rush to regulation, regardless of which side argues for it. I don’t oppose regulation in principle. But you have to be very careful what you wish for. Looking at the legislative bodies in the US, I see very little possibility that regulation would result in anything positive. At best, we’d get meaningless grandstanding. The worst is all too likely: we’d get laws and regulations that institute performative cruelty against women, racial and ethnic minorities, and LGBTQ people. Do we want to see AI systems that aren’t allowed to discuss slavery because it offends White people? That kind of regulation is already impacting many school districts, and it’s naive to think that it won’t influence AI.

    I’m also suspicious of the motives behind the “Pause” letter. Is it to give certain bad actors time to build an “anti-woke” AI that’s a playground for misogyny and other forms of hatred? Is it an attempt to whip up hysteria that diverts attention from basic issues of justice and fairness? Is it, as danah boyd argues, that tech leaders are afraid they will become the new underclass, subject to the AI overlords they created?

    I can’t answer those questions, though I fear the consequences of an “AI Pause” would be worse than the disease. As danah writes, “obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.” Or, as Brian Behlendorf writes about AI leaders cautioning us to fear AI1:

    Being Cassandra is fun and can lead to clicks …. But if they genuinely feel regret? Among other things they can do, they can make a donation to, help promote, volunteer for, or write code for:

    A “Pause” won’t do anything except help bad actors to catch up or get ahead. There is only one way to build an AI that we can live with in some unspecified long-term future, and that’s to build an AI that is fair and just today: an AI that deals with real problems and damages that are incurred by real people, not imagined ones.


    Footnotes

    1. Private email

    © 2025 Ztoog.
