    How superintelligent AI could rob us of agency, free will, and meaning


Almost 2,000 years before ChatGPT was invented, two men had a debate that can teach us a lot about AI’s future. Their names were Eliezer and Yoshua.

No, I’m not talking about Eliezer Yudkowsky, who recently published a bestselling book claiming that AI is going to kill everyone, or Yoshua Bengio, the “godfather of AI” and the most cited living scientist in the world, though I did discuss the 2,000-year-old debate with both of them. I’m talking about Rabbi Eliezer and Rabbi Yoshua, two ancient sages from the first century.

According to a famous story in the Talmud, the central text of Jewish law, Rabbi Eliezer was adamant that he was right about a certain legal question, but the other sages disagreed. So Rabbi Eliezer performed a series of miraculous feats meant to prove that God was on his side. He made a carob tree uproot itself and scurry away. He made a stream run backward. He made the walls of the study hall begin to collapse. Finally, he declared: If I’m right, a voice from the heavens will prove it!

What do you know? A heavenly voice came booming down to announce that Rabbi Eliezer was right. Still, the sages were unimpressed. Rabbi Yoshua insisted: “The Torah is not in heaven!” In other words, when it comes to the law, it doesn’t matter what any divine voice says, only what humans decide. Since a majority of the sages disagreed with Rabbi Eliezer, he was overruled.

• Experts talk about aligning AI with human values. But “solving alignment” doesn’t mean much if it yields AI that leads to the loss of human agency.
• True alignment would require grappling not just with technical problems, but with a major philosophical problem: Having the agency to make choices is a big part of how we create meaning, so building an AI that decides everything for us could rob us of the meaning of life.
• Philosopher of religion John Hick spoke about “epistemic distance,” the idea that God deliberately stays out of human affairs to a degree, so that we can be free to develop our own agency. Perhaps the same should hold true for an AI.

Fast-forward 2,000 years and we’re having essentially the same debate; just substitute “divine voice” with “AI god.”

Today, the AI industry’s biggest players aren’t just trying to build a helpful chatbot, but a “superintelligence” that’s vastly smarter than humans and unimaginably powerful. This shifts the goalposts from building a useful tool to building a god. When OpenAI CEO Sam Altman says he’s making “magic intelligence in the sky,” he doesn’t just mean ChatGPT as we know it today; he envisions “nearly-limitless intelligence” that can achieve “the discovery of all of physics” and then some. Some AI researchers hypothesize that superintelligence would end up making major decisions for humans, either acting autonomously or through humans who feel compelled to defer to its superior judgment.

As we work toward superintelligence, AI companies acknowledge, we’ll need to solve the “alignment problem”: get AI systems to reliably do what humans really want them to do, or align them with human values. But their commitment to solving that problem obscures a bigger issue.

Yes, we want companies to stop AIs from acting in harmful, biased, or deceitful ways. But treating alignment as a technical problem isn’t enough, especially as the industry’s ambition shifts to building a god. That ambition requires us to ask: Even if we can somehow build an all-knowing, supremely powerful machine, and even if we can somehow align it with moral values so that it’s also deeply good… should we? Or is it just a bad idea to build an AI god, no matter how perfectly aligned it is on the technical level, because it would squeeze out space for human choice and thus render human life meaningless?

I asked Eliezer Yudkowsky and Yoshua Bengio whether they agree with their ancient namesakes. But before I tell you whether they think an AI god is desirable, we need to talk about a more basic question: Is it even possible?

    Can you align superintelligent AI with human values?

God is supposed to be good; everybody knows that. But how do we make an AI good? That, nobody knows.

Early attempts at solving the alignment problem were painfully simplistic. Companies like OpenAI and Anthropic tried to make their chatbots helpful and harmless, but didn’t flesh out exactly what that’s supposed to look like. Is it “helpful” or “harmful” for a chatbot to, say, engage in endless hours of romantic roleplay with a user? To facilitate cheating on schoolwork? To offer free, but dubious, therapy and ethical advice?

Most AI engineers are not trained in moral philosophy, and they didn’t understand how little they understood it. So they gave their chatbots only the most superficial sense of ethics, and soon, problems abounded, from bias and discrimination to tragic suicides.

But the truth is, there’s no one clear understanding of the good, even among experts in ethics. Morality is notoriously contested: Philosophers have come up with many different moral theories, and despite arguing over them for millennia, there’s still no consensus about which (if any) is the “right” one.

Even if all of humanity magically agreed on the same moral theory, we’d still be stuck with a problem, because our view of what’s moral shifts over time, and sometimes it’s actually good to break the rules. For example, we generally think it’s right to follow society’s laws, but when Rosa Parks illegally refused to give up her bus seat to a white passenger in 1955, it helped galvanize the civil rights movement, and we consider her action admirable. Context matters.

Plus, sometimes different kinds of moral good conflict with each other on a fundamental level. Think of a woman who faces a trade-off: She wants to become a nun but also wants to become a mother. What’s the better option? We can’t say, because the options are incommensurable. There’s no single yardstick by which to measure them, so we can’t compare them to find out which is greater.


Thankfully, some AI researchers are realizing that they have to give AIs a more complex, pluralistic picture of ethics, one that acknowledges that humans have many values and that our values are often in tension with one another.

Some of the most sophisticated work on this is coming out of the Meaning Alignment Institute, which researches how to align AI with what people value. When I asked co-lead Joe Edelman if he thinks aligning superintelligent AI with human values is possible, he didn’t hesitate.

“Yes,” he answered. But he added that an important part of that is training the AI to say “I don’t know” in certain circumstances.

“If you’re allowed to train the AI to do that, things get much easier, because in contentious situations, or situations of real moral confusion, you don’t have to have an answer,” Edelman said.

He cited the contemporary philosopher Ruth Chang, who has written about “hard choices”: choices that are genuinely hard because no best option exists, like the case of the woman who wants to become a nun but also wants to become a mother. When you face competing, incomparable goods like these, you can’t “discover” which one is objectively best; you just have to choose which one you want to put your human agency behind.

“If you get [the AI] to know which are the hard choices, then you’ve taught it something about morality,” Edelman said. “So, that counts as alignment, right?”

Well, to a degree. It’s definitely better than an AI that doesn’t understand there are choices where no best option exists. But so many of the most important moral choices involve values that are on a par. If we create a carve-out for those choices, are we really solving alignment in any meaningful sense? Or are we just creating an AI that will systematically fall silent on all the important stuff?

“Probably we are creating an AI that will systematically fall silent,” Chang said when I put the question to her directly. “It’ll say ‘Red flag, red flag, it’s a hard choice — humans, you’ve got to have input!’ But that’s what we want.” The other possibility, empowering an AI to do a lot of our most important decision-making for us, strikes her as “a terrible idea.”

Contrast that with Yudkowsky. He’s the arch-doomer of the AI world, and he has probably never been accused of being too optimistic. Yet he’s actually surprisingly optimistic about alignment: He believes that aligning a superintelligence is possible in principle. He thinks it’s an engineering problem we currently have no idea how to solve, but he still thinks that, at bottom, it’s just an engineering problem. And once we solve it, we should put the superintelligence to broad use.

In his book, co-written with Nate Soares, he argues that we should be “augmenting humans to make them smarter” so they can figure out a better paradigm for building AI, one that would allow for true alignment. I asked him what he thinks would happen if we got enough super-smart and super-good people in a room and tasked them with building an aligned superintelligence.

“Probably we all live happily ever after,” Yudkowsky said.

In his ideal world, we would ask the people with augmented intelligence not to program their own values into an AI, but to build what Yudkowsky calls “coherent extrapolated volition”: an AI that can peer into every living human’s mind and extrapolate what we would want done if we knew everything the AI knew. (How would this work? Yudkowsky writes that the superintelligence will have “a complete readout of your brain-state,” which sounds an awful lot like hand-wavy magic.) It would then use this information to essentially run society for us.

I asked him if he’d be comfortable with this superintelligence making decisions with major moral consequences, like whether to drop a bomb. “I think I’m broadly okay with it,” Yudkowsky said, “if 80 percent of humanity would be 80 percent coherent with respect to what they would want if they knew everything the superintelligence knew.” In other words, if most of us are in favor of some action, and we’re in favor of it fairly strongly and consistently, then the AI should take that action.
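As a rough illustration, the majority-threshold rule Yudkowsky gestures at can be written in a few lines of code. This is strictly a toy sketch: the function name, the 0-to-1 preference scores, and the assumption that a person’s extrapolated volition could be boiled down to a single number are all my inventions, not anything he or his book specifies.

```python
def cev_approves(preferences, support_threshold=0.80, strength_threshold=0.80):
    """Toy model of the '80/80' rule quoted above.

    preferences: one float in [0, 1] per person, representing how strongly
    that person's extrapolated volition favors the action (hypothetical).
    The action is approved only if at least support_threshold of people
    favor it with strength >= strength_threshold.
    """
    if not preferences:
        return False
    strong_supporters = sum(1 for p in preferences if p >= strength_threshold)
    return strong_supporters / len(preferences) >= support_threshold

# 9 of 10 people strongly favor the action: approved
print(cev_approves([0.9] * 9 + [0.1]))       # True
# Only 5 of 10 do: not approved
print(cev_approves([0.9] * 5 + [0.1] * 5))   # False
```

Even this trivial model makes the article’s next worry visible: the rule is a pure majority gate, so the 10 to 20 percent who strongly object simply never register in the output.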

A major problem with that, however, is that it could lead to a “tyranny of the majority,” where perfectly legitimate minority views get squeezed out. That’s already a concern in modern democracies (though we’ve developed mechanisms that partially address it, like embedding fundamental rights in constitutions that majorities can’t easily override).

But an AI god would crank up the “tyranny of the majority” concern to the max, because it would probably be making decisions for the entire world population, forevermore.

That’s the picture of the future presented by influential philosopher Nick Bostrom, who was himself pulling on a larger set of ideas from the transhumanist tradition. In his bestselling 2014 book, Superintelligence, he imagined “a machine superintelligence that will shape all of humanity’s future.” It could do everything from managing the economy to reshaping world politics to initiating an ongoing process of space colonization. Bostrom argued there would be advantages and disadvantages to that setup, but one glaring problem is that the superintelligence could determine the shape of all human lives everywhere, and could enjoy a permanent concentration of power. If you didn’t like its decisions, you’d have no recourse, no escape. There would be nowhere left to run.

Obviously, if we build a system that’s practically omniscient and all-powerful and it runs our civilization, that would pose an unprecedented threat to human autonomy. Which forces us to ask…

Yudkowsky grew up in the Orthodox Jewish world, so I figured he might know the Talmud story about Rabbi Eliezer and Rabbi Yoshua. And, sure enough, he remembered it perfectly as soon as I brought it up.

I noted that the point of the story is that even if you’ve got the most “aligned” superintelligent adviser ever (a literal voice from God!), you shouldn’t do whatever it tells you.

But Yudkowsky, true to his ancient namesake, made it clear that he wants a superintelligent AI. Once we figure out how to build it safely, he thinks we should absolutely build it, because it can help humanity resettle in another solar system before our sun dies and destroys our planet.

“There’s literally nothing else our species can bet on in terms of how we eventually end up colonizing the galaxies,” he told me.

Did he not worry about the point of the story, that preserving space for human agency is a crucial value, one we shouldn’t be willing to sacrifice? He did, a bit. But he suggested that if a superintelligent AI could determine, using coherent extrapolated volition, that a majority of us would want a certain lab in North Korea blown up, then it should go ahead and destroy the lab, perhaps without informing us at all. “Maybe the moral and ethical thing for a superintelligence to do is…to be the silent divine intervention so that none of us are faced with the choice of whether or not to listen to the whispers of this voice that knows better than us,” he said.

But not everybody wants an AI deciding for us how to manage our world. In fact, over 130,000 leading researchers and public figures recently signed a petition calling for a prohibition on the development of superintelligent AI. The American public is broadly against it, too. According to polling from the Future of Life Institute (FLI), 64 percent feel that it shouldn’t be developed until it’s proven safe and controllable, or should never be developed. Previous polling has shown that a majority of voters want regulation to actively prevent superintelligent AI.


They worry about what could happen if the AI is misaligned (worst-case scenario: human extinction), but they also worry about what could happen even if the technical alignment problem is solved: militaries developing unprecedented surveillance and autonomous weapons; mass concentration of wealth and power in the hands of a few companies; mass unemployment; and the gradual replacement of human decision-making in all important areas.

As FLI’s executive director Anthony Aguirre put it to me, even if you’re not worried about AI presenting an existential risk, “there’s still an existentialist risk.” In other words, there’s still a risk to our identity as meaning-makers.

Chang, the philosopher who says it’s precisely through making hard choices that we become who we are, told me she’d never want to outsource the majority of decision-making to AI, even if it is aligned. “All our skills and our sensitivity to values about what’s important will atrophy, because you’ve just got these machines doing it all,” she said. “We definitely don’t want that.”

Beyond the risk of atrophy, Edelman also sees a broader risk. “I feel like we’re all on Earth to kind of figure things out,” he said. “So imagining an AI that figures everything out for us is like robbing us of the meaning of life.”

It turned out this is an overriding concern for Yoshua Bengio, too. When I told him the Talmud story and asked him if he agreed with his namesake, he said, “Yeah, pretty much! Even if we had a god-like intelligence, it should not be the one deciding for us what we want.”

He added, “Human choices, human preferences, human values are not the result of just reason. It’s the result of our emotions, empathy, compassion. It is not an external truth. It is our truth. And so, even if there was a god-like intelligence, it could not decide for us what we want.”

I asked: What if we could build Yudkowsky’s “coherent extrapolated volition” into the AI?

Bengio shook his head. “I’m not willing to let go of that sovereignty,” he insisted. “It’s my human free will.”

His words reminded me of the English philosopher of religion John Hick, who developed the notion of “epistemic distance.” The idea is that God deliberately stays out of human affairs to a certain degree, because otherwise we humans wouldn’t be able to develop our own agency and moral character.

It’s an idea that sits well with the end of the Talmud story. Years after the big debate between Rabbi Eliezer and Rabbi Yoshua, we’re told, someone asked the Prophet Elijah how God reacted in that moment when Rabbi Yoshua refused to listen to the divine voice. Was God furious?

Just the opposite, the prophet explained: “The Holy One smiled and said: My children have triumphed over me; my children have triumphed over me.”

