Ztoog
AI

Automated system teaches users when to collaborate with an AI assistant

Artificial intelligence models that pick out patterns in images can often do so better than human eyes, but not always. If a radiologist is using an AI model to help her determine whether a patient's X-rays show signs of pneumonia, when should she trust the model's advice and when should she ignore it?

A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.

In this case, the training method might find situations where the radiologist trusts the model's advice when she shouldn't, because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them in natural language.

During onboarding, the radiologist practices collaborating with the AI through training exercises based on these rules, receiving feedback about her performance and the AI's performance.

The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. Their results also show that simply telling the user when to trust the AI, without training, led to worse performance.

Importantly, the researchers' system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.

“So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That’s not what we do with nearly every other tool that people use — there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of a paper about this training process.

The researchers envision that such onboarding may be an important part of training for medical professionals.

“One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink everything from continuing medical education to the way clinical trials are designed,” says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented at the Conference on Neural Information Processing Systems.

    Training that evolves

Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.

“The AI model’s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user’s perception of the model continues changing. So, we need a training procedure that also evolves over time,” he adds.

To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light in a blurry image.

The system's first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of the AI, whether blurry images contain traffic lights.
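The collected data can be as simple as one record per decision. A minimal sketch, in which every field and method name is illustrative rather than the paper's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """Hypothetical record of one human-AI decision on the task."""
    example_id: int
    ai_prediction: bool   # AI says the blurry image contains a traffic light
    human_final: bool     # the human's final answer, made with AI assistance
    ground_truth: bool    # the true label
    followed_ai: bool     # whether the human adopted the AI's prediction

    def mistaken_trust(self) -> bool:
        # The human trusted the AI, but the AI was wrong.
        return self.followed_ai and self.ai_prediction != self.ground_truth

    def mistaken_distrust(self) -> bool:
        # The human overrode the AI, but the AI was right.
        return (not self.followed_ai) and self.ai_prediction == self.ground_truth

# Example: the human follows an incorrect AI prediction.
r = Interaction(0, ai_prediction=True, human_final=True,
                ground_truth=False, followed_ai=True)
print(r.mistaken_trust())  # True
```

Records like these, labeled with the two failure modes, are what the later steps operate on.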

The system embeds these data points into a latent space, a representation of data in which similar data points are closer together. It uses an algorithm to discover regions of this space where the human collaborates incorrectly with the AI. These regions capture instances where the human trusted the AI's prediction but the prediction was wrong, and vice versa.

Perhaps the human mistakenly trusts the AI when images show a highway at night.
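This embed-and-flag step can be sketched with toy stand-ins: the 2-D "embeddings" below are random numbers rather than learned representations, the regions come from two fixed centroids instead of the paper's discovery algorithm, and the planted cluster plays the role of the night-highway region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the learned latent space: 2-D points, one per task
# instance, with a planted cluster near (5, 5) where the human wrongly
# trusts the AI (the "highway at night" region).
emb = np.vstack([rng.normal(0, 0.7, (80, 2)), rng.normal(5, 0.5, (20, 2))])
incorrect = np.array([0] * 80 + [1] * 20)  # 1 = human collaborated incorrectly

# Minimal region discovery: assign each point to its nearest centroid,
# then flag regions whose error rate is far above the overall rate.
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
assign = np.argmin(((emb[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

overall = incorrect.mean()
for k in range(len(centroids)):
    rate = incorrect[assign == k].mean()
    if rate > 2 * overall:
        print(f"region {k}: error rate {rate:.2f} vs overall {overall:.2f}")
```

In the paper, both the embedding and the region-discovery algorithm are learned; only the flag-the-bad-region pattern is kept here.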

After discovering the regions, a second algorithm uses a large language model to describe each region as a rule in natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as "ignore AI when it is a highway during the night."
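The contrastive refinement loop can be sketched with the language model replaced by a toy describer that returns a brightness threshold ("ignore the AI on dark scenes"); every name and number here is illustrative, not the paper's method.

```python
def matches(threshold, brightness):
    # The rule "ignore the AI on dark scenes" as a numeric predicate.
    return brightness <= threshold

def describe(positives, negatives):
    # Toy stand-in for the LLM: cover the region's examples, but back off
    # below any contrasting example the previous rule wrongly covered.
    threshold = max(positives)
    if negatives:
        threshold = min(threshold, min(negatives) * 0.99)
    return threshold

def refine_rule(in_region, out_region, rounds=5):
    rule = describe(in_region, [])
    for _ in range(rounds):
        # Counterexamples: out-of-region scenes the current rule still covers.
        contrast = [x for x in out_region if matches(rule, x)]
        if not contrast:
            break  # the rule now separates the region from its complement
        rule = describe(in_region, contrast)
    return rule

# Dark highway scenes (the region) vs. brighter scenes (its complement).
# The first rule covers up to 0.20; the 0.18 counterexample tightens it.
rule = refine_rule(in_region=[0.10, 0.15, 0.20], out_region=[0.18, 0.60])
```

The real system refines a natural-language rule with an LLM; only the loop structure, propose a rule and then tighten it against contrasting examples, is preserved.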

These rules are used to build training exercises. The onboarding system shows an example to the human, in this case a blurry highway scene at night, along with the AI's prediction, and asks the user whether the image shows traffic lights. The user can answer yes, no, or use the AI's prediction.

If the human is wrong, they are shown the correct answer and performance statistics for the human and the AI on these instances of the task. The system does this for each region and, at the end of the training process, repeats the exercises the human got wrong.
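The exercise loop described above can be sketched with the human replaced by a callable; all names and the two toy exercises are hypothetical:

```python
def run_onboarding(exercises, answer_fn):
    """Present each exercise, score the answer, and repeat the mistakes."""
    mistakes = []
    for ex in exercises:
        answer = answer_fn(ex)  # "yes", "no", or "use_ai"
        final = ex["ai_prediction"] if answer == "use_ai" else (answer == "yes")
        if final != ex["ground_truth"]:
            mistakes.append(ex)  # user sees the correct answer and stats here
    # At the end, repeat only the exercises the user got wrong.
    for ex in mistakes:
        answer_fn(ex)
    return len(mistakes)

exercises = [
    {"scene": "highway at night", "ai_prediction": True,  "ground_truth": False},
    {"scene": "city street",      "ai_prediction": True,  "ground_truth": True},
]

# A user who always defers to the AI gets the night-highway example wrong.
n_wrong = run_onboarding(exercises, lambda ex: "use_ai")
print(n_wrong)  # 1
```

A user who has internalized the rule (override the AI on night highways, defer otherwise) would finish with no mistakes to repeat.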

    “After that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,” Mozannar says.

    Onboarding boosts accuracy

The researchers tested this system with users on two tasks: detecting traffic lights in blurry images and answering multiple-choice questions from many domains (such as biology, philosophy, and computer science).

They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. Users were split into five groups: some were only shown the card, some went through the researchers' onboarding procedure, some went through a baseline onboarding procedure, some went through the researchers' onboarding procedure and were also given recommendations of when they should or shouldn't trust the AI, and others were only given the recommendations.

Only the researchers' onboarding procedure without recommendations improved users' accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that convey whether it should be trusted.

But providing recommendations without onboarding had the opposite effect: users not only performed worse, they took more time to make predictions.

    “When you only give someone recommendations, it seems like they get confused and don’t know what to do. It derails their process. People also don’t like being told what to do, so that is a factor as well,” Mozannar says.

Providing recommendations alone could hurt the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there isn't enough data, the onboarding stage won't be as effective, he says.

In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and to find methods that effectively reduce the number of regions without omitting important examples.

“People are adopting AI systems willy-nilly, and indeed AI offers great potential, but these AI agents still sometimes make mistakes. Thus, it’s crucial for AI developers to devise methods that help humans know when it’s safe to rely on the AI’s suggestions,” says Dan Weld, professor emeritus at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, who was not involved with this research. “Mozannar et al. have created an innovative method for identifying situations where the AI is trustworthy, and (importantly) to describe them to people in a way that leads to better human-AI team interactions.”

This work is funded, in part, by the MIT-IBM Watson AI Lab.
