A fast and flexible approach to help doctors annotate medical scans


To the untrained eye, a medical image like an MRI or X-ray appears to be a murky collection of black-and-white blobs. It can be a struggle to decipher where one structure (like a tumor) ends and another begins.

When trained to understand the boundaries of biological structures, AI systems can segment (or delineate) regions of interest that doctors and biomedical workers want to monitor for diseases and other abnormalities. Instead of losing precious time tracing anatomy by hand across many images, an artificial assistant could do that for them.

The catch? Researchers and clinicians must label countless images to train their AI system before it can accurately segment. For example, you'd need to annotate the cerebral cortex in numerous MRI scans to train a supervised model to understand how the cortex's shape can vary across brains.

Sidestepping such tedious data collection, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School have developed the interactive "ScribblePrompt" framework: a flexible tool that can help rapidly segment any medical image, even types it hasn't seen before.

Instead of having humans mark up each image manually, the team simulated how users would annotate over 50,000 scans, including MRIs, ultrasounds, and photographs, across structures in the eyes, cells, brains, bones, skin, and more. To label all those scans, the team used algorithms to simulate how humans would scribble and click on different regions in medical images. In addition to commonly labeled regions, the team also used superpixel algorithms, which find parts of the image with similar values, to identify potential new regions of interest to medical researchers and train ScribblePrompt to segment them. This synthetic data prepared ScribblePrompt to handle real-world segmentation requests from users.
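The paper's actual simulation pipeline isn't reproduced here, but the idea of generating synthetic interactions from existing label masks can be sketched in a few lines. The sketch below is a hypothetical, minimal version: it samples "clicks" uniformly from inside a binary mask and links a few of them with straight segments to stand in for a hand-drawn scribble, clipping the result back to the mask.

```python
import numpy as np

def simulate_click(mask: np.ndarray, rng: np.random.Generator) -> tuple[int, int]:
    """Sample one positive click uniformly from inside a binary label mask."""
    ys, xs = np.nonzero(mask)
    i = rng.integers(len(ys))
    return int(ys[i]), int(xs[i])

def simulate_scribble(mask: np.ndarray, n_points: int = 5, rng=None) -> np.ndarray:
    """Link a few in-mask points with straight line segments, a crude
    stand-in for a hand-drawn scribble; clip to the mask so the simulated
    scribble stays positive even for non-convex shapes."""
    if rng is None:
        rng = np.random.default_rng(0)
    pts = [simulate_click(mask, rng) for _ in range(n_points)]
    scribble = np.zeros_like(mask, dtype=bool)
    for (y0, x0), (y1, x1) in zip(pts, pts[1:]):
        n = max(abs(y1 - y0), abs(x1 - x0)) + 1
        ys = np.linspace(y0, y1, n).round().astype(int)
        xs = np.linspace(x0, x1, n).round().astype(int)
        scribble[ys, xs] = True
    return scribble & mask.astype(bool)
```

Real simulators add deformation and noise so the strokes look human; this version only captures the core sampling step.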

“AI has significant potential in analyzing images and other high-dimensional data to help humans do things more productively,” says MIT PhD student Hallee Wong SM ’22, the lead author of a new paper about ScribblePrompt and a CSAIL affiliate. “We want to augment, not replace, the efforts of medical workers through an interactive system. ScribblePrompt is a simple model with the efficiency to help doctors focus on the more interesting parts of their analysis. It’s faster and more accurate than comparable interactive segmentation methods, reducing annotation time by 28 percent compared to Meta’s Segment Anything Model (SAM) framework, for example.”

ScribblePrompt’s interface is simple: Users can scribble across the rough area they’d like segmented, or click on it, and the tool will highlight the entire structure or background as requested. For example, you can click on individual veins within a retinal (eye) scan. ScribblePrompt can also mark up a structure given a bounding box.

Then, the tool can make corrections based on the user’s feedback. If you wanted to highlight a kidney in an ultrasound, you could use a bounding box, and then scribble in additional parts of the structure if ScribblePrompt missed any edges. If you wanted to edit your segment, you could use a “negative scribble” to exclude certain regions.
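In ScribblePrompt itself, these corrections are fed back into the network as prompts rather than applied to the mask directly, but the semantics of positive and negative scribbles can be illustrated with a minimal mask-editing sketch (a hypothetical helper, not the paper's method): positive scribbles force pixels into the segment, negative scribbles force them out.

```python
import numpy as np

def refine_mask(pred: np.ndarray, positive=None, negative=None) -> np.ndarray:
    """Apply user corrections to a predicted binary mask: positive
    scribbles add pixels, negative scribbles exclude them."""
    out = pred.astype(bool).copy()
    if positive is not None:
        out |= positive.astype(bool)   # force scribbled pixels in
    if negative is not None:
        out &= ~negative.astype(bool)  # carve scribbled pixels out
    return out
```

Applying negative after positive means an overlapping negative scribble wins, matching the intuition that the latest correction takes priority.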

These self-correcting, interactive capabilities made ScribblePrompt the preferred tool among neuroimaging researchers at MGH in a user study. 93.8 percent of these users favored the MIT approach over the SAM baseline for improving its segments in response to scribble corrections. As for click-based edits, 87.5 percent of the medical researchers preferred ScribblePrompt.

ScribblePrompt was trained on simulated scribbles and clicks on 54,000 images across 65 datasets, featuring scans of the eyes, thorax, spine, cells, skin, abdominal muscles, neck, brain, bones, teeth, and lesions. The model familiarized itself with 16 types of medical images, including microscopies, CT scans, X-rays, MRIs, ultrasounds, and photographs.

    “Many existing methods don’t respond well when users scribble across images because it’s hard to simulate such interactions in training. For ScribblePrompt, we were able to force our model to pay attention to different inputs using our synthetic segmentation tasks,” says Wong. “We wanted to train what’s essentially a foundation model on a lot of diverse data so it would generalize to new types of images and tasks.”

After taking in that wealth of data, the team evaluated ScribblePrompt across 12 new datasets. Although it hadn’t seen these images before, it outperformed four existing methods by segmenting more efficiently and giving more accurate predictions about the exact regions users wanted highlighted.
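Segmentation accuracy in this literature is commonly measured with the Dice similarity coefficient (the article doesn't name the exact metric, so take this as a representative example rather than the paper's protocol): the overlap between a predicted mask and the ground truth, scaled to 1.0 for a perfect match.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))
```

Identical masks score (essentially) 1.0; disjoint masks score 0.0, so "more accurate predictions" translates to higher Dice against expert annotations.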

“Segmentation is the most prevalent biomedical image analysis task, performed widely both in routine clinical practice and in research — which leads to it being both very diverse and a crucial, impactful step,” says senior author Adrian Dalca SM ’12, PhD ’16, CSAIL research scientist and assistant professor at MGH and Harvard Medical School. “ScribblePrompt was carefully designed to be practically useful to clinicians and researchers, and hence to substantially make this step much, much faster.”

“The majority of segmentation algorithms that have been developed in image analysis and machine learning are at least to some extent based on our ability to manually annotate images,” says Harvard Medical School professor in radiology and MGH neuroscientist Bruce Fischl, who was not involved in the paper. “The problem is dramatically worse in medical imaging in which our ‘images’ are typically 3D volumes, as human beings have no evolutionary or phenomenological reason to have any competency in annotating 3D images. ScribblePrompt enables manual annotation to be carried out much, much faster and more accurately, by training a network on precisely the types of interactions a human would typically have with an image while manually annotating. The result is an intuitive interface that allows annotators to naturally interact with imaging data with far greater productivity than was previously possible.”

Wong and Dalca wrote the paper with two other CSAIL affiliates: John Guttag, the Dugald C. Jackson Professor of EECS at MIT and CSAIL principal investigator; and MIT PhD student Marianne Rakic SM ’22. Their work was supported, in part, by Quanta Computer Inc., the Eric and Wendy Schmidt Center at the Broad Institute, the Wistron Corp., and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health, with hardware support from the Massachusetts Life Sciences Center.

Wong and her colleagues’ work will be presented at the 2024 European Conference on Computer Vision and was presented as an oral talk at the DCAMI workshop at the Computer Vision and Pattern Recognition Conference earlier this year. They were awarded the Bench-to-Bedside Paper Award at the workshop for ScribblePrompt’s potential clinical impact.
