Ztoog
AI

How an archeological approach can help leverage biased data in AI to improve medicine

The traditional computer science adage "garbage in, garbage out" lacks nuance when it comes to understanding biased medical data, argue computer science and bioethics professors from MIT, Johns Hopkins University, and the Alan Turing Institute in a new opinion piece published in a recent edition of the New England Journal of Medicine (NEJM). The rising popularity of artificial intelligence has brought increased scrutiny to the issue of biased AI models resulting in algorithmic discrimination, which the White House Office of Science and Technology Policy identified as a key concern in its recent Blueprint for an AI Bill of Rights.

When encountering biased data, particularly for AI models used in medical settings, the typical response is either to collect more data from underrepresented groups or to generate synthetic data that fills in the missing pieces, to ensure the model performs equally well across an array of patient populations. But the authors argue that this technical approach should be augmented with a sociotechnical perspective that takes both historical and current social factors into account. By doing so, researchers can be more effective in addressing bias in public health.

"The three of us had been discussing the ways in which we often treat issues with data from a machine learning perspective as irritations that need to be managed with a technical solution," recalls co-author Marzyeh Ghassemi, an assistant professor in electrical engineering and computer science and an affiliate of the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES). "We had used analogies of data as an artifact that gives a partial view of past practices, or a cracked mirror holding up a reflection. In both cases the information is perhaps not entirely accurate or favorable: Maybe we think that we behave in certain ways as a society — but when you actually look at the data, it tells a different story. We might not like what that story is, but once you unearth an understanding of the past you can move forward and take steps to address poor practices."

    Data as artifact 

In the paper, titled "Considering Biased Data as Informative Artifacts in AI-Assisted Health Care," Ghassemi, Kadija Ferryman, and Maxine Mackintosh make the case for viewing biased medical data as "artifacts" in the same way anthropologists or archeologists would view physical objects: pieces of civilization-revealing practices, belief systems, and cultural values; in the case of the paper, specifically those that have led to existing inequities in the health care system.

For instance, a 2019 study showed that an algorithm widely considered an industry standard used health-care expenditures as an indicator of need, leading to the erroneous conclusion that sicker Black patients require the same level of care as healthier white patients. What researchers found was algorithmic discrimination that failed to account for unequal access to care.
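The mechanism behind that finding can be sketched with a toy simulation (entirely synthetic data, not the 2019 study's): if two groups have the same distribution of true medical need but one group has reduced access to care, spending systematically understates that group's need, and a model that uses spending as its label proxy under-flags that group's sickest patients.

```python
import random

random.seed(0)

def simulate(group, n=10_000):
    """Simulate patients: equal true need across groups, unequal access."""
    rows = []
    for _ in range(n):
        need = random.gauss(50, 10)            # true severity of illness
        access = 1.0 if group == "A" else 0.6  # group B has reduced access
        spending = need * access + random.gauss(0, 2)
        rows.append((need, spending))
    return rows

a, b = simulate("A"), simulate("B")

# A "model" that flags high-need patients by spending, using one
# threshold for everyone (top ~25% of combined spending).
spend_all = sorted(s for _, s in a + b)
threshold = spend_all[int(0.75 * len(spend_all))]

def flag_rate_among_sick(rows):
    """Of the truly high-need patients, what share does the model flag?"""
    sick = [(n, s) for n, s in rows if n > 60]
    flagged = sum(1 for n, s in sick if s > threshold)
    return flagged / len(sick)

print(f"high-need patients flagged, group A: {flag_rate_among_sick(a):.0%}")
print(f"high-need patients flagged, group B: {flag_rate_among_sick(b):.0%}")
```

In this setup group A's sickest patients are almost all flagged while group B's are almost never flagged, even though both groups are equally sick — the bias lives in the label, not the model.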

In this instance, rather than viewing biased datasets or a lack of data as problems that only require disposal or fixing, Ghassemi and her colleagues recommend the "artifacts" approach as a way to raise awareness of the social and historical factors influencing how data are collected, and of alternative approaches to clinical AI development.

    “If the goal of your model is deployment in a clinical setting, you should engage a bioethicist or a clinician with appropriate training reasonably early on in problem formulation,” says Ghassemi. “As computer scientists, we often don’t have a complete picture of the different social and historical factors that have gone into creating data that we’ll be using. We need expertise in discerning when models generalized from existing data may not work well for specific subgroups.” 

When more data can actually harm performance

The authors acknowledge that one of the more challenging aspects of implementing an artifact-based approach is being able to assess whether data have been racially corrected: i.e., using white, male bodies as the default standard against which other bodies are measured. The opinion piece cites an example from the Chronic Kidney Disease Collaboration in 2021, which developed a new equation to measure kidney function because the old equation had previously been "corrected" under the blanket assumption that Black people have higher muscle mass. Ghassemi says that researchers should be prepared to investigate race-based correction as part of the research process.
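For reference, the 2021 refit removed the race coefficient entirely rather than re-tuning it. A sketch of the race-free CKD-EPI creatinine equation, with coefficients as published by the collaboration in 2021 (shown for illustration only, not for clinical use):

```python
import math

def egfr_ckd_epi_2021(scr_mg_dl: float, age: float, female: bool) -> float:
    """2021 CKD-EPI creatinine equation (race-free refit).

    Coefficients as published in 2021; illustrative only.
    Returns eGFR in mL/min/1.73 m^2.
    """
    kappa = 0.7 if female else 0.9     # sex-specific creatinine scale
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = (142
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# A 50-year-old male with serum creatinine 1.0 mg/dL:
print(round(egfr_ckd_epi_2021(1.0, 50, female=False)))  # roughly 92
```

Note that sex remains an input; the point of the 2021 revision is that race no longer is.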

In another recent paper, accepted to this year's International Conference on Machine Learning and co-authored by Ghassemi's PhD student Vinith Suriyakumar and University of California at San Diego Assistant Professor Berk Ustun, the researchers found that assuming the inclusion of personalized attributes like self-reported race improves the performance of ML models can actually lead to worse risk scores, models, and metrics for minority and minoritized populations.
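One mechanism by which a "personalized" attribute can hurt (a synthetic sketch, not the authors' experiment): splitting a model by group forces the minority group's estimate to be fit from scarce data, so it can end up noisier than a pooled estimate even when the groups are genuinely similar.

```python
import random
import statistics

random.seed(7)

TRUE_RISK = 0.30  # both groups share the same true risk in this toy setup

def sample_outcomes(n):
    """Draw n binary outcomes with probability TRUE_RISK."""
    return [1 if random.random() < TRUE_RISK else 0 for _ in range(n)]

train_major = sample_outcomes(5000)  # abundant majority-group data

def avg_sq_err(personalized, trials=2000):
    """Mean squared error of the minority-group risk estimate.

    personalized=True  -> fit a separate estimate from 40 minority samples
    personalized=False -> pool minority samples with the majority data
    """
    errs = []
    for _ in range(trials):
        minor = sample_outcomes(40)  # scarce minority training data
        if personalized:
            est = statistics.mean(minor)
        else:
            est = statistics.mean(train_major + minor)
        errs.append((est - TRUE_RISK) ** 2)
    return statistics.mean(errs)

print("error, pooled model:      ", round(avg_sq_err(False), 5))
print("error, group-split model: ", round(avg_sq_err(True), 5))
```

Data scarcity is only one of several mechanisms the paper studies, but it illustrates why "add the attribute" is not automatically a win for the very groups it is meant to help.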

    “There’s no single right solution for whether or not to include self-reported race in a clinical risk score. Self-reported race is a social construct that is both a proxy for other information, and deeply proxied itself in other medical data. The solution needs to fit the evidence,” explains Ghassemi. 

How to move forward

This is not to say that biased datasets should be enshrined, or that biased algorithms don't require fixing; high-quality training data remains key to developing safe, high-performance clinical AI models, and the NEJM piece highlights the role of the National Institutes of Health (NIH) in driving ethical practices.

"Generating high-quality, ethically sourced datasets is crucial for enabling the use of next-generation AI technologies that transform how we do research," NIH acting director Lawrence Tabak stated in a press release when the NIH announced its $130 million Bridge2AI Program last year. Ghassemi agrees, pointing out that the NIH has "prioritized data collection in ethical ways that cover information we have not previously emphasized the value of in human health — such as environmental factors and social determinants. I'm very excited about their prioritization of, and strong investments towards, achieving meaningful health outcomes."

Elaine Nsoesie, an associate professor at the Boston University School of Public Health, believes there are many potential benefits to treating biased datasets as artifacts rather than garbage, beginning with the focus on context. "Biases present in a dataset collected for lung cancer patients in a hospital in Uganda might be different from a dataset collected in the U.S. for the same patient population," she explains. "In considering local context, we can train algorithms to better serve specific populations." Nsoesie says that understanding the historical and contemporary factors shaping a dataset can make it easier to identify discriminatory practices that may be coded into algorithms or systems in ways that are not immediately obvious. She also notes that an artifact-based approach could lead to the development of new policies and structures ensuring that the root causes of bias in a particular dataset are eliminated.

    “People often tell me that they are very afraid of AI, especially in health. They’ll say, ‘I’m really scared of an AI misdiagnosing me,’ or ‘I’m concerned it will treat me poorly,’” Ghassemi says. “I tell them, you shouldn’t be scared of some hypothetical AI in health tomorrow, you should be scared of what health is right now. If we take a narrow technical view of the data we extract from systems, we could naively replicate poor practices. That’s not the only option — realizing there is a problem is our first step towards a larger opportunity.” 

© 2025 Ztoog.