Ztoog
AI

Training machines to learn more like humans do

Imagine sitting on a park bench, watching someone stroll by. While the scene may change continuously as the person walks, the human brain can transform that dynamic visual information into a more stable representation over time. This ability, known as perceptual straightening, helps us predict the walking person’s trajectory.

Unlike humans, computer vision models don’t typically exhibit perceptual straightness, so they learn to represent visual information in a highly unpredictable way. But if machine-learning models had this ability, it might enable them to better estimate how objects or people will move.

MIT researchers have discovered that a specific training method can help computer vision models learn more perceptually straight representations, as humans do. Training involves showing a machine-learning model millions of examples so it can learn a task.

The researchers found that training computer vision models with a technique called adversarial training, which makes them less reactive to tiny errors added to images, improves the models’ perceptual straightness.

The team also found that perceptual straightness is affected by the task one trains a model to perform. Models trained for abstract tasks, like classifying whole images, learn more perceptually straight representations than those trained for more fine-grained tasks, like assigning every pixel in an image to a category.

For example, nodes within the model have internal activations that represent “dog,” which allow the model to detect a dog when it sees any image of a dog. Perceptually straight representations retain a more stable “dog” representation when there are small changes in the image. This makes them more robust.
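One way to probe the stability described here is to compare a model’s internal representation of an image before and after a small pixel change, for example with cosine similarity. A toy Python sketch, where the linear `embed` function merely stands in for a real network’s activations (all names, shapes, and values are illustrative, not from the researchers’ code):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(image, w):
    """Stand-in for a network's internal activation: a linear map."""
    return w @ image.ravel()

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 16))                     # toy "network" weights
image = rng.normal(size=(4, 4))
nudged = image + 0.01 * rng.normal(size=(4, 4))  # small pixel change

# A stable (perceptually straight) representation keeps this close to 1.0
print(cosine_similarity(embed(image, w), embed(nudged, w)))
```

A model with a stable “dog” representation would keep this similarity high as the image shifts slightly; a brittle one would let it drop.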

By gaining a better understanding of perceptual straightness in computer vision, the researchers hope to uncover insights that could help them develop models that make more accurate predictions. For instance, this property might improve the safety of autonomous vehicles that use computer vision models to predict the trajectories of pedestrians, cyclists, and other vehicles.

“One of the take-home messages here is that taking inspiration from biological systems, such as human vision, can both give you insight about why certain things work the way that they do and also inspire ideas to improve neural networks,” says Vasha DuTell, an MIT postdoc and co-author of a paper exploring perceptual straightness in computer vision.

Joining DuTell on the paper are lead author Anne Harrington, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); Ayush Tewari, a postdoc; Mark Hamilton, a graduate student; Simon Stent, research manager at Woven Planet; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research is being presented at the International Conference on Learning Representations.

Studying straightening

After reading a 2019 paper from a team of New York University researchers about perceptual straightening in humans, DuTell, Harrington, and their colleagues wondered whether that property might be useful in computer vision models, too.

They set out to determine whether different types of computer vision models straighten the visual representations they learn. They fed each model frames of a video and then examined the representation at different stages in its learning process.

If the model’s representation changes in a predictable way across the frames of the video, that model is straightening. At the end, its output representation should be more stable than the input representation.

“You can think of the representation as a line, which starts off really curvy. A model that straightens can take that curvy line from the video and straighten it out through its processing steps,” DuTell explains.
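The curviness DuTell describes can be quantified as the average turning angle between successive steps of the representation trajectory across video frames, in the spirit of the NYU perceptual-straightening work. A minimal NumPy sketch (the function name and shapes are our own illustration, not the paper’s code):

```python
import numpy as np

def mean_curvature(reps):
    """Average turning angle (radians) along a trajectory of
    representations, one row per video frame. A perfectly straight
    trajectory has curvature 0. Assumes the representation actually
    changes between consecutive frames (nonzero steps)."""
    reps = np.asarray(reps, dtype=float)
    diffs = np.diff(reps, axis=0)  # steps between consecutive frames
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    # angle between each pair of consecutive unit steps
    cosines = np.clip(np.sum(units[:-1] * units[1:], axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))

straight = np.outer(np.arange(5), np.ones(3))  # collinear points
bent = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]    # a right-angle turn
print(mean_curvature(straight), mean_curvature(bent))
```

A straightening model would drive this number down from the input representation to the output representation.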

Most models they tested didn’t straighten. Of the few that did, those that straightened most effectively had been trained for classification tasks using the technique known as adversarial training.

Adversarial training involves subtly modifying images by slightly altering each pixel. While a human wouldn’t notice the difference, these minor changes can fool a machine into misclassifying the image. Adversarial training makes the model more robust, so it won’t be tricked by these manipulations.

Because adversarial training teaches the model to be less reactive to slight changes in images, it helps the model learn a representation that is more predictable over time, Harrington explains.
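The small per-pixel modifications described above are typically generated with gradient-based attacks such as the fast gradient sign method (FGSM), and adversarial training then fits the model on the perturbed images rather than the clean ones. A self-contained sketch on a toy linear classifier with logistic loss (the model, data, and `eps` value are made up for illustration; real pipelines do this with deep networks):

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """Fast gradient sign method for a linear score w.x with logistic
    loss and label y in {-1, +1}: nudge each pixel by +/- eps in the
    direction that increases the loss."""
    grad = -y * w / (1.0 + np.exp(y * np.dot(w, x)))  # dLoss/dx
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])                  # toy classifier weights
x = rng.normal(size=3)                          # toy "image"
y = 1.0 if np.dot(w, x) > 0 else -1.0           # label matching prediction
x_adv = fgsm_perturb(x, w, y, eps=0.5)

# Each pixel moves by only +/- eps, yet the score shifts toward the
# wrong side; adversarial training would update w using x_adv, not x.
print(np.dot(w, x), np.dot(w, x_adv))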

“People have already had this idea that adversarial training might help you get your model to be more like a human, and it was interesting to see that carry over to another property that people hadn’t tested before,” she says.

But the researchers found that adversarially trained models only learn to straighten when they are trained for broad tasks, like classifying whole images into categories. Models tasked with segmentation, labeling every pixel in an image as a certain class, didn’t straighten, even when they were adversarially trained.

Consistent classification

The researchers tested these image classification models by showing them videos. They found that the models that learned more perceptually straight representations tended to classify objects in the videos more consistently.
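The consistency being measured here can be approximated by classifying each frame of a clip independently and checking how often the per-frame prediction agrees with the clip’s majority prediction. A small sketch (the metric and its name are our own, not the paper’s):

```python
from collections import Counter

def clip_consistency(frame_labels):
    """Fraction of frames whose predicted label matches the clip's
    majority prediction. 1.0 means perfectly stable classification."""
    majority, count = Counter(frame_labels).most_common(1)[0]
    return count / len(frame_labels)

# A model that flickers to "cat" on one of five frames scores 0.8.
print(clip_consistency(["dog", "dog", "dog", "cat", "dog"]))
```

Under this kind of score, the adversarially trained (straighter) models would sit closer to 1.0 than the others.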

“To me, it is amazing that these adversarially trained models, which have never even seen a video and have never been trained on temporal data, still show some amount of straightening,” DuTell says.

The researchers don’t know exactly what it is about the adversarial training process that enables a computer vision model to straighten, but their results suggest that stronger training schemes cause the models to straighten more, she explains.

Building off this work, the researchers want to use what they learned to create new training schemes that explicitly give a model this property. They also want to dig deeper into adversarial training to understand why this process helps a model straighten.

“From a biological standpoint, adversarial training doesn’t necessarily make sense. It’s not how humans understand the world. There are still a lot of questions about why this training process seems to help models act more like humans,” Harrington says.

“Understanding the representations learned by deep neural networks is critical to improve properties such as robustness and generalization,” says Bill Lotter, assistant professor at the Dana-Farber Cancer Institute and Harvard Medical School, who was not involved with this research. “Harrington et al. perform an extensive evaluation of how the representations of computer vision models change over time when processing natural videos, showing that the curvature of these trajectories varies widely depending on model architecture, training properties, and task. These findings can inform the development of improved models and also offer insights into biological visual processing.”

“The paper confirms that straightening natural videos is a fairly unique property displayed by the human visual system. Only adversarially trained networks display it, which provides an interesting connection with another signature of human perception: its robustness to various image transformations, whether natural or artificial,” says Olivier Hénaff, a research scientist at DeepMind, who was not involved with this research. “That even adversarially trained scene segmentation models do not straighten their inputs raises important questions for future work: Do humans parse natural scenes in the same way as computer vision models? How to represent and predict the trajectories of objects in motion while remaining sensitive to their spatial detail? In connecting the straightening hypothesis with other aspects of visual behavior, the paper lays the groundwork for more unified theories of perception.”

The research is funded, in part, by the Toyota Research Institute, the MIT CSAIL METEOR Fellowship, the National Science Foundation, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

© 2025 Ztoog.