Training machines to learn more like humans do


Imagine sitting on a park bench, watching someone stroll by. While the scene may constantly change as the person walks, the human brain can transform that dynamic visual information into a more stable representation over time. This ability, called perceptual straightening, helps us predict the walking person’s trajectory.

Unlike humans, computer vision models don’t typically exhibit perceptual straightness, so they learn to represent visual information in a highly unpredictable way. But if machine-learning models had this ability, it might enable them to better estimate how objects or people will move.

MIT researchers have discovered that a specific training method can help computer vision models learn more perceptually straight representations, as humans do. Training involves showing a machine-learning model millions of examples so it can learn a task.

The researchers found that training computer vision models using a technique called adversarial training, which makes them less reactive to tiny errors added to images, improves the models’ perceptual straightness.

The team also found that perceptual straightness is affected by the task one trains a model to perform. Models trained to perform abstract tasks, like classifying images, learn more perceptually straight representations than those trained to perform more fine-grained tasks, like assigning every pixel in an image to a category.

For example, the nodes within the model have internal activations that represent “dog,” which allow the model to detect a dog when it sees any image of a dog. Perceptually straight representations retain a more stable “dog” representation when there are small changes in the image. This makes them more robust.

By gaining a better understanding of perceptual straightness in computer vision, the researchers hope to uncover insights that could help them develop models that make more accurate predictions. For instance, this property might improve the safety of autonomous vehicles that use computer vision models to predict the trajectories of pedestrians, cyclists, and other vehicles.

“One of the take-home messages here is that taking inspiration from biological systems, such as human vision, can both give you insight about why certain things work the way that they do and also inspire ideas to improve neural networks,” says Vasha DuTell, an MIT postdoc and co-author of a paper exploring perceptual straightness in computer vision.

Joining DuTell on the paper are lead author Anne Harrington, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); Ayush Tewari, a postdoc; Mark Hamilton, a graduate student; Simon Stent, research manager at Woven Planet; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research is being presented at the International Conference on Learning Representations.

Studying straightening

After reading a 2019 paper from a team of New York University researchers about perceptual straightening in humans, DuTell, Harrington, and their colleagues wondered whether that property might be useful in computer vision models, too.

They set out to determine whether different types of computer vision models straighten the visual representations they learn. They fed each model frames of a video and then examined the representation at different stages in its learning process.

If the model’s representation changes in a predictable way across the frames of the video, that model is straightening. At the end, its output representation should be more stable than the input representation.

“You can think of the representation as a line, which starts off really curvy. A model that straightens can take that curvy line from the video and straighten it out through its processing steps,” DuTell explains.
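To make the “curvy line” intuition concrete, straightness is often quantified as the average angle between consecutive steps of a representation trajectory, in the spirit of the curvature measure from the 2019 NYU work. The sketch below is a minimal Python illustration of that idea, not the paper’s evaluation code; the array shapes and the stand-in data are assumptions.

```python
import numpy as np

def mean_curvature(reps):
    """Average angle (radians) between consecutive steps of a
    representation trajectory; lower values mean a straighter,
    more predictable trajectory across video frames.

    reps: array of shape (num_frames, feature_dim), one feature
    vector per frame taken from some layer of a model.
    """
    steps = np.diff(reps, axis=0)                          # step vectors between frames
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)  # unit-normalize each step
    cosines = np.sum(steps[:-1] * steps[1:], axis=1)       # cosine between consecutive steps
    return np.arccos(np.clip(cosines, -1.0, 1.0)).mean()

# Stand-in data: compare the curvature of raw pixel frames with the
# curvature of a hypothetical layer's activations for the same clip.
pixels = np.random.rand(16, 224 * 224 * 3)
activations = np.random.rand(16, 512)
print(mean_curvature(pixels), mean_curvature(activations))
```

A model is said to straighten when the trajectory of its internal activations is less curved than the trajectory of the raw pixels it receives.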

Most models they tested didn’t straighten. Of the few that did, those that straightened most effectively had been trained for classification tasks using a technique called adversarial training.

Adversarial training involves subtly modifying images by slightly altering each pixel. While a human wouldn’t notice the difference, these minor changes can fool a machine so that it misclassifies the image. Adversarial training makes the model more robust, so it won’t be tricked by these manipulations.
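As a rough sketch of what this looks like in practice, the snippet below performs one adversarial training step using the fast gradient sign method (FGSM), one common way to generate the small pixel perturbations described above. It is a generic PyTorch illustration with assumed hyperparameters (for example, the perturbation budget epsilon), not the specific attack or training schedule the researchers used.

```python
import torch
import torch.nn.functional as F

def fgsm_training_step(model, images, labels, optimizer, epsilon=2 / 255):
    """One training step on adversarially perturbed images (FGSM sketch)."""
    images = images.clone().detach().requires_grad_(True)

    # Find the tiny per-pixel perturbation that most increases the loss.
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv_images = (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Train the model to classify the perturbed images correctly, making it
    # less reactive to small changes a human would never notice.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(adv_images), labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```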

Because adversarial training teaches the model to be less reactive to slight modifications in images, it helps the model learn a representation that is more predictable over time, Harrington explains.

“People have already had this idea that adversarial training might help you get your model to be more like a human, and it was interesting to see that carry over to another property that people hadn’t tested before,” she says.

But the researchers found that adversarially trained models only learn to straighten when they are trained for broad tasks, like classifying entire images into categories. Models tasked with segmentation — labeling every pixel in an image as a certain class — did not straighten, even when they were adversarially trained.

Consistent classification

The researchers tested these image classification models by showing them videos. They found that the models that learned more perceptually straight representations tended to correctly classify objects in the videos more consistently.
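One simple way to probe this kind of consistency is to classify each frame of a clip independently and check how often the per-frame prediction agrees with the clip’s majority label. The helper below is a hypothetical sketch of such a check, not the evaluation protocol from the paper.

```python
import torch

@torch.no_grad()
def label_consistency(model, video_frames):
    """Fraction of frames whose predicted class matches the clip's
    majority prediction; a rough proxy for how consistently a model
    classifies an object as it moves through a video.

    video_frames: tensor of shape (num_frames, 3, H, W).
    """
    preds = model(video_frames).argmax(dim=1)  # one predicted label per frame
    majority = preds.mode().values             # most common label in the clip
    return (preds == majority).float().mean().item()
```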

“To me, it is amazing that these adversarially trained models, which have never even seen a video and have never been trained on temporal data, still show some amount of straightening,” DuTell says.

The researchers don’t know exactly what it is about the adversarial training process that enables a computer vision model to straighten, but their results suggest that stronger training schemes cause the models to straighten more, she explains.

Building off this work, the researchers want to use what they learned to create new training schemes that would explicitly give a model this property. They also want to dig deeper into adversarial training to understand why this process helps a model straighten.

“From a biological standpoint, adversarial training doesn’t necessarily make sense. It’s not how humans understand the world. There are still a lot of questions about why this training process seems to help models act more like humans,” Harrington says.

“Understanding the representations learned by deep neural networks is critical to improve properties such as robustness and generalization,” says Bill Lotter, assistant professor at the Dana-Farber Cancer Institute and Harvard Medical School, who was not involved with this research. “Harrington et al. perform an extensive evaluation of how the representations of computer vision models change over time when processing natural videos, showing that the curvature of these trajectories varies widely depending on model architecture, training properties, and task. These findings can inform the development of improved models and also offer insights into biological visual processing.”

“The paper confirms that straightening natural videos is a fairly unique property displayed by the human visual system. Only adversarially trained networks display it, which provides an interesting connection with another signature of human perception: its robustness to various image transformations, whether natural or artificial,” says Olivier Hénaff, a research scientist at DeepMind, who was not involved with this research. “That even adversarially trained scene segmentation models do not straighten their inputs raises important questions for future work: Do humans parse natural scenes in the same way as computer vision models? How to represent and predict the trajectories of objects in motion while remaining sensitive to their spatial detail? In connecting the straightening hypothesis with other aspects of visual behavior, the paper lays the groundwork for more unified theories of perception.”

The research is funded, in part, by the Toyota Research Institute, the MIT CSAIL METEOR Fellowship, the National Science Foundation, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.
