Training machines to learn more like humans do

Imagine sitting on a park bench, watching someone stroll by. While the scene may constantly change as the person walks, the human brain can transform that dynamic visual information into a more stable representation over time. This ability, known as perceptual straightening, helps us predict the walking person’s trajectory.

Unlike humans, computer vision models don’t typically exhibit perceptual straightness, so they learn to represent visual information in a highly unpredictable way. But if machine-learning models had this ability, it might enable them to better estimate how objects or people will move.

MIT researchers have found that a particular training technique can help computer vision models learn more perceptually straight representations, as humans do. Training involves showing a machine-learning model millions of examples so it can learn a task.

The researchers found that training computer vision models using a technique called adversarial training, which makes them less reactive to tiny errors added to images, improves the models’ perceptual straightness.

The team also found that perceptual straightness is affected by the task one trains a model to perform. Models trained to perform abstract tasks, like classifying images, learn more perceptually straight representations than those trained to perform more fine-grained tasks, like assigning every pixel in an image to a category.

For example, the nodes within the model have internal activations that represent “dog,” which allow the model to detect a dog when it sees any image of a dog. Perceptually straight representations retain a more stable “dog” representation when there are small changes in the image. This makes them more robust.

By gaining a better understanding of perceptual straightness in computer vision, the researchers hope to uncover insights that could help them develop models that make more accurate predictions. For instance, this property might improve the safety of autonomous vehicles that use computer vision models to predict the trajectories of pedestrians, cyclists, and other vehicles.

“One of the take-home messages here is that taking inspiration from biological systems, such as human vision, can both give you insight about why certain things work the way that they do and also inspire ideas to improve neural networks,” says Vasha DuTell, an MIT postdoc and co-author of a paper exploring perceptual straightness in computer vision.

Joining DuTell on the paper are lead author Anne Harrington, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); Ayush Tewari, a postdoc; Mark Hamilton, a graduate student; Simon Stent, research manager at Woven Planet; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research is being presented at the International Conference on Learning Representations.

Studying straightening

After reading a 2019 paper from a team of New York University researchers about perceptual straightness in humans, DuTell, Harrington, and their colleagues wondered whether that property might be useful in computer vision models, too.

They set out to determine whether different types of computer vision models straighten the visual representations they learn. They fed each model frames of a video and then examined the representation at different stages of its processing.

If the model’s representation changes in a predictable way across the frames of the video, that model is straightening. At the end, its output representation should be more stable than the input representation.

“You can think of the representation as a line, which starts off really curvy. A model that straightens can take that curvy line from the video and straighten it out through its processing steps,” DuTell explains.
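
As a rough illustration of that idea, the curvature of a representation trajectory can be estimated by treating each frame’s embedding as a point and measuring the turning angle between successive steps. The sketch below is not the authors’ code, and the function and variable names are purely illustrative; a lower mean angle means a straighter, more predictable trajectory.

# Minimal sketch (not the authors' code): estimate how curved a model's
# representation trajectory is across the frames of a video.
import numpy as np

def trajectory_curvature(features: np.ndarray) -> float:
    """features: array of shape (num_frames, feature_dim), one embedding per frame.
    Returns the mean angle (radians) between successive displacement vectors."""
    steps = np.diff(features, axis=0)                              # frame-to-frame displacements
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)   # unit directions
    cosines = np.sum(steps[:-1] * steps[1:], axis=1)               # cosine of each turning angle
    return float(np.mean(np.arccos(np.clip(cosines, -1.0, 1.0))))

# Example usage (hypothetical feature extractor mapping one frame to a vector):
# pixel_curv   = trajectory_curvature(frames.reshape(len(frames), -1))
# feature_curv = trajectory_curvature(np.stack([model(f) for f in frames]))
# A model that straightens should yield feature_curv < pixel_curv.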

Most models they tested didn’t straighten. Of the few that did, those that straightened most effectively had been trained for classification tasks using the technique known as adversarial training.

Adversarial training involves subtly modifying images by slightly altering each pixel. While a human wouldn’t notice the difference, these minor changes can fool a machine so that it misclassifies the image. Adversarial training makes the model more robust, so it won’t be tricked by these manipulations.

Because adversarial training teaches the model to be less reactive to slight changes in images, this helps it learn a representation that is more predictable over time, Harrington explains.
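
For readers unfamiliar with the technique, here is a minimal sketch of one common form of adversarial training, an FGSM-style step written in PyTorch. It is not necessarily the scheme used in the paper, and model, images, labels, and optimizer stand in for an ordinary image-classification setup.

# Minimal FGSM-style adversarial training step (illustrative recipe only).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, images, labels, optimizer, epsilon=2/255):
    images = images.clone().detach().requires_grad_(True)

    # Compute the gradient of the loss with respect to the input pixels.
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]

    # Nudge each pixel slightly in the direction that increases the loss.
    # A human would not notice the change, but an undefended model often will.
    adv_images = (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the perturbed images so small perturbations stop fooling the model.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(adv_images), labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()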

“People have already had this idea that adversarial training might help you get your model to be more like a human, and it was interesting to see that carry over to another property that people hadn’t tested before,” she says.

But the researchers found that adversarially trained models only learn to straighten when they are trained for broad tasks, like classifying whole images into categories. Models tasked with segmentation (labeling every pixel in an image as a certain class) did not straighten, even when they were adversarially trained.

Consistent classification

The researchers tested these image classification models by showing them videos. They found that the models that learned more perceptually straight representations tended to correctly classify objects in the videos more consistently.
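
One simple way to quantify that kind of consistency, shown below as an illustrative metric rather than the one used in the study, is the fraction of consecutive frames on which the model’s predicted class stays the same.

# Illustrative consistency metric: how often the predicted class persists
# from one video frame to the next (1.0 means perfectly consistent).
import torch

@torch.no_grad()
def classification_consistency(model, frames: torch.Tensor) -> float:
    """frames: tensor of shape (num_frames, C, H, W)."""
    preds = model(frames).argmax(dim=1)        # predicted class per frame
    same = (preds[1:] == preds[:-1]).float()   # 1 where consecutive predictions match
    return same.mean().item()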

“To me, it is amazing that these adversarially trained models, which have never even seen a video and have never been trained on temporal data, still show some amount of straightening,” DuTell says.

The researchers don’t know exactly what about the adversarial training process enables a computer vision model to straighten, but their results suggest that stronger training schemes cause the models to straighten more, she explains.

Building off this work, the researchers want to use what they learned to create new training schemes that would explicitly give a model this property. They also want to dig deeper into adversarial training to understand why this process helps a model straighten.

“From a biological standpoint, adversarial training doesn’t necessarily make sense. It’s not how humans understand the world. There are still a lot of questions about why this training process seems to help models act more like humans,” Harrington says.

“Understanding the representations learned by deep neural networks is critical to improve properties such as robustness and generalization,” says Bill Lotter, assistant professor at the Dana-Farber Cancer Institute and Harvard Medical School, who was not involved with this research. “Harrington et al. perform an extensive evaluation of how the representations of computer vision models change over time when processing natural videos, showing that the curvature of these trajectories varies widely depending on model architecture, training properties, and task. These findings can inform the development of improved models and also offer insights into biological visual processing.”

“The paper confirms that straightening natural videos is a fairly unique property displayed by the human visual system. Only adversarially trained networks display it, which provides an interesting connection with another signature of human perception: its robustness to various image transformations, whether natural or artificial,” says Olivier Hénaff, a research scientist at DeepMind, who was not involved with this research. “That even adversarially trained scene segmentation models do not straighten their inputs raises important questions for future work: Do humans parse natural scenes in the same way as computer vision models? How to represent and predict the trajectories of objects in motion while remaining sensitive to their spatial detail? In connecting the straightening hypothesis with other aspects of visual behavior, the paper lays the groundwork for more unified theories of perception.”

The research is funded, in part, by the Toyota Research Institute, the MIT CSAIL METEOR Fellowship, the National Science Foundation, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.
