    New algorithm discovers language just by watching videos | Ztoog


Mark Hamilton, an MIT PhD student in electrical engineering and computer science and affiliate of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), wants to use machines to understand how animals communicate. To do that, he set out first to create a system that can learn human language “from scratch.”

    “Funny enough, the key moment of inspiration came from the movie ‘March of the Penguins.’ There’s a scene where a penguin falls while crossing the ice, and lets out a little belabored groan while getting up. When you watch it, it’s almost obvious that this groan is standing in for a four letter word. This was the moment where we thought, maybe we need to use audio and video to learn language,” says Hamilton. “Is there a way we could let an algorithm watch TV all day and from this figure out what we’re talking about?”

    “Our model, ‘DenseAV,’ aims to learn language by predicting what it’s seeing from what it’s hearing, and vice-versa. For example, if you hear the sound of someone saying ‘bake the cake at 350’ chances are you might be seeing a cake or an oven. To succeed at this audio-video matching game across millions of videos, the model has to learn what people are talking about,” says Hamilton.

Once they trained DenseAV on this matching game, Hamilton and his colleagues looked at which pixels the model attended to when it heard a sound. For instance, when someone says “dog,” the algorithm immediately begins looking for dogs in the video stream. By seeing which pixels the algorithm selects, one can discover what the algorithm thinks a word means.

Interestingly, a similar search process happens when DenseAV listens to a dog barking: it searches for a dog in the video stream. “This piqued our interest. We wanted to see if the algorithm knew the difference between the word ‘dog’ and a dog’s bark,” says Hamilton. The team explored this by giving DenseAV a “two-sided brain.” They found that one side of DenseAV’s brain naturally focused on language, like the word “dog,” and the other side focused on sounds like barking. This showed that DenseAV not only learned the meaning of words and the locations of sounds, but also learned to distinguish between these types of cross-modal connections, all without human intervention or any knowledge of written language.
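The “two-sided brain” idea can be illustrated with a toy sketch: split each feature vector into two heads, score each head independently, and let the total similarity be their sum. Training is then free to route linguistic content through one head and environmental sound through the other, and inspecting per-head scores reveals which kind of match fired. This is a minimal illustration, not DenseAV’s actual architecture; the shapes and function names here are made up.

```python
import numpy as np

def two_head_similarity(audio_feat, visual_feat, n_heads=2):
    """Split each feature vector into heads and score them independently.

    audio_feat, visual_feat: (D,) vectors with D divisible by n_heads.
    Returns the per-head similarities and their sum.
    """
    a = audio_feat.reshape(n_heads, -1)
    v = visual_feat.reshape(n_heads, -1)
    per_head = (a * v).sum(axis=1)     # one dot product per head
    return per_head, per_head.sum()    # total similarity is their sum

# Toy usage: a pair that only agrees in head 0's channels,
# standing in for a "word" match with no matching ambient sound.
D = 8
audio = np.zeros(D)
visual = np.zeros(D)
audio[:4] = 1.0   # head 0 ("language") channels
visual[:4] = 1.0
per_head, total = two_head_similarity(audio, visual)
```

Because only the total is supervised during training, nothing forces this split, which is why it was notable that it emerged on its own.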

One branch of applications is learning from the vast amount of video published to the internet each day: “We want systems that can learn from massive amounts of video content, such as instructional videos,” says Hamilton. “Another exciting application is understanding new languages, like dolphin or whale communication, which don’t have a written form of communication. Our hope is that DenseAV can help us understand these languages that have evaded human translation efforts since the beginning. Finally, we hope that this method can be used to discover patterns between other pairs of signals, like the seismic sounds the earth makes and its geology.”

A formidable challenge lay ahead of the team: learning language without any text input. Their goal was to rediscover the meaning of language from a blank slate, avoiding pre-trained language models. This approach is inspired by how children learn language by observing and listening to their environment.

To achieve this feat, DenseAV uses two main components to process audio and visual data separately. This separation made it impossible for the algorithm to cheat by letting the visual side look at the audio, and vice versa. It forced the algorithm to recognize objects, and created detailed and meaningful features for both audio and visual signals. DenseAV learns by comparing pairs of audio and visual signals to find which signals match and which don’t. This method, called contrastive learning, doesn’t require labeled examples, and allows DenseAV to discover the important predictive patterns of language itself.
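The contrastive setup described above can be sketched as a standard InfoNCE-style loss: audio and visual embeddings from the same clip are pulled together, while embeddings from other clips in the batch act as negatives. This is a generic NumPy sketch of contrastive learning, not DenseAV’s actual loss; the function names, temperature value, and batch shapes are illustrative.

```python
import numpy as np

def info_nce_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired clips.

    audio_emb, visual_emb: (B, D) L2-normalized embeddings; row i of each
    comes from the same clip (a positive pair), all other rows in the
    batch serve as negatives.
    """
    # Scaled cosine similarity between every audio and every visual row.
    logits = audio_emb @ visual_emb.T / temperature   # (B, B)
    labels = np.arange(len(logits))                   # positives on the diagonal

    def cross_entropy(l, y):
        # Numerically stable softmax cross-entropy, averaged over the batch.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Match audio->visual and visual->audio, as in standard contrastive setups.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

# Toy usage: correctly aligned pairs give a much lower loss than shuffled ones.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
loss_aligned = info_nce_loss(emb, emb)
loss_shuffled = info_nce_loss(emb, emb[::-1])
```

No labels appear anywhere: the only supervision is the pairing of each audio track with its own video, which is exactly why the method scales to raw web video.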

One major difference between DenseAV and prior algorithms is that earlier work focused on a single notion of similarity between sound and images. An entire audio clip, like someone saying “the dog sat on the grass,” was matched to a complete image of a dog. This didn’t allow earlier methods to discover fine-grained details, like the connection between the word “grass” and the grass beneath the dog. The team’s algorithm searches for and aggregates all the possible matches between an audio clip and an image’s pixels. This not only improved performance, but allowed the team to precisely localize sounds in a way that earlier algorithms couldn’t. “Conventional methods use a single class token, but our approach compares every pixel and every second of sound. This fine-grained method lets DenseAV make more detailed connections for better localization,” says Hamilton.
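The dense matching described above can be sketched as follows: instead of one clip-level score, compute a full (time, pixel) similarity volume, then aggregate it (here, best pixel per time step, averaged over time) to get the clip-image score, keeping the volume around for localization. This is a simplified sketch of the general idea under made-up shapes and an assumed aggregation rule, not the paper’s exact formulation.

```python
import numpy as np

def dense_similarity(audio_feats, visual_feats):
    """Aggregate similarity over every (time, pixel) pair.

    audio_feats:  (T, D)    one feature vector per audio time step
    visual_feats: (H, W, D) one feature vector per image patch
    Returns a scalar clip-image score plus the (T, H, W) similarity
    volume, so individual sounds can be localized to pixels.
    """
    vol = np.einsum('td,hwd->thw', audio_feats, visual_feats)  # (T, H, W)
    # For each moment of sound, keep its best-matching pixel, then
    # average over time: max-over-space, mean-over-time aggregation.
    score = vol.max(axis=(1, 2)).mean()
    return score, vol

# Toy usage: a "word" at t=1 whose feature matches the patch at (0, 2),
# mimicking the word "grass" lining up with the grass pixels.
D = 8
rng = np.random.default_rng(1)
audio = rng.normal(size=(3, D)) * 0.1
visual = rng.normal(size=(2, 4, D)) * 0.1
audio[1] = np.ones(D)      # plant a strong cross-modal match
visual[0, 2] = np.ones(D)
score, vol = dense_similarity(audio, visual)
peak = np.unravel_index(vol[1].argmax(), vol[1].shape)  # where t=1 "looks"
```

Reading off the argmax of the volume at a given time step is what lets the model point at the pixels a word refers to, which a single class-token score cannot do.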

The researchers trained DenseAV on AudioSet, which includes 2 million YouTube videos. They also created new datasets to test how well the model can link sounds and images. In these tests, DenseAV outperformed other top models in tasks like identifying objects from their names and sounds, demonstrating its effectiveness. “Previous datasets only supported coarse evaluations, so we created a dataset using semantic segmentation datasets. This helps with pixel-perfect annotations for precise evaluation of our model’s performance. We can prompt the algorithm with specific sounds or images and get those detailed localizations,” says Hamilton.

Due to the vast amount of data involved, the project took about a year to complete. The team says that transitioning to a large transformer architecture brought challenges, as these models can easily overlook fine-grained details. Encouraging the model to focus on those details was a major hurdle.

Looking ahead, the team aims to create systems that can learn from massive amounts of video-only or audio-only data. This is crucial for new domains where there is a lot of either mode, but not both together. They also aim to scale this up using larger backbones, and possibly integrate knowledge from language models to improve performance.

“Recognizing and segmenting visual objects in images, as well as environmental sounds and spoken words in audio recordings, are each difficult problems in their own right. Historically researchers have relied upon expensive, human-provided annotations in order to train machine learning models to accomplish these tasks,” says David Harwath, assistant professor in computer science at the University of Texas at Austin, who was not involved in the work. “DenseAV makes significant progress towards developing methods that can learn to solve these tasks simultaneously by simply observing the world through sight and sound — based on the insight that the things we see and interact with often make sound, and we also use spoken language to talk about them. This model also makes no assumptions about the specific language that is being spoken, and could therefore in principle learn from data in any language. It would be exciting to see what DenseAV could learn by scaling it up to thousands or millions of hours of video data across a multitude of languages.”

Additional authors on a paper describing the work are Andrew Zisserman, professor of computer vision engineering at the University of Oxford; John R. Hershey, Google AI Perception researcher; and William T. Freeman, MIT electrical engineering and computer science professor and CSAIL principal investigator. Their research was supported, in part, by the U.S. National Science Foundation, a Royal Society Research Professorship, and an EPSRC Programme Grant, Visual AI. The work will be presented at the IEEE/CVF Computer Vision and Pattern Recognition Conference this month.
