    New algorithm discovers language just by watching videos | Ztoog


    Mark Hamilton, an MIT PhD student in electrical engineering and computer science and affiliate of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), wants to use machines to understand how animals communicate. To do that, he set out first to create a system that can learn human language “from scratch.”

    “Funny enough, the key moment of inspiration came from the movie ‘March of the Penguins.’ There’s a scene where a penguin falls while crossing the ice, and lets out a little belabored groan while getting up. When you watch it, it’s almost obvious that this groan is standing in for a four letter word. This was the moment where we thought, maybe we need to use audio and video to learn language,” says Hamilton. “Is there a way we could let an algorithm watch TV all day and from this figure out what we’re talking about?”

    “Our model, ‘DenseAV,’ aims to learn language by predicting what it’s seeing from what it’s hearing, and vice-versa. For example, if you hear the sound of someone saying ‘bake the cake at 350’ chances are you might be seeing a cake or an oven. To succeed at this audio-video matching game across millions of videos, the model has to learn what people are talking about,” says Hamilton.

    Once they trained DenseAV on this matching game, Hamilton and his colleagues looked at which pixels the model searched when it heard a sound. For example, when someone says “dog,” the algorithm immediately starts looking for dogs in the video stream. By seeing which pixels the algorithm selects, one can discover what the algorithm thinks a word means.
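
    As an illustration of this kind of probing, the sketch below turns per-pixel similarities into a heatmap for a single spoken word. The feature shapes and the `word_heatmap` helper are hypothetical stand-ins for illustration, not DenseAV's released interface.

```python
# Hypothetical probe: build a heatmap of which pixels best match one spoken word.
import torch
import torch.nn.functional as F

def word_heatmap(visual_feats: torch.Tensor,  # (C, H, W) per-pixel visual features
                 word_feat: torch.Tensor,     # (C,) audio feature for one word, e.g. "dog"
                 out_size=(224, 224)) -> torch.Tensor:
    # Cosine similarity between the word feature and every pixel feature.
    v = F.normalize(visual_feats, dim=0)
    w = F.normalize(word_feat, dim=0)
    sim = torch.einsum("chw,c->hw", v, w)                      # (H, W) similarity map
    # Upsample to image resolution and rescale to [0, 1] for overlaying on the frame.
    sim = F.interpolate(sim[None, None], size=out_size,
                        mode="bilinear", align_corners=False)[0, 0]
    return (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)

# Toy call with random stand-in features:
print(word_heatmap(torch.randn(512, 14, 14), torch.randn(512)).shape)  # torch.Size([224, 224])
```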

    Interestingly, a similar search process happens when DenseAV listens to a dog barking: it searches for a dog in the video stream. “This piqued our interest. We wanted to see if the algorithm knew the difference between the word ‘dog’ and a dog’s bark,” says Hamilton. The team explored this by giving DenseAV a “two-sided brain.” They found that one side of DenseAV’s brain naturally focused on language, like the word “dog,” and the other side focused on sounds like barking. This showed that DenseAV not only learned the meanings of words and the locations of sounds, but also learned to distinguish between these types of cross-modal connections, all without human intervention or any knowledge of written language.

    One branch of applications is learning from the vast amount of video published to the internet each day: “We want systems that can learn from massive amounts of video content, such as instructional videos,” says Hamilton. “Another exciting application is understanding new languages, like dolphin or whale communication, which don’t have a written form of communication. Our hope is that DenseAV can help us understand these languages that have evaded human translation efforts since the beginning. Finally, we hope that this method can be used to discover patterns between other pairs of signals, like the seismic sounds the earth makes and its geology.”

    A formidable challenge lay ahead of the team: learning language without any text input. Their goal was to rediscover the meaning of language from a blank slate, avoiding the use of pre-trained language models. This approach is inspired by how children learn language by observing and listening to their environment.

    To achieve this feat, DenseAV uses two main components to process audio and visual data separately. This separation made it impossible for the algorithm to cheat by letting the visual side look at the audio, and vice versa. It forced the algorithm to recognize objects, and created detailed and meaningful features for both audio and visual signals. DenseAV learns by comparing pairs of audio and visual signals to find which signals match and which don’t. This method, called contrastive learning, doesn’t require labeled examples, and allows DenseAV to figure out the important predictive patterns of language itself.
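
    A minimal sketch of clip-level audio-visual contrastive learning in this spirit is shown below, using an InfoNCE-style loss over a batch of paired clips. The placeholder encoders and dimensions are assumptions for illustration; DenseAV’s actual architecture and loss differ in detail.

```python
# Minimal sketch of audio-visual contrastive learning with two separate encoders.
# Encoders and dimensions are placeholders, not DenseAV's real architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVisualContrastive(nn.Module):
    def __init__(self, audio_encoder: nn.Module, visual_encoder: nn.Module, temperature=0.07):
        super().__init__()
        # Two separate branches: neither modality can "peek" at the other.
        self.audio_encoder = audio_encoder
        self.visual_encoder = visual_encoder
        self.temperature = temperature

    def forward(self, audio, frames):
        a = F.normalize(self.audio_encoder(audio), dim=-1)    # (B, D) audio embeddings
        v = F.normalize(self.visual_encoder(frames), dim=-1)  # (B, D) visual embeddings
        logits = a @ v.t() / self.temperature                 # (B, B) all audio-video pairings
        targets = torch.arange(a.size(0), device=a.device)    # true pairs lie on the diagonal
        # Pull matching audio-video pairs together, push mismatched pairs apart.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

# Toy usage with linear stand-ins for the real encoders:
model = AudioVisualContrastive(nn.Linear(128, 64), nn.Linear(256, 64))
loss = model(torch.randn(8, 128), torch.randn(8, 256))
loss.backward()
```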

    One major difference between DenseAV and previous algorithms is that prior works focused on a single notion of similarity between sound and images. An entire audio clip, like someone saying “the dog sat on the grass,” was matched to an entire image of a dog. This didn’t allow earlier methods to discover fine-grained details, like the connection between the word “grass” and the grass underneath the dog. The team’s algorithm searches for and aggregates all of the possible matches between an audio clip and an image’s pixels. This not only improved performance, but allowed the team to precisely localize sounds in a way that earlier algorithms couldn’t. “Conventional methods use a single class token, but our approach compares every pixel and every second of sound. This fine-grained method lets DenseAV make more detailed connections for better localization,” says Hamilton.
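
    The contrast between a single pooled similarity and a dense aggregation over every pixel and every moment of sound can be sketched as follows. The specific pooling choices here (max over pixels, mean over time) are an illustrative assumption, not necessarily DenseAV’s exact formulation.

```python
# Illustrative contrast: one pooled "class token" similarity vs. a dense aggregation
# over every audio timestep and every image pixel.
import torch

def global_similarity(audio_feats, visual_feats):
    # audio_feats: (T, C) per-timestep features; visual_feats: (C, H, W) per-pixel features.
    a = audio_feats.mean(dim=0)         # collapse the whole clip to one vector
    v = visual_feats.mean(dim=(1, 2))   # collapse the whole image to one vector
    return torch.dot(a, v)              # a single number for the clip/image pair

def dense_similarity(audio_feats, visual_feats):
    # Full similarity volume: every moment of sound against every pixel.
    sim = torch.einsum("tc,chw->thw", audio_feats, visual_feats)  # (T, H, W)
    # For each moment of audio, keep its best-matching pixel, then average over time.
    return sim.flatten(1).max(dim=1).values.mean()

audio_feats, visual_feats = torch.randn(50, 512), torch.randn(512, 14, 14)
print(global_similarity(audio_feats, visual_feats), dense_similarity(audio_feats, visual_feats))
```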

    The researchers trained DenseAV on AudioSet, which includes 2 million YouTube videos. They also created new datasets to test how well the model can link sounds and images. In these tests, DenseAV outperformed other top models in tasks like identifying objects from their names and sounds, proving its effectiveness. “Previous datasets only supported coarse evaluations, so we created a dataset using semantic segmentation datasets. This helps with pixel-perfect annotations for precise evaluation of our model’s performance. We can prompt the algorithm with specific sounds or images and get those detailed localizations,” says Hamilton.

    Due to the massive amount of data involved, the project took about a year to complete. The team says that transitioning to a large transformer architecture brought challenges, as these models can easily overlook fine-grained details. Encouraging the model to focus on these details was a major hurdle.

    Looking ahead, the team aims to create systems that can learn from massive amounts of video-only or audio-only data. This is crucial for new domains where there is a lot of either mode, but not both together. They also aim to scale this up using larger backbones, and possibly integrate knowledge from language models to improve performance.

    “Recognizing and segmenting visual objects in images, as well as environmental sounds and spoken words in audio recordings, are each difficult problems in their own right. Historically researchers have relied upon expensive, human-provided annotations in order to train machine learning models to accomplish these tasks,” says David Harwath, assistant professor in computer science at the University of Texas at Austin, who was not involved in the work. “DenseAV makes significant progress towards developing methods that can learn to solve these tasks simultaneously by simply observing the world through sight and sound — based on the insight that the things we see and interact with often make sound, and we also use spoken language to talk about them. This model also makes no assumptions about the specific language that is being spoken, and could therefore in principle learn from data in any language. It would be exciting to see what DenseAV could learn by scaling it up to thousands or millions of hours of video data across a multitude of languages.”

    Additional authors on a paper describing the work are Andrew Zisserman, professor of computer vision engineering at the University of Oxford; John R. Hershey, Google AI Perception researcher; and William T. Freeman, MIT electrical engineering and computer science professor and CSAIL principal investigator. Their research was supported, in part, by the U.S. National Science Foundation, a Royal Society Research Professorship, and an EPSRC Programme Grant Visual AI. The work will be presented at the IEEE/CVF Computer Vision and Pattern Recognition Conference this month.
