Teaching AI to communicate sounds like humans do


Whether you're describing the sound of your faulty car engine or meowing like your neighbor's cat, imitating sounds with your voice can be a useful way to relay an idea when words don't do the trick.

Vocal imitation is the sonic equivalent of doodling a quick picture to communicate something you saw, except that instead of using a pencil to illustrate an image, you use your vocal tract to express a sound. This might seem difficult, but it's something we all do intuitively: to experience it for yourself, try using your voice to mirror the sound of an ambulance siren, a crow, or a bell being struck.

Inspired by the cognitive science of how we communicate, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have developed an AI system that can produce human-like vocal imitations with no training, and without ever having "heard" a human vocal impression before.

To achieve this, the researchers engineered their system to produce and interpret sounds much like we do. They began by building a model of the human vocal tract that simulates how vibrations from the voice box are shaped by the throat, tongue, and lips. Then they used a cognitively inspired AI algorithm to control this vocal tract model and make it produce imitations, taking into account the context-specific ways in which humans choose to communicate sound.
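The article doesn't describe the implementation, but the "voice box shaped by the throat, tongue, and lips" idea can be pictured with a classic source-filter toy: a periodic glottal pulse train passed through a few resonant filters that stand in for vocal tract formants. The Python sketch below only illustrates that general idea, not the CSAIL model; the sample rate, pitch, formant frequencies, and bandwidths are all made-up assumptions.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                          # sample rate (Hz), chosen arbitrarily
f0 = 110                            # glottal ("voice box") pitch (Hz)
t = np.arange(int(0.5 * fs)) / fs   # half a second of audio

# Glottal source: an impulse train at the pitch period.
source = np.zeros_like(t)
source[::int(fs / f0)] = 1.0

def resonator(signal, freq, bandwidth):
    """Second-order resonant filter, a rough stand-in for one vocal tract formant."""
    r = np.exp(-np.pi * bandwidth / fs)
    a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * freq / fs), r * r]
    return lfilter([1.0 - r], a, signal)

# Shaping the source with two formant resonances gives a crude vowel-like sound;
# changing the formant frequencies loosely mimics moving the tongue and lips.
sound = resonator(resonator(source, freq=700, bandwidth=130), freq=1200, bandwidth=70)
```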

The model can effectively take many sounds from the world and generate a human-like imitation of them, including noises like leaves rustling, a snake's hiss, and an approaching ambulance siren. The model can also be run in reverse to guess real-world sounds from human vocal imitations, similar to how some computer vision systems can retrieve high-quality images based on sketches. For instance, the model can correctly distinguish the sound of a human imitating a cat's "meow" from its "hiss."
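As a loose analogy for running the model in reverse, one can rank candidate real-world sounds by how closely their features match a vocal imitation, much as sketch-based image retrieval ranks photos against a drawing. The sketch below is a naive nearest-neighbor stand-in, not the paper's inference procedure; the band-averaged spectrum feature and the example sound names are hypothetical.

```python
import numpy as np

def features(sound, n_bands=32):
    """Crude spectral summary: mean magnitude in evenly split FFT bands."""
    spectrum = np.abs(np.fft.rfft(sound))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

def rank_candidates(imitation, candidates):
    """Sort candidate real-world sounds by feature distance to the imitation."""
    target = features(imitation)
    distances = {name: np.linalg.norm(features(audio) - target)
                 for name, audio in candidates.items()}
    return sorted(distances, key=distances.get)

# Hypothetical usage: rank_candidates(meow_imitation, {"meow": cat_meow, "hiss": cat_hiss})
# should place "meow" before "hiss" if the imitation resembles a meow.
```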

In the long run, this model could potentially lead to more intuitive "imitation-based" interfaces for sound designers, more human-like AI characters in virtual reality, and even methods to help students learn new languages.

The co-lead authors (MIT CSAIL PhD students Kartik Chandra SM '23 and Karima Ma, and undergraduate researcher Matthew Caren) note that computer graphics researchers have long recognized that realism is rarely the ultimate goal of visual expression. For example, an abstract painting or a child's crayon doodle can be just as expressive as a photograph.

“Over the past few decades, advances in sketching algorithms have led to new tools for artists, advances in AI and computer vision, and even a deeper understanding of human cognition,” notes Chandra. “In the same way that a sketch is an abstract, non-photorealistic representation of an image, our method captures the abstract, non-phonorealistic ways humans express the sounds they hear. This teaches us about the process of auditory abstraction.”

“The goal of this project has been to understand and computationally model vocal imitation, which we take to be the sort of auditory equivalent of sketching in the visual domain,” says Caren.

The art of imitation, in three parts

The team developed three increasingly nuanced versions of the model to compare to human vocal imitations. First, they created a baseline model that simply aimed to generate imitations that were as similar to real-world sounds as possible, but this model didn't match human behavior very well.

The researchers then designed a second, "communicative" model. According to Caren, this model considers what's distinctive about a sound to a listener. For instance, you'd likely imitate the sound of a motorboat by mimicking the rumble of its engine, since that's its most distinctive auditory feature, even if it's not the loudest aspect of the sound (unlike, say, the water splashing). This second model created imitations that were better than the baseline, but the team wanted to improve it even more.

To take their method a step further, the researchers added a final layer of reasoning to the model. "Vocal imitations can sound different based on the amount of effort you put into them. It costs time and energy to produce sounds that are perfectly accurate," says Chandra. The researchers' full model accounts for this by trying to avoid utterances that are very rapid, loud, or high- or low-pitched, which people are less likely to use in conversation. The result: more human-like imitations that closely match many of the decisions humans make when imitating the same sounds.
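One way to picture the progression described above is as three cost terms: acoustic similarity (the baseline model), distinctiveness to a listener (the communicative model), and articulatory effort (the full model). The sketch below shows that structure in simplified form; it is not the authors' formulation, and the distance metric, effort terms, and weights are assumptions chosen for illustration.

```python
import numpy as np

def similarity_cost(imitation_feats, target_feats):
    """Baseline model: make the imitation as acoustically close to the real sound as possible."""
    return np.linalg.norm(imitation_feats - target_feats)

def communicative_cost(imitation_feats, target_feats, distractor_feats):
    """Communicative model: favor imitations a listener would attribute to the target
    sound rather than to competing sounds (the engine rumble, not the water splash)."""
    d_target = np.linalg.norm(imitation_feats - target_feats)
    d_nearest_other = min(np.linalg.norm(imitation_feats - d) for d in distractor_feats)
    return d_target - d_nearest_other   # negative when the target is the best match

def effort_cost(loudness, pitch_deviation, rate):
    """Full model: very loud, very fast, or extreme-pitch utterances cost extra effort."""
    return abs(loudness) + abs(pitch_deviation) + abs(rate)

def full_cost(imitation_feats, target_feats, distractor_feats,
              loudness, pitch_deviation, rate,
              w_comm=1.0, w_effort=0.1):
    """Weighted sum of the three terms; the weights here are illustrative guesses."""
    return (similarity_cost(imitation_feats, target_feats)
            + w_comm * communicative_cost(imitation_feats, target_feats, distractor_feats)
            + w_effort * effort_cost(loudness, pitch_deviation, rate))
```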

After building this model, the team conducted a behavioral experiment to see whether AI- or human-generated vocal imitations were perceived as better by human judges. Notably, participants in the experiment favored the AI model 25 percent of the time in general, and as much as 75 percent for an imitation of a motorboat and 50 percent for an imitation of a gunshot.

Toward more expressive sound technology

Passionate about technology for music and art, Caren envisions that this model could help artists better communicate sounds to computational systems and assist filmmakers and other content creators with generating AI sounds that are more nuanced to a specific context. It could also enable a musician to rapidly search a sound database by imitating a noise that is difficult to describe in, say, a text prompt.

In the meantime, Caren, Chandra, and Ma are looking at the implications of their model in other domains, including the development of language, how infants learn to talk, and even imitation behaviors in birds like parrots and songbirds.

The team still has work to do with the current iteration of their model: it struggles with some consonants, like "z," which led to inaccurate impressions of some sounds, like bees buzzing. They also can't yet replicate how humans imitate speech, music, or sounds that are imitated differently across different languages, like a heartbeat.

Stanford University linguistics professor Robert Hawkins says that language is full of onomatopoeia and words that mimic but don't fully replicate the things they describe, like the "meow" sound that very inexactly approximates the sound that cats make. "The processes that get us from the sound of a real cat to a word like 'meow' reveal a lot about the intricate interplay between physiology, social reasoning, and communication in the evolution of language," says Hawkins, who wasn't involved in the CSAIL research. "This model presents an exciting step toward formalizing and testing theories of those processes, demonstrating that both physical constraints from the human vocal tract and social pressures from communication are needed to explain the distribution of vocal imitations."

Caren, Chandra, and Ma wrote the paper with two other CSAIL affiliates: Jonathan Ragan-Kelley, associate professor in the MIT Department of Electrical Engineering and Computer Science, and Joshua Tenenbaum, MIT Brain and Cognitive Sciences professor and Center for Brains, Minds, and Machines member. Their work was supported, in part, by the Hertz Foundation and the National Science Foundation. It was presented at SIGGRAPH Asia in early December.
