    Unsupervised speech-to-speech translation from monolingual data – Google Research Blog


    Posted by Eliya Nachmani, Research Scientist, and Michelle Tadmor Ramanovich, Software Engineer, Google Research

Speech-to-speech translation (S2ST) is a type of machine translation that converts spoken language from one language to another. This technology has the potential to break down language barriers and facilitate communication between people from different cultures and backgrounds.

Previously, we introduced Translatotron 1 and Translatotron 2, the first models able to directly translate speech between two languages. However, they were trained in supervised settings with parallel speech data. The scarcity of parallel speech data is a major challenge in this field, so much so that most public datasets are semi- or fully-synthesized from text. This adds further hurdles to learning translation and reconstruction of speech attributes that are not represented in the text and are thus not reflected in the synthesized training data.

Here we present Translatotron 3, a novel unsupervised speech-to-speech translation architecture. In Translatotron 3, we show that it is possible to learn a speech-to-speech translation task from monolingual data alone. This method opens the door not only to translation between more language pairs but also to translation of non-textual speech attributes such as pauses, speaking rates, and speaker identity. Our method does not include any direct supervision to target languages, and we therefore believe it is the right direction for preserving paralinguistic characteristics (e.g., tone, emotion) of the source speech across translation. To enable speech-to-speech translation, we use back-translation, a technique from unsupervised machine translation (UMT) in which a synthetic translation of the source language is used to translate texts without bilingual text datasets. Experimental results on speech-to-speech translation tasks between Spanish and English show that Translatotron 3 outperforms a baseline cascade system.

    Translatotron 3

Translatotron 3 addresses the problem of unsupervised S2ST, which can eliminate the requirement for bilingual speech datasets. To do this, Translatotron 3's design incorporates three key aspects:

1. Pre-training the entire model as a masked autoencoder with SpecAugment, a simple data augmentation method for speech recognition that operates on the logarithmic mel spectrogram of the input audio (instead of the raw audio itself) and has been shown to effectively improve the generalization capabilities of the encoder (a masking sketch follows this list).
2. Unsupervised embedding mapping based on multilingual unsupervised embeddings (MUSE), which is trained on unpaired languages but allows the model to learn an embedding space that is shared between the source and target languages.
3. A reconstruction loss based on back-translation, to train an encoder-decoder direct S2ST model in a fully unsupervised manner.
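
A minimal sketch of the SpecAugment-style masking from point 1, operating on a (time, mel-bins) log-mel spectrogram; the mask counts and widths here are illustrative assumptions, not the settings used for Translatotron 3:

```python
import torch

def spec_augment(log_mel: torch.Tensor,
                 num_time_masks: int = 2, max_time_width: int = 20,
                 num_freq_masks: int = 2, max_freq_width: int = 10) -> torch.Tensor:
    """Zero out random time spans and mel channels of a (time, mel_bins)
    log-mel spectrogram, SpecAugment-style. Mask counts and widths are
    illustrative assumptions."""
    augmented = log_mel.clone()
    num_frames, num_bins = augmented.shape
    for _ in range(num_time_masks):
        width = int(torch.randint(0, max_time_width + 1, (1,)))
        start = int(torch.randint(0, max(1, num_frames - width), (1,)))
        augmented[start:start + width, :] = 0.0
    for _ in range(num_freq_masks):
        width = int(torch.randint(0, max_freq_width + 1, (1,)))
        start = int(torch.randint(0, max(1, num_bins - width), (1,)))
        augmented[:, start:start + width] = 0.0
    return augmented

# Masked-autoencoder pre-training then asks the model to reconstruct the
# unmasked input: loss = reconstruction_loss(model(spec_augment(x)), x)
```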

The model is trained using a combination of the unsupervised MUSE embedding loss, the reconstruction loss, and the S2S back-translation loss. During inference, the shared encoder encodes the input into a multilingual embedding space, which is subsequently decoded by the target-language decoder, as sketched below.
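
In code form, the combined objective and the inference path might look like the following; the loss weights and function names are assumptions for illustration, not the published configuration:

```python
def total_loss(muse_loss, reconstruction_loss, back_translation_loss,
               w_muse=1.0, w_recon=1.0, w_bt=1.0):
    """Weighted sum of the three training terms; the weights are
    illustrative assumptions, not published values."""
    return (w_muse * muse_loss
            + w_recon * reconstruction_loss
            + w_bt * back_translation_loss)

def translate(shared_encoder, target_decoder, source_spectrogram):
    """Inference: encode into the shared multilingual embedding space,
    then decode with the target-language decoder only."""
    latent = shared_encoder(source_spectrogram)
    return target_decoder(latent)
```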

    Architecture

Translatotron 3 employs a shared encoder to encode both the source and target languages. Like Translatotron 2, the decoder is composed of a linguistic decoder, an acoustic synthesizer (responsible for acoustic generation of the translation speech), and a single attention module. However, Translatotron 3 has two decoders, one for the source language and another for the target language. During training, we use monolingual speech-text datasets (i.e., the data are made up of speech-text pairs; they are not translations).
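
A structural sketch of this layout in PyTorch; the specific layer types and sizes (LSTM stand-ins, 512 dimensions, 8 attention heads) are assumptions, since the post only names the components:

```python
import torch
from torch import nn

class Decoder(nn.Module):
    """One per language: attention + linguistic decoder + acoustic
    synthesizer, mirroring the Translatotron 2 decoder layout. Layer
    choices here are illustrative assumptions."""
    def __init__(self, dim: int = 512, num_mel_bins: int = 80):
        super().__init__()
        self.attention = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.linguistic_decoder = nn.LSTM(dim, dim, batch_first=True)
        self.acoustic_synthesizer = nn.Linear(dim, num_mel_bins)

    def forward(self, encoded: torch.Tensor) -> torch.Tensor:
        context, _ = self.attention(encoded, encoded, encoded)
        linguistic, _ = self.linguistic_decoder(context)
        return self.acoustic_synthesizer(linguistic)  # predicted mel frames

class Translatotron3(nn.Module):
    """Shared encoder; separate decoders for source and target languages."""
    def __init__(self, num_mel_bins: int = 80, dim: int = 512):
        super().__init__()
        self.encoder = nn.LSTM(num_mel_bins, dim, batch_first=True)  # stand-in
        self.source_decoder = Decoder(dim, num_mel_bins)
        self.target_decoder = Decoder(dim, num_mel_bins)

    def forward(self, spectrogram: torch.Tensor, to_target: bool = True):
        encoded, _ = self.encoder(spectrogram)  # (batch, time, dim)
        decoder = self.target_decoder if to_target else self.source_decoder
        return decoder(encoded)
```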

    Encoder

The encoder has the same architecture as the speech encoder in Translatotron 2. The output of the encoder is split into two parts: the first part contains semantic information while the second part contains acoustic information. Using the MUSE loss, the first half of the output is trained to be the MUSE embeddings of the text of the input speech spectrogram. The latter half is updated without the MUSE loss. It is important to note that the same encoder is shared between the source and target languages. Furthermore, the MUSE embedding is multilingual in nature. As a result, the encoder is able to learn a multilingual embedding space across the source and target languages. This allows a more efficient and effective encoding of the input, because the encoder can encode speech from both languages into a common embedding space, rather than maintaining a separate embedding space for each language.
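
A minimal sketch of that split, assuming the semantic and acoustic halves are separated along the feature axis and that the MUSE targets have already been aligned to the frame sequence (both details are assumptions; the post does not specify them):

```python
import torch.nn.functional as F

def muse_alignment_loss(encoder_output, muse_targets):
    """encoder_output: (batch, time, dim). The first half of the feature
    axis (semantic) is regressed toward the pre-trained MUSE embeddings;
    the second half (acoustic) is left free of the MUSE loss. MSE as the
    error measure and a channel-wise split are illustrative assumptions."""
    semantic, acoustic = encoder_output.chunk(2, dim=-1)
    return F.mse_loss(semantic, muse_targets)
```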

    Decoder

Like Translatotron 2, the decoder is composed of three distinct components, namely the linguistic decoder, the acoustic synthesizer, and the attention module. To effectively handle the different properties of the source and target languages, however, Translatotron 3 has two separate decoders, one for the source language and one for the target language.

Two-part training

The training methodology consists of two parts: (1) auto-encoding with reconstruction and (2) a back-translation term. In the first part, the network is trained to auto-encode the input into a multilingual embedding space using the MUSE loss and the reconstruction loss. This phase aims to ensure that the network generates meaningful multilingual representations. In the second part, the network is further trained to translate the input spectrogram by utilizing the back-translation loss. To mitigate the issue of catastrophic forgetting and to enforce that the latent space is multilingual, the MUSE loss and the reconstruction loss are also applied in this second part of training. To ensure that the encoder learns meaningful properties of the input, rather than simply reconstructing it, we apply SpecAugment to the encoder input in both phases; this has been shown to effectively improve the generalization capabilities of the encoder by augmenting the input data. A sketch of this schedule follows.
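
One way to arrange the two-part schedule, as a sketch; every helper name here (spec_augment, decode_same_language, back_translate, and so on) is hypothetical glue, not the authors' training code:

```python
def train_two_parts(encoder, decode_same_language, back_translate,
                    muse_alignment_loss, reconstruction_loss, spec_augment,
                    monolingual_batches, optimizer):
    """Part 1 auto-encodes into the shared space (MUSE + reconstruction
    losses); part 2 adds the back-translation term while keeping both
    earlier losses to avoid catastrophic forgetting. SpecAugment is
    applied to the encoder input in both parts."""
    for phase in ("auto_encoding", "back_translation"):
        for spectrogram, muse_targets in monolingual_batches:
            augmented = spec_augment(spectrogram)
            encoded = encoder(augmented)
            loss = (muse_alignment_loss(encoded, muse_targets)
                    + reconstruction_loss(decode_same_language(encoded),
                                          spectrogram))
            if phase == "back_translation":
                round_trip = back_translate(augmented)  # source -> target -> source
                loss = loss + reconstruction_loss(round_trip, spectrogram)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```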

Training objective

During the back-translation training phase (illustrated in the section below), the network is trained to translate the input spectrogram to the target language and then back to the source language. The goal of back-translation is to enforce that the latent space is multilingual. To achieve this, the following losses are applied:

• MUSE loss: The MUSE loss measures the similarity between the multilingual embedding of the input spectrogram and the multilingual embedding of the back-translated spectrogram.
• Reconstruction loss: The reconstruction loss measures the similarity between the input spectrogram and the back-translated spectrogram.

In addition to these losses, SpecAugment is applied to the encoder input in both phases. Before the back-translation training phase, the network is trained to auto-encode the input into a multilingual embedding space using the MUSE loss and the reconstruction loss. A sketch of one back-translation step follows.
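
Here is that round trip under the two losses above, as a sketch; the decoder names are hypothetical, and MSE stands in for whichever similarity measures the authors actually use:

```python
import torch.nn.functional as F

def back_translation_losses(encoder, source_to_target, target_to_source,
                            spectrogram):
    """Round-trip the input (source -> target -> source) and score it.
    No supervision touches the intermediate translation."""
    translated = source_to_target(encoder(spectrogram))
    back_translated = target_to_source(encoder(translated))
    # Compare multilingual embeddings of the input and the round-trip output.
    muse_loss = F.mse_loss(encoder(spectrogram), encoder(back_translated))
    # Compare the spectrograms themselves.
    reconstruction_loss = F.mse_loss(back_translated, spectrogram)
    return muse_loss, reconstruction_loss
```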

    MUSE loss

To ensure that the encoder generates multilingual representations that are meaningful for both decoders, we employ a MUSE loss during training. The MUSE loss forces the encoder to generate such a representation by using pre-trained MUSE embeddings. During the training process, given an input text transcript, we extract the corresponding MUSE embeddings from the embeddings of the input language. The error between the MUSE embeddings and the output vectors of the encoder is then minimized. Note that the encoder is indifferent to the language of the input during inference, due to the multilingual nature of the embeddings.
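
The lookup itself is straightforward; a sketch, assuming a token-to-vector table for the input language's MUSE embeddings (dropping out-of-vocabulary tokens is an illustrative choice):

```python
import torch

def lookup_muse_embeddings(transcript_tokens, muse_table):
    """Fetch pre-trained MUSE vectors for each token of the transcript.
    muse_table: dict mapping token -> torch.Tensor of shape (dim,).
    Skipping tokens absent from the table is an illustrative assumption."""
    vectors = [muse_table[tok] for tok in transcript_tokens if tok in muse_table]
    return torch.stack(vectors)  # (num_tokens, dim): the regression target
```

The encoder's semantic output is regressed toward these vectors during training only; at inference time no transcript is needed.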

Training and inference in Translatotron 3. Training applies the reconstruction loss both via the auto-encoding path and via back-translation.

    Audio samples

The following are examples of direct speech-to-speech translation from Translatotron 3 (audio samples are available in the original post):

• Spanish-to-English (Conversational dataset): Input (Spanish), TTS-synthesized reference (English), Translatotron 3 (English)
• Spanish-to-English (CommonVoice11 Synthesized dataset): Input (Spanish), TTS-synthesized reference (English), Translatotron 3 (English)
• Spanish-to-English (CommonVoice11 dataset): Input (Spanish), TTS reference (English), Translatotron 3 (English)

    Performance

To empirically evaluate the performance of the proposed approach, we conducted experiments on English and Spanish using various datasets, including the Common Voice 11 dataset, as well as two synthesized datasets derived from the Conversational and Common Voice 11 datasets.

Translation quality was measured by BLEU (higher is better) on ASR (automatic speech recognition) transcriptions of the translated speech, compared to the corresponding reference translation text. Speech quality was measured by the MOS score (higher is better), and speaker similarity by the average cosine similarity (higher is better). Minimal sketches of these metrics follow.
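
Sketches of two of these metrics; the ASR system and speaker-embedding model are unspecified in the post and assumed here to exist upstream:

```python
import numpy as np
import sacrebleu  # pip install sacrebleu

def asr_bleu(asr_transcripts: list[str], references: list[str]) -> float:
    """BLEU over ASR transcripts of the translated speech vs. the
    reference translations (higher is better)."""
    return sacrebleu.corpus_bleu(asr_transcripts, [references]).score

def speaker_similarity(input_embedding: np.ndarray,
                       output_embedding: np.ndarray) -> float:
    """Cosine similarity between speaker embeddings of the input speech
    and the translated output (higher is better)."""
    return float(np.dot(input_embedding, output_embedding)
                 / (np.linalg.norm(input_embedding)
                    * np.linalg.norm(output_embedding)))
```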

Because Translatotron 3 is an unsupervised method, we used as a baseline a cascaded S2ST system composed of ASR, unsupervised machine translation (UMT), and TTS (text-to-speech). Specifically, the UMT component uses the nearest neighbor in the embedding space to create the translation; a toy version of that lookup is sketched below.
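
A toy version of the nearest-neighbor lookup; all names are hypothetical, and a real system would translate in context rather than word by word:

```python
import numpy as np

def nearest_neighbor_translate(source_words, source_emb, target_emb, target_vocab):
    """Word-by-word UMT baseline: map each source word into the shared
    embedding space and pick the closest target-language word by cosine
    similarity. source_emb: dict word -> vector; target_emb: (vocab, dim)
    array aligned with target_vocab. Illustrative only."""
    # L2-normalize rows so a dot product equals cosine similarity.
    target_matrix = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    output = []
    for word in source_words:
        vec = source_emb[word]
        vec = vec / np.linalg.norm(vec)
        output.append(target_vocab[int(np.argmax(target_matrix @ vec))])
    return output
```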

Translatotron 3 outperforms the baseline by large margins in every aspect we measured: translation quality, speaker similarity, and speech quality. It particularly excelled on the conversational corpus. Moreover, Translatotron 3 achieves speech naturalness similar to that of the ground truth audio samples (measured by MOS, higher is better).

Translation quality (measured by BLEU, where higher is better) evaluated on three Spanish-English corpora.
Speaker similarity (measured by the average cosine similarity between the input speaker and the output speaker, where higher is better) evaluated on three Spanish-English corpora.
Mean opinion score (measured by the average MOS metric, where higher is better) evaluated on three Spanish-English corpora.

    Future work

As future work, we would like to extend this work to more languages and investigate whether zero-shot S2ST can be performed with the back-translation technique. We would also like to examine the use of back-translation with different types of speech data, such as noisy speech and low-resource languages.

    Acknowledgments

The direct contributors to this work include Eliya Nachmani, Alon Levkovitch, Yifan Ding, Chulayuth Asawaroengchai, Heiga Zen, and Michelle Tadmor Ramanovich. We also thank Yu Zhang, Yuma Koizumi, Soroosh Mariooryad, RJ Skerry-Ryan, Neil Zeghidour, Christian Frank, Marco Tagliasacchi, Nadav Bar, Benny Schlesinger and Yonghui Wu.
