    Using large language models to augment video conferences with dynamic visuals

    Posted by Ruofei Du, Research Scientist, and Alex Olwal, Senior Staff Research Scientist, Google Augmented Reality

    Recent advances in video conferencing have significantly improved remote video communication through features like live captioning and noise cancellation. However, there are many situations where dynamic visual augmentation could help better convey complex and nuanced information. For example, when discussing what to order at a Japanese restaurant, your friends could share visuals that would help you feel more confident about ordering the “Sukiyaki”. Or when talking about your recent family trip to San Francisco, you may want to show a photo from your personal album.

    In “Visual Captions: Augmenting Verbal Communication With On-the-fly Visuals”, presented at ACM CHI 2023, we introduce a system that uses verbal cues to augment synchronous video communication with real-time visuals. We fine-tuned a large language model to proactively suggest relevant visuals in open-vocabulary conversations using a dataset we curated for this purpose. We open sourced Visual Captions as part of the ARChat project, which is designed for rapid prototyping of augmented communication with real-time transcription.

    Visual Captions facilitates verbal communication with real-time visuals. The system is even robust against typical errors that often appear in real-time speech-to-text transcription. For example, out of context, the transcription model misheard the word “pier” as “pair”, but Visual Captions still recommends images of the Santa Monica Pier.

    Design space for augmenting verbal communication with dynamic visuals

    We invited 10 internal participants, each with various technical and non-technical backgrounds, including software engineers, researchers, UX designers, visual artists, students, etc., to discuss their particular needs and desires for a potential real-time visual augmentation service. In two sessions, we introduced low-fidelity prototypes of the envisioned system, followed by video demos of existing text-to-image systems. These discussions informed a design space with eight dimensions for visual augmentation of real-time conversations, labeled below as D1 to D8.

    Visual augmentations could be synchronous or asynchronous with the conversation (D1: Temporal), could be used for both expressing and understanding speech content (D2: Subject), and could be applied using a wide range of different visual content, visual types, and visual sources (D3: Visual). Such visual augmentation might vary depending on the scale of the meetings (D4: Scale) and whether a meeting is in co-located or remote settings (D5: Space). These factors also influence whether the visuals should be displayed privately, shared between participants, or public to everyone (D6: Privacy). Participants also identified different ways in which they would like to interact with the system while having conversations (D7: Initiation). For example, participants proposed different levels of “proactivity”, which indicates the degree to which users would like the model to take the initiative. Finally, participants envisioned different methods of interaction, for example, using speech or gestures for input (D8: Interaction).

    Design space for augmenting verbal communication with dynamic visuals.

    Informed by this initial feedback, we designed Visual Captions to focus on generating synchronous visuals of semantically relevant visual content, type, and source. While participants in these initial exploratory sessions were taking part in one-to-one remote conversations, deployment of Visual Captions in the wild will often be in one-to-many (e.g., an individual giving a presentation to an audience) and many-to-many scenarios (e.g., a discussion among multiple people in a meeting).

    Because the visual that best complements a conversation depends strongly on the context of the discussion, we needed a training set specific to this purpose. So, we collected a dataset of 1595 quadruples of language (1), visual content (2), type (3), and source (4) across a variety of contexts, including daily conversations, lectures, and travel guides. For example, “I would love to see it!” corresponds to visual content of “face smiling”, a visual type of “emoji”, and a visual source of “public search”. “Did she tell you about our trip to Mexico?” corresponds to visual content of “a photo from the trip to Mexico”, a visual type of “photo”, and a visual source of “personal album”. We publicly released this VC1.5K dataset for the research community.
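
    To make the structure of these quadruples concrete, the sketch below shows how one such example could be represented in Python. The field names are our own illustrative choice for this post, not the released schema of VC1.5K.

    # A minimal sketch of one VC1.5K-style quadruple.
    # Field names are hypothetical; the released dataset may use a different schema.
    example_quadruple = {
        "language": "Did she tell you about our trip to Mexico?",  # (1) spoken sentence
        "visual_content": "a photo from the trip to Mexico",       # (2) what to show
        "visual_type": "photo",                                     # (3) e.g., photo, emoji, GIF
        "visual_source": "personal album",                          # (4) e.g., public search, personal album
    }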

    Visual intent prediction model

    To predict what visuals could complement a conversation, we trained a visual intent prediction model based on a large language model using the VC1.5K dataset. For training, we parsed each visual intent into the format of “<Visual Type> of <Visual Content> from <Visual Source>”.

    {"immediate": "<Previous Two Sentences> →", 
      "completion": 
    "<Visual Type 1> of "<Visual Type 1> from "<Visual Source 1>;
     <Visual Type 2> of "<Visual Type 2> from "<Visual Source 2>; 
      ... "}
    

    Using this format, this system can handle open-vocabulary conversations and contextually predict visual content, visual source, and visual type. Anecdotally, we found that it outperforms keyword-based approaches, which fail to handle open-vocabulary examples like “Your aunt Amy will be visiting this Saturday,” and cannot suggest relevant visual types or visual sources.

    Examples of visual intent predictions by our model.
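
    At inference time, the model’s free-text output has to be split back into the three fields before visuals can be retrieved. A minimal parsing sketch, assuming the “<type> of <content> from <source>” pattern holds, might look like the following; the regex and function name are ours, not part of the released code.

    import re

    # Assumed output pattern: "<Visual Type> of <Visual Content> from <Visual Source>".
    # Fragile by design: a sketch, not production parsing.
    INTENT_PATTERN = re.compile(r"^(?P<type>.+?) of (?P<content>.+) from (?P<source>.+?)$")

    def parse_intents(completion: str):
        """Split a predicted completion into (type, content, source) triples."""
        intents = []
        for part in completion.split(";"):
            match = INTENT_PATTERN.match(part.strip())
            if match:
                intents.append((match["type"], match["content"], match["source"]))
        return intents

    print(parse_intents("photo of a photo from the trip to Mexico from personal album"))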

    We used 1276 (80%) examples from the VC1.5K dataset for fine-tuning the large language model and the remaining 319 (20%) examples as test data. We measured the performance of the fine-tuned model with the token accuracy metric, i.e., the percentage of tokens in a batch that were correctly predicted by the model. During training, our model reached a training token accuracy of 97% and a validation token accuracy of 87%.
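
    For reference, token accuracy as described here can be computed in a few lines. This sketch assumes predicted and target token IDs are already aligned integer sequences and is not tied to any particular fine-tuning framework.

    def token_accuracy(predicted_ids, target_ids):
        """Fraction of positions where the predicted token matches the target token.

        Assumes both sequences are aligned and equally long
        (e.g., teacher-forced predictions over the same batch).
        """
        matches = sum(p == t for p, t in zip(predicted_ids, target_ids))
        return matches / len(target_ids)

    # Toy example: 4 of 5 tokens correct -> 0.8
    print(token_accuracy([12, 7, 99, 4, 31], [12, 7, 5, 4, 31]))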

    Performance

    To evaluate the utility of the trained Visual Captions model, we invited 89 participants to perform 846 tasks. They were asked to provide feedback on a scale of “1 – Strongly Disagree” to “7 – Strongly Agree” for six qualitative statements. Most participants preferred to have the visual during a conversation (Q1, 83% ≥ 5–Somewhat Agree). Moreover, they considered the displayed visuals to be useful and informative (Q2, 82% ≥ 5–Somewhat Agree), high-quality (Q3, 82% ≥ 5–Somewhat Agree), and relevant to the original speech (Q4, 84% ≥ 5–Somewhat Agree). Participants also found the predicted visual type (Q5, 87% ≥ 5–Somewhat Agree) and visual source (Q6, 86% ≥ 5–Somewhat Agree) to be accurate given the context of the corresponding conversation.

    Technical evaluation results of the visual prediction model rated by study participants.

    With this fine-tuned visual intent prediction model, we developed Visual Captions on the ARChat platform, which can add new interactive widgets directly onto the camera streams of video conferencing platforms, such as Google Meet. As shown in the system workflow below, Visual Captions automatically captures the user’s speech, retrieves the last sentences, feeds them into the visual intent prediction model every 100 ms, retrieves relevant visuals, and then suggests visuals in real time.

    System workflow of Visual Captions.
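
    The 100 ms polling loop described above could be sketched roughly as follows. The speech-capture, prediction, retrieval, and UI helpers are placeholders we invented for illustration; they are not ARChat APIs.

    import time

    POLL_INTERVAL_S = 0.1  # the system queries the model every 100 ms

    def suggestion_loop(get_last_sentences, predict_intents, retrieve_visuals, show_suggestions):
        """Poll recent speech, predict visual intents, and surface suggested visuals.

        All four callables are hypothetical placeholders standing in for the
        transcription, model, retrieval, and UI components.
        """
        while True:
            sentences = get_last_sentences()          # latest transcribed speech
            if sentences:
                intents = predict_intents(sentences)  # fine-tuned LLM predictions
                visuals = retrieve_visuals(intents)   # e.g., image search or personal album
                show_suggestions(visuals)             # rendered according to proactivity level
            time.sleep(POLL_INTERVAL_S)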

    Visual Captions provides three levels of proactivity when suggesting visuals (a small configuration sketch follows the list):

    • Auto-display (high proactivity): The system autonomously searches for and displays visuals publicly to all meeting participants. No user interaction is required.
    • Auto-suggest (medium proactivity): The suggested visuals are shown in a private scrolling view. A user then clicks a visual to display it publicly. In this mode, the system proactively recommends visuals, but the user decides when and what to display.
    • On-demand-suggest (low proactivity): The system will only suggest visuals if a user presses the spacebar.
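
    One way to think about these modes in code is as a small configuration enum that the suggestion UI branches on. This is purely an illustrative sketch, not part of the released system.

    from enum import Enum

    class Proactivity(Enum):
        AUTO_DISPLAY = "auto-display"            # high: show visuals publicly, no interaction
        AUTO_SUGGEST = "auto-suggest"            # medium: private suggestions, user clicks to share
        ON_DEMAND_SUGGEST = "on-demand-suggest"  # low: suggest only when the user presses the spacebar

    def handle_visuals(mode: Proactivity, visuals, display_publicly, show_privately):
        """Dispatch suggested visuals according to the chosen proactivity level.

        display_publicly and show_privately are hypothetical UI callbacks.
        """
        if mode is Proactivity.AUTO_DISPLAY:
            display_publicly(visuals)
        else:
            # Both suggest modes surface visuals privately; the user decides what to share.
            show_privately(visuals)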

    Quantitative and qualitative evaluation: User studies

    We evaluated Visual Captions in both a controlled lab study (n = 26) and in-the-wild deployment studies (n = 10). Participants found that real-time visuals facilitated live conversations by helping explain unfamiliar concepts, resolve language ambiguities, and make conversations more engaging. Participants also reported different preferences for interacting with the system in situ, and that varying levels of proactivity were preferred in different social scenarios.

    Participants’ Task Load Index and Likert scale ratings (from 1 – Strongly Disagree to 7 – Strongly Agree) of four conversations without Visual Captions (“No VC”) and the three Visual Captions modes: auto-display, auto-suggest, and on-demand suggest.

    Conclusions and future directions

    This work proposes a system for real-time visual augmentation of verbal communication, called Visual Captions, that was trained using a dataset of 1595 visual intents collected from 246 participants, covering 15 topic categories. We publicly release the training dataset, VC1.5K, to the research community to support further research in this space. We have also deployed Visual Captions in ARChat, which facilitates video conferences in Google Meet by transcribing meetings and augmenting the camera video streams.

    Visual Captions represents a significant step towards enhancing verbal communication with on-the-fly visuals. By understanding the importance of visual cues in everyday conversations, we can create more effective communication tools and improve how people connect.

    Acknowledgements

    This work is a collaboration across multiple teams at Google. Key contributors to the project include Xingyu “Bruce” Liu, Vladimir Kirilyuk, Xiuxiu Yuan, Peggy Chi, Alex Olwal, and Ruofei Du.

    We would like to extend our thanks to those on the ARChat team who provided assistance, including Jason Mayes, Max Spear, Na Li, Jun Zhang, Jing Jin, Yuan Ren, Adarsh Kowdle, Ping Yu, Darcy Philippon, and Ezgi Oztelcan. We would also like to thank the many people with whom we have had insightful discussions and those who provided feedback on the manuscript, including Eric Turner, Yinda Zhang, Feitong Tan, Danhang Tang, and Shahram Izadi. We would also like to thank our CHI reviewers for their insightful feedback.
