    Using large language models to augment video conferences with dynamic visuals


    Posted by Ruofei Du, Research Scientist, and Alex Olwal, Senior Staff Research Scientist, Google Augmented Reality

    Recent advances in video conferencing have significantly improved remote video communication through features like live captioning and noise cancellation. However, there are many situations where dynamic visual augmentation would be useful to better convey complex and nuanced information. For example, when discussing what to order at a Japanese restaurant, your friends could share visuals that would help you feel more confident about ordering the “sukiyaki”. Or when talking about your recent family trip to San Francisco, you may want to show a photo from your personal album.

    In “Visual Captions: Augmenting Verbal Communication With On-the-fly Visuals”, presented at ACM CHI 2023, we introduce a system that uses verbal cues to augment synchronous video communication with real-time visuals. We fine-tuned a large language model to proactively suggest relevant visuals in open-vocabulary conversations, using a dataset we curated for this purpose. We open sourced Visual Captions as part of the ARChat project, which is designed for rapid prototyping of augmented communication with real-time transcription.

    Visual Captions facilitates verbal communication with real-time visuals. The system is even robust against typical mistakes that often appear in real-time speech-to-text transcription. For example, out of context, the transcription model misunderstood the word “pier” as “pair”, but Visual Captions still recommends images of the Santa Monica Pier.

    Design space for augmenting verbal communication with dynamic visuals

    We invited 10 internal participants with various technical and non-technical backgrounds, including software engineers, researchers, UX designers, visual artists, and students, among others, to discuss their particular needs and desires for a potential real-time visual augmentation service. In two sessions, we introduced low-fidelity prototypes of the envisioned system, followed by video demos of existing text-to-image systems. These discussions informed a design space with eight dimensions for visual augmentation of real-time conversations, labeled below as D1 to D8.

    Visual augmentations could be synchronous or asynchronous with the conversation (D1: Temporal), could be used for both expressing and understanding speech content (D2: Subject), and could be applied using a wide range of different visual content, visual types, and visual sources (D3: Visual). Such visual augmentation might vary depending on the scale of the meeting (D4: Scale) and whether a meeting is in a co-located or remote setting (D5: Space). These factors also influence whether visuals should be displayed privately, shared between participants, or made public to everyone (D6: Privacy). Participants also identified different ways in which they would like to interact with the system while having conversations (D7: Initiation). For example, participants proposed different levels of “proactivity”, indicating the degree to which they would like the model to take the initiative. Finally, participants envisioned different methods of interaction, for example, using speech or gestures for input (D8: Interaction).

    Design space for augmenting verbal communication with dynamic visuals.

    Informed by this initial feedback, we designed Visual Captions to focus on generating synchronous visuals of semantically relevant visual content, type, and source. While participants in these initial exploratory sessions took part in one-to-one remote conversations, deployment of Visual Captions in the wild will often involve one-to-many (e.g., an individual presenting to an audience) and many-to-many scenarios (e.g., a discussion among multiple people in a meeting).

    Because the visual that best complements a conversation depends strongly on the context of the discussion, we needed a training set specific to this purpose. So, we collected a dataset of 1595 quadruples of language (1), visual content (2), type (3), and source (4) across a variety of contexts, including daily conversations, lectures, and travel guides. For example, “I would love to see it!” corresponds to visual content of “face smiling”, a visual type of “emoji”, and a visual source of “public search”. “Did she tell you about our trip to Mexico?” corresponds to visual content of “a photo from the trip to Mexico”, a visual type of “photo”, and a visual source of “personal album”. We publicly released this VC1.5K dataset for the research community.
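
    For illustration only, one quadruple from the examples above could be represented as a simple record; the field names here are our own shorthand, not the dataset's published schema.

    # A minimal sketch of one VC1.5K-style quadruple (field names are assumptions).
    record = {
        "language": "Did she tell you about our trip to Mexico?",
        "visual_content": "a photo from the trip to Mexico",
        "visual_type": "photo",
        "visual_source": "personal album",
    }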

    Visual intent prediction model

    To predict which visuals could complement a conversation, we trained a visual intent prediction model based on a large language model using the VC1.5K dataset. For training, we parsed each visual intent into the format of “<Visual Type> of <Visual Content> from <Visual Source>”.

    {"immediate": "<Previous Two Sentences> →", 
      "completion": 
    "<Visual Type 1> of "<Visual Type 1> from "<Visual Source 1>;
     <Visual Type 2> of "<Visual Type 2> from "<Visual Source 2>; 
      ... "}
    

    Using this format, the system can handle open-vocabulary conversations and contextually predict visual content, visual source, and visual type. Anecdotally, we found that it outperforms keyword-based approaches, which fail to handle open-vocabulary examples like “Your aunt Amy will be visiting this Saturday,” and cannot suggest relevant visual types or visual sources.

    Examples of visual intent predictions by our model.
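
    Going the other direction, here is a sketch of parsing such a completion back into structured intents; this parser is our own illustration under the format stated above, not part of the released system:

    import re

    # Hypothetical parser for completions shaped like
    # "<Type> of <Content> from <Source>; <Type> of <Content> from <Source>".
    INTENT_RE = re.compile(r"^\s*(?P<type>.+?) of (?P<content>.+) from (?P<source>.+?)\s*$")

    def parse_completion(completion):
        intents = []
        for part in completion.split(";"):
            match = INTENT_RE.match(part)
            if match:  # skip malformed fragments instead of failing
                intents.append(match.groupdict())
        return intents

    print(parse_completion("emoji of face smiling from public search"))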

    We used 1276 (80%) examples from the VC1.5K dataset to fine-tune the large language model and the remaining 319 (20%) examples as test data. We measured the performance of the fine-tuned model with the token accuracy metric, i.e., the percentage of tokens in a batch that were correctly predicted by the model. During training, our model reached a training token accuracy of 97% and a validation token accuracy of 87%.
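
    For reference, token accuracy in this sense can be computed as follows; a minimal sketch, not the exact evaluation code behind the numbers above:

    # Minimal sketch of token accuracy: the fraction of target tokens that the
    # model predicted correctly (assumes aligned predicted/target token IDs).
    def token_accuracy(predicted, target):
        assert len(predicted) == len(target)
        correct = sum(p == t for p, t in zip(predicted, target))
        return correct / len(target)

    print(token_accuracy([5, 12, 7, 7], [5, 12, 9, 7]))  # 0.75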

    Performance

    To evaluate the utility of the trained Visual Captions model, we invited 89 participants to perform 846 tasks. They were asked to rate six qualitative statements on a scale of “1 — Strongly Disagree” to “7 — Strongly Agree”. Most participants preferred to have the visual during a conversation (Q1, 83% ≥ 5–Somewhat Agree). Moreover, they considered the displayed visuals to be useful and informative (Q2, 82% ≥ 5–Somewhat Agree), high-quality (Q3, 82% ≥ 5–Somewhat Agree), and relevant to the original speech (Q4, 84% ≥ 5–Somewhat Agree). Participants also found the predicted visual type (Q5, 87% ≥ 5–Somewhat Agree) and visual source (Q6, 86% ≥ 5–Somewhat Agree) to be accurate given the context of the corresponding conversation.

    Technical evaluation results of the visual prediction model, rated by study participants.

    With this fine-tuned visual intent prediction model, we developed Visual Captions on the ARChat platform, which can add interactive widgets directly onto the camera streams of video conferencing platforms such as Google Meet. As shown in the system workflow below, Visual Captions automatically captures the user’s speech, retrieves the last sentences, feeds them into the visual intent prediction model every 100 ms, retrieves relevant visuals, and then suggests visuals in real time.

    System workflow of Visual Captions.
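
    To make the timing concrete, here is a minimal sketch of such a polling loop; every function passed in is a placeholder for a real system component, not the ARChat API:

    import time

    # Hypothetical 100 ms polling loop: fetch the latest transcript sentences,
    # predict visual intents, retrieve matching visuals, and surface suggestions.
    def caption_loop(get_last_sentences, predict_visual_intents,
                     retrieve_visuals, suggest_visuals, interval_s=0.1):
        while True:
            sentences = get_last_sentences()
            if sentences:
                intents = predict_visual_intents(sentences)
                visuals = retrieve_visuals(intents)
                suggest_visuals(visuals)
            time.sleep(interval_s)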

    Visual Captions provides three levels of proactivity when suggesting visuals (a minimal sketch of the mode switch follows the list):

    • Auto-display (high proactivity): The system autonomously searches for and displays visuals publicly to all meeting participants. No user interaction is required.
    • Auto-suggest (medium proactivity): Suggested visuals are shown in a private scrolling view; a user then clicks a visual to display it publicly. In this mode, the system proactively recommends visuals, but the user decides when and what to display.
    • On-demand-suggest (low proactivity): The system only suggests visuals when a user presses the spacebar.
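
    The sketch below illustrates the three modes; the display and input helpers are placeholders of our own, not the actual system's interfaces:

    from enum import Enum

    class Proactivity(Enum):
        AUTO_DISPLAY = "auto-display"      # high: shown publicly, no interaction
        AUTO_SUGGEST = "auto-suggest"      # medium: private list, click to share
        ON_DEMAND = "on-demand-suggest"    # low: suggest only on spacebar press

    # Hypothetical dispatch over the three proactivity levels.
    def handle_visuals(mode, visuals, display_publicly,
                       show_private_suggestions, spacebar_pressed):
        if mode is Proactivity.AUTO_DISPLAY:
            display_publicly(visuals)
        elif mode is Proactivity.AUTO_SUGGEST:
            show_private_suggestions(visuals)  # user clicks one to share it
        elif mode is Proactivity.ON_DEMAND and spacebar_pressed():
            show_private_suggestions(visuals)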

    Quantitative and qualitative evaluation: User studies

    We evaluated Visual Captions in both a controlled lab study (n = 26) and in-the-wild deployment studies (n = 10). Participants found that real-time visuals facilitated live conversations by helping explain unfamiliar concepts, resolve language ambiguities, and make conversations more engaging. Participants also reported different preferences for interacting with the system in situ, and preferred different levels of proactivity in different social scenarios.

    Participants’ Task Load Index and Likert scale ratings (from 1 – Strongly Disagree to 7 – Strongly Agree) of four conversations without Visual Captions (“No VC”) and with the three Visual Captions modes: auto-display, auto-suggest, and on-demand suggest.

    Conclusions and future directions

    This work proposes a system for real-time visual augmentation of verbal communication, called Visual Captions, trained on a dataset of 1595 visual intents collected from 246 participants and covering 15 topic categories. We publicly release the training dataset, VC1.5K, to the research community to support further research in this space. We have also deployed Visual Captions in ARChat, which facilitates video conferences in Google Meet by transcribing meetings and augmenting the camera video streams.

    Visual Captions represents a significant step towards enhancing verbal communication with on-the-fly visuals. By understanding the importance of visual cues in everyday conversations, we can create more effective communication tools and improve how people connect.

    Acknowledgements

    This work is a collaboration across multiple teams at Google. Key contributors to the project include Xingyu “Bruce” Liu, Vladimir Kirilyuk, Xiuxiu Yuan, Peggy Chi, Alex Olwal, and Ruofei Du.

    We would like to extend our thanks to those on the ARChat team who provided assistance, including Jason Mayes, Max Spear, Na Li, Jun Zhang, Jing Jin, Yuan Ren, Adarsh Kowdle, Ping Yu, Darcy Philippon, and Ezgi Oztelcan. We would also like to thank the many people with whom we had insightful discussions and those who provided feedback on the manuscript, including Eric Turner, Yinda Zhang, Feitong Tan, Danhang Tang, and Shahram Izadi. We would also like to thank our CHI reviewers for their insightful feedback.
