Using large language models to augment video conferences with dynamic visuals

Posted by Ruofei Du, Research Scientist, and Alex Olwal, Senior Staff Research Scientist, Google Augmented Reality

Recent advances in video conferencing have significantly improved remote video communication through features like live captioning and noise cancellation. However, there are many situations where dynamic visual augmentation would be useful to better convey complex and nuanced information. For example, when discussing what to order at a Japanese restaurant, your friends could share visuals that would help you feel more confident about ordering the “Sukiyaki”. Or when talking about your recent family trip to San Francisco, you may want to show a photo from your personal album.

In “Visual Captions: Augmenting Verbal Communication With On-the-fly Visuals”, presented at ACM CHI 2023, we introduce a system that uses verbal cues to augment synchronous video communication with real-time visuals. We fine-tuned a large language model to proactively suggest relevant visuals in open-vocabulary conversations, using a dataset we curated for this purpose. We open-sourced Visual Captions as part of the ARChat project, which is designed for rapid prototyping of augmented communication with real-time transcription.

Visual Captions facilitates verbal communication with real-time visuals. The system is robust even against typical mistakes that often appear in real-time speech-to-text transcription. For example, out of context, the transcription model misunderstood the word “pier” as “pair”, but Visual Captions still recommends images of the Santa Monica Pier.

Design space for augmenting verbal communication with dynamic visuals

We invited 10 internal participants, each with various technical and non-technical backgrounds, including software engineers, researchers, UX designers, visual artists, students, etc., to discuss their particular needs and desires for a potential real-time visual augmentation service. In two sessions, we introduced low-fidelity prototypes of the envisioned system, followed by video demos of existing text-to-image systems. These discussions informed a design space with eight dimensions for visual augmentation of real-time conversations, labeled below as D1 to D8.

Visual augmentations can be synchronous or asynchronous with the conversation (D1: Temporal), can be used for both expressing and understanding speech content (D2: Subject), and can be applied using a wide range of different visual content, visual types, and visual sources (D3: Visual). Such visual augmentation may vary depending on the scale of the meetings (D4: Scale) and whether a meeting is in co-located or remote settings (D5: Space). These factors also influence whether visuals should be displayed privately, shared between participants, or public to everyone (D6: Privacy). Participants also identified different ways in which they would like to interact with the system while having conversations (D7: Initiation). For example, participants proposed different levels of “proactivity”, which indicates the degree to which users would like the model to take the initiative. Finally, participants envisioned different methods of interaction, for example, using speech or gestures for input (D8: Interaction).

Design space for augmenting verbal communication with dynamic visuals.

Informed by this initial feedback, we designed Visual Captions to focus on generating synchronous visuals of semantically relevant visual content, type, and source. While participants in these initial exploratory sessions were taking part in one-to-one remote conversations, deployment of Visual Captions in the wild will often be in one-to-many (e.g., an individual giving a presentation to an audience) and many-to-many scenarios (e.g., a discussion among multiple people in a meeting).

Because the visual that best complements a conversation depends strongly on the context of the discussion, we needed a training set specific to this purpose. So, we collected a dataset of 1595 quadruples of language (1), visual content (2), type (3), and source (4) across a variety of contexts, including daily conversations, lectures, and travel guides. For example, “I would love to see it!” corresponds to visual content of “face smiling”, a visual type of “emoji”, and a visual source of “public search”. “Did she tell you about our trip to Mexico?” corresponds to visual content of “a photo from the trip to Mexico”, a visual type of “photo”, and a visual source of “personal album”. We publicly released this VC1.5K dataset for the research community.
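
To make the structure of these quadruples concrete, the sketch below shows how a single VC1.5K example could be represented in Python. The VisualIntent dataclass and its field names are our own illustration, not the released dataset's actual schema.

    from dataclasses import dataclass

    # Hypothetical record structure for one VC1.5K example; the field
    # names are illustrative and may differ from the released schema.
    @dataclass
    class VisualIntent:
        language: str        # the spoken sentence(s)
        visual_content: str  # what to show
        visual_type: str     # e.g., "emoji", "photo", "gif"
        visual_source: str   # e.g., "public search", "personal album"

    example = VisualIntent(
        language="Did she tell you about our trip to Mexico?",
        visual_content="a photo from the trip to Mexico",
        visual_type="photo",
        visual_source="personal album",
    )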

Visual intent prediction model

To predict what visuals could complement a conversation, we trained a visual intent prediction model based on a large language model using the VC1.5K dataset. For training, we parsed each visual intent into the format of “<Visual Type> of <Visual Content> from <Visual Source>”.

    {"immediate": "<Previous Two Sentences> →", 
      "completion": 
    "<Visual Type 1> of "<Visual Type 1> from "<Visual Source 1>;
     <Visual Type 2> of "<Visual Type 2> from "<Visual Source 2>; 
      ... "}
    

Using this format, the system can handle open-vocabulary conversations and contextually predict visual content, visual source, and visual type. Anecdotally, we found that it outperforms keyword-based approaches, which fail to handle open-vocabulary examples like “Your aunt Amy will be visiting this Saturday,” and cannot suggest relevant visual types or visual sources.

Examples of visual intent predictions by our model.
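
To illustrate how a completion in the “<Visual Type> of <Visual Content> from <Visual Source>” format might be consumed downstream, here is a minimal parsing sketch. The parse_completion helper and its delimiter handling are our assumptions, not code from the released system; it would mis-split if the visual content itself contained " of " or " from ".

    import re

    def parse_completion(completion: str) -> list[dict]:
        """Split a completion of the form '<type> of <content> from <source>; ...'
        into structured intents. Hypothetical helper, not the real parser."""
        intents = []
        for part in completion.split(";"):
            match = re.match(r"\s*(.+?) of (.+?) from (.+?)\s*$", part)
            if match:
                visual_type, content, source = match.groups()
                intents.append(
                    {"type": visual_type, "content": content, "source": source})
        return intents

    print(parse_completion(
        "emoji of face smiling from public search; "
        "photo of the Santa Monica Pier from public search"))
    # [{'type': 'emoji', 'content': 'face smiling', 'source': 'public search'},
    #  {'type': 'photo', 'content': 'the Santa Monica Pier', 'source': 'public search'}]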

We used 1276 (80%) examples from the VC1.5K dataset for fine-tuning the large language model and the remaining 319 (20%) examples as test data. We measured the performance of the fine-tuned model with the token accuracy metric, i.e., the percentage of tokens in a batch that were correctly predicted by the model. During training, our model reached a training token accuracy of 97% and a validation token accuracy of 87%.
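
For reference, token accuracy as defined above can be computed along these lines. This is our own illustration of the metric, not the training code used for the paper.

    import numpy as np

    def token_accuracy(predicted: np.ndarray, target: np.ndarray,
                       pad_id: int = 0) -> float:
        """Fraction of non-padding target tokens predicted exactly.
        Illustrative sketch; assumes integer token ids of equal shape."""
        mask = target != pad_id
        correct = (predicted == target) & mask
        return float(correct.sum() / mask.sum())

    pred = np.array([[5, 7, 9, 0], [3, 3, 0, 0]])
    gold = np.array([[5, 7, 8, 0], [3, 3, 0, 0]])
    print(token_accuracy(pred, gold))  # 0.8: 4 of 5 non-pad tokens match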

    Performance

To evaluate the utility of the trained Visual Captions model, we invited 89 participants to perform 846 tasks. They were asked to give feedback on a scale of “1 — Strongly Disagree” to “7 — Strongly Agree” for six qualitative statements. Most participants preferred to have the visual during a conversation (Q1, 83% ≥ 5–Somewhat Agree). Moreover, they considered the displayed visuals to be useful and informative (Q2, 82% ≥ 5–Somewhat Agree), high-quality (Q3, 82% ≥ 5–Somewhat Agree), and relevant to the original speech (Q4, 84% ≥ 5–Somewhat Agree). Participants also found the predicted visual type (Q5, 87% ≥ 5–Somewhat Agree) and visual source (Q6, 86% ≥ 5–Somewhat Agree) to be accurate given the context of the corresponding conversation.

Technical evaluation results of the visual prediction model rated by study participants.

With this fine-tuned visual intent prediction model, we developed Visual Captions on the ARChat platform, which can add new interactive widgets directly on the camera streams of video conferencing platforms, such as Google Meet. As shown in the system workflow below, Visual Captions automatically captures the user's speech, retrieves the last sentences, feeds them into the visual intent prediction model every 100 ms, retrieves relevant visuals, and then suggests visuals in real time.

    System workflow of Visual Captions.
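
In pseudocode, the capture → predict → retrieve → suggest cycle might look like the sketch below. All of the helper functions are hypothetical stand-ins we invented for illustration; only the 100 ms cadence comes from the system description above.

    import time

    INTERVAL_S = 0.100  # Visual Captions queries the model every 100 ms

    # Hypothetical stand-ins for the real speech, model, and UI layers.
    def get_recent_transcript() -> str:
        return "Did she tell you about our trip to Mexico?"

    def predict_visual_intents(sentences: str) -> list[str]:
        # In the real system this is the fine-tuned LLM; stubbed here.
        return ["photo of the trip to Mexico from personal album"]

    def retrieve_visuals(intents: list[str]) -> list[str]:
        return [f"<image for: {intent}>" for intent in intents]

    def show_suggestions(visuals: list[str]) -> None:
        print(visuals)

    def suggestion_loop(iterations: int = 3) -> None:
        """Capture speech, predict intents, retrieve and suggest visuals."""
        for _ in range(iterations):
            sentences = get_recent_transcript()
            visuals = retrieve_visuals(predict_visual_intents(sentences))
            show_suggestions(visuals)
            time.sleep(INTERVAL_S)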

Visual Captions provides three levels of proactivity when suggesting visuals (a minimal configuration sketch follows the list):

• Auto-display (high proactivity): The system autonomously searches for and displays visuals publicly to all meeting participants. No user interaction is required.
• Auto-suggest (medium proactivity): The suggested visuals are shown in a private scrolling view. A user then clicks a visual to display it publicly. In this mode, the system proactively recommends visuals, but the user decides when and what to display.
• On-demand-suggest (low proactivity): The system only suggests visuals when a user presses the spacebar.
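
The sketch below encodes these three modes as configuration; the enum and the helper attached to it are our own framing of the behavior described above, not actual ARChat code.

    from enum import Enum

    class Proactivity(Enum):
        """Illustrative encoding of the three suggestion modes."""
        AUTO_DISPLAY = "auto-display"      # high: search and show publicly
        AUTO_SUGGEST = "auto-suggest"      # medium: private list, click to share
        ON_DEMAND_SUGGEST = "on-demand"    # low: suggest only on spacebar press

    def requires_user_action(mode: Proactivity) -> bool:
        # Only auto-display shows visuals without any user interaction.
        return mode is not Proactivity.AUTO_DISPLAY

    print(requires_user_action(Proactivity.AUTO_SUGGEST))  # True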

Quantitative and qualitative evaluation: User studies

We evaluated Visual Captions in both a controlled lab study (n = 26) and in-the-wild deployment studies (n = 10). Participants found that real-time visuals facilitated live conversations by helping explain unfamiliar concepts, resolve language ambiguities, and make conversations more engaging. Participants also reported different preferences for interacting with the system in-situ, and that different levels of proactivity were preferred in different social scenarios.

Participants’ Task Load Index and Likert scale ratings (from 1 – Strongly Disagree to 7 – Strongly Agree) of four conversations without Visual Captions (“No VC”) and the three Visual Captions modes: auto-display, auto-suggest, and on-demand suggest.

Conclusions and future directions

This work proposes a system for real-time visual augmentation of verbal communication, called Visual Captions, that was trained using a dataset of 1595 visual intents collected from 246 participants, covering 15 topic categories. We publicly release the training dataset, VC1.5K, to the research community to support further research in this space. We have also deployed Visual Captions in ARChat, which facilitates video conferences in Google Meet by transcribing meetings and augmenting the camera video streams.

Visual Captions represents a significant step towards enhancing verbal communication with on-the-fly visuals. By understanding the importance of visual cues in everyday conversations, we can create more effective communication tools and improve how people connect.

    Acknowledgements

This work is a collaboration across multiple teams at Google. Key contributors to the project include Xingyu “Bruce” Liu, Vladimir Kirilyuk, Xiuxiu Yuan, Peggy Chi, Alex Olwal, and Ruofei Du.

We would like to extend our thanks to those on the ARChat team who provided assistance, including Jason Mayes, Max Spear, Na Li, Jun Zhang, Jing Jin, Yuan Ren, Adarsh Kowdle, Ping Yu, Darcy Philippon, and Ezgi Oztelcan. We would also like to thank the many people with whom we had insightful discussions and those who provided feedback on the manuscript, including Eric Turner, Yinda Zhang, Feitong Tan, Danhang Tang, and Shahram Izadi. We would also like to thank our CHI reviewers for their insightful feedback.
