
    A foundational visual encoder for video understanding – Google Research Blog


    Posted by Long Zhao, Senior Research Scientist, and Ting Liu, Senior Staff Software Engineer, Google Research

An astounding variety of videos is available on the Web, covering content that ranges from everyday moments people share to historical events to scientific observations, each of which contains a unique record of the world. The right tools could help researchers analyze these videos, transforming how we understand the world around us.

Videos offer dynamic visual content far richer than static images, capturing movement, changes, and dynamic relationships between entities. Analyzing this complexity, along with the immense diversity of publicly available video data, demands models that go beyond traditional image understanding. Consequently, many of the approaches that perform best on video understanding still rely on specialized models tailored to particular tasks. Recently, there has been exciting progress in this area using video foundation models (ViFMs), such as VideoCLIP, InternVideo, VideoCoCa, and UMT. However, building a ViFM that handles the sheer diversity of video data remains a challenge.

With the goal of building a single model for general-purpose video understanding, we introduced "VideoPrism: A Foundational Visual Encoder for Video Understanding". VideoPrism is a ViFM designed to handle a wide spectrum of video understanding tasks, including classification, localization, retrieval, captioning, and question answering (QA). We propose innovations in both the pre-training data and the modeling strategy. We pre-train VideoPrism on a massive and diverse dataset: 36 million high-quality video-text pairs and 582 million video clips with noisy or machine-generated parallel text. Our pre-training approach is designed for this hybrid data, learning both from the video-text pairs and from the videos themselves. VideoPrism is remarkably easy to adapt to new video understanding challenges, and achieves state-of-the-art performance using a single frozen model.

VideoPrism is a general-purpose video encoder that enables state-of-the-art results over a wide spectrum of video understanding tasks, including classification, localization, retrieval, captioning, and question answering, by producing video representations from a single frozen model.

Pre-training data

A powerful ViFM needs a very large collection of videos on which to train — similar to other foundation models (FMs), such as those behind large language models (LLMs). Ideally, we would want the pre-training data to be a representative sample of all the videos in the world. While most of these videos naturally do not have perfect captions or descriptions, even imperfect text can provide useful information about the semantic content of a video.

To give our model the best possible starting point, we put together a massive pre-training corpus consisting of several public and private datasets, including YT-Temporal-180M, InternVid, VideoCC, WTS-70M, and others. This comprises 36 million carefully selected videos with high-quality captions, along with an additional 582 million clips with varying levels of noisy text (such as auto-generated transcripts). To our knowledge, this is the largest and most diverse video training corpus of its kind.

Statistics of the video-text pre-training data. The large variation in CLIP similarity scores (the higher, the better) demonstrates the diverse caption quality of our pre-training data, a byproduct of the various ways the text was harvested.

Two-stage training

The VideoPrism model architecture stems from the standard vision transformer (ViT), with a factorized design that sequentially encodes spatial and temporal information following ViViT. Our training approach leverages both the high-quality video-text data and the video data with noisy text mentioned above. To start, we use contrastive learning (an approach that minimizes the distance between positive video-text pairs while maximizing the distance between negative video-text pairs) to teach our model to match videos with their own text descriptions, including imperfect ones. This builds a foundation for matching semantic language content to visual content.
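The post does not spell out the exact objective, but video-text contrastive pre-training of this kind is typically implemented as a symmetric InfoNCE loss over in-batch pairs. Below is a minimal sketch under that assumption; the function name and the temperature value are illustrative, not taken from the paper.

```python
# Minimal sketch of a symmetric video-text contrastive (InfoNCE) loss,
# as commonly used in CLIP-style training. The exact VideoPrism loss is
# not specified in the post; `temperature` and all names are illustrative.
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """video_emb, text_emb: (batch, dim) embeddings of paired videos/texts."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise cosine similarities; diagonal entries are the positive pairs.
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Pull matching pairs together, push mismatched pairs apart, both ways.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return (loss_v2t + loss_t2v) / 2
```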

After video-text contrastive training, we leverage the collection of videos without text descriptions. Here, we build on the masked video modeling framework to predict masked patches in a video, with a few improvements. We train the model to predict both the video-level global embedding and the token-wise embeddings from the first-stage model, to effectively leverage the knowledge acquired in that stage. We then randomly shuffle the predicted tokens to prevent the model from learning shortcuts.
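A minimal sketch of this second stage, under stated assumptions: a student sees only unmasked patches and regresses the frozen stage-1 teacher's video-level and token-wise embeddings. The mask ratio, loss weighting, and the `student`/`teacher` callables (each returning a `(global_embedding, token_embeddings)` pair) are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of stage-2 distillation from the frozen stage-1 model.
# The token shuffling the post mentions happens inside the decoding
# step and is omitted here for brevity.
import torch
import torch.nn.functional as F

def stage2_distillation_loss(student, teacher, patches: torch.Tensor,
                             mask_ratio: float = 0.8) -> torch.Tensor:
    """patches: (batch, num_tokens, dim) tokenized video patches."""
    _, num_tokens, _ = patches.shape

    # Keep a random subset of patches visible; the rest are masked out.
    num_keep = num_tokens - int(mask_ratio * num_tokens)
    visible = torch.randperm(num_tokens)[:num_keep]

    with torch.no_grad():  # the stage-1 model is frozen
        teacher_global, teacher_tokens = teacher(patches)

    # The student reconstructs every teacher token from the visible subset.
    student_global, student_tokens = student(patches[:, visible])

    token_loss = F.mse_loss(student_tokens, teacher_tokens)
    global_loss = F.mse_loss(student_global, teacher_global)
    return global_loss + token_loss
```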

What is unique about VideoPrism's setup is that we use two complementary pre-training signals: text descriptions and the visual content within a video. Text descriptions often focus on what things look like, while the video content provides information about movement and visual dynamics. This enables VideoPrism to excel at tasks that demand an understanding of both appearance and motion.

    Results

We conducted extensive evaluation of VideoPrism across four broad categories of video understanding tasks: video classification and localization, video-text retrieval, video captioning and question answering, and scientific video understanding. VideoPrism achieves state-of-the-art performance on 30 out of 33 video understanding benchmarks — all with minimal adaptation of a single, frozen model.

VideoPrism compared to the previous best-performing FMs.

    Classification and localization

We evaluate VideoPrism on an existing large-scale video understanding benchmark (VideoGLUE) covering classification and localization tasks. We found that (1) VideoPrism outperforms all of the other state-of-the-art FMs, and (2) no other single model consistently came in second place. This tells us that VideoPrism has learned to effectively pack a variety of video signals into one encoder — from semantics at different granularities to appearance and motion cues — and that it works well across a variety of video sources.
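For a concrete picture of what "minimal adaptation of a single, frozen model" means in practice, the sketch below trains only a small linear head on frozen video embeddings — a standard linear-probe setup. The `frozen_encoder` callable, its output dimension, and the training loop are illustrative assumptions, not the paper's evaluation code.

```python
# Linear probe on frozen features: the encoder is never updated; only
# a single linear classification head is trained per downstream task.
import torch
import torch.nn as nn

def train_linear_probe(frozen_encoder: nn.Module, loader,
                       num_classes: int, dim: int,
                       epochs: int = 10) -> nn.Linear:
    head = nn.Linear(dim, num_classes)
    opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    frozen_encoder.eval()
    for _ in range(epochs):
        for video, label in loader:
            with torch.no_grad():              # encoder stays frozen
                feats = frozen_encoder(video)  # (batch, dim) embeddings
            loss = loss_fn(head(feats), label)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```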

    Combining with LLMs

We further explore combining VideoPrism with LLMs to unlock its ability to handle various video-language tasks. In particular, when paired with a text encoder (following LiT) or a language decoder (such as PaLM-2), VideoPrism can be utilized for video-text retrieval, video captioning, and video QA tasks. We compare the combined models on a broad and challenging set of vision-language benchmarks. VideoPrism sets a new state of the art on most benchmarks. The visual results show that VideoPrism is capable of understanding complex motions and appearances in videos (e.g., the model can recognize the different colors of spinning objects on the window in the visual examples below). These results demonstrate that VideoPrism is strongly compatible with language models.



We show qualitative results using VideoPrism with a text encoder for video-text retrieval (first row) and adapted to a language decoder for video QA (second and third rows). For the video-text retrieval examples, the blue bars indicate the embedding similarities between the videos and the text queries.
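The post does not describe the pairing mechanics, but a common pattern for connecting a frozen visual encoder to a language decoder is to project the video tokens into the decoder's embedding space and prepend them as a soft visual prefix. The sketch below assumes hypothetical `video_encoder` and `decoder` modules standing in for the frozen encoder and a PaLM-2-style decoder; only the small projection layer is trained.

```python
# Hedged sketch of bridging a frozen video encoder to a language
# decoder via a trainable projection; not Google's implementation.
import torch
import torch.nn as nn

class VideoLanguageBridge(nn.Module):
    def __init__(self, video_encoder: nn.Module, decoder: nn.Module,
                 video_dim: int, llm_dim: int):
        super().__init__()
        self.video_encoder = video_encoder.eval()
        for p in self.video_encoder.parameters():
            p.requires_grad_(False)            # keep the encoder frozen
        self.proj = nn.Linear(video_dim, llm_dim)  # the only trained part
        self.decoder = decoder

    def forward(self, video: torch.Tensor,
                text_embeds: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            video_tokens = self.video_encoder(video)  # (batch, n, video_dim)
        prefix = self.proj(video_tokens)              # map into LLM space
        # Condition the decoder on the visual prefix followed by the text.
        return self.decoder(torch.cat([prefix, text_embeds], dim=1))
```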

Scientific applications

Finally, we tested VideoPrism on datasets used by scientists across domains, including fields such as ethology, behavioral neuroscience, and ecology. These datasets typically require domain expertise to annotate, so we leverage existing scientific datasets open-sourced by the community, including Fly vs. Fly, CalMS21, ChimpACT, and KABR. VideoPrism not only performs exceptionally well, but actually surpasses models designed specifically for those tasks. This suggests that tools like VideoPrism have the potential to transform how scientists analyze video data across different fields.

VideoPrism outperforms the domain experts on various scientific benchmarks. We show absolute score differences to highlight the relative improvements of VideoPrism. We report mean average precision (mAP) for all datasets, except for KABR, which uses class-averaged top-1 accuracy.

    Conclusion

With VideoPrism, we introduce a powerful and versatile video encoder that sets a new standard for general-purpose video understanding. Our emphasis on both building a massive and varied pre-training dataset and developing innovative modeling techniques has been validated through our extensive evaluations. Not only does VideoPrism consistently outperform strong baselines, but its unique ability to generalize positions it well for tackling an array of real-world applications. Because of its potentially broad use, we are committed to continuing responsible research in this space, guided by our AI Principles. We hope VideoPrism paves the way for future breakthroughs at the intersection of AI and video analysis, helping to realize the potential of ViFMs across domains such as scientific discovery, education, and healthcare.

    Acknowledgements

This blog post is made on behalf of all the VideoPrism authors: Long Zhao, Nitesh B. Gundavarapu, Liangzhe Yuan, Hao Zhou, Shen Yan, Jennifer J. Sun, Luke Friedman, Rui Qian, Tobias Weyand, Yue Zhao, Rachel Hornung, Florian Schroff, Ming-Hsuan Yang, David A. Ross, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Ting Liu, and Boqing Gong. We sincerely thank David Hendon for their product management efforts, and Alex Siegman, Ramya Ganeshan, and Victor Gomes for their program and resource management efforts. We also thank Hassan Akbari, Sherry Ben, Yoni Ben-Meshulam, Chun-Te Chu, Sam Clearwater, Yin Cui, Ilya Figotin, Anja Hauth, Sergey Ioffe, Xuhui Jia, Yeqing Li, Lu Jiang, Zu Kim, Dan Kondratyuk, Bill Mark, Arsha Nagrani, Caroline Pantofaru, Sushant Prakash, Cordelia Schmid, Bryan Seybold, Mojtaba Seyedhosseini, Amanda Sadler, Rif A. Saurous, Rachel Stigler, Paul Voigtlaender, Pingmei Xu, Chaochao Yan, Xuan Yang, and Yukun Zhu for the discussions, support, and feedback that greatly contributed to this work. We are grateful to Jay Yagnik, Rahul Sukthankar, and Tomas Izo for their enthusiastic support of this project. Lastly, we thank Tom Small, Jennifer J. Sun, Hao Zhou, Nitesh B. Gundavarapu, Luke Friedman, and Mikhail Sirotenko for their tremendous help with making this blog post.
