    Modular visual question answering via code generation – Google Research Blog


    Posted by Sanjay Subramanian, PhD Student, UC Berkeley, and Arsha Nagrani, Research Scientist, Google Research, Perception Team

    Visual question answering (VQA) is a machine learning task that requires a model to answer a question about an image or a set of images. Conventional VQA approaches need a large amount of labeled training data consisting of thousands of human-annotated question-answer pairs associated with images. In recent years, advances in large-scale pre-training have led to the development of VQA methods that perform well with fewer than fifty training examples (few-shot) and without any human-annotated VQA training data (zero-shot). However, there is still a significant performance gap between these methods and state-of-the-art fully supervised VQA methods, such as MaMMUT and VinVL. In particular, few-shot methods struggle with spatial reasoning, counting, and multi-hop reasoning. Furthermore, few-shot methods have generally been limited to answering questions about single images.

    To improve accuracy on VQA examples that involve complex reasoning, in "Modular Visual Question Answering via Code Generation," to appear at ACL 2023, we introduce CodeVQA, a framework that answers visual questions using program synthesis. Specifically, when given a question about an image or set of images, CodeVQA generates a Python program (code) with simple visual functions that allow it to process images, and executes this program to determine the answer. We demonstrate that in the few-shot setting, CodeVQA outperforms prior work by roughly 3% on the COVR dataset and 2% on the GQA dataset.
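    To make the control flow concrete, here is a minimal sketch of the generate-then-execute loop described above. The helper names (call_llm, prompt_header) and the convention that the generated program stores its result in a variable named answer are assumptions of this sketch, not the paper's actual API.

    def answer_visual_question(images, question, call_llm, visual_functions, prompt_header):
        # prompt_header holds the descriptions of the visual functions and the
        # retrieved in-context question/program examples (see the next section).
        prompt = prompt_header + "\n# Question: " + question
        # The LLM returns Python source as a string.
        program = call_llm(prompt)
        # Execute the generated code with the images and the visual primitives
        # (query, get_pos, find_matching_image) in scope; by the convention
        # assumed here, the generated program assigns its result to `answer`.
        scope = {"images": images, **visual_functions}
        exec(program, scope)
        return scope["answer"]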

    CodeVQA

    The CodeVQA approach uses a code-writing large language model (LLM), such as PaLM, to generate Python programs (code). We guide the LLM to correctly use visual functions by crafting a prompt consisting of a description of these functions and fewer than fifteen "in-context" examples of visual questions paired with the associated Python code for them. To select these examples, we compute embeddings for the input question and for all of the questions for which we have annotated programs (a randomly chosen set of fifty). Then, we select the questions that have the highest similarity to the input and use them as in-context examples. Given the prompt and the question that we want to answer, the LLM generates a Python program representing that question.
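    As an illustration, the example-selection step might look like the following sketch, assuming the questions have already been embedded as vectors; the embedding model is left abstract, and the use of cosine similarity is an assumption of this sketch.

    import numpy as np

    def select_in_context_examples(question_emb, example_embs, k):
        # Cosine similarity between the input question embedding and each of
        # the ~50 annotated question embeddings (rows of example_embs).
        sims = example_embs @ question_emb / (
            np.linalg.norm(example_embs, axis=1) * np.linalg.norm(question_emb))
        # Indices of the k most similar annotated questions, best first.
        return np.argsort(sims)[::-1][:k]

    With the GQA setting described in the Results section, k would be 12.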

    We instantiate the CodeVQA framework using three visual functions: (1) query, (2) get_pos, and (3) find_matching_image; a code sketch of the last of these appears after the list below.

    • Query, which answers a question about a single image, is implemented using the few-shot Plug-and-Play VQA (PnP-VQA) method. PnP-VQA generates captions using BLIP — an image-captioning transformer pre-trained on millions of image-caption pairs — and feeds these into an LLM that outputs an answer to the question.
    • Get_pos, which is an object localizer that takes a description of an object as input and returns its position in the image, is implemented using GradCAM. Specifically, the description and the image are passed through the BLIP joint text-image encoder, which predicts an image-text matching score. GradCAM takes the gradient of this score with respect to the image features to find the region most relevant to the text.
    • Find_matching_image, which is used in multi-image questions to find the image that best matches a given input phrase, is implemented by using BLIP text and image encoders to compute a text embedding for the phrase and an image embedding for each image. Then the dot products of the text embedding with each image embedding represent the relevance of each image to the phrase, and we select the image that maximizes this relevance.
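    The description of find_matching_image maps directly onto a few lines of code. In this sketch, encode_text and encode_image are stand-ins for the BLIP text and image encoders; their exact interfaces are assumptions.

    import numpy as np

    def find_matching_image(images, phrase, encode_text, encode_image):
        text_emb = encode_text(phrase)
        # The dot product of each image embedding with the phrase embedding
        # represents that image's relevance to the phrase.
        scores = [float(np.dot(encode_image(img), text_emb)) for img in images]
        # Select the image that maximizes the relevance.
        return images[int(np.argmax(scores))]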

    All three functions can be implemented using models that require very little annotation (e.g., text and image-text pairs collected from the web and a small number of VQA examples). Furthermore, the CodeVQA framework can easily be generalized beyond these functions to others that a user might implement (e.g., object detection, image segmentation, or knowledge base retrieval).

    Illustration of the CodeVQA method. First, a large language model generates a Python program (code), which invokes visual functions that represent the question. In this example, a simple VQA method (query) is used to answer one part of the question, and an object localizer (get_pos) is used to find the positions of the objects mentioned. Then the program produces an answer to the original question by combining the outputs of these functions.

    Results

    The CodeVQA framework correctly generates and executes Python programs not only for single-image questions, but also for multi-image questions. For example, if given two images, each showing two pandas, a question one might ask is, "Is it true that there are four pandas?" In this case, the LLM converts the counting question about the pair of images into a program in which an object count is obtained for each image (using the query function). Then the counts for both images are added to compute a total count, which is then compared to the number in the original question to yield a yes or no answer.
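    A generated program of the kind just described might look like the following; this is an illustrative reconstruction (with hypothetical file names), not the exact code the LLM emits.

    img1 = open_image("Image1.jpg")
    img2 = open_image("Image2.jpg")
    # Count the pandas in each image separately with the single-image query
    # function, then compare the total to the number in the question.
    count1 = int(query(img1, "How many pandas are there?"))
    count2 = int(query(img2, "How many pandas are there?"))
    if count1 + count2 == 4:
        answer = "yes"
    else:
        answer = "no"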

    We evaluate CodeVQA on three visual reasoning datasets: GQA (single-image), COVR (multi-image), and NLVR2 (multi-image). For GQA, we provide 12 in-context examples to each method, and for COVR and NLVR2, we provide six in-context examples to each method. The table below shows that CodeVQA improves consistently over the baseline few-shot VQA method on all three datasets.

    Method               GQA      COVR     NLVR2
    Few-shot PnP-VQA     46.56    49.06    63.37
    CodeVQA              49.03    54.11    64.04

    Results on the GQA, COVR, and NLVR2 datasets, showing that CodeVQA consistently improves over few-shot PnP-VQA. The metric is exact-match accuracy, i.e., the percentage of examples in which the predicted answer exactly matches the ground-truth answer.
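    For concreteness, exact-match accuracy as used in the table can be computed as in this minimal sketch.

    def exact_match_accuracy(predicted, ground_truth):
        # Percentage of examples whose prediction matches the label exactly.
        matches = sum(p == g for p, g in zip(predicted, ground_truth))
        return 100.0 * matches / len(ground_truth)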

    We find that on GQA, CodeVQA's accuracy is roughly 30% higher than the baseline on spatial reasoning questions, 4% higher on "and" questions, and 3% higher on "or" questions. The third category includes multi-hop questions such as "Are there salt shakers or skateboards in the picture?", for which the generated program is shown below.

    img = open_image("Image13.jpg")
    salt_shakers_exist = query(img, "Are there any salt shakers?")
    skateboards_exist = query(img, "Are there any skateboards?")
    if salt_shakers_exist == "yes" or skateboards_exist == "yes":
        answer = "yes"
    else:
        answer = "no"
    

    On COVR, we find that CodeVQA's gain over the baseline is larger when the number of input images is greater, as shown in the table below. This trend indicates that breaking the problem down into single-image questions is beneficial.

                         Number of images
    Method               1       2       3       4       5
    Few-shot PnP-VQA     91.7    51.5    48.3    47.0    46.9
    CodeVQA              75.0    53.3    48.7    53.2    53.4

    Conclusion

    We present CodeVQA, a framework for few-shot visual question answering that relies on code generation to perform multi-step visual reasoning. Exciting directions for future work include expanding the set of modules used and creating a similar framework for visual tasks beyond VQA. We note that care should be taken when considering whether to deploy a system such as CodeVQA, since vision-language models like the ones used in our visual functions have been shown to exhibit social biases. At the same time, compared to monolithic models, CodeVQA offers additional interpretability (through the Python program) and controllability (by modifying the prompts or visual functions), which are useful in production systems.

    Acknowledgements

    This research was a collaboration between UC Berkeley's Artificial Intelligence Research lab (BAIR) and Google Research, and was conducted by Sanjay Subramanian, Medhini Narasimhan, Kushal Khangaonkar, Kevin Yang, Arsha Nagrani, Cordelia Schmid, Andy Zeng, Trevor Darrell, and Dan Klein.
