    Accelerating machine learning prototyping with interactive tools


    Posted by Ruofei Du, Interactive Perception & Graphics Lead, Google Augmented Reality, and Na Li, Tech Lead Manager, Google CoreML

    Update (2023/05/08): This post has been updated to include open-source details for the Visual Blocks framework.

    Recent deep learning advances have enabled a plethora of high-performance, real-time multimedia applications based on machine learning (ML), such as human body segmentation for video and teleconferencing, depth estimation for 3D reconstruction, hand and body tracking for interaction, and audio processing for remote communication.

    However, developing and iterating on these ML-based multimedia prototypes can be challenging and costly. It usually involves a cross-functional team of ML practitioners who fine-tune the models, evaluate robustness, characterize strengths and weaknesses, inspect performance in the end-use context, and develop the applications. Moreover, models are frequently updated and require repeated integration efforts before evaluation can occur, which makes the workflow ill-suited to design and experimentation.

    In “Rapsai: Accelerating Machine Learning Prototyping of Multimedia Applications through Visual Programming”, presented at CHI 2023, we describe a visual programming platform for rapid and iterative development of end-to-end ML-based multimedia applications. Visual Blocks for ML, formerly called Rapsai, provides a no-code graph building experience through its node-graph editor. Users can create and connect different components (nodes) to rapidly build an ML pipeline, and see the results in real time without writing any code. We demonstrate how this platform enables a better model evaluation experience through interactive characterization and visualization of ML model performance and interactive data augmentation and comparison. We have released the Visual Blocks for ML framework, along with a demo and Colab examples. Try it out yourself today.

    Visual Blocks uses a node-graph editor that facilitates rapid prototyping of ML-based multimedia applications.

    Formative study: Design goals for rapid ML prototyping

    To better understand the challenges of existing rapid prototyping ML solutions (LIME, VAC-CNN, EnsembleMatrix), we conducted a formative study (i.e., the process of gathering feedback from potential users early in the design process of a technology product or system) using a conceptual mock-up interface. Study participants included seven computer vision researchers, audio ML researchers, and engineers across three ML teams.

    The formative study used a conceptual mock-up interface to gather early insights.

    Through this formative study, we identified six challenges commonly found in existing prototyping solutions:

    1. The input used to evaluate models typically differs from in-the-wild input with actual users in terms of resolution, aspect ratio, or sampling rate.
    2. Participants could not quickly and interactively alter the input data or tune the model.
    3. Researchers optimize the model with quantitative metrics on a fixed set of data, but real-world performance requires human reviewers to evaluate it in the application context.
    4. It is difficult to compare versions of the model, and cumbersome to share the best version with other team members to try it.
    5. Once the model is selected, it can be time-consuming for a team to make a bespoke prototype that showcases the model.
    6. Ultimately, the model is only part of a larger real-time pipeline, in which participants want to examine intermediate results to understand the bottleneck.

    These identified challenges informed the development of the Visual Blocks system, which included six design goals: (1) develop a visual programming platform for rapidly building ML prototypes, (2) support real-time multimedia user input in the wild, (3) provide interactive data augmentation, (4) compare model outputs with side-by-side results, (5) share visualizations with minimal effort, and (6) provide off-the-shelf models and datasets.

    Node-graph editor for visually programming ML pipelines

    Visual Blocks is mainly written in JavaScript and leverages TensorFlow.js and TensorFlow Lite for ML capabilities and three.js for graphics rendering. The interface enables users to rapidly build and interact with ML models using three coordinated views: (1) a Nodes Library that contains over 30 nodes (e.g., Image Processing, Body Segmentation, Image Comparison) and a search bar for filtering, (2) a Node-graph Editor that allows users to build and modify a multimedia pipeline by dragging and adding nodes from the Nodes Library, and (3) a Preview Panel that visualizes the pipeline's input and output, alters the input and intermediate results, and visually compares different models.

    The visual programming interface allows users to quickly develop and evaluate ML models by composing and previewing node-graphs with real-time results.
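    To make the node-graph idea concrete, the following is a minimal, hypothetical TypeScript sketch of how a pipeline of connected nodes could be represented and executed. The Pipeline, PipelineNode, and stub depth/comparison nodes below are illustrative assumptions and do not reflect the actual Visual Blocks API; they only show the flow the editor exposes visually: nodes transform data, edges route outputs to downstream nodes, and every intermediate result stays inspectable.

    // Hypothetical sketch of a node-graph pipeline; NOT the Visual Blocks API.
    type Payload = Record<string, unknown>;

    interface PipelineNode {
      id: string;
      // Async so a node could wrap ML inference (e.g., a TensorFlow.js model).
      run(input: Payload): Promise<Payload>;
    }

    interface Edge {
      from: string; // producer node id
      to: string;   // consumer node id
    }

    class Pipeline {
      private nodes = new Map<string, PipelineNode>();
      private edges: Edge[] = [];

      addNode(node: PipelineNode): this {
        this.nodes.set(node.id, node);
        return this;
      }

      connect(from: string, to: string): this {
        this.edges.push({ from, to });
        return this;
      }

      // Execute nodes in insertion order (a real editor would topologically sort),
      // feeding each node the merged outputs of its upstream nodes and keeping
      // every node's output so intermediate results remain inspectable.
      async run(source: Payload): Promise<Map<string, Payload>> {
        const outputs = new Map<string, Payload>();
        for (const [id, node] of this.nodes) {
          const upstream = this.edges.filter((e) => e.to === id);
          const input: Payload =
            upstream.length === 0
              ? source
              : Object.assign({}, ...upstream.map((e) => outputs.get(e.from) ?? {}));
          outputs.set(id, await node.run(input));
        }
        return outputs;
      }
    }

    // Example wiring: one source image feeds two stub depth models, whose outputs
    // meet in a comparison node, the analogue of dragging three nodes into the
    // editor and connecting them.
    const pipeline = new Pipeline()
      .addNode({ id: "depthA", run: async (x) => ({ depthA: `modelA(${x.image})` }) })
      .addNode({ id: "depthB", run: async (x) => ({ depthB: `modelB(${x.image})` }) })
      .addNode({
        id: "compare",
        run: async (x) => ({ sideBySide: [x.depthA, x.depthB] }),
      })
      .connect("depthA", "compare")
      .connect("depthB", "compare");

    pipeline.run({ image: "portrait.png" }).then((out) => console.log(out));

    In the actual tool this wiring is done by dragging nodes from the Nodes Library and connecting them in the Node-graph Editor, with the Preview Panel rendering each node's output in real time.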

    Iterative design, development, and evaluation of unique rapid prototyping capabilities

    Over the last year, we have been iteratively designing and improving the Visual Blocks platform. Weekly feedback sessions with the three ML teams from the formative study showed appreciation for the platform's unique capabilities and its potential to accelerate ML prototyping through:

    • Support for various types of input data (image, video, audio) and output modalities (graphics, sound).
    • A library of pre-trained ML models for common tasks (body segmentation, landmark detection, portrait depth estimation) and custom model import options.
    • Interactive data augmentation and manipulation with drag-and-drop operations and parameter sliders.
    • Side-by-side comparison of multiple models and inspection of their outputs at different stages of the pipeline.
    • Quick publishing and sharing of multimedia pipelines directly to the web.

    Evaluation: Four case studies

    To evaluate the usability and effectiveness of Visual Blocks, we conducted four case studies with 15 ML practitioners. They used the platform to prototype different multimedia applications: portrait depth with relighting effects, scene depth with visual effects, alpha matting for virtual conferences, and audio denoising for communication.

    The system streamlines comparison of two Portrait Depth models, including customized visualization and effects.

    With a short introduction and a video tutorial, participants were able to quickly identify differences between the models and select the better model for their use case. We found that Visual Blocks helped facilitate rapid and deeper understanding of model benefits and trade-offs:

    “It gives me intuition about which data augmentation operations that my model is more sensitive [to], then I can go back to my training pipeline, maybe increase the amount of data augmentation for those specific steps that are making my model more sensitive.” (Participant 13)

    “It’s a fair amount of work to add some background noise, I have a script, but then every time I have to find that script and modify it. I’ve always done this in a one-off way. It’s simple but also very time consuming. This is very convenient.” (Participant 15)

    The system allows researchers to compare multiple Portrait Depth models at different noise levels, helping ML practitioners identify the strengths and weaknesses of each.

    In a post-hoc survey using a seven-point Likert scale, participants reported Visual Blocks to be more transparent about how it arrives at its final results than Colab (Visual Blocks 6.13 ± 0.88 vs. Colab 5.0 ± 0.88, p < .005) and more collaborative with users in producing the outputs (Visual Blocks 5.73 ± 1.23 vs. Colab 4.15 ± 1.43, p < .005). Although Colab assisted users in thinking through the task and controlling the pipeline more effectively through programming, users reported that they were able to complete tasks in Visual Blocks in just a few minutes that would normally take up to an hour or more. For example, after watching a 4-minute tutorial video, all participants were able to build a custom pipeline in Visual Blocks from scratch within 15 minutes (10.72 ± 2.14). Participants often spent less than five minutes (3.98 ± 1.95) getting the initial results, then tried out different inputs and outputs for the pipeline.

    User ratings of Rapsai (the initial prototype of Visual Blocks) and Colab across five dimensions.

    More results in our paper showed that Visual Blocks helped participants accelerate their workflow, make more informed decisions about model selection and tuning, analyze strengths and weaknesses of different models, and holistically evaluate model behavior with real-world input.

    Conclusions and future directions

    Visual Blocks lowers development barriers for ML-based multimedia applications. It empowers users to experiment without worrying about coding or technical details. It also facilitates collaboration between designers and developers by providing a common language for describing ML pipelines. In the future, we plan to open this framework up for the community to contribute their own nodes and to integrate it into many different platforms. We expect visual programming for machine learning to become a common interface across ML tooling going forward.

    Acknowledgements

    This work is a collaboration across multiple teams at Google. Key contributors to the project include Ruofei Du, Na Li, Jing Jin, Michelle Carney, Xiuxiu Yuan, Kristen Wright, Mark Sherwood, Jason Mayes, Lin Chen, Jun Jiang, Scott Miles, Maria Kleiner, Yinda Zhang, Anuva Kulkarni, Xingyu “Bruce” Liu, Ahmed Sabie, Sergio Escolano, Abhishek Kar, Ping Yu, Ram Iyengar, Adarsh Kowdle, and Alex Olwal.

    We would like to extend our thanks to Jun Zhang, Satya Amarapalli, and Sarah Heimlich for a few early-stage prototypes, and to Sean Fanello, Danhang Tang, Stephanie Debats, Walter Korman, Anne Menini, Joe Moran, Eric Turner, and Shahram Izadi for providing initial feedback on the manuscript and the blog post. We would also like to thank our CHI 2023 reviewers for their insightful feedback.
