
    A Scene understanding, Accessibility, Navigation, Pathfinding, & Obstacle avoidance dataset – Google Research Blog


    Posted by Sagar M. Waghmare, Senior Software Engineer, and Kimberly Wilber, Software Engineer, Google Research, Perception Team

    As most people navigate their everyday world, they process visual input from the environment using an eye-level perspective. Unlike robots and self-driving cars, people have no “out-of-body” sensors to help guide them. Instead, a person’s sensory input is entirely “egocentric”, or “from the self.” This also applies to new technologies that understand the world around us from a human-like perspective, e.g., robots navigating through unknown buildings, AR glasses that highlight objects, or assistive technology to help people run independently.

    In computer vision, scene understanding is the subfield that studies how visible objects relate to the scene’s 3D structure and layout by focusing on the spatial, functional, and semantic relationships between objects and their environment. For example, autonomous vehicles must understand the 3D structure of the road, sidewalks, and surrounding buildings while identifying and recognizing street signs and stop lights, a task made easier with 3D data from a special laser scanner mounted on top of the car rather than 2D images from the driver’s perspective. Robots navigating a park must understand where the path is and what obstacles might interfere, which is simplified with a map of their surroundings and GPS positioning data. Finally, AR glasses that help users find their way need to understand where the user is and what they are looking at.

    The computer vision community typically studies scene understanding tasks in contexts like self-driving, where many other sensors (GPS, wheel positioning, maps, etc.) beyond egocentric imagery are available. Yet most datasets in this space don’t focus exclusively on egocentric data, so they are less applicable to human-centered navigation tasks. While there are plenty of self-driving focused scene understanding datasets, they have limited generalization to egocentric human scene understanding. A comprehensive human egocentric dataset would help build systems for related applications and serve as a challenging benchmark for the scene understanding community.

    To that end, we present the Scene understanding, Accessibility, Navigation, Pathfinding, Obstacle avoidance dataset, or SANPO (also the Japanese word for “brisk stroll”), a multi-attribute video dataset for outdoor human egocentric scene understanding. The dataset consists of real-world data and synthetic data, which we call SANPO-Real and SANPO-Synthetic, respectively. It supports a wide variety of dense prediction tasks, is challenging for current models, and includes real and synthetic data with depth maps and video panoptic masks in which each pixel is assigned a semantic class label (and for some semantic classes, each pixel is also assigned a semantic instance ID that uniquely identifies that object in the scene). The real dataset covers diverse environments and has videos from two stereo cameras to support multi-view methods, including 11.4 hours captured at 15 frames per second (FPS) with dense annotations. Researchers can download and use SANPO here.
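A panoptic mask of the kind described above carries two pieces of information per pixel: a semantic class and, for countable (“thing”) classes, an instance ID. A common way to pack both into one integer map is `class_id * divisor + instance_id`; SANPO’s actual on-disk encoding may differ, so the divisor and class IDs below are purely illustrative:

```python
import numpy as np

# Hypothetical panoptic encoding: each pixel stores class_id * 1000 + instance_id.
# This mirrors the concept in the text: every pixel gets a semantic label, and
# "thing" pixels additionally get a per-object instance ID.
LABEL_DIVISOR = 1000

def decode_panoptic(panoptic: np.ndarray):
    """Split a packed panoptic ID map into semantic classes and instance IDs."""
    semantic = panoptic // LABEL_DIVISOR
    instance = panoptic % LABEL_DIVISOR
    return semantic, instance

# Toy 2x2 map: two pedestrian instances (class 7) and road (class 1, a "stuff"
# class, so its instance ID stays 0).
panoptic = np.array([[7001, 7002],
                     [1000, 1000]])
semantic, instance = decode_panoptic(panoptic)
# semantic -> [[7, 7], [1, 1]]; instance -> [[1, 2], [0, 0]]
```

The same packed map can be re-encoded losslessly with `semantic * LABEL_DIVISOR + instance`, which is why this scheme is popular for panoptic benchmarks.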

    3D scene of a real session built using the provided annotations (segmentation, depth and camera positions). The top center video shows the depth map, and the top right shows the RGB or semantic annotations.

    SANPO-Real

    SANPO-Real is a multiview video dataset containing 701 sessions recorded with two stereo cameras: a head-mounted ZED Mini and a chest-mounted ZED-2i. That’s four RGB streams per session at 15 FPS. 597 sessions are recorded at a resolution of 2208×1242 pixels, and the remainder are recorded at a resolution of 1920×1080 pixels. Each session is approximately 30 seconds long, and the recorded videos are rectified using Zed software and saved in a lossless format. Each session has high-level attribute annotations, camera pose trajectories, dense depth maps from CREStereo, and sparse depth maps provided by the Zed SDK. A subset of sessions have temporally consistent panoptic segmentation annotations of each instance.

    The SANPO data collection system for collecting real-world data. Right: (i) a backpack with ZED 2i and ZED Mini cameras for data collection (bottom), (ii) the inside of the backpack showing the ZED box and battery pack mounted on a 3D printed container (middle), and (iii) an Android app showing the live feed from the ZED cameras (top). Left: The chest-mounted ZED-2i has a stereo baseline of 12cm with a 2.1mm focal length, and the head-mounted ZED Mini has a baseline of 6.3cm with a 2.1mm focal length.
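The stereo baselines above are what make dense depth estimation possible: for a rectified stereo pair, depth follows the classic relation depth = focal_length × baseline / disparity. A minimal sketch, using the 12 cm ZED-2i baseline from the caption but a hypothetical focal length in pixels (the 2.1 mm optical focal length would need the sensor’s pixel pitch to convert):

```python
# Classic pinhole-stereo relation: depth = f_px * B / d.
# baseline_m = 0.12 matches the chest-mounted ZED-2i described above;
# focal_px = 1400.0 is an illustrative value, not a SANPO calibration constant.
def disparity_to_depth(disparity_px: float,
                       focal_px: float = 1400.0,
                       baseline_m: float = 0.12) -> float:
    """Convert a pixel disparity to metric depth for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these parameters, a 21 px disparity corresponds to 8.0 m of depth.
depth_m = disparity_to_depth(21.0)
```

The shorter 6.3 cm ZED Mini baseline halves the disparity for the same depth, which is why the head-mounted camera trades depth precision for a smaller form factor.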

    Temporally consistent panoptic segmentation annotation protocol

    SANPO includes thirty different class labels, including various surfaces (road, sidewalk, curb, etc.), fences (guard rails, walls, gates), obstacles (poles, bike racks, trees), and creatures (pedestrians, riders, animals). Gathering high-quality annotations for these classes is an enormous challenge. To provide temporally consistent panoptic segmentation annotation, we divide each video into 30-second sub-videos and annotate every fifth frame (90 frames per sub-video), using a cascaded annotation protocol. At each stage, we ask annotators to draw borders around five mutually exclusive labels at a time. We send the same image to different annotators over as many stages as it takes to collect masks until all labels are assigned, with annotations from earlier subsets frozen and shown to the annotator. We use AOT, a machine learning model that reduces annotation effort by giving annotators automated masks from which to start, taken from earlier frames during the annotation process. AOT also infers segmentation annotations for intermediate frames using the manually annotated preceding and following frames. Overall, this approach reduces annotation time, improves boundary precision, and ensures temporally consistent annotations for up to 30 seconds.
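The sampling arithmetic in this protocol is worth making explicit: a 30-second sub-video at 15 FPS contains 450 frames, and annotating every fifth frame yields exactly the 90 manually labeled frames mentioned above, with AOT propagating labels to the frames in between. A small sketch of that schedule (the function and constant names are ours, not SANPO tooling):

```python
# Frame schedule for one 30-second sub-video, per the protocol above.
FPS = 15
SUB_VIDEO_SECONDS = 30
ANNOTATE_EVERY = 5  # every fifth frame is manually annotated

def manual_annotation_frames(num_frames: int = FPS * SUB_VIDEO_SECONDS,
                             step: int = ANNOTATE_EVERY) -> list[int]:
    """Indices of manually annotated frames; the rest are AOT-propagated."""
    return list(range(0, num_frames, step))

manual = manual_annotation_frames()
# 450 frames / 5 = 90 manual frames: indices 0, 5, 10, ..., 445.
```

Each propagated frame thus sits at most two frames (about 133 ms) from a manual annotation, which bounds how far AOT’s inferred masks can drift.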

    Temporally consistent panoptic segmentation annotations. The segmentation mask’s title indicates whether it was manually annotated or AOT propagated.

    SANPO-Synthetic

    Real-world data has imperfect ground truth labels due to hardware, algorithms, and human errors, while synthetic data has near-perfect ground truth and can be customized. We partnered with Parallel Domain, a company specializing in lifelike synthetic data generation, to create SANPO-Synthetic, a high-quality synthetic dataset to supplement SANPO-Real. Parallel Domain is skilled at creating handcrafted synthetic environments and data for machine learning applications. Thanks to their work, SANPO-Synthetic matches real-world capture conditions with camera parameters, placement, and scenery.

    3D scene of a synthetic session built using the provided annotations (segmentation, depth and odometry). The top center video shows the depth map, and the top right shows the RGB or semantic annotations.

    SANPO-Synthetic is a high-quality video dataset, handcrafted to match real-world scenarios. It contains 1961 sessions recorded using virtualized Zed cameras, evenly split between chest-mounted and head-mounted positions and calibrations. These videos are monocular, recorded from the left lens only. The sessions vary in length and FPS (5, 14.28, and 33.33) for a mixture of temporal resolution / length tradeoffs, and are saved in a lossless format. All sessions have precise camera pose trajectories, dense pixel-accurate depth maps, and temporally consistent panoptic segmentation masks.

    SANPO-Synthetic data has pixel-perfect annotations, even for small and distant instances. This helps develop challenging datasets that mimic the complexity of real-world scenes. SANPO-Synthetic and SANPO-Real are also drop-in replacements for each other, so researchers can study domain transfer tasks or use synthetic data during training with few domain-specific assumptions.

    An even sampling of real and synthetic scenes.

    Statistics

    Semantic classes

    We designed our SANPO taxonomy: i) with human egocentric navigation in mind, ii) with the goal of being reasonably easy to annotate, and iii) to be as close as possible to existing segmentation taxonomies. Though built with human egocentric navigation in mind, it can be easily mapped or extended to other human egocentric scene understanding applications. Both SANPO-Real and SANPO-Synthetic feature a wide variety of objects one would expect in egocentric obstacle detection data, such as roads, buildings, fences, and trees. SANPO-Synthetic includes a broad distribution of hand-modeled objects, while SANPO-Real features more “long-tailed” classes that appear infrequently in images, such as gates, bus stops, or animals.

    Distribution of images across the classes in the SANPO taxonomy.

    Instance masks

    SANPO-Synthetic and a portion of SANPO-Real are also annotated with panoptic instance masks, which assign each pixel to a class and instance ID. Because it is generally human-labeled, SANPO-Real has a large number of frames with typically fewer than 20 instances per frame. Meanwhile, SANPO-Synthetic’s virtual environment offers pixel-accurate segmentation of even uncommon objects in the scene. This means that synthetic images frequently feature many more instances within each frame.
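The per-frame instance counts compared here can be computed directly from the panoptic annotations: a frame’s instance count is the number of distinct (class, instance ID) pairs among its “thing” pixels. A minimal sketch, assuming separate semantic and instance ID maps and an illustrative set of thing-class IDs:

```python
import numpy as np

def count_instances(semantic: np.ndarray,
                    instance: np.ndarray,
                    thing_classes: set[int]) -> int:
    """Count distinct object instances in one frame's panoptic annotation."""
    # Only countable ("thing") classes with a nonzero instance ID contribute.
    mask = np.isin(semantic, list(thing_classes)) & (instance > 0)
    pairs = set(zip(semantic[mask].tolist(), instance[mask].tolist()))
    return len(pairs)

# Toy frame: pedestrians (class 7) with two instances, a rider (class 8) with
# one, and road pixels (class 1, "stuff") that carry no instance ID.
semantic = np.array([[7, 7, 1],
                     [7, 8, 1]])
instance = np.array([[1, 2, 0],
                     [2, 1, 0]])
n = count_instances(semantic, instance, thing_classes={7, 8})
# n -> 3 distinct instances
```

Running this per frame over the labeled subsets is the kind of tally behind the real-vs-synthetic instance-count comparison described above.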

    When considering per-frame instance counts, synthetic data frequently features many more instances per frame than the labeled portions of SANPO-Real.

    Comparison to other datasets

    We compare SANPO to other important video datasets in this field, including SCAND, MuSoHu, Ego4D, VIPSeg, and Waymo Open. Some of these are intended for robot navigation (SCAND) or autonomous driving (Waymo) tasks. Across these datasets, only Waymo Open and SANPO have both panoptic segmentations and depth maps, and only SANPO has both real and synthetic data.

    Comparison to other video datasets. For stereo vs. mono video, datasets marked with ★ have stereo video for all scenes and those marked ☆ provide stereo video for a subset. For depth maps, ★ indicates dense depth while ☆ represents sparse depth, e.g., from a lower-resolution LIDAR scanner.

    Conclusion and future work

    We present SANPO, a large-scale and challenging video dataset for human egocentric scene understanding, which includes real and synthetic samples with dense prediction annotations. We hope SANPO will help researchers build visual navigation systems for the visually impaired and advance visual scene understanding. Additional details are available in the preprint and on the SANPO dataset GitHub repository.

    Acknowledgements

    This dataset was the outcome of hard work by many individuals from various teams within Google and our external partner, Parallel Domain.

    Core Team: Mikhail Sirotenko, Dave Hawkey, Sagar Waghmare, Kimberly Wilber, Xuan Yang, Matthew Wilson

    Parallel Domain: Stuart Park, Alan Doucet, Alex Valence-Lanoue, & Lars Pandikow.

    We would also like to thank the following team members: Hartwig Adam, Huisheng Wang, Lucian Ionita, Nitesh Bharadwaj, Suqi Liu, Stephanie Debats, Cattalyya Nuengsigkapian, Astuti Sharma, Alina Kuznetsova, Stefano Pellegrini, Yiwen Luo, Lily Pagan, Maxine Deines, Alex Siegman, Maura O’Brien, Rachel Stigler, Bobby Tran, Supinder Tohra, Umesh Vashisht, Sudhindra Kopalle, Reet Bhatia.
