Two years ago we announced Project Guideline, a collaboration between Google Research and Guiding Eyes for the Blind that enabled people with visual impairments (e.g., blindness and low vision) to walk, jog, and run independently. Using only a Google Pixel phone and headphones, Project Guideline leverages on-device machine learning (ML) to navigate users along outdoor paths marked with a painted line. The technology has been tested all over the world and was even demonstrated during the opening ceremony at the Tokyo 2020 Paralympic Games.
Since the original announcement, we set out to improve Project Guideline by adding new features, such as obstacle detection and advanced path planning, to safely and reliably navigate users through more complex scenarios (such as sharp turns and nearby pedestrians). The early version featured a simple frame-by-frame image segmentation that detected the position of the path line relative to the image frame. This was sufficient for orienting the user to the line, but provided limited information about the surrounding environment. Improving the navigation signals, such as alerts for obstacles and upcoming turns, required a much better understanding and mapping of the user's environment. To solve these challenges, we built a platform that can be utilized for a variety of spatially-aware applications in the accessibility space and beyond.
Today, we announce the open source release of Project Guideline, making it available for anyone to use to improve upon and build new accessibility experiences. The release includes source code for the core platform, an Android application, pre-trained ML models, and a 3D simulation framework.
System design
The primary use case is an Android application, but we wanted to be able to run, test, and debug the core logic in a variety of environments in a reproducible way. This led us to design and build the system using C++ for close integration with MediaPipe and other core libraries, while still being able to integrate with Android using the Android NDK.
Under the hood, Project Guideline uses ARCore to estimate the position and orientation of the user as they navigate the course. A segmentation model, built on the DeepLabV3+ framework, processes each camera frame to generate a binary mask of the guideline (see the previous blog post for more details). Points on the segmented guideline are then projected from image-space coordinates onto a world-space ground plane using the camera pose and lens parameters (intrinsics) provided by ARCore. Since each frame contributes a different view of the line, the world-space points are aggregated over multiple frames to build a virtual mapping of the real-world guideline. The system performs piecewise curve approximation of the guideline world-space coordinates to build a spatio-temporally consistent trajectory. This allows refinement of the estimated line as the user progresses along the path.
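To make the projection step concrete, here is a minimal sketch that intersects the camera ray through a segmented guideline pixel with a horizontal ground plane to obtain a world-space point. It assumes a pinhole camera model and Eigen types; the names (`ProjectPixelToGround`, `CameraIntrinsics`) are illustrative and not the types used in the actual release.

```cpp
// Minimal sketch: back-project a guideline pixel onto the ground plane.
#include <cmath>
#include <optional>

#include <Eigen/Dense>

struct CameraIntrinsics {
  double fx, fy;  // focal lengths in pixels
  double cx, cy;  // principal point in pixels
};

// Intersects the camera ray through `pixel` with the horizontal plane
// y = ground_y, returning the hit point in world coordinates (if any).
// `camera_to_world` is the world-from-camera transform from the tracked pose.
std::optional<Eigen::Vector3d> ProjectPixelToGround(
    const Eigen::Vector2d& pixel, const CameraIntrinsics& K,
    const Eigen::Isometry3d& camera_to_world, double ground_y) {
  // Back-project the pixel into a camera-space ray (z pointing forward).
  Eigen::Vector3d ray_cam((pixel.x() - K.cx) / K.fx,
                          (pixel.y() - K.cy) / K.fy, 1.0);
  // Move the ray into world space using the camera pose.
  Eigen::Vector3d origin = camera_to_world.translation();
  Eigen::Vector3d dir = camera_to_world.linear() * ray_cam.normalized();
  // Ray-plane intersection with y = ground_y.
  if (std::abs(dir.y()) < 1e-6) return std::nullopt;  // ray parallel to plane
  const double t = (ground_y - origin.y()) / dir.y();
  if (t <= 0) return std::nullopt;  // intersection is behind the camera
  return Eigen::Vector3d(origin + t * dir);
}
```

Running this per frame over the segmented pixels yields the world-space points that are then aggregated and fit with the piecewise curve described above.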
Project Guideline builds a 2D map of the guideline, aggregating detected points in each frame (red) to build a stateful representation (blue) as the runner progresses along the path.
A control system dynamically selects a target point on the line ahead based on the user's current position, velocity, and direction. An audio feedback signal is then given to the user to adjust their heading to coincide with the upcoming line segment. By using the runner's velocity vector instead of camera orientation to compute the navigation signal, we eliminate noise caused by irregular camera movements common during running. We can even navigate the user back to the line while it is out of camera view, for example if the user overshot a turn. This is possible because ARCore continues to track the pose of the camera, which can be compared to the stateful line map inferred from previous camera images.
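The kind of signal involved can be sketched as a signed heading error between the runner's velocity direction and the direction to a target point on the mapped line. The function name and sign convention below are assumptions for illustration, not the release's control code.

```cpp
// Minimal sketch: heading error from the runner's velocity vector to a
// target point on the mapped guideline.
#include <cmath>

#include <Eigen/Dense>

// Returns a signed heading error in radians. With this convention, a negative
// value means the target point lies to the runner's left, positive to the
// right; zero means the runner is heading straight at it.
double HeadingErrorToTarget(const Eigen::Vector2d& position,
                            const Eigen::Vector2d& velocity,
                            const Eigen::Vector2d& target_point) {
  const Eigen::Vector2d heading = velocity.normalized();
  const Eigen::Vector2d to_target = (target_point - position).normalized();
  // Signed angle between the velocity direction and the direction to target.
  const double cross = heading.x() * to_target.y() - heading.y() * to_target.x();
  const double dot = heading.dot(to_target);
  return std::atan2(cross, dot);
}
```

Because the error is computed from the velocity vector rather than the camera orientation, it stays well defined even when the phone bounces during running or the line leaves the camera's field of view.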
Project Guideline also includes obstacle detection and avoidance features. An ML model is used to estimate depth from single images. To train this monocular depth model, we used SANPO, a large dataset of outdoor imagery from urban, park, and suburban environments that was curated in-house. The model is capable of detecting the depth of various obstacles, including people, vehicles, posts, and more. The depth maps are converted into 3D point clouds, similar to the line segmentation process, and used to detect the presence of obstacles along the user's path and then alert the user via an audio signal.
Using a monocular depth ML model, Project Guideline constructs a 3D point cloud of the environment to detect and alert the user of potential obstacles along the path.
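The depth-to-point-cloud conversion can be sketched with the same pinhole model as the line projection: each valid depth pixel is back-projected into camera space and transformed into world space with the tracked pose. The depth-map layout, intrinsics parameters, and function name here are assumptions, not the release's actual interfaces.

```cpp
// Minimal sketch: unproject a metric depth map into a world-space point cloud.
#include <vector>

#include <Eigen/Dense>

std::vector<Eigen::Vector3d> DepthMapToPointCloud(
    const std::vector<float>& depth, int width, int height,
    double fx, double fy, double cx, double cy,
    const Eigen::Isometry3d& camera_to_world) {
  std::vector<Eigen::Vector3d> cloud;
  cloud.reserve(depth.size());
  for (int v = 0; v < height; ++v) {
    for (int u = 0; u < width; ++u) {
      const float z = depth[v * width + u];  // metric depth along the z axis
      if (z <= 0.f) continue;                // skip invalid depth values
      // Back-project the pixel into camera space, then move to world space.
      const Eigen::Vector3d p_cam((u - cx) / fx * z, (v - cy) / fy * z, z);
      cloud.push_back(camera_to_world * p_cam);
    }
  }
  return cloud;
}
```

Points in the resulting cloud that fall near the user's predicted path can then be flagged as potential obstacles and surfaced through the audio system.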
A low-latency audio system based on the AAudio API was implemented to deliver the navigational sounds and cues to the user. Several sound packs are available in Project Guideline, including a spatial sound implementation using the Resonance Audio API. The sound packs were developed by a team of sound researchers and engineers at Google who designed and tested many different sound models. The sounds use a combination of panning, pitch, and spatialization to guide the user along the line. For example, a user veering to the right may hear a beeping sound in the left ear to indicate the line is to the left, with increasing frequency for a larger course correction. If the user veers further, a high-pitched warning sound may be heard to indicate the edge of the path is approaching. In addition, a clear "stop" audio cue is always available in the event the user veers too far from the line, an anomaly is detected, or the system fails to provide a navigational signal.
Project Guideline has been built specifically for Google Pixel phones with the Google Tensor chip. The Google Tensor chip enables the optimized ML models to run on-device with higher performance and lower power consumption. This is critical for providing real-time navigation instructions to the user with minimal delay. On a Pixel 8 there is a 28x latency improvement when running the depth model on the Tensor Processing Unit (TPU) instead of the CPU, and a 9x improvement compared to the GPU.
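As a rough illustration of how such cues might be derived, the sketch below maps a signed heading error (negative meaning the line is to the runner's left, as in the control sketch above) to a stereo pan, a beep rate, and a warning flag. The thresholds and curves are invented for illustration and are not the tuned values shipped in Project Guideline's sound packs.

```cpp
// Minimal sketch: map a heading error to an audio cue description.
#include <algorithm>
#include <cmath>

struct AudioCue {
  float pan;      // -1.0 = fully left ear, +1.0 = fully right ear
  float beep_hz;  // beep repetition rate, faster for larger corrections
  bool warning;   // true when the deviation is large (edge of path)
};

AudioCue CueFromHeadingError(double heading_error_rad) {
  const float err = static_cast<float>(heading_error_rad);
  AudioCue cue;
  // Pan toward the side the line is on (negative error = line to the left).
  cue.pan = std::clamp(err / 0.5f, -1.f, 1.f);
  // Beep faster as the required course correction grows.
  cue.beep_hz = 1.f + 6.f * std::min(1.f, std::abs(err) / 0.5f);
  // Raise a warning cue for large deviations.
  cue.warning = std::abs(err) > 0.6f;
  return cue;
}
```

A real implementation would feed such a cue description into the AAudio or Resonance Audio rendering path rather than computing it in isolation.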
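For readers who want to experiment with on-device acceleration, one common way to hand a TensorFlow Lite model to an accelerator from C++ is through the NNAPI delegate, sketched below. This is an assumption about a reasonable setup, not Project Guideline's actual delegate configuration, and "depth_model.tflite" is a placeholder path.

```cpp
// Illustrative sketch: build a TFLite interpreter that delegates to NNAPI.
#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

struct DepthModel {
  std::unique_ptr<tflite::FlatBufferModel> model;  // must outlive interpreter
  std::unique_ptr<tflite::Interpreter> interpreter;
};

DepthModel BuildAcceleratedDepthModel() {
  DepthModel result;
  result.model = tflite::FlatBufferModel::BuildFromFile("depth_model.tflite");
  if (!result.model) return result;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  if (tflite::InterpreterBuilder(*result.model, resolver)(&result.interpreter)
          != kTfLiteOk) {
    return result;
  }

  // Ask NNAPI to schedule the graph on an on-device accelerator; unsupported
  // ops fall back to the CPU. The delegate is static so it outlives the
  // interpreter that references it.
  static tflite::StatefulNnApiDelegate nnapi_delegate;
  result.interpreter->ModifyGraphWithDelegate(&nnapi_delegate);
  result.interpreter->AllocateTensors();
  return result;
}
```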
Testing and simulation
Project Guideline includes a simulator that enables rapid testing and prototyping of the system in a virtual environment. Everything from the ML models to the audio feedback system runs natively within the simulator, providing the full Project Guideline experience without needing all the hardware and a physical environment set up.
Screenshot of the Project Guideline simulator.
Future direction
To help propel the technology forward, WearWorks has become an early adopter and teamed up with Project Guideline to integrate their patented haptic navigation experience, utilizing haptic feedback in addition to sound to guide runners. WearWorks has been developing haptics for over 8 years, and previously empowered the first blind marathon runner to complete the NYC Marathon without sighted assistance. We hope that integrations like these will lead to new innovations and make the world a more accessible place.
The Project Guideline team is also working towards removing the painted line completely, using the latest advancements in mobile ML technology, such as the ARCore Scene Semantics API, which can identify sidewalks, buildings, and other objects in outdoor scenes. We invite the accessibility community to build upon and improve this technology while exploring new use cases in other fields.
Acknowledgements
Many people were involved in the development of Project Guideline and the technologies behind it. We'd like to thank Project Guideline team members: Dror Avalon, Phil Bayer, Ryan Burke, Lori Dooley, Song Chun Fan, Matt Hall, Amélie Jean-aimée, Dave Hawkey, Amit Pitaru, Alvin Shi, Mikhail Sirotenko, Sagar Waghmare, John Watkinson, Kimberly Wilber, Matthew Willson, Xuan Yang, Mark Zarich, Steven Clark, Jim Coursey, Josh Ellis, Tom Hoddes, Dick Lyon, Chris Mitchell, Satoru Arao, Yoojin Chung, Joe Fry, Kazuto Furuichi, Ikumi Kobayashi, Kathy Maruyama, Minh Nguyen, Alto Okamura, Yosuke Suzuki, and Bryan Tanaka. Thanks to ARCore contributors: Ryan DuToit, Abhishek Kar, and Eric Turner. Thanks to Alec Go, Jing Li, Liviu Panait, Stefano Pellegrini, Abdullah Rashwan, Lu Wang, Qifei Wang, and Fan Yang for providing ML platform support. We'd also like to thank Hartwig Adam, Tomas Izo, Rahul Sukthankar, Blaise Aguera y Arcas, and Huisheng Wang for their leadership support. Special thanks to our partners Guiding Eyes for the Blind and Achilles International.