This week marks the start of the premier annual Computer Vision and Pattern Recognition conference (CVPR 2023), held in person in Vancouver, BC (with additional virtual content). As a leader in computer vision research and a Platinum Sponsor, Google Research will have a strong presence at CVPR 2023, with 90 papers being presented at the main conference and active involvement in over 40 conference workshops and tutorials.
If you’re attending CVPR this year, please stop by our booth to chat with our researchers, who are actively exploring the latest techniques for application to various areas of machine perception. Our researchers will also be available to talk about and demo several recent efforts, including on-device ML applications with MediaPipe, strategies for differential privacy, neural radiance field technologies and much more.
You can learn more about our research being presented at CVPR 2023 in the list below (Google affiliations in bold).
AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
Yifan Jiang*, Peter Hedman, Ben Mildenhall, Dejia Xu, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue*
BlendFields: Few-Shot Example-Driven Facial Modeling
Kacper Kania, Stephan Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Tomasz Trzcinski, Julien Valentin, Marek Kowalski
Enhancing Deformable Local Features by Jointly Learning to Detect and Describe Keypoints
Guilherme Potje, Felipe Cadar, Andre Araujo, Renato Martins, Erickson Nascimento
How Can Objects Help Action Recognition?
Xingyi Zhou, Anurag Arnab, Chen Sun, Cordelia Schmid
Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur
Peng Dai, Yinda Zhang, Xin Yu, Xiaoyang Lyu, Xiaojuan Qi
IFSeg: Image-Free Semantic Segmentation via Vision-Language Model
Sukmin Yun, Seong Hyeon Park, Paul Hongsuck Seo, Jinwoo Shin
Learning from Unique Perspectives: User-Aware Saliency Modeling (see blog post)
Shi Chen*, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai Kohlhoff, Junfeng He
MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis
Tianhong Li*, Huiwen Chang, Shlok Kumar Mishra, Han Zhang, Dina Katabi, Dilip Krishnan
NeRF-Supervised Deep Stereo
Fabio Tosi, Alessio Tonioni, Daniele De Gregorio, Matteo Poggi
Omnimatte3D: Associating Objects and their Effects in Unconstrained Monocular Video
Mohammed Suhail, Erika Lu, Zhengqi Li, Noah Snavely, Leon Sigal, Forrester Cole
OpenScene: 3D Scene Understanding with Open Vocabularies
Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser
PersonNeRF: Personalized Reconstruction from Photo Collections
Chung-Yi Weng, Pratul Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman
Prefix Conditioning Unifies Language and Label Supervision
Kuniaki Saito*, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning (see weblog put up)
AJ Piergiovanni, Weicheng Kuo, Anelia Angelova
Burstormer: Burst Image Restoration and Enhancement Transformer
Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Shahbaz Khan, Ming-Hsuan Yang
Decentralized Learning with Multi-Headed Distillation
Andrey Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov
GINA-3D: Learning to Generate Implicit Neural Assets in the Wild
Bokui Shen, Xinchen Yan, Charles R. Qi, Mahyar Najibi, Boyang Deng, Leonidas Guibas, Yin Zhou, Dragomir Anguelov
Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent with Learned Distance Functions
Yun He, Danhang Tang, Yinda Zhang, Xiangyang Xue, Yanwei Fu
Hi-LASSIE: High-Fidelity Articulated Shape and Skeleton Discovery from Sparse Image Ensemble
Chun-Han Yao*, Wei-Chih Hung, Yuanzhen Li, Michael Rubinstein, Ming-Hsuan Yang, Varun Jampani
Hyperbolic Contrastive Learning for Visual Representations beyond Objects
Songwei Ge, Shlok Mishra, Simon Kornblith, Chun-Liang Li, David Jacobs
Imagic: Text-Based Real Image Editing with Diffusion Models
Bahjat Kawar*, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, Michal Irani
Incremental 3D Semantic Scene Graph Prediction from RGB Sequences
Shun-Cheng Wu, Keisuke Tateno, Nassir Navab, Federico Tombari
IPCC-TP: Utilizing Incremental Pearson Correlation Coefficient for Joint Multi-Agent Trajectory Prediction
Dekai Zhu, Guangyao Zhai, Yan Di, Fabian Manhardt, Hendrik Berkemeyer, Tuan Tran, Nassir Navab, Federico Tombari, Benjamin Busam
Learning to Generate Image Embeddings with User-Level Differential Privacy
Zheng Xu, Maxwell Collins, Yuxiao Wang, Liviu Panait, Sewoong Oh, Sean Augenstein, Ting Liu, Florian Schroff, H. Brendan McMahan
NoisyTwins: Class-Consistent and Diverse Image Generation Through StyleGANs
Harsh Rangwani, Lavish Bansal, Kartik Sharma, Tejan Karmali, Varun Jampani, Venkatesh Babu Radhakrishnan
NULL-Text Inversion for Editing Real Images Using Guided Diffusion Models
Ron Mokady*, Amir Hertz*, Kfir Aberman, Yael Pritch, Daniel Cohen-Or*
SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow
Itai Lang*, Dror Aiger, Forrester Cole, Shai Avidan, Michael Rubinstein
Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion
Dario Pavllo*, David Joseph Tan, Marie-Julie Rakotosaona, Federico Tombari
TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation
Hanzhi Chen, Fabian Manhardt, Nassir Navab, Benjamin Busam
TryOnDiffusion: A Tale of Two UNets
Luyang Zhu*, Dawei Yang, Tyler Zhu, Fitsum Reda, William Chan, Chitwan Saharia, Mohammad Norouzi, Ira Kemelmacher-Shlizerman
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning
Aishwarya Kamath*, Peter Anderson, Su Wang, Jing Yu Koh*, Alexander Ku, Austin Waters, Yinfei Yang*, Jason Baldridge, Zarana Parekh
CLIPPO: Image-and-Language Understanding from Pixels Only
Michael Tschannen, Basil Mustafa, Neil Houlsby
Controllable Light Diffusion for Portraits
David Futschik, Kelvin Ritland, James Vecore, Sean Fanello, Sergio Orts-Escolano, Brian Curless, Daniel Sýkora, Rohit Pandey
CUF: Continuous Upsampling Filters
Cristina Vasconcelos, Cengiz Oztireli, Mark Matthews, Milad Hashemi, Kevin Swersky, Andrea Tagliasacchi
Improving Zero-Shot Generalization and Robustness of Multi-modal Models
Yunhao Ge*, Jie Ren, Andrew Gallagher, Yuxiao Wang, Ming-Hsuan Yang, Hartwig Adam, Laurent Itti, Balaji Lakshminarayanan, Jiaping Zhao
LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding
Gen Li, Varun Jampani, Deqing Sun, Laura Sevilla-Lara
Nerflets: Local Radiance Fields for Efficient Structure-Aware 3D Scene Representation from 2D Supervision
Xiaoshuai Zhang, Abhijit Kundu, Thomas Funkhouser, Leonidas Guibas, Hao Su, Kyle Genova
Self-Supervised AutoFlow
Hsin-Ping Huang, Charles Herrmann, Junhwa Hur, Erika Lu, Kyle Sargent, Austin Stone, Ming-Hsuan Yang, Deqing Sun
Train-Once-for-All Personalization
Hong-You Chen*, Yandong Li, Yin Cui, Mingda Zhang, Wei-Lun Chao, Li Zhang
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning (see blog post)
Antoine Yang*, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid
VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining
Junjie Ke, Keren Ye, Jiahui Yu, Yonghui Wu, Peyman Milanfar, Feng Yang
You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model
Shengkun Tang, Yaqing Wang, Zhenglun Kong, Tianchi Zhang, Yao Li, Caiwen Ding, Yanzhi Wang, Yi Liang, Dongkuan Xu
Accidental Light Probes
Hong-Xing Yu, Samir Agarwala, Charles Herrmann, Richard Szeliski, Noah Snavely, Jiajun Wu, Deqing Sun
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh
FlexiViT: One Model for All Patch Sizes
Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim Alabdulmohsin, Filip Pavetic
Iterative Vision-and-Language Navigation
Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason Corso, Peter Anderson, Stefan Lee, Jesse Thomason
MoDi: Unconditional Motion Synthesis from Diverse Data
Sigal Raab, Inbal Leibovitch, Peizhuo Li, Kfir Aberman, Olga Sorkine-Hornung, Daniel Cohen-Or
Multimodal Prompting with Missing Modalities for Visual Recognition
Yi-Lun Lee, Yi-Hsuan Tsai, Wei-Chen Chiu, Chen-Yu Lee
Scene-Aware Egocentric 3D Human Pose Estimation
Jian Wang, Diogo Luvizon, Weipeng Xu, Lingjie Liu, Kripasindhu Sarkar, Christian Theobalt
ShapeClipper: Scalable 3D Shape Learning from Single-View Images via Geometric and CLIP-Based Consistency
Zixuan Huang, Varun Jampani, Ngoc Anh Thai, Yuanzhen Li, Stefan Stojanov, James M. Rehg
Improving Image Recognition by Retrieving from Web-Scale Image-Text Data
Ahmet Iscen, Alireza Fathi, Cordelia Schmid
JacobiNeRF: NeRF Shaping with Mutual Information Gradients
Xiaomeng Xu, Yanchao Yang, Kaichun Mo, Boxiao Pan, Li Yi, Leonidas Guibas
Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos
Ziqian Bai*, Feitong Tan, Zeng Huang, Kripasindhu Sarkar, Danhang Tang, Di Qiu, Abhimitra Meka, Ruofei Du, Mingsong Dou, Sergio Orts-Escolano, Rohit Pandey, Ping Tan, Thabo Beeler, Sean Fanello, Yinda Zhang
NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis
Allan Zhou, Moo Jin Kim, Lirui Wang, Pete Florence, Chelsea Finn
Pic2Word: Mapping Pictures to Words for Zero-Shot Composed Image Retrieval
Kuniaki Saito*, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates
Mikaela Uy, Ricardo Martin Brualla, Leonidas Guibas, Ke Li
Structured 3D Features for Reconstructing Controllable Avatars
Enric Corona, Mihai Zanfir, Thiemo Alldieck, Eduard Gabriel Bazavan, Andrei Zanfir, Cristian Sminchisescu
Token Turing Machines
Michael S. Ryoo, Keerthana Gopalakrishnan, Kumara Kahatapitiya, Ted Xiao, Kanishka Rao, Austin Stone, Yao Lu, Julian Ibarz, Anurag Arnab
TruFor: Leveraging All-Round Clues for Trustworthy Image Forgery Detection and Localization
Fabrizio Guillaro, Davide Cozzolino, Avneesh Sud, Nicholas Dufour, Luisa Verdoliva
Video Probabilistic Diffusion Models in Projected Latent Space
Sihyun Yu, Kihyuk Sohn, Subin Kim, Jinwoo Shin
Visual Prompt Tuning for Generative Transfer Learning
Kihyuk Sohn, Yuan Hao, Jose Lezama, Luisa Polania, Huiwen Chang, Han Zhang, Irfan Essa, Lu Jiang
Zero-Shot Referring Image Segmentation with Global-Local Context Features
Seonghoon Yu, Paul Hongsuck Seo, Jeany Son
AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot AV-ASR (see blog post)
Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid
DC2: Dual-Camera Defocus Control by Learning to Refocus
Hadi Alzayer, Abdullah Abuolaim, Leung Chun Chan, Yang Yang, Ying Chen Lou, Jia-Bin Huang, Abhishek Kar
Edges to Shapes to Concepts: Adversarial Augmentation for Robust Vision
Aditay Tripathi*, Rishubh Singh, Anirban Chakraborty, Pradeep Shenoy
MetaCLUE: Towards Comprehensive Visual Metaphors Research
Arjun R. Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas Guibas, William T. Freeman, Yuanzhen Li, Varun Jampani
Multi-Realism Image Compression with a Conditional Generator
Eirikur Agustsson, David Minnen, George Toderici, Fabian Mentzer
NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors
Congyue Deng, Chiyu Jiang, Charles R. Qi, Xinchen Yan, Yin Zhou, Leonidas Guibas, Dragomir Anguelov
On Calibrating Semantic Segmentation Models: Analyses and an Algorithm
Dongdong Wang, Boqing Gong, Liqiang Wang
Persistent Nature: A Generative Model of Unbounded 3D Worlds
Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola, Noah Snavely
Rethinking Domain Generalization for Face Anti-spoofing: Separability and Alignment
Yiyou Sun*, Yaojie Liu, Xiaoming Liu, Yixuan Li, Wen-Sheng Chu
SINE: Semantic-Driven Image-Based NeRF Editing with Prior-Guided Editing Field
Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, Zhaopeng Cui
Sequential Training of GANs Against GAN-Classifiers Reveals Correlated “Knowledge Gaps” Present Among Independently Trained GAN Instances
Arkanath Pathak, Nicholas Dufour
SparsePose: Sparse-View Camera Pose Regression and Refinement
Samarth Sinha, Jason Zhang, Andrea Tagliasacchi, Igor Gilitschenski, David Lindell
Teacher-Generated Spatial-Attention Labels Boost Robustness and Accuracy of Contrastive Models
Yushi Yao, Chang Ye, Gamaleldin F. Elsayed, Junfeng He
Computer Vision for Mixed Reality
Speakers include: Ira Kemelmacher-Shlizerman
Workshop on Autonomous Driving (WAD)
Speakers include: Chelsea Finn
Multimodal Content Moderation (MMCM)
Organizers include: Chris Bregler
Speakers include: Mevan Babakar
Medical Computer Vision (MCV)
Speakers include: Shekoofeh Azizi
VAND: Visual Anomaly and Novelty Detection
Speakers include: Yedid Hoshen, Jie Ren
Structural and Compositional Learning on 3D Data
Organizers include: Leonidas Guibas
Speakers include: Andrea Tagliasacchi, Fei Xia, Amir Hertz
Fine-Grained Visual Categorization (FGVC10)
Organizers include: Kimberly Wilber, Sara Beery
Panelists include: Hartwig Adam
XRNeRF: Advances in NeRF for the Metaverse
Organizers include: Jonathan T. Barron
Speakers include: Ben Poole
OmniLabel: Infinite Label Spaces for Semantic Understanding via Natural Language
Organizers include: Golnaz Ghiasi, Long Zhao
Speakers include: Vittorio Ferrari
Large Scale Holistic Video Understanding
Organizers include: David Ross
Speakers include: Cordelia Schmid
New Frontiers for Zero-Shot Image Captioning Evaluation (NICE)
Speakers include: Cordelia Schmid
Computational Cameras and Displays (CCD)
Organizers include: Ulugbek Kamilov
Speakers include: Mauricio Delbracio
Gaze Estimation and Prediction in the Wild (GAZE)
Organizers include: Thabo Beeler
Speakers include: Erroll Wood
Face and Gesture Analysis for Health Informatics (FGAHI)
Speakers include: Daniel McDuff
Computer Vision for Animal Behavior Tracking and Modeling (CV4Animals)
Organizers include: Sara Beery
Speakers include: Arsha Nagrani
3D Vision and Robotics
Speakers include: Pete Florence
End-to-End Autonomous Driving: Perception, Prediction, Planning and Simulation (E2EAD)
Organizers include: Anurag Arnab
End-to-End Autonomous Driving: Emerging Tasks and Challenges
Speakers include: Sergey Levine
Multi-Modal Learning and Applications (MULA)
Speakers include: Aleksander Hołyński
Synthetic Data for Autonomous Systems (SDAS)
Speakers include: Lukas Hoyer
Vision Datasets Understanding
Organizers include: José Lezama
Speakers include: Vijay Janapa Reddi
Precognition: Seeing Through the Future
Organizers include: Utsav Prabhu
New Trends in Image Restoration and Enhancement (NTIRE)
Organizers include: Ming-Hsuan Yang
Generative Models for Computer Vision
Speakers include: Ben Mildenhall, Andrea Tagliasacchi
Adversarial Machine Learning on Computer Vision: Art of Robustness
Organizers include: Xinyun Chen
Speakers include: Deqing Sun
Media Forensics
Speakers include: Nicholas Carlini
Tracking and Its Many Guises: Tracking Any Object in Open-World
Organizers include: Paul Voigtlaender
3D Scene Understanding for Vision, Graphics, and Robotics
Speakers include: Andy Zeng
Computer Vision for Physiological Measurement (CVPM)
Organizers include: Daniel McDuff
Affective Behaviour Analysis In-the-Wild
Organizers include: Stefanos Zafeiriou
Ethical Considerations in Creative Applications of Computer Vision (EC3V)
Organizers include: Rida Qadri, Mohammad Havaei, Fernando Diaz, Emily Denton, Sarah Laszlo, Negar Rostamzadeh, Pamela Peter-Agbia, Eva Kozanecka
VizWiz Grand Challenge: Describing Images and Videos Taken by Blind People
Speakers include: Haoran Qi
Efficient Deep Learning for Computer Vision (see blog post)
Organizers include: Andrew Howard, Chas Leichner
Speakers include: Andrew Howard
Visual Copy Detection
Organizers include: Priya Goyal
Learning 3D with Multi-View Supervision (3DMV)
Speakers include: Ben Poole
Image Matching: Local Features and Beyond
Organizers include: Eduard Trulls
Vision for All Seasons: Adverse Weather and Lighting Conditions (V4AS)
Organizers include: Lukas Hoyer
Transformers for Vision (T4V)
Speakers include: Cordelia Schmid, Huiwen Chang
Scholars vs Big Models — How Can Academics Adapt?
Organizers include: Sara Beery
Speakers include: Jonathan T. Barron, Cordelia Schmid
ScanNet Indoor Scene Understanding Challenge
Speakers include: Tom Funkhouser
Computer Vision for Microscopy Image Analysis
Speakers include: Po-Hsuan Cameron Chen
Embedded Vision
Speakers include: Rahul Sukthankar
Sight and Sound
Organizers include: Arsha Nagrani, William Freeman
AI for Content Creation
Organizers include: Deqing Sun, Huiwen Chang, Lu Jiang
Speakers include: Ben Mildenhall, Tim Salimans, Yuanzhen Li
Computer Vision in the Wild
Organizers include: Xiuye Gu, Neil Houlsby
Speakers include: Boqing Gong, Anelia Angelova
Visual Pre-Training for Robotics
Organizers include: Mathilde Caron
Omnidirectional Computer Vision
Organizers include: Yi-Hsuan Tsai