In conjunction with CVPR 2019
June 17, 2019
Long Beach Convention Center, Long Beach, CA
In computer vision, scene understanding and modeling encapsulate a diverse set of research problems, ranging from low-level geometric modeling (e.g., SLAM algorithms) to 3D room layout estimation. These tasks are often addressed separately, yielding only a constrained understanding and representation of the underlying scene. In parallel, the popularity of 360° cameras has encouraged the digitization of the real world into augmented and virtual realities, enabling new applications such as virtual social interactions and semantically leveraged augmented reality. This workshop aims to promote comprehensive 3D scene understanding and modeling algorithms that create integrated scene representations (with geometry, appearance, semantics, and perceptual qualities), while utilizing 360° imagery to encourage research on its unique challenges.
The 2019 SUMO Challenge Workshop will bring together computer vision researchers working on 3D scene understanding and modeling for a day of keynote talks, oral presentations, posters, and panel discussions on the topic. The two primary goals of the workshop are:
Encourage the development of comprehensive 3D scene understanding and modeling algorithms that address the aforementioned problems in a single framework.
Foster research on the unique challenges of generating comprehensive digital representations from 360° imagery.
The SUMO Challenge, in conjunction with the workshop, provides a dataset and an evaluation platform to assess and compare such scene understanding approaches that generate complete 3D representations with textured 3D models, pose, and semantics. The datasets created and released for this competition may serve as reference benchmarks for future research in 3D scene understanding.
The workshop is soliciting papers covering various problems related to 3D scene understanding and modeling from RGB and RGB-D imagery. The topics mainly focus on indoor scene modeling and include, but are not limited to:
360° data processing and scene understanding
“Stuff” detection and modeling
Object completion and 3D reconstruction
Object pose estimation
Articulated object modeling
Texture and appearance modeling
Material property estimation
Submissions must be written in English and submitted in PDF format. Each submitted paper must be no longer than four (4) pages, excluding references. Please refer to the CVPR author submission guidelines for instructions regarding formatting, templates, and policies. The review process will be double blind: the authors will not know the names of the reviewers, and the reviewers will not know the names of the authors.
Submit your paper through the CMT website before the April 26th deadline.
The schedule will be posted once it is finalized.