The 2019 SUMO Workshop
360° Indoor Scene Understanding and Modeling

In conjunction with CVPR 2019
June 17, 2019
Long Beach Convention Center, Long Beach, CA


Overview

In computer vision, scene understanding and modeling encapsulate a diverse set of research problems, ranging from low-level geometric modeling (e.g., SLAM algorithms) to 3D room layout estimation. These tasks are often addressed separately, yielding only a constrained understanding and representation of the underlying scene. In parallel, the popularity of 360° cameras has encouraged the digitization of the real world into augmented and virtual realities, enabling new applications such as virtual social interactions and semantically leveraged augmented reality. This workshop aims to promote comprehensive 3D scene understanding and modeling algorithms that create integrated scene representations (with geometry, appearance, semantics, and perceptual qualities), while utilizing 360° imagery to encourage research on its unique challenges.

The 2019 SUMO Challenge Workshop will bring together computer vision researchers working on 3D scene understanding and modeling for a day of keynote speakers, oral presentations, posters, and panel discussions on the topic. The two primary goals of the workshop are:


Encourage the development of comprehensive 3D scene understanding and modeling algorithms that address the aforementioned problems in a single framework.

Foster research on the unique challenges of generating comprehensive digital representations from 360° imagery.

Call for Papers

The workshop is soliciting papers covering various problems related to 3D scene understanding and modeling from RGB and RGB-D imagery. The topics mainly focus on indoor scene modeling and include, but are not limited to:

360° data processing and scene understanding

Object detection

Object localization

Layout estimation

“Stuff” detection and modeling

Instance segmentation

Object completion and 3D reconstruction

Object pose estimation

Generative models

Articulated object modeling

Texture and appearance modeling

Material property estimation

Lighting recognition

Submissions

Submissions must be written in English and submitted in PDF format. Each submitted paper must be no longer than four (4) pages, excluding references. Please refer to the CVPR author submission guidelines for instructions regarding formatting, templates, and policies. The review process will be double blind: the authors will not know the names of the reviewers, and the reviewers will not know the names of the authors.

Submit your paper using the CMT website by the May 3, 2019 deadline.

Timeline

SUMO Workshop Announced

Feb 5, 2019

Paper Submission Deadline

May 3, 2019

Notification to Authors

May 10, 2019

Camera Ready Paper Due

May 17, 2019

2019 SUMO Workshop at CVPR

June 17, 2019

Keynote Speakers

Angel Chang

Simon Fraser University

Sanja Fidler

University of Toronto

Kristen Grauman

University of Texas at Austin

Angjoo Kanazawa

University of California, Berkeley

Jitendra Malik

University of California, Berkeley

Matthias Niessner

Technical University of Munich

Schedule

Start Time | End Time | Description
9:00 9:10 Opening remarks
9:10 9:40 Keynote: Semantic 3D Understanding of Indoor Environments
Matthias Niessner
9:40 9:50 Oral 1: Multi-layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction
Daeyun Shin, Zhile Ren, Erik B. Sudderth, and Charless Fowlkes
9:50 10:00 Oral 2: Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction
Chao Wang and Xiaohu Guo
10:00 10:45 Poster session and coffee break
10:45 11:15 Keynote: Kristen Grauman
11:15 11:25 Oral 3: Convolutions on Spherical Images
Marc C. Eder and Jan-Michael Frahm
11:25 11:35 Oral 4: Kernel Transformer Networks for Compact Spherical Convolution
Yu-Chuan Su and Kristen Grauman
11:35 12:05 Keynote: Angjoo Kanazawa
12:05 1:30 Lunch
1:30 2:00 Keynote: Sanja Fidler
2:00 2:10 Oral 5: Learning Single-View 3D Reconstruction with Limited Pose Supervision
Guandao Yang, Yin Cui, Serge Belongie, and Bharath Hariharan
2:10 2:20 Oral 6: Multi-planar Monocular Reconstruction of Manhattan Indoor Scenes
Seongdo Kim and Roberto Manduchi
2:20 3:15 Panel discussion with the keynote speakers
3:15 4:00 Poster session and break
4:00 4:30 Keynote: Jitendra Malik
4:30 5:00 Keynote: Angel Chang
5:00 5:15 Closing remarks

Organizers

Daniel Huber

Facebook

Lyne Tchapmi

Stanford University

Frank Dellaert

Georgia Tech

Ilke Demir

DeepScale

Shuran Song

Columbia University

Rachel Luo

Stanford University

Program Committee
Iro Armeni

Angel Chang

Kevin Chen

Tom Funkhouser

Yasu Furukawa

Georgia Gkioxari

Or Litany

Richard Newcombe

Manolis Savva