The 2019 Scene Understanding and Modeling Challenge



[Overview figure: 360° RGB-D input (360° RGB and 360° depth panoramas) mapped to a comprehensive 3D scene output (3D texture + pose; 3D semantic + instance).]



The SUMO Challenge targets the development of algorithms for comprehensive understanding of 3D indoor scenes from 360° RGB-D panoramas. The target 3D models of indoor scenes include all visible layout elements and objects, complete with pose, semantic information, and texture. Submitted algorithms are evaluated at three levels of complexity, corresponding to the three tracks of the challenge: oriented 3D bounding boxes, oriented 3D voxel grids, and oriented 3D meshes. SUMO Challenge results will be presented at the 2019 SUMO Challenge Workshop at CVPR.


Dataset

The SUMO Challenge dataset is derived by processing scenes from the SUNCG dataset to produce 360° RGB-D images, represented as cubemaps, along with corresponding 3D mesh models of all visible scene elements. The mesh models are further processed into bounding-box and voxel-based representations. The dataset format is described in detail in the SUMO white paper.
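
As an illustration of the input format, here is a minimal Python sketch that loads one RGB-D panorama pair. The file names and depth encoding are assumptions for illustration; the actual naming, cubemap layout, and depth representation are defined in the white paper and toolbox.

import numpy as np
from PIL import Image

# Hypothetical file names for one scene's input pair.
rgb = np.asarray(Image.open("rgb.png").convert("RGB"))
depth = np.asarray(Image.open("depth.png")).astype(np.float32)

# Each cubemap face is 1024 x 1024; the overall array shapes depend on
# how the six faces are laid out in the stored image.
print(rgb.shape, depth.shape)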

59K Indoor Scenes · 360° Views · 2 Modalities · 1024 × 1024 Resolution

1024 × 1024 RGB Images

1024 × 1024 Depth Maps

2D Semantic Information

3D Semantic Information

3D Object Pose

3D Element Texture

3D Bounding Box Scene Representation

3D Voxel Grid Scene Representation

3D Mesh Scene Representation

Performance Tracks


The SUMO Challenge is organized into three performance tracks based on the output representation of the scene. A scene is represented as a collection of elements, each of which models one object in the scene (e.g., a wall, the floor, or a chair). An element is represented in one of three increasingly descriptive representations: bounding box, voxel grid, or surface mesh. For each element in the scene, a submission contains the outputs listed below for the chosen track; a sketch of a per-element record follows the three lists. To get started, download the toolbox and training data from the links below. Visit the SUMO360 API web site for documentation, example code, and additional help.




3D Bounding Box Track

3D Bounding Box

3D Object Pose

Semantic Category of Element



3D Voxel Grid Track

3D Bounding Box

3D Object Pose

Semantic Category of Element

Location and RGB Color of Occupied 3D Voxels



3D Mesh Track

3D Bounding Box

3D Object Pose

Semantic Category of Element

Textured Mesh of Element (.glb format)
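
To make the shared structure of the tracks concrete, here is a minimal Python sketch of a per-element record. The class and field names are hypothetical illustrations, not the SUMO360 toolbox API; consult the toolbox documentation for the real scene format.

from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class SceneElement:
    # Fields common to all three tracks.
    category: str                  # semantic category, e.g. "chair"
    pose_rotation: np.ndarray      # 3x3 rotation of the element's pose
    pose_translation: np.ndarray   # 3-vector translation of the pose
    bbox_min: np.ndarray           # bounding box corner (min)
    bbox_max: np.ndarray           # bounding box corner (max)
    # The voxel grid track adds occupied voxel locations and RGB colors.
    voxel_centers: Optional[np.ndarray] = None   # (N, 3)
    voxel_colors: Optional[np.ndarray] = None    # (N, 3)
    # The mesh track adds the element's textured mesh (.glb file).
    mesh_path: Optional[str] = None

# A bounding-box-track element leaves the voxel and mesh fields unset.
chair = SceneElement(
    category="chair",
    pose_rotation=np.eye(3),
    pose_translation=np.zeros(3),
    bbox_min=np.array([-0.4, 0.0, -0.4]),
    bbox_max=np.array([0.4, 0.9, 0.4]),
)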

How to Participate

Step 1:
Download the sample data to get a quick look at the formats of the input and output for the challenge.
Step 2:
Download and install the SUMO360 toolbox -- a Python package for manipulating SUMO input and output formats and computing the evaluation metrics.
Step 3:
Sign up to download the training data. Warning -- it is very large (almost 2 TB).
Step 4:
Develop your algorithm. The SUMO360 help pages contain examples showing how to load input data and manage the output scene format. The additional tools below can help you analyze the results. A per-scene skeleton sketch follows these steps.
Step 5:
Submit your results. Run your algorithm on the SUMO challenge test scenes and submit your results to the EvalAI web site. Detailed instructions are below.
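
For Step 4, here is a minimal per-scene skeleton in Python. DATA_DIR, OUT_DIR, and process_scene are hypothetical placeholders; the real input loading and output writing come from the SUMO360 toolbox and its help pages.

import os

DATA_DIR = "sumo_input"   # assumed layout: one subdirectory per input scene
OUT_DIR = "sumo_output"   # will hold one output project scene per input scene

def process_scene(scene_dir, out_dir):
    # Placeholder: load the 360° RGB-D input, run your algorithm, and
    # write the output scene in your chosen track's format.
    pass

for scene_id in sorted(os.listdir(DATA_DIR)):
    out_scene_dir = os.path.join(OUT_DIR, scene_id)  # keep scene_id as the name
    os.makedirs(out_scene_dir, exist_ok=True)
    process_scene(os.path.join(DATA_DIR, scene_id), out_scene_dir)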

Helpful Tools

Project Viewer:
A simple viewer to visualize mesh track projects. To visualize the other tracks, use the ProjectConverter to transform your project to a mesh track project. The viewer is Unity-based and includes Windows, Linux, and OSX versions.
LICENSE: Copyright (c) Facebook, Inc. and its affiliates. All rights reserved. This program is licensed for your use under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License.
glTF Viewer:
The individual elements in SUMO projects are stored in GLB format (binary glTF). This glTF viewer can also visualize GLB files.
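
For programmatic inspection of a GLB file, here is a minimal sketch using the third-party trimesh library (an assumption; any glTF-capable loader will do), with a hypothetical file path:

import trimesh

# GLB files can contain a scene graph, in which case trimesh returns a Scene.
mesh = trimesh.load("element.glb")
if isinstance(mesh, trimesh.Scene):
    print("geometries:", list(mesh.geometry.keys()))
else:
    print("vertices:", mesh.vertices.shape, "faces:", mesh.faces.shape)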

Submissions

Submit your SUMO Challenge results with just a few easy steps.

Step 1:
Download the quick eval and dev eval test data. Quick eval (2 scenes) is useful for quickly verifying that your algorithm is working properly. Dev eval is a larger (360 scenes) set that is comparable to the final contest test set. The contest eval test data will be released shortly before the end of the challenge.
Step 2:
Configure your algorithm to output scenes for the performance track of your choice (bounding boxes, voxels, or meshes). Run your algorithm on the selected test set to generate a project scene for each test scene. Each project scene should be in a separate directory with the scene_id as its directory name.
Step 3:
Compress the directory containing the output project scenes into a zip file and upload it to a publicly visible web location (a packaging sketch follows these steps).
Step 4:
Create a JSON submission file with the following format:
{ "result": "[some-public-url]/[filename].zip" }
For example:
{ "result": "https://foobar.edu/sumo-submission.zip" }
Step 5:
Go to the EvalAI web site. First time only: Create a new account. Create a participant team by filling in the dialog box on the right. Once you have a participant team, it will show up in the list on the left. Optionally, you can add other members to the team by clicking on the person icon. Return to the submission page for the SUMO Challenge.
Step 6:
Select the appropriate phase based on your chosen performance track and test set. Select "upload file" and upload the JSON file you created above. Enter any other optional information you would like to include. Press "Submit." Be patient — depending on the track, it can take up to one hour to run the evaluation on the dev test set. Once the evaluation is complete, view the results on the leaderboard page. You must select the appropriate challenge phase to see the results.
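
For Steps 3 and 4, here is a minimal packaging sketch using Python's standard library. The directory name, archive name, and URL are assumptions; you must host the zip file at your own publicly visible location.

import json
import shutil

# Step 3: zip the directory that contains one subdirectory per output
# project scene (each named by its scene_id).
shutil.make_archive("sumo-submission", "zip", root_dir="sumo_output")

# Step 4: after uploading sumo-submission.zip to a public location,
# point the submission file at it (placeholder URL below).
with open("submission.json", "w") as f:
    json.dump({"result": "https://foobar.edu/sumo-submission.zip"}, f)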

Metrics


Evaluation of a 3D scene focuses on four key aspects: Geometry, Appearance, Semantics, and Perception (GASP).

Details of the metrics for each track are provided in the SUMO white paper.
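
To give a flavor of the geometry component only, here is a toy axis-aligned 3D intersection-over-union sketch in Python. This is illustrative, not the official metric: the SUMO evaluation handles oriented elements and the full GASP criteria, and is implemented in the SUMO360 toolbox.

import numpy as np

def iou_3d(min_a, max_a, min_b, max_b):
    # Axis-aligned 3D IoU of two boxes given as (min, max) corner pairs.
    min_a, max_a = np.asarray(min_a, float), np.asarray(max_a, float)
    min_b, max_b = np.asarray(min_b, float), np.asarray(max_b, float)
    overlap = np.clip(np.minimum(max_a, max_b) - np.maximum(min_a, min_b), 0.0, None)
    inter = overlap.prod()
    union = (max_a - min_a).prod() + (max_b - min_b).prod() - inter
    return inter / union

print(iou_3d([0, 0, 0], [2, 2, 2], [1, 1, 1], [3, 3, 3]))  # 1/15 ≈ 0.067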

Prizes


Winners of the 2019 SUMO Challenge will be announced at the CVPR SUMO Challenge Workshop on June 17, 2019 (see the timeline below). See the official SUMO Challenge Contest Rules.

3D Mesh Track
1st Prize


$2500 cash prize

Titan X GPU

Oral Presentation

3D Voxel Track
2nd Prize


$2000 cash prize

Titan X GPU

Oral Presentation

3D Bounding Box Track
3rd Prize


$1500 cash prize

Titan X GPU

Oral Presentation


Timeline

SUMO Challenge Launch and Data Release

Feb 5, 2019

Paper Submission Deadline

April 26, 2019

Notification to Authors

May 10, 2019

Camera Ready Paper Deadline and Final Challenge Submissions Due

May 17, 2019

2019 SUMO Challenge Workshop at CVPR

June 17, 2019

Organizers

Daniel Huber

Facebook

Lyne Tchapmi

Stanford University

Frank Dellaert

Georgia Tech

Ilke Demir

DeepScale

Shuran Song

Columbia University

Rachel Luo

Stanford University

Advisory Board

Tom Funkhouser

Princeton University

Leo Guibas

Stanford University

Jitendra Malik

UC Berkeley

Silvio Savarese

Stanford University

Facebook Team

Bahram Dahi

Frank Dellaert

Jay Huang

Daniel Huber

Nandita Nayak

John Princen

Ruben Sethi

Challenge Advisors

Iro Armeni

Angel Chang

Kevin Chen

Christopher Choy

JunYoung Gwak

Manolis Savva

Alexander (Sasha) Sax

Richard Skarbez

Amir R. Zamir

The 2019 SUMO Challenge is generously sponsored by Facebook AI Research.