Localizing 3D Cuboids in Single-view Images

Given a single-view input image, our goal is to detect the 2D corner locations of the cuboids depicted in the image. With the output part locations we can subsequently recover information about the camera and 3D shape via camera resectioning (a sketch of this step is given below).

Abstract

In this paper we seek to detect rectangular cuboids and localize their corners in uncalibrated single-view images depicting everyday scenes. In contrast to recent approaches that rely on detecting vanishing points of the scene and grouping line segments to form cuboids, we build a discriminative parts-based detector that models the appearance of the cuboid corners and internal edges while enforcing consistency with a 3D cuboid model. Our model copes with different 3D viewpoints and aspect ratios and is able to detect cuboids across many different object categories. We introduce a database of images with cuboid annotations that spans a variety of indoor and outdoor scenes, and we show qualitative and quantitative results on the collected database. Our model outperforms baseline detectors that use 2D constraints alone on the task of localizing cuboid corners.

Paper

Jianxiong Xiao, Bryan C. Russell, and Antonio Torralba. Localizing 3D Cuboids in Single-view Images. Advances in Neural Information Processing Systems (NIPS), 2012.
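To make the resectioning step mentioned in the introduction concrete, the following is a minimal sketch (not the paper's released code) of recovering a 3x4 camera projection matrix from detected cuboid corners with the Direct Linear Transform. It assumes the eight corners of a canonical unit cube as the 3D model; the 2D corner coordinates in the usage example are hypothetical.

```python
import numpy as np

def resection_dlt(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P with pts2d ~ P @ [pts3d; 1] (DLT)."""
    assert len(pts3d) == len(pts2d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = np.array([X, Y, Z, 1.0])
        # Two linear constraints per 2D-3D correspondence.
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    A = np.stack(rows)
    # The least-squares solution of A p = 0 is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    # Remove the arbitrary DLT scale so the rotation part of the last row
    # of P has unit norm.
    return P / np.linalg.norm(P[2, :3])

# 3D model: the eight corners of a canonical unit cube.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
# Hypothetical detected 2D corner locations (pixels), in the same corner order.
corners2d = np.array([[320, 240], [388, 226], [302, 181], [371, 168],
                      [326, 151], [393, 139], [307, 96], [377, 86]],
                     dtype=float)
print(resection_dlt(cube, corners2d))
```

The left 3x3 block of the recovered P can then be decomposed (e.g., by RQ factorization) into intrinsics and rotation, which yields the camera and 3D shape information referred to above.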
SUN Primitive Database

The dataset contains annotations of four primitive shapes for RGB images, as well as cuboid annotations for RGB-D images.
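As an illustration only, the sketch below shows one way such per-image corner annotations could be read in Python if they were distributed as a MATLAB .mat file. The filename ("groundtruth.mat") and the field names ("annotation", "cuboids", "x", "y") are hypothetical placeholders, not the dataset's actual schema.

```python
import scipy.io as sio

# Hypothetical sketch: the filename and field names are placeholders,
# not the SUN Primitive Database's actual schema.
data = sio.loadmat("groundtruth.mat", squeeze_me=True, struct_as_record=False)
for cuboid in data["annotation"].cuboids:
    # Each cuboid is assumed to store its 2D corner coordinates in pixels.
    corners = list(zip(cuboid.x, cuboid.y))
    print(corners)
```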
Source Code

Source code is available on GitHub: https://github.com/brussell123/SUNprimitive

Usage:
The zip file is a snapshot of the latest source code on GitHub.

Files

Acknowledgments

Jianxiong Xiao was supported by a Google U.S./Canada Ph.D. Fellowship in Computer Vision. Bryan Russell was funded by the Intel Science and Technology Center for Pervasive Computing (ISTC-PC). This work was funded by ONR MURI N000141010933 and NSF CAREER Award No. 0747120 to Antonio Torralba.