Welcome to 3DRealCar

The first large-scale 3D real car dataset, containing 2,500 cars captured in real-world scenes. We hope 3DRealCar can serve as a valuable resource for promoting car-related tasks.


About

3D cars are commonly used in self-driving systems, virtual/augmented reality, and games. However, existing 3D car datasets are either synthetic or low-quality, leaving a significant gap between them and high-quality, real-world 3D car datasets and limiting their applicability in practical scenarios.


In this paper, we propose the first large-scale 3D real car dataset, termed 3DRealCar, offering three distinctive features. (1) High-Volume: 2,500 cars are meticulously scanned by 3D scanners, obtaining car images and point clouds with real-world dimensions; (2) High-Quality: Each car is captured in an average of 200 dense, high-resolution 360-degree RGB-D views, enabling high-fidelity 3D reconstruction; (3) High-Diversity: The dataset contains various cars from over 100 brands, collected under three distinct lighting conditions, including reflective, standard, and dark. Additionally, we offer detailed car parsing maps for each instance to promote research in car parsing tasks.


Moreover, we remove background point clouds and standardize the car orientation to a unified axis, so that reconstruction focuses on the car itself, free of background, and supports controllable rendering. We benchmark state-of-the-art 3D reconstruction methods under each lighting condition in 3DRealCar. Extensive experiments demonstrate that the standard lighting portion of 3DRealCar can be used to produce a large number of high-quality 3D cars, improving various car-related 2D and 3D tasks. Notably, our dataset reveals that recent 3D reconstruction methods struggle to reconstruct high-quality 3D cars under reflective and dark lighting conditions.
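The canonicalization step described above (dropping background points and aligning each car to a unified axis) can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline: it assumes the point cloud is an `(N, 3)` NumPy array with a precomputed foreground mask, and uses PCA to align the car's longest axis with x; the function name and signature are hypothetical.

```python
import numpy as np

def canonicalize_car(points: np.ndarray, car_mask: np.ndarray) -> np.ndarray:
    """Drop background points and rotate the car so its longest axis
    (length) aligns with x, its width with y, and its height with z.

    points   -- (N, 3) array of xyz coordinates
    car_mask -- (N,) boolean array, True for points on the car

    NOTE: illustrative sketch only; the real 3DRealCar pipeline may
    differ (e.g. it could use annotated car poses instead of PCA).
    """
    car = points[car_mask]
    car = car - car.mean(axis=0)            # center at the origin
    # PCA: eigenvectors of the covariance matrix give the principal axes
    eigvals, eigvecs = np.linalg.eigh(np.cov(car.T))
    # eigh returns eigenvalues in ascending order; reverse so the
    # direction of largest variance (car length) maps to the x axis
    order = np.argsort(eigvals)[::-1]
    R = eigvecs[:, order]
    if np.linalg.det(R) < 0:                # keep a right-handed frame
        R[:, -1] *= -1
    return car @ R                          # coordinates in the canonical frame
```

One caveat of pure PCA alignment is a front/back sign ambiguity along each axis; a real pipeline would resolve it with semantic cues (e.g. the hood vs. trunk parsing labels).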

Distributions

Our dataset mainly contains six different car types. We also count the number of cars under each lighting condition. The standard condition means the car is well-lit without strong specular highlights. The reflective condition means the car exhibits specular highlights; such glossy materials pose a major challenge to recent 3D reconstruction methods. The dark condition means the car is captured in a poorly lit underground parking garage. Each car is captured in an average of 200 images, with the number of views ranging from 50 to 400. Our dataset contains more than twenty colors, though white and black still account for the majority. In addition, we show the distribution of car sizes in terms of length, width, and height.

Supported Tasks

Since our dataset provides RGB-D images, point clouds, car parsing maps, and detailed annotations, it supports a variety of 2D and 3D tasks. Specifically, the car parsing maps enable car detection, segmentation, and parsing tasks, while the captured RGB-D images support depth estimation. Because we collect diverse car types with varied appearances, researchers can also use our dataset for domain transfer learning across car types. As for 3D tasks, our dense views and point clouds can be employed for 3D reconstruction, 3D generation, novel view synthesis, vehicular point cloud completion, and vehicular point cloud parsing. With reconstructed 3D cars, one can simulate corner-case scenarios for training a robust self-driving perception system.

3D Car Parsing

Our dataset is the first to provide 3D car parsing annotations for parsing car components in 3D space.

Because we provide 2D car parsing maps for every instance in our 3DRealCar dataset, we can lift these 2D maps to 3D and segment each component of the point clouds and meshes. The primary purpose of these 3D car parsing maps is to enable precise and comprehensive analysis of vehicle structures, which is crucial for applications such as autonomous driving, vehicle design, vehicle editing, and virtual reality simulations. By using these detailed 3D parsing maps, developers and researchers can improve object recognition algorithms and enhance collision detection systems. Furthermore, this dataset facilitates the training of machine learning models to better understand the spatial relationships and physical attributes of car components, leading to more advanced and reliable automotive technologies.
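The 2D-to-3D lifting step can be sketched with a standard pinhole projection: each 3D point is projected into a calibrated view and assigned the parsing label at the pixel it lands on. This is an illustrative single-view sketch (a real pipeline would aggregate labels across many views, e.g. by majority vote, and handle occlusion); the function name and the intrinsics/extrinsics conventions are assumptions, not the dataset's actual API.

```python
import numpy as np

def lift_labels_to_points(points, K, R, t, parsing_map):
    """Assign each 3D point the 2D parsing label it projects onto.

    points      -- (N, 3) car points in world coordinates
    K           -- (3, 3) camera intrinsics
    R, t        -- world-to-camera rotation (3, 3) and translation (3,)
    parsing_map -- (H, W) integer label image for this view
    Returns an (N,) label array; -1 where a point falls outside the
    image or behind the camera.

    NOTE: hypothetical single-view sketch; multi-view fusion and
    occlusion handling are omitted for brevity.
    """
    H, W = parsing_map.shape
    cam = points @ R.T + t                 # world -> camera frame
    z = cam[:, 2]
    labels = np.full(len(points), -1, dtype=np.int64)
    front = z > 1e-6                       # keep points in front of the camera
    uv = cam[front] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(front)[inside]
    labels[idx] = parsing_map[v[inside], u[inside]]
    return labels
```

Running this per view and taking a per-point majority vote over the roughly 200 views would yield a consistent 3D parsing of the point cloud.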

Reconstructed Results

We show visualizations of our dataset reconstructed with a recent state-of-the-art 3D reconstruction method, 3DGS (Gaussian Splatting). Under the standard lighting condition, 3DGS is capable of reconstructing relatively high-quality 3D cars from our dataset; this level of reconstruction quality is sufficient for rendering in downstream tasks. However, the results under the reflective and dark conditions are not promising. These two parts of 3DRealCar therefore pose two challenges to recent 3D methods.


The first challenge is the reconstruction of specular highlights. Car surfaces are generally glossy, so they produce abundant specular highlights when exposed to sunlight or strong artificial light.


The second challenge is reconstruction in a dark environment. Training images captured in the dark lose many of the details needed for reconstruction. How to achieve high-quality reconstruction under these two extreme lighting conditions remains an open challenge for recent methods.


We hope these results encourage subsequent research on 3D reconstruction under challenging lighting conditions.

Standard
Reflective
Dark