Overview of our proposed procedural generation pipeline. We first acquire and annotate 3D car models, and then apply procedural generation to manipulate the texture and shape of each car to synthesize different damage types. Subsequently, we place a camera in the scene and assign a scene environment and a car color. Finally, we render a 2D image paired with pixel-perfect ground-truth annotations for parts and damages. Given a set of part-annotated 3D car models, this process is fully automatic, allowing us to render an arbitrarily large amount of data.
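To make the structure of this loop concrete, below is a minimal Python sketch of the sampling stage described above. It is illustrative only: all names (`Sample`, `sample_scene`, `render_pair`, the damage/environment/color lists) are hypothetical stand-ins, not the authors' actual code, and the render step is stubbed out where a real pipeline would invoke a 3D engine.

```python
# Hypothetical sketch of the per-image sampling loop. Everything here
# (type names, damage/environment/color lists) is an assumption for
# illustration, not the CrashCar101 implementation.
import random
from dataclasses import dataclass

DAMAGE_TYPES = ["dent", "scratch", "crack", "glass_shatter"]  # assumed examples
ENVIRONMENTS = ["street", "parking_lot", "garage"]            # assumed examples
CAR_COLORS = ["red", "blue", "white", "black", "silver"]      # assumed examples


@dataclass
class Sample:
    model_id: str           # which part-annotated 3D car model to load
    damage: str             # synthetic damage applied to texture/shape
    environment: str        # scene environment assigned to the render
    color: str              # car body color
    camera_azimuth: float   # camera placement around the car (degrees)
    camera_elevation: float


def sample_scene(model_id: str) -> Sample:
    """Randomly configure one render: damage, environment, color, camera."""
    return Sample(
        model_id=model_id,
        damage=random.choice(DAMAGE_TYPES),
        environment=random.choice(ENVIRONMENTS),
        color=random.choice(CAR_COLORS),
        camera_azimuth=random.uniform(0.0, 360.0),
        camera_elevation=random.uniform(5.0, 45.0),
    )


def render_pair(cfg: Sample) -> tuple[str, str]:
    """Stub: a real pipeline would render the RGB image plus the paired
    pixel-perfect part/damage masks here (e.g. via a 3D engine)."""
    image_path = f"{cfg.model_id}_{cfg.damage}.png"
    mask_path = f"{cfg.model_id}_{cfg.damage}_mask.png"
    return image_path, mask_path


if __name__ == "__main__":
    # Fully automatic: loop over annotated models and render as many
    # randomized configurations as desired.
    for model_id in ["car_000", "car_001"]:
        for _ in range(3):
            cfg = sample_scene(model_id)
            print(render_pair(cfg))
```

Because every factor (damage, environment, color, camera pose) is sampled independently, the number of distinct renders grows combinatorially with the number of annotated models, which is what makes the dataset size effectively unbounded.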
@InProceedings{parslov_2024_WACV,
    author    = {Parslov, Jens and Riise, Erik and Papadopoulos, Dim P.},
    title     = {CrashCar101: Procedural Generation for Damage Assessment},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {}
}