Benchmark Semantic Segmentation of High-Resolution 3D Point Clouds and Meshes
Automated extraction of geographic objects from airborne data has been an important research topic in photogrammetry and remote sensing for decades. In addition to imagery, 3D point clouds from airborne LiDAR and multi-view stereo image matching have become increasingly important as basic data sources. The aim of H3D is to provide state-of-the-art data sets to the community, which interested researchers can use to test their own methods and algorithms for semantic segmentation in geospatial applications. We propose a benchmark consisting of highly dense LiDAR point clouds captured at four different epochs. The respective point clouds are manually labeled into 11 classes and are used to derive labeled, textured 3D meshes as an alternative representation. Core features of H3D are:
- UAV-based simultaneous data collection of both LiDAR data and imagery from the same platform
- High-density LiDAR data of 800 points/m², enriched with RGB colors from onboard cameras with a GSD of 2-3 cm → H3D(PC)
- High-resolution textured 3D mesh data generated from both LiDAR data and imagery in a hybrid manner → H3D(Mesh)
- Manually set labels for the LiDAR point cloud, which are automatically transferred to the 3D mesh
- Multi-temporal data set covering 4 different epochs. While 3 of them (March 2018, November 2018, and March 2019) were captured over the same area with the same high-resolution sensor configuration, H3D also includes a LiDAR-only epoch (March 2016) captured from a manned aircraft, with characteristics typical of national mapping agency LiDAR data.
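The transfer of manual point-cloud labels to the mesh can be pictured as a nearest-neighbor vote. The sketch below is an illustrative assumption, not H3D's actual transfer pipeline: each mesh face receives the majority label of the k labeled LiDAR points closest to its centroid. It uses brute-force search for clarity; at H3D's point density, a spatial index such as a k-d tree would be used instead.

```python
import math
from collections import Counter

def transfer_labels(points, labels, face_centroids, k=5):
    """Assign each mesh face the majority class label of the k
    labeled points nearest to its centroid (hypothetical helper;
    brute-force distance search for readability)."""
    face_labels = []
    for c in face_centroids:
        # Indices of the k points closest to this face centroid.
        nearest = sorted(range(len(points)),
                         key=lambda i: math.dist(points[i], c))[:k]
        # Majority vote over the neighboring point labels.
        vote = Counter(labels[i] for i in nearest)
        face_labels.append(vote.most_common(1)[0][0])
    return face_labels

# Toy example: two clusters of labeled points, two face centroids.
pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),
       (5, 5, 0), (5.1, 5, 0), (5, 5.1, 0)]
lbl = [0, 0, 0, 1, 1, 1]
faces = [(0.05, 0.05, 0), (5.05, 5.05, 0)]
print(transfer_labels(pts, lbl, faces, k=3))  # → [0, 1]
```

A majority vote over several neighbors makes the transfer robust to single mislabeled or stray points near class boundaries.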