
Recent years have witnessed an increasing interest in the 3D reconstruction of urban scenes from stereo satellite images. While until recently the quality of satellite imagery, coupled with existing methodologies, did not allow 3D city models to be produced automatically at high spatial resolution, the very-high-resolution commercial satellites (WorldView, Pléiades) launched in the last decade acquire high-quality stereo images all over the Earth, with a spatial resolution of up to 30 cm/pixel. This boosted the development of stereo reconstruction methods in the remote sensing community. One of the first methods for urban scene reconstruction in LOD1 (a model in which buildings have flat roofs) used a semi-global matching (SGM) technique to find correspondences in a stereo pair of epipolar images, followed by a joint classification coupling image radiometry with the estimated elevation to retrieve 3D city models. Even though this method offered a solution for 3D urban reconstruction at a large scale, small geometries could not be captured precisely. The recently released benchmarks for large-scale semantic 3D reconstruction have further intensified research on this topic.

In this paper, we propose an operational pipeline for large-scale 3D reconstruction of buildings from stereo satellite images. The proposed chain uses U-net to extract contour polygons of buildings, and a combination of optimisation and computational geometry techniques to reconstruct a digital terrain model and a digital height model and to correctly estimate the position of building footprints. The pipeline has proven to be efficient for 3D building reconstruction, even if a close-to-nadir image is not available.
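To make the correspondence step concrete, the following minimal sketch computes a disparity map from a rectified (epipolar) stereo pair with OpenCV's StereoSGBM, a widely used implementation of semi-global matching. The file names and matcher parameters are illustrative assumptions, not the settings of the method described above.

```python
import cv2
import numpy as np

# Load a rectified (epipolar) stereo pair; file names are placeholders.
left = cv2.imread("left_epipolar.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_epipolar.tif", cv2.IMREAD_GRAYSCALE)

# Semi-global matching variant; parameters are illustrative, not tuned
# for satellite imagery.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # penalty for small disparity changes
    P2=32 * 5 * 5,        # penalty for large disparity changes
    uniquenessRatio=10,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Invalid matches are marked below the minimum disparity; mask them out
# before converting disparities to heights with the sensor model.
valid = disparity >= matcher.getMinDisparity()
```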
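The footprint-extraction step can be sketched in a similar spirit: assuming a building-probability mask of the kind a U-net segmentation network outputs, contour polygons can be recovered and simplified with OpenCV. The input file, the 0.5 threshold, and the 2-pixel simplification tolerance are hypothetical choices for illustration.

```python
import cv2
import numpy as np

# Hypothetical H x W float map of per-pixel building probabilities,
# as produced by a U-net-style segmentation network.
prob = np.load("building_probability.npy")
mask = (prob > 0.5).astype(np.uint8)

# Extract the outer contour of each connected building region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Douglas-Peucker simplification yields compact footprint polygons.
polygons = [cv2.approxPolyDP(c, 2.0, True) for c in contours]
```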


Automatic 3D reconstruction of urban scenes from stereo pairs of satellite images remains a popular yet challenging research topic, driven by numerous applications such as telecommunications and defence. The quality of the reconstruction results depends particularly on the quality of the available stereo pair.
