Project Summary
- The project explored implicit neural representations for 3D lidar point cloud processing, namely ground surface extraction and completion.
- In particular, we investigated automatic surface extraction and densification from 3D lidar point clouds, e.g. for ground surface reconstruction or semantic surface extraction.
- We developed different workflows to explore various aspects of surface extraction and completion from 3D lidar point clouds.
- The project is still ongoing and results will be published in the future.
Development Tools
- We designed and used deep neural networks (implicit neural representations) adapted to the tasks, implemented with TensorFlow and PyTorch and trained on a GPU.
- To conduct the experiments, we used different datasets: lidar point clouds from the University of Rennes, and DALES, an open-access lidar dataset used to benchmark point cloud processing approaches.
- Several public GitHub repositories were used: fourier-feature-networks, pointnet, and inr4torch (a sketch of the Fourier feature encoding follows this list).
- We also used CloudCompare to manipulate and visualize the 3D point clouds.
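For context, the fourier-feature-networks repository implements the random Fourier feature encoding of Tancik et al., which lets a coordinate MLP fit high-frequency surfaces. A minimal PyTorch sketch of that encoding follows; the projection size and scale below are illustrative choices, not values used in this project.

```python
import torch

def fourier_features(coords, B):
    """Random Fourier feature encoding gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)],
    as in fourier-feature-networks (Tancik et al.).
    coords: (N, d) tensor of input positions (e.g. normalized X, Y).
    B:      (d, m) Gaussian projection matrix, sampled once and kept fixed.
    Returns an (N, 2m) encoding fed to the downstream MLP."""
    proj = 2.0 * torch.pi * coords @ B
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

# Example usage: encode 2D planimetric coordinates before an MLP.
B = torch.randn(2, 256) * 10.0   # the scale (10.0 here) controls the bandwidth
xy = torch.rand(1024, 2)
feats = fourier_features(xy, B)  # shape (1024, 512)
```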
Development Outputs
- The project methods and results are not yet published and thus not publicly accessible.
- Once published, the methods will be released on GitHub and described in scientific publications.
Project Description
In natural environments, the availability of lidar data opens opportunities to improve ecological assessments that are essential to climate change mitigation (biomass estimation, carbon storage assessment, natural flow attenuation, etc.).
To fulfill these objectives, the precise geometrical information contained in lidar point clouds often needs to be processed to distinguish the ground from the objects that cover it, such as vegetation or infrastructure. Point misclassification and the lack of backscatter in densely vegetated areas hinder the automatic generation of fine-resolution digital terrain models: low vegetation points may be mistaken for ground points, requiring manual correction, and areas with no ground returns must be interpolated.
Figure 1: The problem of missing data under vegetation.
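For reference, the conventional pipeline this motivates improving is classify-then-interpolate: keep the points labeled as ground and interpolate them onto a regular grid. A minimal sketch with placeholder data (the coordinates and classification below are random, purely to show the shapes involved) might look like:

```python
import numpy as np
from scipy.interpolate import griddata

# Placeholder point cloud and ground/non-ground classification;
# in practice these come from a classified lidar tile.
xyz = np.random.rand(10_000, 3)
is_ground = np.random.rand(10_000) > 0.5

# Classify-then-interpolate DTM baseline: gaps under dense vegetation
# contain no ground points and are filled purely by interpolation.
ground = xyz[is_ground]
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
dtm = griddata(ground[:, :2], ground[:, 2], (gx, gy), method="linear")
```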
In bathymetric environments, the inability to detect the water bottom in turbid or deep areas greatly limits the application possibilities of bathymetric and topo-bathymetric lidar sensors. Full-waveform reanalysis allows the densification and completion of lidar point clouds in such cases. However, full-waveform data are massive and complex to handle, and their reanalysis does not exploit the geometrical context provided by the point clouds.
Figure 2: Reaching deeper seabed using full-waveform bathymetric lidar data.
Being able to derive more information about the ground and water bottom from existing surveys would be major progress for the automatic generation of digital terrain models and bathymetric models.
With this project, we developed new methods based on deep neural networks to extract and complete surfaces from lidar point clouds. We explored automatic below-vegetation ground completion and seabed/riverbed/lake bed completion in natural areas, as well as the extraction of multiple surfaces representing different semantic classes in urban areas. The objective was to assess the potential of deep neural networks for improving ground extraction and completion, and to evaluate which other surfaces can be automatically derived from point clouds.
We conducted experiments in various settings:
- Airborne topo-bathymetric lidar point clouds of riverine environments, where both the ground below riparian vegetation and the riverbed need to be completed
Figure 3: Typical profile of a topo-bathymetric lidar point cloud acquired over a river (data source: Lidar platform, University of Rennes, OSUR).
- Airborne topographic lidar point clouds of mountainous areas, where the challenge is to handle complex topography with varying slopes and vegetation
Figure 4: Example of lidar point cloud of a mountainous area, colored by elevation (data source: OpenGF dataset).
- Airborne topographic lidar point clouds of (sub-)urban areas, where buildings, vegetation, power lines or poles can be challenging to handle.
Figure 5: Example of urban area surveyed with airborne lidar (data source: DALES dataset).
To experiment with surface extraction and completion, we used data acquired by the Lidar Platform of the University of Rennes (not open-access for now), as well as open-access datasets, among them the DALES dataset, a benchmark for 3D point cloud (semantic) segmentation, and the OpenGF dataset, a benchmark for ground filtering.
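As an illustration, lidar tiles in LAS/LAZ format, the usual distribution format for such datasets, can be loaded into NumPy arrays with the laspy library (the file name below is hypothetical):

```python
import laspy
import numpy as np

# Read one lidar tile (the file name is hypothetical) and stack the
# coordinates into an (N, 3) array of X, Y, Z positions.
las = laspy.read("tile_example.las")
xyz = np.vstack([las.x, las.y, las.z]).T

# Airborne lidar datasets usually also carry per-point class labels
# (ground, vegetation, building, ...) in the LAS classification field.
labels = np.asarray(las.classification)
print(xyz.shape, labels.shape)
```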
We then developed adapted deep neural networks that take the raw lidar point cloud as input (an array of X, Y, Z positions, optionally with a per-point label depending on the experiment) and output the desired surface(s). Specific loss functions were designed to optimize the generation of the surfaces and to adapt it to the specificities of the application and of the lidar data (multiple returns, multiple recorded surface cover layers, etc.).
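The exact architectures and loss functions are not yet published. Purely as an illustrative sketch, an implicit surface model can be a coordinate MLP f(x, y) → z fitted directly to the points, here with a hypothetical asymmetric loss that penalizes points below the predicted surface more than points above it (which are more likely vegetation); every name, size, and weight below is an assumption, not the project's method.

```python
import torch
import torch.nn as nn

class ImplicitSurface(nn.Module):
    """Coordinate MLP mapping planimetric (x, y) to an elevation z.
    Depth, width, and activation are illustrative, not the project's."""
    def __init__(self, in_dim=2, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, xy):
        return self.net(xy).squeeze(-1)

def asymmetric_ground_loss(z_pred, z_obs, above_weight=0.1):
    """Hypothetical loss: residuals of points lying above the predicted
    surface (more likely vegetation) are down-weighted, so the fitted
    surface settles on the lowest consistent layer of points."""
    res = z_obs - z_pred
    w = torch.where(res > 0, torch.full_like(res, above_weight),
                    torch.ones_like(res))
    return (w * res ** 2).mean()

# One optimization step on random placeholder points (shapes only).
model = ImplicitSurface()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xy, z = torch.rand(4096, 2), torch.rand(4096)
loss = asymmetric_ground_loss(model(xy), z)
opt.zero_grad(); loss.backward(); opt.step()
```

Querying the trained MLP on a dense, regular grid of (x, y) positions then yields a continuous, gap-free surface, which is what makes implicit representations attractive for the completion tasks described above.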
Preliminary results suggest that our approach could be useful to:
- Derive surface cover models (a continuous version of the Digital Surface Model)
- Densify existing point cloud semantic segmentations and generate continuous surfaces for each class
- Derive and complete ground surfaces (generalizability and full quality assessments still need to be completed)
- Derive and complete multiple surfaces at once, corresponding to the ground, the riverbed, and the canopy (a full quantitative evaluation remains to be completed)
The project is still ongoing and final methods and results are not yet available.
In the future, a potential research direction would be to incorporate the lidar waveforms, when available, as an additional input. This was originally envisioned for this collaboration, but initial findings obtained from the 3D points alone encouraged us to explore the point-only approach further first.