March 4, 2026 14:00 CET
Live on Microsoft Teams.
On 4 March at 14:00 CET, the ESA Φ-Lab Collaborative Innovation Network will host a new Φ-talk. Details are below.
Meet the speaker
Arthur Ouaknine is a postdoctoral research fellow at McGill University and Mila – Quebec AI Institute, where he works on multimodal and multitask deep learning for remote sensing and biodiversity monitoring. He holds the IVADO Postdoc Entrepreneur Fellowship, which supports his work in forest monitoring. He received his PhD in computer science from the Institut Polytechnique de Paris. He previously co-founded and served as CTO of Rubisco AI, a startup focused on monitoring forest restoration projects. He was also a core team member of Climate Change AI, where he co-organised workshops at ICLR 2024 and NeurIPS 2024.
Talk abstract
Forest ecosystems are biodiversity hotspots and critical carbon sinks that require continuous monitoring to assess ecological health and mitigate environmental threats. Forest monitoring exposes key open challenges for machine learning in remote sensing: success requires generalisation across sensor modalities, spatial resolutions, and tasks under strong distribution shifts. In this talk, we present recent work that frames forest monitoring as a practical testbed for generalisation evaluation, representation learning, and transfer across sensors and scales.
We first focus on satellite remote sensing by introducing a large-scale benchmark that aggregates datasets into a unified evaluation framework spanning diverse sensors, resolutions, and downstream tasks. The benchmark includes a new dataset for species distribution estimation, along with a global training approach serving as a baseline. We then present a case study showing how self-supervised learning adapted to hyperspectral data improves vegetation trait prediction, and why a modality-driven approach matters when generalisation is evaluated under realistic sensor domain shifts.
We then turn to drone remote sensing, notably centimetre-level imagery, where the growing availability of drone data is making individual-tree characterisation feasible at scale. We present a generalist individual tree crown detector designed for robust transfer across sites and acquisition conditions, and a method that leverages vision foundation models for tree crown segmentation and tree species classification.
We conclude with future directions for ML-driven forest monitoring: building foundation models tailored to drone imagery, developing methods to address severe label scarcity for tree species, and exploring cross-scale transfer.
Register here!