February 4, 2026 14:00 CET
Live on Microsoft Teams.
On 4 February at 14:00 CET, the ESA Φ-Lab Collaborative Innovation Network will host a new Φ-talk. Details are below.
Meet the speaker
Prof. Dr. Damian Borth is director of the Institute of Computer Science at the University of St. Gallen, where he holds a full professorship in Artificial Intelligence and Machine Learning (AIML), and is currently a visiting professor at ESA’s Φ-Lab in Frascati, Italy. Previously, Damian was the founding director of the Deep Learning Competence Center at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern, where he was also PI of the NVIDIA AI Lab at the DFKI. His research focuses on representation learning in the weight spaces of neural networks and on multispectral imagery. His work has been recognised with the ACM SIGMM Test of Time Award 2023, the Google Research Scholar Award 2022, the NVIDIA AI Lab at GTC 2016, the Best Paper Award at ACM ICMR 2012, and the McKinsey Business Technology Award in 2011. Damian did his postdoctoral research at UC Berkeley and the International Computer Science Institute (ICSI) in Berkeley, where he was involved in big data projects with the Lawrence Livermore National Laboratory. He received his PhD from the University of Kaiserslautern and the German Research Center for Artificial Intelligence (DFKI). During that time, Damian was a visiting researcher at the Digital Video and Multimedia Lab at Columbia University in New York City, USA.
Talk abstract
Recent advances in remote sensing have led to a growing number of available foundation models, each trained on different modalities, datasets, and objectives, yet each capturing only part of the vast geospatial knowledge landscape. While these models show strong results within their respective domains, their capabilities remain complementary rather than unified. Instead of choosing one model over another, we therefore aim to combine their strengths into a single shared representation. We introduce GeoSANE, a geospatial model foundry that learns a unified neural representation from the weights of existing foundation models and task-specific models and can generate novel neural network weights on demand. Given a target architecture, GeoSANE generates weights ready for finetuning on classification, segmentation, and detection tasks across multiple modalities. Models generated by GeoSANE consistently outperform their counterparts trained from scratch, match or surpass state-of-the-art remote sensing foundation models, and outperform models obtained through pruning or knowledge distillation when generating lightweight networks. Evaluations across ten diverse datasets and on GEO-Bench confirm its strong generalization capabilities. By shifting from pre-training to weight generation, GeoSANE introduces a new framework for unifying and transferring geospatial knowledge across models and tasks.
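To give a flavour of what "learning a representation over model weights and generating new weights on demand" can mean, the toy PyTorch sketch below trains a simple autoencoder over the flattened weight vectors of a small model zoo and decodes a new weight vector into a fresh model for finetuning. This is purely illustrative and is not the GeoSANE implementation: the donor architecture, sizes, autoencoder design, and helper names are all assumptions.

```
# Illustrative sketch only -- NOT the actual GeoSANE method.
# Idea: (1) flatten the weights of several existing "donor" models,
#       (2) learn a shared latent representation over those weight vectors,
#       (3) decode a latent sample into weights for a target model,
#       (4) hand the initialised model off to task-specific finetuning.
import torch
import torch.nn as nn

def make_donor():
    # Tiny stand-in for an existing foundation or task-specific model.
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

def flatten_weights(model):
    # Concatenate all parameters into a single 1-D weight vector.
    return torch.cat([p.detach().flatten() for p in model.parameters()])

def load_flat_weights(model, flat):
    # Write a flat weight vector back into a model of the same architecture.
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(flat[offset:offset + n].view_as(p))
        offset += n

# "Model zoo": pretend these were trained on different datasets/objectives.
zoo = [make_donor() for _ in range(32)]
weight_vectors = torch.stack([flatten_weights(m) for m in zoo])
dim = weight_vectors.shape[1]

# A plain autoencoder over the weight space stands in for the learned
# unified representation.
latent = 32
encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

for step in range(200):
    recon = decoder(encoder(weight_vectors))
    loss = nn.functional.mse_loss(recon, weight_vectors)
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Generate" weights for a fresh target model of the same architecture.
with torch.no_grad():
    z = torch.randn(1, latent)        # sample a point in the learned space
    generated = decoder(z).squeeze(0)

target = make_donor()
load_flat_weights(target, generated)  # initialise from generated weights
# ... finetune `target` on the downstream classification/segmentation task ...
```

In the talk's setting, the generated weights would target real remote-sensing architectures and modalities rather than a toy MLP; the sketch only mirrors the overall workflow described in the abstract.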
Register here!