
Tutorials

The role of the tutorials is to provide a platform for a more intensive scientific exchange amongst researchers interested in a particular topic and as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.

Tutorial proposals are accepted until:

April 10, 2026


If you wish to propose a new Tutorial, please fill out and submit this Expression of Interest form.



Tutorial on
Living Earth: Novel and scalable approaches to mapping, monitoring and planning environments


Instructor

Richard Lucas
Aberystwyth University
United Kingdom
 
Brief Bio
Professor Richard Lucas (Aberystwyth University’s Department of Geography and Earth Science) has over 35 years of experience in multi-scale temporal characterisation of primarily vegetated environments from Earth observation data in support of ecological, biogeographical, carbon cycle and climate science, obtained primarily through academic research/teaching and government-related positions in Australia and the UK. He continues to lead the conceptual development and implementation of the globally applicable Living Earth approach for consistent characterisation, mapping and monitoring of environments from spaceborne data and has made significant contributions to the generation of global products including forest extent (with the Japan Aerospace Exploration Agency (JAXA)), woody above ground biomass (with the European Space Agency’s (ESA) Climate Change Initiative) and mangrove extent and change (the Global Mangrove Watch, with Wetlands International and The Nature Conservancy). At national to regional levels, he has developed innovative products on secondary forests, vegetation structure and land cover (including in Amazonia, Africa, Australia and Wales) that have increased understanding of ecosystem states and dynamics and environmental change. He has contributed to policy and land management agendas and engaged in public understanding of Earth observation science and global environmental issues.
Abstract

Living Earth is a novel approach to characterizing, mapping, monitoring, and planning landscapes, including underwater environments. Living Earth is based on globally applicable but locally relevant taxonomies, namely the Food and Agriculture Organisation (FAO) Land Cover Classification System (LCCS) and a Global Change Taxonomy (GCT) that builds on the Driver-Pressure-State-Impact-Response (DPSIR) framework. Maps of cover (land and water) are constructed from Environmental Descriptors (EDs) with pre-defined units or categories that are retrieved or classified primarily from Earth observation (EO) data. Habitat maps can be translated directly from these cover categories and through reference to contextual information (e.g., elevation, soil acidity). Evidence for change across all environments is gathered through comparison of selected EDs between time-separated periods and targets the 77 impact categories of the GCT. Each is then linked to relevant driving pressures (144 in total) to facilitate routine and consistent mapping of up to 246 ‘impact (pressure)’ categories. Validation of cover (land and water), habitats and change impacts and pressures can be achieved by referencing records submitted in the field using the Earthtrack mobile application, which uses the same taxonomies. The Response term of DPSIR supports the planning of future landscapes, and Living Earth facilitates visualization of land covers (including underwater components) proposed at selected time points (e.g., 2030, 2050) through use of the FAO LCCS and of the simultaneous and/or sequential pressures needed to achieve goals and ambitions. Through reference to EDs and past landscapes and change, consideration is given to risks, values and the realism of futures. The tutorial introduces the concepts behind Living Earth through short presentations and showcases practical application in selected countries through case studies implemented within Jupyter notebooks.

Keywords

land cover, habitats, change, futures

Aims and Learning Objectives

To introduce the concepts behind Living Earth and the practicalities of implementation. The learning objectives are to allow attendees to i) extract relevant environmental descriptors from Earth observation data, ii) use these to construct classifications of cover (land and water), iii) automatically map change impacts and driving pressures, and iv) provide validation measures based on field observations obtained using the Earthtrack mobile application. The final objective is to show how the taxonomies used to characterise, map and monitor past landscapes can be directed towards future planning.

Target Audience

Policy makers, managers of land and water environments, academics and educators, students.

Prerequisite Knowledge of Audience

None required, although some familiarity with running Jupyter notebooks is desirable.

Detailed Outline

The tutorial will run for six hours in total (1 day).

Morning:
Introduction to Living Earth - mapping environmental states (30 minutes)

The remainder of the session will then be a combination of short presentations (15 mins each) followed by practicals undertaken using Jupyter notebooks.

Analysis Ready Data (ARD). An introduction to commonly used satellite sensor data (e.g., from the Sentinels, Landsat) and the requirement for ARD.

Environmental Descriptors (EDs). An overview of those that can be retrieved or classified from Earth observation data and how these fit in with the Food and Agriculture Organisation (FAO) Land Cover Classification System (LCCS).

Classification and visualisation of covers (land and water). A step-by-step guide to constructing cover maps from EDs with continuous units or categories and providing consistent cartography.


Translation to habitats. A demonstration of how cover maps can be translated to habitat categories and the relevance of contextual information.
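The cover-classification step above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the ED layers, thresholds and class names are invented for the example and are not Living Earth's actual rules.

```python
import numpy as np

# Hypothetical environmental descriptor (ED) layers for a tiny 2x2 scene;
# in practice these would be retrieved or classified from EO data.
canopy_cover = np.array([[80.0, 10.0], [55.0, 2.0]])   # percent
water_frac   = np.array([[0.0, 0.9], [0.0, 0.0]])      # fraction inundated
veg_height   = np.array([[12.0, 0.5], [6.0, 0.2]])     # metres

# Rule-based translation of EDs into broad LCCS-style cover classes
# (illustrative thresholds only).
cover = np.full(canopy_cover.shape, "bare", dtype=object)
cover[water_frac > 0.5] = "water"
woody = (canopy_cover >= 15) & (veg_height >= 5) & (water_frac <= 0.5)
cover[woody] = "woody vegetation"
herb = (canopy_cover >= 15) & (veg_height < 5) & (water_frac <= 0.5)
cover[herb] = "herbaceous vegetation"

print(cover)
```

The same per-pixel map, combined with contextual layers such as elevation, is what the subsequent translation to habitat categories would operate on.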

Afternoon
Introduction to Living Earth - mapping the past and planning the future.

Change impacts. An illustration of how evidence is gathered for impacts listed in the Global Change Taxonomy.

Driving Pressures: Insights into how the pressures causing the impacts can be captured and integrated to provide 'impact (pressure)' categories.
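The comparison of EDs between time-separated periods can be sketched as follows. The descriptor, thresholds and impact labels here are illustrative stand-ins, not actual Global Change Taxonomy categories.

```python
import numpy as np

# Hypothetical canopy-cover ED (percent) at two time-separated periods.
cover_t1 = np.array([[85.0, 70.0], [40.0, 5.0]])
cover_t2 = np.array([[20.0, 68.0], [42.0, 60.0]])

delta = cover_t2 - cover_t1

# Evidence-based flags in the spirit of the Global Change Taxonomy:
# a large loss of woody cover suggests an impact such as deforestation,
# a large gain suggests revegetation (labels and thresholds illustrative).
impact = np.full(delta.shape, "no evidence of change", dtype=object)
impact[delta <= -30] = "woody cover loss"
impact[delta >= 30] = "woody cover gain"
print(impact)
```

Each flagged impact would then be linked to a driving pressure to form an 'impact (pressure)' category.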

Futures: An overview of how the taxonomies of cover and change can be exploited to co-design future landscapes and plan the steps needed for fulfilment.

The Earthtrack mobile application: A walk through of how the Earthtrack mobile application is structured, how data on cover, habitats, change and futures can be captured and how these can be accessed.





Secretariat Contacts
e-mail: gistam.secretariat@insticc.org

Tutorial on
3D Machine Learning for Spatial AI Systems: From Point Clouds to Intelligent Environments


Instructor

Florent Poux
University of Liège, 3D Geodata Academy
France
 
Brief Bio
Dr. Florent Poux is a spatial sciences researcher and educator whose career spans the complete pipeline from field operations to AI innovation. Beginning as a LiDAR field engineer and land surveyor, he progressed through academic positions (Ph.D. in Spatial Sciences, PostDoc in Computer Graphics, Professor in Spatial AI) before leading innovation initiatives as Director of Innovation for a FrenchTech 120 company and founding multiple ventures that secured significant funding. His research focuses on intelligent processing of 3D point cloud data, with publications spanning semantic segmentation, 3D deep learning architectures, and spatial AI systems. He is the author of "3D Data Science with Python" (O'Reilly Media, 2025), a comprehensive 687-page guide to point cloud processing and spatial intelligence workflows. Through the 3D Geodata Academy, Dr. Poux has trained over 2,000 professionals in 3D data science techniques, with students successfully applying these methods in construction, robotics, urban planning, and environmental monitoring sectors. His technical tutorials on Medium have reached millions of readers, establishing him as a leading voice in practical 3D machine learning education.
Abstract

The convergence of 3D sensing technologies and machine learning is reshaping how we capture, understand, and interact with spatial environments. From autonomous vehicles navigating complex urban landscapes to construction sites monitored through digital twins, the ability to process and interpret 3D point cloud data has become a critical competency for spatial intelligence systems.

This hands-on tutorial bridges the gap between traditional GIS workflows and modern 3D machine learning techniques. Participants will gain practical experience implementing production-ready pipelines that transform raw point cloud data into actionable spatial intelligence. Rather than focusing solely on theoretical foundations, this tutorial emphasizes the judgment and decision-making skills that distinguish expert practitioners—understanding when to apply classical algorithms versus deep learning approaches, how to structure data for robust model training, and strategies for deploying 3D AI systems in real-world environments.

The tutorial presents proven workflows for 3D semantic segmentation, object detection, graph construction, and AI-assisted scene understanding.
Participants will work with real-world datasets spanning indoor environments, urban landscapes, and infrastructure inspection scenarios.

By the conclusion, attendees will possess both the technical skills to implement 3D ML systems and the professional judgment to architect solutions for their specific domain applications.


Keywords

3D Machine Learning, Spatial AI, Point Cloud Processing, Deep Learning for GIS, LiDAR Analytics, Semantic Segmentation, 3D Scene Understanding, Digital Twins, Python for Spatial Data

Aims and Learning Objectives

Primary Aim: Equip GIS researchers and practitioners with production-ready 3D machine learning skills, bridging the gap between traditional spatial analysis and modern AI-driven approaches for processing massive point cloud datasets.

Upon completion, participants will be able to:

1. Structure 3D point cloud data for machine learning: Transform heterogeneous spatial datasets (LiDAR, photogrammetry, depth sensors) into model-ready formats using NumPy-based workflows and Open3D pipelines.

2. Implement 3D semantic segmentation systems: Build and deploy systems for classifying spatial elements in urban, indoor, and infrastructure contexts.

3. Apply 3D object detection for spatial analysis: Detect and localize objects within point clouds using both classical algorithms (RANSAC, DBSCAN clustering) and learned approaches.

4. Design end-to-end Spatial AI pipelines: Architect complete workflows from data acquisition through inference, with strategies for handling real-world challenges including noise, occlusion, and scale variation.

5. Exercise professional judgment in algorithm selection: Critically evaluate when deep learning outperforms classical methods, understanding computational trade-offs and deployment constraints for production environments.
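Objective 1 above (NumPy-based structuring of point clouds) can be sketched as follows, using a hard-coded array in place of a real LiDAR file. The local-origin shift and unit-sphere normalisation are common conventions, not a prescribed pipeline.

```python
import numpy as np

# Hypothetical point cloud as an (N, 3) array of XYZ coordinates;
# a real pipeline would read these from a .las/.laz or .ply file.
points = np.array([[500000.2, 4649000.1, 102.5],
                   [500001.7, 4649002.4,  98.3],
                   [500003.1, 4649001.0, 110.8]])

# Large UTM coordinates lose precision in float32 models, so shift
# to a local origin before feeding a network.
origin = points.min(axis=0)
local = (points - origin).astype(np.float32)

# Scale to a unit sphere -- a common normalisation for 3D ML models.
local -= local.mean(axis=0)
scale = np.linalg.norm(local, axis=1).max()
unit = local / scale

print(unit.shape)                           # (3, 3)
print(np.linalg.norm(unit, axis=1).max())   # close to 1.0
```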


Target Audience

This tutorial is designed for professionals and researchers seeking to advance their capabilities in 3D spatial data analysis:

- GIS Researchers and Practitioners: Those working with LiDAR, photogrammetry, or other 3D acquisition methods who want to integrate machine learning into their analytical workflows.

- Remote Sensing Specialists: Professionals processing aerial and terrestrial point clouds for environmental monitoring, urban planning, or infrastructure assessment.

- Computer Vision Engineers: Developers building 3D perception systems for robotics, autonomous vehicles, or augmented reality applications.

- Data Scientists in Spatial Domains: Analysts from construction, architecture, forestry, or smart city sectors looking to apply AI to their 3D data assets.

- Graduate Students and Academic Researchers: Those investigating novel applications of deep learning for geospatial analysis and digital twin development.

Industry Relevance: Construction (BIM/digital twins), urban planning, autonomous systems, environmental monitoring, heritage preservation, and infrastructure inspection.


Prerequisite Knowledge of Audience

Required:
- Python programming proficiency: Comfortable with functions, loops, file I/O, and basic object-oriented concepts. Experience with NumPy array operations is essential.
- Fundamental understanding of 3D coordinate systems: Familiarity with XYZ coordinates, point cloud concepts, and basic spatial transformations.
- Basic machine learning concepts: Understanding of supervised vs. unsupervised learning, training/validation/test splits, and evaluation metrics.

Helpful but not required:
- Prior experience with deep learning frameworks (PyTorch)
- Familiarity with point cloud visualization tools (CloudCompare, Open3D)
- Background in GIS or remote sensing applications

Technical Setup: Participants should bring laptops with Python 3.9+ installed. A pre-configured environment (Docker container or conda environment) will be provided one week before the tutorial to ensure all dependencies are available.


Detailed Outline

WORKSHOP: City-Scale Spatial AI: From Aerial LiDAR to LLM-Driven Analytics

SESSION 1: Foundations of Aerial LiDAR for Cities (45 min)
Objective: Master the specific challenges of city-scale geodata and prepare pipelines for large-scale ingestion.

Aerial Data Representations (15 min):
The Aerial Perspective: Understanding scan patterns, flight lines, return numbers (vegetation penetration), and intensity (material reflectivity).
Geospatial Context: Handling Coordinate Reference Systems (CRS), large coordinates (UTM), and floating-point precision issues in ML.
Formats and Storage: Efficient handling of massive .laz files, COPC (Cloud Optimized Point Clouds), and tiling strategies for memory management.

City-Scale Preprocessing Pipeline (15 min):
Tiling and Windowing: Strategies for slicing cities into processable chunks without losing boundary context.
Digital Terrain Models (DTM): Algorithms for ground extraction (CSF/PMF) and height normalization (converting absolute Z to height-above-ground).
Feature Engineering: Computing large-scale descriptors (roughness, planarity, scattering) to distinguish buildings from high vegetation and noise.

Hands-on Exercise 1 (15 min):
Load a raw aerial LiDAR tile of an urban block.
Perform ground classification and height normalization.
Generate a "Feature Stack": Combine intensity, height, and local normal variance into a tensor ready for analysis.
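A rough sketch of the height-normalisation step in Exercise 1, using synthetic points and a per-cell minimum as a crude stand-in for CSF/PMF ground filtering:

```python
import numpy as np

# Synthetic aerial LiDAR points (x, y, z) over gently sloping terrain;
# a real tile would be read with laspy or PDAL.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(1000, 2))
ground = 50 + 0.1 * xy[:, 0]                 # sloping ground surface
z = ground + rng.uniform(0, 20, 1000)        # objects above ground
pts = np.column_stack([xy, z])

# Crude DTM: the per-grid-cell minimum z stands in for a proper
# ground-classification algorithm (CSF/PMF).
cell = 10.0
ix = (pts[:, 0] // cell).astype(int)
iy = (pts[:, 1] // cell).astype(int)
dtm = {}
for i, j, zz in zip(ix, iy, pts[:, 2]):
    dtm[(i, j)] = min(zz, dtm.get((i, j), np.inf))

# Height above ground: absolute z minus the local ground estimate.
hag = pts[:, 2] - np.array([dtm[(i, j)] for i, j in zip(ix, iy)])
print(hag.min() >= 0, hag.max() < 25)   # True True
```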

SESSION 2: 3D Machine Learning (40 min)
Objective: Implement a dual-stream system combining unsupervised clustering and supervised classification for urban entities.

System A: Unsupervised Clustering (15 min):
Geometric Segmentation: Using Supervoxels and Region Growing to group points into meaningful clusters (e.g., individual roof planes, trees).
Density-Based Approaches: HDBSCAN for separating distinct urban objects in sparse data.
Why Unsupervised?: Generating "candidate object proposals" without expensive labeling constraints.

System B: Supervised Classification (10 min):
Scalable Architectures: Introduction to Sparse Convolutions or PointNet++ for processing city tiles efficiently.
Class Definition: Detecting standard classes (Building, Road, High Vegetation, Car, Water).
The Hybrid Approach: Using supervised predictions to label the unsupervised clusters (Vote-based labeling).

Hands-on Exercise 2 (15 min):
Step 1: Run a geometric clustering algorithm to isolate distinct "blobs" (candidate buildings/trees).
Step 2: Apply a pre-trained classifier to the points.
Step 3: Assign a dominant class to each cluster to create "Labeled Instances."
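Step 3's vote-based labeling can be sketched as follows, with hypothetical cluster ids and classifier outputs standing in for the exercise data:

```python
import numpy as np

# Hypothetical outputs of the two streams in Exercise 2:
# cluster ids from geometric clustering (-1 = noise, as in DBSCAN)...
cluster_id = np.array([0, 0, 0, 1, 1, -1, 1])
# ...and per-point classes from a pre-trained classifier.
point_class = np.array(["building", "building", "tree",
                        "tree", "tree", "car", "tree"])

# Vote-based labeling: each cluster takes its dominant point class.
instance_label = {}
for cid in np.unique(cluster_id):
    if cid == -1:          # leave noise points unlabeled
        continue
    classes, counts = np.unique(point_class[cluster_id == cid],
                                return_counts=True)
    instance_label[int(cid)] = str(classes[np.argmax(counts)])

print(instance_label)  # {0: 'building', 1: 'tree'}
```

The vote smooths out per-point classifier errors (here, one "tree" prediction inside the building cluster) to yield clean labeled instances.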

SESSION 3: From Geometry to Knowledge Graph (50 min)
Objective: Transform raw labeled clusters into a latent graph representation suited for semantic querying.

Latent Space Encoding (20 min):
The Encoder: How to compress a 3D cluster (e.g., a building) into a fixed-size latent vector (embedding).
Multimodal Embeddings: Strategies to map 3D geometry into the same latent space as text/images (using CLIP-based techniques or OpenShape). This allows the math of the geometry to align with the math of language.

Building the Scene Graph (10 min):
Nodes: Representing urban objects (Building A, Park B) as nodes containing their latent vector and metadata (height, volume).
Edges: Calculating spatial relationships (adjacency, "near", "contained within", "taller than").
Graph Construction: Converting the city tile into a graph structure (e.g., NetworkX).

Hands-on Exercise 3 (20 min):
Take the "Labeled Instances" from Session 2.
Generate a latent embedding for each instance (using a simplified autoencoder or statistical descriptor vector).
Compute topological relationships to build an adjacency graph.
Output: A serialized Graph object where nodes contain 3D embeddings.
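A minimal sketch of Exercise 3's graph construction with NetworkX, using invented instances and random vectors in place of learned embeddings:

```python
import numpy as np
import networkx as nx

# Hypothetical labeled instances from Session 2: centroid, class, height,
# plus a latent embedding (a random stand-in for an autoencoder output).
rng = np.random.default_rng(1)
instances = {
    "building_A": {"centroid": (0, 0),  "cls": "building",        "height": 18.0},
    "building_B": {"centroid": (30, 0), "cls": "building",        "height": 9.0},
    "tree_C":     {"centroid": (4, 3),  "cls": "high_vegetation", "height": 12.0},
}

G = nx.Graph()
for name, attrs in instances.items():
    G.add_node(name, embedding=rng.normal(size=8), **attrs)

# Edges encode spatial relations: connect instances closer than 10 m.
names = list(instances)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = np.hypot(*np.subtract(instances[a]["centroid"],
                                  instances[b]["centroid"]))
        if d < 10:
            G.add_edge(a, b, relation="near", distance=float(d))

print(G.number_of_nodes(), G.number_of_edges())  # 3 1
```

More relations ("contained within", "taller than") would be added as further edge types over the same node set.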

SESSION 4: LLMs and Spatial Intelligence (30 min)
Objective: "Chat with the City" by plugging a Large Language Model into the latent scene graph.

The LLM-Graph Interface (10 min):
RAG for 3D (Retrieval Augmented Generation): How to structure a user query so the LLM can fetch information from the 3D Graph.
Vector Search: Converting a text query ("Find large industrial buildings near the forest") into a vector to query the node embeddings created in Session 3.

Agentic Workflows (10 min):
Function Calling: Teaching the LLM to call Python functions (e.g., measure_distance(), count_objects()) on the graph data.
Reasoning on Geometry: How the LLM synthesizes the graph edges (spatial relations) to answer complex questions regarding urban planning or surveillance.

Case Study Walkthrough (10 min):
Demo: A "City Copilot" system.
User Query: "Identify all residential buildings over 15 meters tall that have high vegetation within 5 meters."
System Action: LLM parses query -> Filters graph by Height and Class -> Traverses "Near" edges to Vegetation nodes -> Returns IDs -> Highlights 3D map.
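The retrieval step underlying such a query can be sketched as a cosine-similarity search over node embeddings. The vectors here are invented; a real system would obtain them from a shared multimodal encoder (e.g., a CLIP-style model), as described in Session 3.

```python
import numpy as np

# Hypothetical node embeddings from the scene graph and a text-query
# embedding assumed to live in the same latent space.
node_ids = ["building_A", "building_B", "park_C"]
node_emb = np.array([[0.9, 0.1, 0.0],
                     [0.8, 0.2, 0.1],
                     [0.0, 0.1, 0.9]])
query_emb = np.array([0.88, 0.12, 0.02])   # e.g., "large industrial building"

# Cosine-similarity retrieval: the LLM would receive the top-k node ids
# and their metadata as context (RAG over the 3D scene graph).
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(e, query_emb) for e in node_emb]
top = node_ids[int(np.argmax(scores))]
print(top)  # building_A
```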

SESSION 5: Wrap-up and Future Directions (15 min)

Emerging Techniques (5 min):
Neural Radiance Fields (NeRF) for Aerial: Moving beyond points to continuous volumetric cities.
Foundation Models for Earth Observation: Prithvi, SatCLIP, and the future of multimodal geospatial learning.

Resource Guide (5 min):
Open City Datasets (Helsinki 3D, Dublin LiDAR).
Key Libraries: LangChain, PDAL, Open3D.

Q&A and Discussion (5 min):
Open forum on scaling these pipelines to country-wide datasets.




Secretariat Contacts
e-mail: gistam.secretariat@insticc.org

Tutorial on
Exploring impacts and drivers of extreme fires using ESA-CCI Essential Climate Variables


Instructors

Amina Maroini
ESA-CCI Knowledge Exchange Team
United Kingdom
 
Brief Bio
Amina Maroini is currently part of the ESA Climate Change Initiative Knowledge Exchange project, where she develops and delivers training activities designed to make ESA-CCI data accessible to diverse audiences.
Lisa Beck
DWD
Germany
 
Brief Bio
Dr. Lisa Beck (Deutscher Wetterdienst - German Meteorological Service) is an atmospheric scientist with a background in experimental atmospheric research. During her PhD and postdoctoral work, she specialised in particle formation and precursor gases, with a particular focus on the Antarctic and Arctic atmospheres as well as the upper troposphere above the tropical rainforest. She is currently part of the ESA Climate Change Initiative Knowledge Exchange project, where she works in climate outreach.
Abstract

Understanding where and how climate change impacts different areas of the Earth system requires user-friendly approaches to climate data analysis. Using a tailored Python package, the European Space Agency's (ESA) Climate Change Initiative’s (CCI) Toolbox, this hands-on tutorial will demonstrate how Essential Climate Variable (ECV) data can be easily accessed and used as evidence for climate change impacts and driving pressures.

As a case study to demonstrate how to access, analyse and interpret ECVs, singularly and in combination, the tutorial will focus on the catastrophic wildfires of October 2017, which ravaged Northern Portugal and Northwestern Spain, burning approximately 500,000 hectares. Participants will analyse this extreme weather event using open-source satellite-derived ECV data from ESA’s CCI. They will learn to process and visualise relevant ESA-CCI ECV datasets to investigate the environmental pressures (e.g., changes in soil moisture and land surface temperature) that contributed to the extreme wildfires, the extent and severity of the burns, and the impacts on above-ground biomass and other parameters.


Keywords

Essential Climate Variables, Climate Impacts, Climate-Related Pressures, Biomass, Fire, Soil Moisture, Land Surface Temperature, Land Cover

Aims and Learning Objectives

This tutorial aims to teach participants practical skills for accessing, visualising and analysing satellite-derived data from the ESA Climate Change Initiative (CCI) data store, using a dedicated Python package: the CCI-Toolbox. By the end of this tutorial, participants will be able to retrieve datasets from the ESA CCI store, create visualisations and conduct basic analyses to identify trends and patterns of ECVs in a changing climate.
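As an illustration of the kind of analysis involved (the CCI-Toolbox API itself is not reproduced here), the sketch below builds a synthetic soil-moisture ECV series and separates a drying trend from the seasonal cycle with plain NumPy:

```python
import numpy as np

# Synthetic monthly soil-moisture series (m3/m3) standing in for an ECV
# retrieved from the ESA CCI data store; values are invented.
months = np.arange(120)                     # 10 years of monthly data
seasonal = 0.05 * np.sin(2 * np.pi * months / 12)
trend = -0.0004 * months                    # gradual drying signal
sm = 0.30 + seasonal + trend

# Anomaly = observation minus the mean seasonal cycle (climatology).
clim = sm.reshape(10, 12).mean(axis=0)      # per-calendar-month mean
anom = sm - np.tile(clim, 10)

# Least-squares linear trend on the anomalies (units per month).
slope = np.polyfit(months, anom, 1)[0]
print(slope < 0)  # True: the drying trend emerges once seasonality is removed
```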

Target Audience

Policy makers, academics and educators, students

Prerequisite Knowledge of Audience

The tutorial material will offer exercises for all levels, from complete beginners to advanced learners.

Detailed Outline

The tutorial will last 2 hours.

1. Introduction (15 mins)

- Presentation of the ESA CCI Knowledge Exchange Project
- Presentation of different ECVs
- Presentation of learning objectives of the tutorial
- Presentation of the CCI-Toolbox library
- Presentation of the CCI-KE JupyterHub

2. Users log in to the Jupyter Hub to access the coding environment (15 mins)

- Participants are encouraged to register in advance; a how-to guide will be shared.

3. Live coding session (45 mins):

- Presenters will walk through the notebook step-by-step

4. Individual work and Q&A (45 mins):

- Questions tailored to different levels of Python proficiency will be added at the end of the Notebook, with hint answers for self-paced work.
- Presenters will be available to assist participants




Secretariat Contacts
e-mail: gistam.secretariat@insticc.org
