Tutorials

Full-Day Tutorials

July 28, 9:30-17:30

FD-1 – From SAR Polarimetry to Polarimetric SAR Interferometry and Polarimetric SAR Tomography: Potentials, Limitations and Complementarities in the Context of Future Spaceborne Missions.

Giuseppe Parrella, Konstantinos Papathanassiou and Matteo Pardini (German Aerospace Center, DLR)

FD-2 - Remote Sensing with Reflected Global Navigation Satellite System and Signals of Opportunity

James L Garrison (Purdue University), Estel Cardellach (Institute of Space Sciences, ICE-CSIC, IEEC), Adriano Camps (Universitat Politecnica de Catalunya -BarcelonaTech, UPC)

FD-3 - Machine Learning in Remote Sensing - Best practices and recent solutions

Ronny Hänsch (Technische Universität Berlin), Yuliya Tarabalka (LuxCarta Technology, France), Devis Tuia (Wageningen University and Research), Bertrand Le Saux (ONERA)

FD-4 - Earth Observation Big Data Intelligence: theory and practice of deep learning and big data mining

Mihai Datcu (DLR), Feng Xu (Fudan University), Akira Hirose (The University of Tokyo)

FD-5 - Deep learning with the Orfeo ToolBox

Rémi CRESSON (IRSTEA), Kenji OSE (UMR TETIS)

FD-6 - Natural disasters and hazards monitoring using Earth Observation data

Ramona Pelich, Marco Chini (Luxembourg Institute of Science and Technology), Wataru Takeuchi (University of Tokyo), Young-Joo Kwak (NILIM, Ministry of Land, Infrastructure, Transport and Tourism Japan), Vitaliy Yurchenko (iGeo AS)

FD-7 - Radiometric Correction of Sentinel-2 images using pseudo-invariant areas

Xavier Pons (Universitat Autònoma de Barcelona), Jordi Cristobal (University of Alaska Fairbanks), Lluís Pesquer (CREAF)

Half-Day Tutorials

July 28, 9:30-12:45

HD-1 - Bridge 3D Radiative Transfer Simulations from optical, thermal, lidar to microwave

Huaguo Huang (Beijing Forestry University)

HD-2 - Pansharpening: from classical techniques to recent advances

Mauro Dalla Mura (GIPSA-lab Grenoble Institute of Technology), Andrea Garzelli (University of Siena), Gemine Vivone (University of Salerno)

HD-3 - UAV Hyperspectral Remote Sensing

Keshav Dev Singh (University of Saskatchewan)

HD-4 - Near Range and Ground Penetrating Radar (GPR) / UWB radar: Fundamentals to applications

Motoyuki Sato (Tohoku University)

Half-Day Tutorials

July 28, 14:15-17:30

HD-5 - Spectrum Management and Radio Frequency Interference (RFI) in Microwave Remote Sensing

Sandra Cruz-Pol (University of Puerto Rico), Paolo de Matthaeis (NASA Goddard Space Flight Center), Mingliang Tao (Northwestern Polytechnical University)

HD-6 - Random Forest Classification: Guidelines on Model Optimization, Variable and Training Selection

Koreen Millard, Sarah Banks, Amir Behnamian (Environment and Climate Change Canada)

HD-7 - Analysis of SAR Amplitude and Phase Time series for land applications

Paolo Pasquali (sarmap s.a.)

HD-8 - 3D/4D SAR Tomography: principles and applications

Fabrizio Lombardini (University of Pisa)


FD-1: From SAR Polarimetry to Polarimetric SAR Interferometry and Polarimetric SAR Tomography: Potentials, Limitations and Complementarities in the Context of Future Spaceborne Missions.

Giuseppe Parrella, Konstantinos Papathanassiou and Matteo Pardini (German Aerospace Center, DLR)

Description:

This one-day tutorial aims to provide the basic knowledge and understanding of three state-of-the-art Synthetic Aperture Radar (SAR) remote sensing techniques: SAR Polarimetry (Pol-SAR), Polarimetric SAR Interferometry (Pol-InSAR), and Polarimetric SAR Tomography (Pol-TomoSAR). In particular, the fact that future spaceborne SAR configurations will be able to perform polarimetric, interferometric and tomographic measurements makes an understanding of these techniques and their synergies essential.

  • SAR Polarimetry (Pol-SAR) relies on the measurement of the full-scattering matrix by transmitting and receiving at orthogonal polarisations. Polarimetric measurements are sensitive to the dielectric and geometric properties of scatterers. Accordingly, unique polarimetric applications are related to the qualitative and/or quantitative exploitation of this sensitivity.
  • Polarimetric SAR Interferometry (Pol-InSAR) is based on the coherent combination of (a rather small number of) SAR interferograms at different polarisations. The introduction of observations with angular diversity (i.e. non-zero baseline interferometric ones) provides sensitivity to the vertical distribution of scatterers and allows unique applications related to the vertical structure of natural and man-made volume scatterers.
  • Polarimetric SAR Tomography (Pol-TomoSAR) relies on a rather large number of acquisitions performed with a wide(r) angular range and allows the reconstruction of the 3-D radar reflectivity of natural and man-made volume scatterers. This opens the door for a wide range of unique geophysical products related to the 3D structure of volume scatterers.

The three techniques are presented and discussed starting from their basic principles up to the definition and generation of information products. A common framework is used to facilitate the elaboration of their interconnections and to develop the links between them. Emphasis is given to the information content of each dataset and how it depends on the acquisition configuration. Modelling and inversion approaches, signal processing techniques and applications in forestry, agriculture and the cryosphere are addressed for each of the three techniques in the context of future spaceborne missions. Emphasis is placed on highlighting the unique characteristics and fundamental limitations of each technique in order to work out their complementarities. For demonstration and validation, data and results from actual spaceborne and airborne campaigns and experiments at different frequencies are used.

Tutorial Learning Objectives:

The tutorial is fully interdisciplinary and aims to stimulate novice and experienced remote sensing users in the research and development of scientific concepts and applications relying on these techniques.

FD-2 - Remote Sensing with Reflected Global Navigation Satellite System and Signals of Opportunity

James L Garrison (Purdue University), Estel Cardellach (Institute of Space Sciences, ICE-CSIC, IEEC), Adriano Camps (Universitat Politecnica de Catalunya -BarcelonaTech, UPC)

Description:

Although originally designed for navigation, signals from the Global Navigation Satellite System (GNSS), i.e., GPS, GLONASS, Galileo and COMPASS, exhibit strong reflections from the Earth's land and ocean surfaces. Effects of rough surface scattering modify the properties of reflected signals. Several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture.

Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to make measurements of sea state, sea level, soil moisture, ice and snow as well as inter-comparisons with microwave radiometry.

GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments. The power and mass of GNSS-R instruments can be made low enough to enable deployment on small satellites, balloons and UAVs. Early research sets of satellite-based GNSS-R data were first collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014) and the 8-satellite CYGNSS constellation (2016). Future mission proposals, such as GEROS-ISS (GNSS REflectometry, Radio-Occultation and Scatterometry on the International Space Station) and the GNSS Transpolar Earth Reflectometry exploriNg System (G-TERN), would demonstrate new GNSS-R measurements of sea surface altimetry and sea ice cover, respectively. The availability of spaceborne GNSS-R data and the development of new applications from these measurements are expected to increase significantly following the launch of these new satellite missions and other smaller ones to be launched in the coming three years (ESA's PRETTY and FSSCat; China's FY-3E; Taiwan's FS-7R).

Recently, methods of GNSS-R have been applied to satellite transmissions in other frequencies, ranging from P-band (230 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp.

This all-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms.

Tutorial Learning Objectives:

After attending this tutorial, participants should have an understanding of:

  • The structure of GNSS signals, and how the properties of these signals enable remote sensing measurements, in addition to their designed purpose in navigation.
  • Generation and interpretation of a delay-Doppler map.
  • Fundamental physics of bistatic scattering of GNSS signals from rough surfaces, and the relationship between properties of the scattered signal and geophysical variables (e.g. wind speed, sea surface height, soil moisture, ice thickness)
  • Conceptual design of reflectometry instruments.
  • Basic signal processing for inversion of GNSS-R observations.
  • Current GNSS-R satellite missions and the expected types of data to become available from them.
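One of the objectives above is the generation and interpretation of a delay-Doppler map (DDM). As a rough sketch of the idea (all parameters below are invented for illustration, not taken from any GNSS-R mission), a DDM can be built by correlating the received signal with local code replicas over a grid of code delays and Doppler offsets:

```python
import numpy as np

# Toy delay-Doppler map: correlate a received signal against replicas of
# the ranging code shifted in delay, after wiping off trial Doppler offsets.
# Code length, sample rate and the "true" delay/Doppler are all invented.

rng = np.random.default_rng(0)
fs = 1.023e6                                   # sample rate [Hz], 1 sample/chip
n = 1023                                       # one code period (~1 ms)
code = rng.choice([-1.0, 1.0], size=n)         # stand-in PRN code

true_delay = 200                               # samples
true_doppler = 1000.0                          # Hz
t = np.arange(n) / fs
received = np.roll(code, true_delay) * np.exp(2j * np.pi * true_doppler * t)

delays = np.arange(0, 400, 10)
dopplers = np.arange(-2000.0, 2001.0, 250.0)

ddm = np.empty((len(dopplers), len(delays)))
for i, fd in enumerate(dopplers):
    wiped = received * np.exp(-2j * np.pi * fd * t)    # Doppler wipe-off
    for j, d in enumerate(delays):
        replica = np.roll(code, d)
        ddm[i, j] = np.abs(np.vdot(replica, wiped))    # coherent correlation

i_max, j_max = np.unravel_index(np.argmax(ddm), ddm.shape)
print(dopplers[i_max], delays[j_max])          # peak at the true Doppler and delay
```

In a real reflectometry instrument the DDM is, of course, formed from scattered signals and averaged incoherently; this sketch only shows where the delay/Doppler grid and correlation peak come from.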

Materials and Requirements:

Copy of presentation.

FD-3 - Machine Learning in Remote Sensing - Best practices and recent solutions

Ronny Hänsch (Technische Universität Berlin), Yuliya Tarabalka (LuxCarta Technology, France), Devis Tuia (Wageningen University and Research), Bertrand Le Saux (ONERA)

Description:

Despite the wide and often successful application of machine learning techniques to analyse and interpret remotely sensed data, the complexity, special requirements, and selective applicability of these methods often prevent their use to full potential. The gap between sensor- and application-specific expertise on the one hand, and deep insight into and understanding of existing machine learning methods on the other, often leads to suboptimal results, unnecessary or even harmful optimizations, and biased evaluations. The aim of this tutorial is twofold. First, to spread good practices for data preparation: inform about common mistakes and how to avoid them (e.g. dataset bias, non-i.i.d. samples), provide recommendations about proper preprocessing and initialization (e.g. data normalization), and point to available sources of data and benchmarks. Second, to present efficient and advanced machine learning tools: give an overview of standard machine learning techniques and when to use them (e.g. standard regression and classification techniques, clustering, etc.), and introduce the most modern methods (such as random fields, ensemble learning, and deep learning).

Tutorial Learning Objectives:

  • Overview of standard machine learning approaches (naive Bayes, Linear Discriminant Analysis, Support Vector Machines) and how to use them
  • Performance evaluation (correct sampling, cross-validation, iid data, metrics, data and benchmarks)
  • Introduction to sophisticated methods (Random Forests, Markov and Conditional Random Fields, Convolutional Neural Networks)
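As a minimal illustration of the evaluation practices in the second objective (correct sampling and cross-validation), the sketch below runs k-fold cross-validation of a toy nearest-centroid classifier on synthetic data; the data, classifier and fold count are all invented for this example:

```python
import numpy as np

# k-fold cross-validation sketch: every sample is used exactly once for
# testing, and accuracy is averaged over the folds. Synthetic 2-class data.

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (50, 2)),      # class 0
               rng.normal(3, 1, (50, 2))])     # class 1 (well separated)
y = np.repeat([0, 1], 50)

def nearest_centroid_predict(X_train, y_train, X_test):
    """Tiny stand-in classifier: assign the class of the closest centroid."""
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

k = 5
idx = rng.permutation(len(y))                  # shuffle before splitting (i.i.d. assumption)
folds = np.array_split(idx, k)
accs = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    pred = nearest_centroid_predict(X[train], y[train], X[test])
    accs.append((pred == y[test]).mean())

print(f"mean CV accuracy: {np.mean(accs):.2f}")
```

Note that the shuffle step encodes the i.i.d. assumption the tutorial warns about: for spatially correlated remote sensing pixels, a random split can leak information between train and test folds, which is exactly the kind of biased evaluation the tutorial addresses.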

Materials and Requirements:

Slides will be made available online before the tutorial. A website will be set up for downloads and additional resources.

The tutorial will collaborate with the Code Workshop, where specific methods can be tested hands-on.

FD-4 - Earth Observation Big Data Intelligence: theory and practice of deep learning and big data mining

Mihai Datcu (DLR), Feng Xu (Fudan University), Akira Hirose (The University of Tokyo)

Description:

In the big data era of Earth observation, deep learning and other data mining technologies have become critical to successful end applications. Over the past several years, there has been exponentially increasing interest in deep learning techniques applied to remote sensing, including not only hyperspectral imagery but also synthetic aperture radar (SAR) imagery.

This tutorial has the following three parts.

The first part introduces the basic principles of machine learning and the evolution to deep learning paradigms. It presents the methods of stochastic variational and Bayesian inference, focusing on the methods and algorithms of deep learning generative adversarial networks. Since data sets are an organic part of the learning process, EO dataset biases pose new challenges. The tutorial addresses open questions on relative data bias and cross-dataset generalization for very specific EO cases, such as multispectral and SAR observations with a large variability of imaging parameters and semantic content.

The second part introduces the theory of deep neural networks and the practice of deep learning-based remote sensing applications. It covers the major types of deep neural networks, the backpropagation algorithm, programming toolboxes, and several examples of deep learning-based remote sensing imagery processing. The last part focuses on the treatment of, and applications involving, phase and polarization in SAR data. Since SAR is a coherent observation, its data properties are quite special and enable specific kinds of feature extraction and discovery. This part deals with deep learning in the complex-amplitude and polarization domains, as well as the so-called data structuration of such multimodal processing.

Tutorial Learning Objectives:

The first part of the tutorial is expected to bring a joint understanding of "classical" machine learning and generative adversarial networks, indicating integrated optimal solutions in complex EO applications, including the choice or generation of labeled data sets and the influence of biases in validation or benchmarking.

Through the second part of the tutorial, participants are expected to understand the basic theory for deep neural networks including convolutional neural network, backpropagation algorithm, etc., and learn the relevant skills including network design, hyper-parameter tuning, training algorithm, dataset preparation, toolbox usage and result analyses and diagnosis.
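The training skills mentioned above rest on one loop: forward pass, loss, backpropagated gradient, parameter update. As a pocket-sized, purely illustrative sketch (toy data and a single linear "neuron", not any network from the tutorial), gradient descent fits y = 2x + 1:

```python
import numpy as np

# Minimal training loop: forward pass, mean-squared-error gradients,
# gradient-descent updates. A deep network differs only in scale and
# in chaining these gradients layer by layer (backpropagation).

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 1.0                        # target relation to recover

w, b, lr = 0.0, 0.0, 0.1                 # parameters and learning rate
for _ in range(500):
    pred = w * x + b                     # forward pass
    grad_w = 2 * np.mean((pred - y) * x) # dL/dw for MSE loss
    grad_b = 2 * np.mean(pred - y)       # dL/db
    w -= lr * grad_w                     # gradient-descent update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))          # converges close to 2.0 and 1.0
```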

The last part is dedicated in particular to complex-valued and quaternion neural networks for dealing with the coherent information that is important in InSAR and PolSAR systems.

Materials and Requirements:

Slides covering: machine learning; generative adversarial networks; training strategies and dataset biases; deep learning basic theory; and deep learning practice, with TensorFlow/MatConvNet examples using multispectral and SAR/InSAR data and code.

FD-5 - Deep learning with the Orfeo ToolBox

Rémi CRESSON (IRSTEA), Kenji OSE (UMR TETIS)

Description:

This tutorial explains how to use deep learning techniques on real-world remote sensing images with user-oriented, open-source software (no coding skills required!). After a quick summary of deep learning techniques applied to image and signal processing, the tutorial presents how to sample images and ground truth, create and train deep networks, and use them to generate land cover maps.

Tutorial Learning Objectives:

  • Summarize deep learning background, open problems and key concepts for applying deep learning to remote sensing image processing
  • Introduce OTBTF, a plugin for the Orfeo ToolBox software. OTBTF uses TensorFlow internally, enabling deep nets to be run on remote sensing images
  • From real-world geospatial data (Spot-7 and Sentinel-2 images, and ground-truth vector data such as OpenStreetMap), train deep nets and produce land cover maps using various state-of-the-art deep learning architectures
  • In exercises, apply state-of-the-art deep learning architectures: convolutional neural networks, hybrid classifiers (e.g. a random forest classifier operating on deep net features), semantic segmentation deep nets, and multi-source deep nets (e.g. feeding multiple images at different resolutions into separate branches of a deep net)

Materials and Requirements:

The software used in this tutorial is open source. We will use the Orfeo ToolBox with the OTBTF remote module to process remote sensing images. QGIS will be used to visualize geospatial data (alternatively, participants can use their favorite GIS software).

A virtual machine (running on VirtualBox) will be provided to all participants, including software, data, and exercise solutions. A docker image will also be provided.

Participants who want to follow the exercises should download and install the virtual environment before the tutorial. These files will be available for download soon, as well as the data involved in the exercises.

FD-6 - Natural disasters and hazards monitoring using Earth Observation data

Ramona Pelich, Marco Chini (Luxembourg Institute of Science and Technology), Wataru Takeuchi (University of Tokyo), Young-Joo Kwak (NILIM, Ministry of Land, Infrastructure, Transport and Tourism Japan), Vitaliy Yurchenko (iGeo AS)

Description:

In recent years, natural disasters, i.e., hydro-geo-meteorological hazards and risks, have been frequently experienced by both developing and developed countries. For instance, in July 2018, Japan was affected by a typhoon with torrential rainfall that triggered cascading and interacting hazards such as catastrophic landslides and flash floods in south-west Japan. Assessing both the utility and the limitations of Earth Observation (EO) data in a timely manner when a disaster occurs is therefore essential for early-stage emergency response. In this framework, interpreting and visualizing EO data, along with proposing algorithms that can systematically extract meaningful information in a timely manner, are necessary requirements for EO-based disaster and hazard monitoring. With this aim, the tutorial provides theoretical and practical knowledge for mapping hazards and managing natural disasters using advanced satellite EO data, including both Synthetic Aperture Radar (SAR) and optical data. The tutorial gives a comprehensive understanding of the algorithms and methods applied for mapping changes by means of EO data available immediately after a disaster occurs. Several lectures focused on floods and landslides, along with a hands-on session, will give all participants the opportunity to learn more about the practical EO tools available for rapid-response information and about advanced EO-based algorithms.

Tutorial Learning Objectives:

The aim of this tutorial is to provide a series of substantial and balanced presentations on the use of EO data in disaster and hazard monitoring. A comprehensive introduction, along with several illustrative examples, will show the use of both spaceborne SAR and optical sensors for mapping various types of damage caused by different disasters. We will then focus on floods and landslides, as particular types of disasters with important consequences at a global scale. For flood monitoring, a detailed presentation on the use of spaceborne SAR data will be given, including both theoretical aspects and experimental results, illustrated with Sentinel-1A/B and ALOS/ALOS-2 SAR images. The next lecture will present several methodologies employed for optical flood monitoring, along with illustrative results. The landslide lecture, after first presenting the landslide types, will give details about EO-based landslide monitoring methodologies along with experimental results containing on-site data. In addition to the detailed lectures on floods and landslides, several EO-based platforms that allow rapid disaster mapping will be presented.
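To make the change-mapping idea concrete, here is a deliberately simplified sketch of SAR flood detection: open water appears dark (low backscatter), so post-event pixels below a threshold that were brighter pre-event are flagged as newly flooded. The backscatter values and the fixed threshold are invented; operational methods such as those presented in the tutorial (e.g. hierarchical split-based thresholding) estimate the threshold from the image statistics.

```python
import numpy as np

# Toy pre/post-event SAR backscatter scenes [dB]; a 20x20 patch goes dark
# after the event, mimicking a newly flooded area.

rng = np.random.default_rng(3)
pre = rng.normal(-8.0, 1.5, size=(100, 100))            # pre-event backscatter
post = pre.copy()
post[40:60, 40:60] = rng.normal(-18.0, 1.0, (20, 20))   # flooded patch

threshold = -14.0                                       # hand-picked water threshold [dB]
flood_mask = (post < threshold) & (pre >= threshold)    # dark now, not dark before
print(flood_mask.sum())                                 # pixels in the flooded patch
```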

Materials and Requirements:

  1. Presentation materials.
  2. Compilation of research materials for supplementary details with respect to the ones presented.
  3. Data samples for hands-on exercise with open source tools (GPDO-HASARD, ESA-SNAP, QGIS, DIAS, etc.)

The use of a PC is optional and the participants do not need to install any software in advance since we plan to rely on online demos/platforms and illustrative slides for the hands-on part of the tutorial. However, if the participants would like to practice the demos in parallel with the presentation, they just require a PC with a Wi-Fi connection.

FD-7 - Radiometric Correction of Sentinel-2 images using pseudo-invariant areas

Xavier Pons (Universitat Autònoma de Barcelona), Jordi Cristobal (University of Alaska Fairbanks), Lluís Pesquer (CREAF)

Description:

At present, the GRUMETS research group (http://www.grumets.uab.cat/index_eng.htm) is carrying out a project to generate an extensive European (and larger) pseudo-invariant areas (PIA) dataset of surface reflectance values. These areas will allow images taken by different sensors, such as Sentinel-2A and Sentinel-2B, to be compared and calibrated, as well as the processing of highly coherent time series of remote sensing data from different sensors. With this aim, we propose a course that will focus on the radiometric correction (atmospheric and topographic) of Sentinel-2 images using pseudo-invariant areas. Concretely, the course will introduce the foundations of radiometric correction in the solar spectrum of remote sensing images, in particular Sentinel-2A and Sentinel-2B images. Subsequently, it will focus on explaining the automatic methodology proposed by Pons et al. (2014) and applied in other more recent research (Padró et al. 2018). The course will deal with theoretical and practical issues, and the students can replicate the proposed practical exercises by themselves. Finally, the quality of the results and the performance and possibilities of the automatic process will be evaluated.

Tutorial Learning Objectives:

  • To showcase the methodology followed (its theoretical foundations and its practical orientation) to generate a European pseudo-invariant areas (PIA) dataset of surface reflectance values.
  • To introduce the audience to the theoretical foundations of radiometric correction in the solar spectrum of remote sensing images and, in particular, of Sentinel-2A and Sentinel-2B images.
  • To present the methodology for radiometric correction (atmospheric and topographic) using pseudo-invariant areas.
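The core of PIA-based relative normalization can be sketched as a per-band linear fit: reflectances sampled over pseudo-invariant areas in a reference image and in the image to be corrected determine a gain and offset that are then applied to the whole band. The example below uses invented reflectance numbers and a made-up sensor mismatch; it is not the actual Pons et al. (2014) procedure, only the regression idea behind it.

```python
import numpy as np

# Synthetic PIA reflectances: the observed image differs from the
# reference by an invented gain/offset plus noise.

rng = np.random.default_rng(7)
pia_ref = rng.uniform(0.05, 0.6, size=200)             # reference PIA reflectances
gain, offset = 1.15, 0.02                              # invented sensor mismatch
pia_obs = gain * pia_ref + offset + rng.normal(0, 0.005, 200)

# Per-band least-squares fit: pia_ref ≈ a * pia_obs + b
a, b = np.polyfit(pia_obs, pia_ref, 1)

# Apply the fit to a whole (synthetic) band to bring it onto the reference scale
band = gain * rng.uniform(0.05, 0.6, size=(32, 32)) + offset
band_corrected = a * band + b
print(round(a, 2), round(b, 3))
```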

Materials and Requirements:

  • Academic license (free and permanent) of MiraMon GIS and RS software will be distributed (http://www.miramon.cat/download/index.htm).
  • The audience will need a PC or laptop running the Microsoft Windows operating system (Windows 7 or newer) or some form of virtualisation.

HD-1 - Bridge 3D Radiative Transfer Simulations from optical, thermal, lidar to microwave

Huaguo Huang (Beijing Forestry University)

Description:

Many radiative transfer models have been successfully developed in the optical, thermal, lidar and microwave regions. However, they have generally been developed in isolation, and a visible gap has persisted for decades between optical and microwave modellers. Filling this gap will lead to a significantly better understanding of data fusion. The aim of this tutorial is twofold. First, I will introduce the gaps and bridges between the concepts, equations and implementations of optical and microwave models. Second, I will introduce how to use the 3D model RAPID to simulate optical BRDF, directional temperature, lidar waveforms, point clouds and microwave backscattering.

Tutorial Learning Objectives:

  1. Grasp the unified concept behind optical and microwave models.
  2. Better understand the link between reflectance and backscattering.
  3. Operate the 3D model RAPID to simulate full-band signals.

Materials and Requirements:

  • Online documents: http://www.3dforest.cn/en_rapid.html
  • Bringing a computer to run RAPID is highly recommended but not required.
  • If bringing a computer with Windows OS, please install the RAPID software (RAPID v2) in advance.
  • Web-based RAPID may be available online.

HD-2 - Pansharpening: from classical techniques to recent advances

Mauro Dalla Mura (GIPSA-lab Grenoble Institute of Technology), Andrea Garzelli (University of Siena), Gemine Vivone (University of Salerno)

Description:

Pansharpening aims at fusing a multispectral and a panchromatic image in order to obtain an image with the spectral resolution of the former and the spatial resolution of the latter. Pansharpening constitutes an important preliminary step (see, for instance, change detection and visual image analysis) that is crucial for several remote sensing tasks (e.g., disaster management and environmental monitoring). In the last decades many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, available image data sets for benchmarking, and standardized implementations of the algorithms makes a thorough evaluation and comparison of the different pansharpening techniques difficult to achieve. The recent paper in [1], which is co-authored by the proponents of this tutorial, attempts to fill this gap by providing a critical description and extensive comparisons of some of the state-of-the-art pansharpening methods. This tutorial will be mainly based on this work and the related pansharpening MATLAB toolbox [2].

The tutorial will be organized in five sections. Firstly, an introduction to the pansharpening problem will be provided, together with a classification of pansharpening techniques. The second section will be devoted to the description of the main algorithms, considering details related to their implementation. Algorithms belonging to the two main pansharpening classes (i.e., component substitution and multi-resolution analysis) will be considered together with some more recent instances of the so-called "third generation" (e.g., compressive sensing). Afterwards, quality assessment will be presented: the two main protocols for the assessment of pansharpening products will be introduced and compared by pointing out their pros and cons. Then, a critical comparison among the described pansharpening approaches will be performed, exploiting the quality assessment practices learned during the tutorial. The reproducibility of the presented experimental analysis will be a key aspect of this tutorial. The experiments will be carried out using an updated version of the MATLAB toolbox in [2]; this new version will be distributed to the participants. Finally, hints about the extension of the classical pansharpening problem to the fusion of hyperspectral data, and new perspectives on this very challenging task, will be presented to the audience. The targets of this tutorial are scientists, remote sensing practitioners and students who either want to approach the pansharpening problem or want to improve their knowledge of this research field from both a theoretical and a practical point of view. A basic background in image and signal processing will be useful for fruitfully attending the tutorial.

References

[1] G. Vivone, L. Alparone, J. Chanussot, M. D. Mura, A. Garzelli, G. A. Licciardi, R. Restaino, and L. Wald, “A critical comparison among pansharpening algorithms”, IEEE Trans. Geosci. Remote Sens., vol. 53, no. 5, pp. 2565–2586, May 2015.

[2] “Open Remote Sensing”, https://openremotesensing.net/knowledgebase/a-critical-comparison-among-pansharpening-algorithms/ (accessed on 25 September 2018).

Tutorial Learning Objectives:

The main goals of this tutorial can be summarized as follows:

  • The first goal is to provide the participants with an overview on the pansharpening problem and the state-of-the-art pansharpening techniques together with their taxonomy;
  • Approaches belonging to the component substitution and the multi-resolution analysis classes will be detailed together with some well-established and powerful examples of the so-called “third generation” of pansharpening algorithms (e.g., compressive sensing);
  • Best practices to properly implement a pansharpening algorithm (e.g., how to interpolate the multispectral image to reach the panchromatic scale and how to equalize an image) will be also drawn;
  • Furthermore, an overview about the extension of the classical pansharpening problem to the fusion of hyperspectral data (e.g., hyperspectral pansharpening) will be also provided;
  • Last but not least, the tutorial aims to answer difficult questions, such as: How can I state that my algorithm is the best? How do I quantify the performance of my fusion algorithm? Both qualitative (i.e., visual interpretation) and quantitative assessment of the fusion process outcomes will be analyzed in depth.
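As a taste of the component-substitution class discussed above, here is a minimal Brovey-transform sketch: each multispectral band is scaled by the ratio of the panchromatic image to the intensity component (here the band mean). The arrays are synthetic and already co-registered; a real pipeline, as covered in the tutorial, would first interpolate the MS image to the PAN grid and equalize the histograms.

```python
import numpy as np

# Synthetic 3-band multispectral (MS) image, already upsampled to the
# panchromatic (PAN) grid, plus a synthetic PAN image.

rng = np.random.default_rng(1)
ms = rng.uniform(0.1, 0.9, size=(3, 64, 64))
pan = rng.uniform(0.1, 0.9, size=(64, 64))

intensity = ms.mean(axis=0)                      # intensity component
eps = 1e-12                                      # guard against division by zero
fused = ms * (pan / (intensity + eps))[None, :, :]

# By construction, the intensity of the fused product matches PAN
print(np.allclose(fused.mean(axis=0), pan))
```

The spectral distortion this ratio injection can cause is exactly what the quality-assessment protocols in the tutorial are designed to measure.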

Materials and Requirements:

A MATLAB toolbox will be distributed to the audience to aid the reproducibility of the experimental analysis performed during this tutorial. Participants should therefore bring their own laptops with MATLAB installed (MATLAB 2017b or later). The Image Processing, Optimization, Signal Processing, and Statistics and Machine Learning Toolboxes should also be installed.

HD-3 - UAV Hyperspectral Remote Sensing

Dr. Keshav Dev Singh (University of Saskatchewan)

Description:

Airborne hyperspectral remote sensing technology is capable of detecting early vegetation stresses even before visible symptoms become apparent. It integrates conventional imaging and spectroscopy to attain both spectral and spatial information from an object, here crop plants. Imaging spectroscopy provides detailed signatures (such as reflectance) of biological samples arising from the interaction between electromagnetic radiation and the target material. It is a powerful tool in the study of host selection by selected insects (arthropod pests) and in assessing crop health via reflectance profiling (detection of crop responses to biotic and abiotic stressors). Abiotic stresses are due to drought (water shortage), extreme climate (heat, humidity, cold, or freeze), excessive water-logging (flooding), salinity and metal toxicity (minerals) in soil. These stresses negatively affect the growth, development, yield and seed quality of crops. During this tutorial we will study various economical (almonds, walnuts, strawberries, soybeans), specialty (tomatoes, gerbera, pea and faba beans), and agronomic (canola, wheat, lentils, rice) crops to characterize spectral relationships between nutrient composition and biotic stresses. We will also examine phenotypic and genotypic relationships with respect to field breeding in some of the mentioned cereal crops. This tutorial will give a broad overview of the numerous advantages of hyperspectral imaging (HSI) over conventional farming techniques, which will transform agriculture-related research and provide innovative solutions to global food security, and of how the robustness of hyperspectral imagery for measuring crop phenomes can be improved by developing high-throughput measurement systems. I will demonstrate the importance of UAV-based sensors for estimating plant phenotypic characteristics.
For this study, hyperspectral data of the referred agricultural crops/plants were acquired using UAV-based "true push-broom" cameras [OCI Imager (OCI-UAV-D1000), BaySpec Inc., USA; and microHSI SHARK, Corning Inc., USA] mounted on DJI UAVs (S1000 Octocopter and Matrice 600), respectively. The acquired images are generally affected by solar light intensity, source-sensor geometry, and scattering, so a ground-based radio-spectrometer was used for continuous white calibration of the acquired datasets. The radiative-transfer-based theory of bi-directional reflectance (BRDF) will therefore be explained to correct the acquired datasets, addressing non-linear factors arising from multiple scattering. Narrow-band vegetation indices are generally used to estimate crop growth status and plant density, together with physiological traits that indicate stress and disease responses, water and nutrition relations, and photosynthetic capacity. Vegetation indices make it possible to pinpoint the areas affected by a particular stress (nutrient, drought, and other deficiencies) on plants prior to fertilization or pesticide spraying over the whole agricultural field. This prevents crops from wilting, infestation and pest outbreaks. It also reduces field scouting time, the cost of spraying and the release of toxic substances into the biosphere.

This tutorial will explain how hyperspectral systems provide considerably higher spectral accuracy, allowing the extraction of much more subtle information, and how this technology could create a unique global resource for plant breeders, geneticists, and agronomists seeking to develop new crop varieties and management practices at unprecedented speed and scale. Throughout this tutorial, you will learn how to combine hyperspectral data with widely used RGB and multispectral datasets to bring new insights to precision agriculture and high-throughput phenotyping research.

Tutorial Learning Objectives:

  • Airborne/UAS Remote Sensing in Precision Agriculture and Crop Phenotyping;
  • RGB, Multispectral and Hyperspectral Imaging data Acquisition and Processing;
  • Hyperspectral Imaging Spectroscopy Tools and Techniques;
  • How to do Data Calibration and Analysis for Sustainable Agro-ecosystem Studies;
  • How Farmers can Minimize Losses & the Cost of Inputs to Maximize Profit Margins;

Materials and Requirements:

  • Handouts of Airborne Remote Sensing Data Acquisition;
  • Data Calibration and Processing Notes;
  • Spectral Imaging Data Analysis Notes;
  • Precision Agriculture and Crops Phenotyping useful Literature;

Participants should preferably bring their own laptops. At the start of the tutorial, imagery data to work on using Pix4D, Agisoft, and Spectronon Pro software will be shared. Trial versions of these software packages can easily be obtained by requesting them via the following links (please check for any updates):

  1. Agisoft Request Link (30-day Trial):
    http://www.agisoft.com/downloads/request-trial/
  2. Pix4D Request Link (15-day Trial):
    https://cloud.pix4d.com/download/
    https://support.pix4d.com/hc/en-us/articles/115002495986-Software-download-and-installation#label1
  3. Resonon’s Spectronon Pro 2.11 (free, lifetime license):
    https://downloads.resonon.com/categories/3/

We will also demonstrate data processing with ENVI and MATLAB on the projector and provide the necessary handouts for further study.

HD-4 - Near Range and Ground Penetrating Radar (GPR) / UWB radar : Fundamentals to applications

Motoyuki Sato (Tohoku University)

Description:

Near Range Radar and Ground Penetrating Radar (GPR) are specialized forms of radar that have been used for subsurface sensing and other imaging tasks. GPR is widely used for the detection of buried utilities such as pipes and cables. The technique is also highly sensitive to the water content of soil, which makes GPR very suitable for environmental studies as well. Recently, Ultra Wide Band (UWB) technology has gathered interest; its frequency range, however, has long been used in GPR, and the two fields share many aspects. A typical application of UWB radar is the detection of objects in air, yet most of the fundamental signal acquisition and signal processing schemes are almost the same as those used in GPR. In this tutorial course, we will introduce the fundamental principles of GPR and UWB radar technologies to potential users, students, and researchers. We will then discuss more advanced and recent topics related to GPR, including MIMO radar, and also cover GB-SAR (Ground Based Synthetic Aperture Radar).

Tutorial Learning Objectives:

The course will cover electromagnetic wave propagation and reflection in materials, antennas for GPR, GPR systems, survey design, signal processing, and image reconstruction. We will then introduce examples of GPR and UWB radar applications, including our recent activities in humanitarian demining with ALIS. GPR technology is closely related to electromagnetic wave theory and signal processing; this course is therefore also very useful for researchers and students who are familiar with electrical communications and signal processing theory.
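One of the most basic signal processing steps in the GPR chain mentioned above is background (mean-trace) removal, which suppresses horizontally coherent clutter such as antenna coupling and the direct ground wave. A minimal sketch, assuming the B-scan is stored as a (traces, samples) array; this is an illustrative example, not course material:

```python
import numpy as np

def remove_background(bscan):
    """Mean-trace subtraction for a GPR B-scan.

    bscan: array of shape (n_traces, n_samples). Energy that is
    identical in every trace (antenna coupling, direct wave, flat
    layering) is removed, leaving localized reflectors such as pipes.
    """
    bscan = np.asarray(bscan, dtype=float)
    return bscan - bscan.mean(axis=0, keepdims=True)
```

The same idea extends to moving-window background removal when the subsurface is not laterally homogeneous.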

Materials and Requirements:

A PPT handout will be provided. A signal processing demonstration with sample software on a PC is planned.

HD-5 - Spectrum Management and Radio Frequency Interference (RFI) in Microwave Remote Sensing

Sandra Cruz-Pol (National Science Foundation, USA), Paolo de Matthaeis (NASA Goddard Space Flight Center, USA), Mingliang Tao (Northwestern Polytechnical University, China)

Description:

The use of the electromagnetic spectrum for different applications, including for example telecommunications and radiolocation, is continually increasing. As a result, microwave remote sensing instruments are experiencing Radio Frequency Interference (RFI) more and more often. This happens even in frequency ranges allocated exclusively to passive services, such as microwave radiometry, due to illegal transmitters and out-of-band emissions from systems operating in adjacent bands. The presence of RFI is always detrimental to scientific missions. When detected, RFI causes information loss and reduces measurement accuracy; when not detected, it produces errors in measurements that are not recognized as such, therefore potentially leading to wrong conclusions. In some cases, the presence of RFI can entirely jeopardize the objectives of a mission. RFI represents a significant threat to microwave remote sensing sensors and requires proper attention in all future missions.

This tutorial will provide an overall review of spectrum management definitions and processes, with particular attention to those relevant to microwave remote sensing, including frequency allocations and the enforcement of the Radio Regulations. It will then shift its focus to RFI affecting microwave sensors and illustrate the techniques employed to detect and reduce the impact of RFI in both passive and active instruments. This tutorial can be very useful for anyone interested in learning about RFI, from recently graduated engineers seeking a career in the remote sensing community to mission managers and scientists looking for possible ways to deal with the presence of RFI.

Tutorial Learning Objectives:

Attendees will become familiar with the RFI issues encountered by remote sensing instruments. They will learn basic spectrum management processes and principles, such as frequency allocations and band-sharing criteria to avoid interference between services, with particular focus on remote sensing. They will be given an overview of the various types of RFI affecting passive and active microwave sensors and be introduced to the most important techniques used to detect interference and reduce its impact on the measurements.
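Among the detection techniques mentioned above, one widely used approach for passive sensors is the kurtosis test: thermal noise is Gaussian (kurtosis 3), while most man-made signals are not. A minimal sketch; the 0.3 threshold and block-wise processing are illustrative assumptions, not a specific instrument's algorithm:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample kurtosis minus 3; approximately 0 for Gaussian noise."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4) - 3.0)

def rfi_flag(samples, threshold=0.3):
    """Flag a block of pre-detection voltage samples as RFI-contaminated
    when its kurtosis departs significantly from the Gaussian value.
    Sinusoidal RFI pushes kurtosis below 3; pulsed RFI pushes it above."""
    return abs(excess_kurtosis(samples)) > threshold
```

In practice the threshold is set from the block length (the sampling variance of kurtosis scales as 24/N for Gaussian input) to balance false-alarm and missed-detection rates.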

Materials and Requirements:

Use of own laptop encouraged but not required. Electronic or printed slides will be delivered.

HD-6 - Random Forest Classification: Guidelines on Model Optimization, Variable and Training Selection

Koreen Millard, Sarah Banks, Amir Behnamian (Environment and Climate Change Canada)

Description:

This tutorial focuses on providing technical guidelines for those who are interested in using multi-sensor multi-temporal remote sensing data streams for land cover classification and monitoring with random forests in the following aspects:

  • Selecting training/validation data for the classes of interest
  • Effective predictor variable selection and variable reduction for a given classification problem
  • Optimization of model inputs in the random forest algorithm
  • Computationally efficient scripting/coding using freely available open-source languages, such as Python and R

Tutorial Learning Objectives:

To achieve effective land cover classification with the Random Forest algorithm, users need to address the challenges related to training/validation data and predictor variable selection. Considering the recent availability of free temporal data streams from different sensors (e.g., optical and/or SAR), users can possibly generate a large number of predictor variables for a given classification problem. However, recent studies have shown that only a few of the generated variables from complementary sensors with the correct temporal combination will be adequate to generate products with high user and producer accuracies. This is mainly because the separability of the classes in a given problem is dependent on the selection of the correct predictor variables. For example, previous studies have shown that the variable importance measures, such as the Gini index or the mean decrease in accuracy, might be biased in the presence of correlated variables, making variable selection a challenging task in operational settings.
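The importance-measure bias discussed above can be made concrete by comparing impurity-based (Gini) importance with permutation importance on synthetic data containing a near-duplicate predictor. A minimal scikit-learn sketch; the data and settings are illustrative, not the tutorial's material:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
informative = rng.standard_normal(n)
correlated = informative + 0.05 * rng.standard_normal(n)  # near-duplicate band
noise = rng.standard_normal(n)                            # irrelevant band
X = np.column_stack([informative, correlated, noise])
y = (informative > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

gini = rf.feature_importances_  # mean decrease in impurity (Gini)
perm = permutation_importance(rf, X_te, y_te, n_repeats=10,
                              random_state=0).importances_mean
```

Because the trees split interchangeably on the two correlated predictors, the Gini credit is shared between them, so neither appears as important as it is individually; this is exactly why variable reduction before training matters in operational settings.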

Materials and Requirements:

Some papers related to this tutorial will be delivered.

It is recommended that attendees have a laptop with a working version of Python (ver.2.7) or R. The following modules and packages are recommended to be installed on the laptops. Sample datasets and sample Python and R scripts that are going to be used by the speakers during the tutorial will be provided by the instructors using a secure FTP site.

  • Required Python modules: os, sys, math, arcpy or gdal, numpy, scipy, sklearn, matplotlib, pylab, collections
  • Required R packages: raster, randomForest, sp, rgdal

HD-7 - Analysis of SAR Amplitude and Phase Time series for land applications

Paolo Pasquali (sarmap s.a.)

Description:

The capability of acquiring data under any weather and illumination conditions, together with the repeat capability of any spaceborne system, makes satellite SAR systems an ideal source of information for monitoring Earth phenomena that evolve over time.

Several satellite missions and constellations are nowadays operationally and regularly acquiring time series of SAR images, which carry information in both the amplitude and the phase components of the radar signal. This practical tutorial will discuss different approaches for processing and analysing multi-temporal stacks of SAR images to extract information for different applications, including:

  • Advanced Differential Interferometry, including PS and SBAS, as well as SAR Tomography;
  • Multi-temporal signature analysis and feature extraction in different frequencies and polarisations;
  • Automatic classification of natural and man-made features;
  • Small land deformation and stability of infrastructures;
  • Precision farming, food security, forest monitoring and certification;
  • Disasters preparedness and response;
  • Environmental monitoring.
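For the PS technique listed above, a common first step is screening pixels for amplitude stability across the stack via the amplitude dispersion index D_A = std/mean. A minimal sketch; the 0.25 threshold is a conventional rule of thumb in the PS literature and the array layout is an assumption, not this tutorial's prescription:

```python
import numpy as np

def amplitude_dispersion(stack):
    """Per-pixel amplitude dispersion D_A = std/mean over a
    (n_images, rows, cols) amplitude stack. Low D_A indicates a
    temporally stable, point-like scatterer."""
    stack = np.asarray(stack, dtype=float)
    mean = stack.mean(axis=0)
    return stack.std(axis=0) / (mean + 1e-12)

def ps_candidates(stack, threshold=0.25):
    """Boolean mask of persistent-scatterer candidates."""
    return amplitude_dispersion(stack) < threshold
```

The candidates selected this way then feed the interferometric phase analysis, where the actual deformation estimation takes place.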

The training will consist of theoretical sessions, focusing on the analysis and comparison of different methods for the mentioned techniques, each followed by a practical session to apply the introduced concepts to sample data with adequate software tools, building a link between methods and real applications.

The practical exercises will exploit stacks of images obtained from the PALSAR-2, Sentinel-1 and other spaceborne missions, and the ENVI and SARscape software packages.

Tutorial Learning Objectives:

Hands-on experience of the wealth of information that can be extracted from SAR data for several operational applications once the time dimension of the acquisitions is fully exploited in the analysis.

Materials and Requirements:

Each participant will receive:

  • Tutorial presentation material;
  • Sample satellite SAR data;
  • ENVI + SARscape software demo licenses

Each participant shall bring a portable computer suitable for running the training exercises, running either the Windows or Linux OS, with sufficient disk space and processing capacity.

HD-8 - 3D/4D SAR Tomography: principles and applications

Fabrizio Lombardini (University of Pisa)

Description:

Thanks to its capability of providing direct physical measurements, synthetic aperture radar (SAR) Interferometry, which allows the generation of digital elevation models and the monitoring of displacements down to the order of mm/year, is one of the techniques that have most pushed the application of SAR to a wide range of scientific, institutional, and commercial areas, and it has provided significant returns to society in terms of improved risk monitoring. SAR images of the same scene suitable for interferometric processing are today available for most of the Earth, and their number is growing exponentially. Archives of spaceborne SAR sensors are filled with data collected with time and observation-angle diversity (multipass-multibaseline data). Moreover, current system trends in the SAR field involve clusters of cooperative formation-flying satellites capable of multiple simultaneous acquisitions (tandem SAR systems); airborne systems with single-pass multibaseline acquisition capability are also available, and unmanned aerial vehicles capable of differential monitoring of rapid phenomena are being experimented with.

In parallel, processing techniques have been developed as evolutions of the powerful SAR Interferometry, aimed at fully exploiting the information in such huge amounts of multipass-multibaseline data to produce new and/or more accurate measuring and information-extraction functionalities. The focus of this tutorial is on processing methods that coherently combine multiple SAR images at the complex (phase and amplitude) data level and, unlike phase-only Interferometry, allow improved or extended imaging and differential monitoring capabilities in terms of accuracy and unambiguous interpretation of the measurements.

The tutorial will cover in particular the interrelated techniques that in recent years have shaped an emerging branch of SAR interferometric remote sensing, Tomographic SAR Imaging and Information Extraction. This branch is playing an important role in the development of the next generation of SAR products and will enhance the application spectrum of SAR systems in Earth observation, in particular for the analysis and monitoring of complex scenarios such as urban and critical infrastructure, forests, and volumetric scenes in general.

After briefly recalling the basic concept of SAR Interferometry, multibaseline/multipass Tomographic SAR techniques will be framed, presented, and discussed with respect to their specific applications. These techniques are: 1) Multibaseline 3D Tomography, which provides elevation separation of layover scatterers, locating the different scatterers that interfere in the same pixel in the complex surface geometries of man-made structures and cause signal garbling in high-frequency SARs; it also provides full 3D imaging of volumetric scatterers, profiling the scattering distribution along the elevation direction for the unambiguous extraction of physical and geometrical parameters in vertically stratified geophysical structures sensed by low-frequency SARs; 2) Multipass 4D (3D+Time) and higher-order Differential Tomography of multiple layover scatterers with slow deformation motions, a more recent and very promising multidimensional imaging mode that bridges Differential Interferometry and Multibaseline Tomography.

Basic concepts, signal models, and the most widely used processing techniques for 3D/4D Tomographic SAR Imaging will be described within the array beamforming, i.e. spatial spectral estimation, framework, covering both Fourier-based and super-resolution (adaptive and model-based) methods. A number of experimental results obtained with real data (multibaseline single-pass and multipass airborne, and multipass spaceborne, in X-, C-, L-, and P-band: AER-II, E-SAR, ERS-1/2, COSMO-SkyMed, TerraSAR-X) over infrastructure, urban, and forest areas will be presented to show current achievements in real cases and the important application potential of these techniques. Recent trends in the area will finally be discussed, including hints at compressive-sensing Tomography and at concepts of higher-order ("5D") Tomography robust to temporal decorrelation, as well as Differential Tomography of non-uniform deformation motions.
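The Fourier-based beamforming mentioned above can be sketched for a single pixel: given complex values acquired over M perpendicular baselines, matched-filter steering over candidate elevations yields a backscattered power profile. This is an illustrative simplification (monostatic two-way phase, flat geometry with the incidence-angle factor absorbed into the wavenumber); function and variable names are assumptions:

```python
import numpy as np

def tomo_beamforming(y, baselines, wavelength, slant_range, heights):
    """Fourier (matched-filter) beamforming for multibaseline SAR
    tomography: estimate the power profile along elevation z for one
    pixel from its complex values y[m] at perpendicular baselines b[m].
    Normalized so that a single unit scatterer yields a peak of 1."""
    y = np.asarray(y, dtype=complex)
    b = np.asarray(baselines, dtype=float)
    # elevation wavenumber per baseline (two-way phase, simplified geometry)
    kz = 4.0 * np.pi * b / (wavelength * slant_range)
    # steering matrix: one column per candidate elevation
    A = np.exp(1j * np.outer(kz, heights))
    return np.abs(A.conj().T @ y) ** 2 / len(y) ** 2
```

The elevation resolution is set by the total baseline span (Rayleigh limit 2*pi/kz_span); super-resolution methods such as Capon or MUSIC replace the matched filter with adaptive or model-based estimators on the same data model.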

Tutorial Learning Objectives:

The objective of the tutorial, which follows a series of ten tutorials successfully presented over the last decade, mostly at IEEE IGARSS, is to give attendees a broad overview of the emerged area of synthetic aperture radar Tomography. Various aspects will be tackled, spanning motivations, physical principles, data models, basic processing algorithms and performance-limiting factors, concepts of advanced algorithms, real airborne and spaceborne data examples of both urban and forest applications, and hints at recent trends in the area. The tutorial thus intends to allow attendees to easily enter this nowadays rather complex and large technical area, to begin related studies, orient their research activities, or assess the interest and potential of possible industrial/agency activities.

Materials and Requirements:

Handout of about 70 viewgraphs, with concepts, models, processing algorithms, real data results, and list of references.