Giuseppe Parrella, Konstantinos Papathanassiou and Matteo Pardini (German Aerospace Center, DLR)
James L Garrison (Purdue University), Estel Cardellach (Institute of Space Sciences, ICE-CSIC, IEEC), Adriano Camps (Universitat Politecnica de Catalunya -BarcelonaTech, UPC)
Ronny Hänsch (Technische Universität Berlin), Yuliya Tarabalka (LuxCarta Technology, France), Devis Tuia (Wageningen University and Research), Bertrand Le Saux (ONERA)
Mihai Datcu (DLR), Feng Xu (Fudan University), Akira Hirose (The University of Tokyo)
Rémi CRESSON (IRSTEA), Kenji OSE (UMR TETIS)
Ramona Pelich, Marco Chini (Luxembourg Institute of Science and Technology), Wataru Takeuchi (University of Tokyo), Young-Joo Kwak (NILIM, Ministry of Land, Infrastructure, Transport and Tourism Japan), Vitaliy Yurchenko (iGeo AS)
FD-7 - Radiometric Correction of Sentinel-2 images using pseudo-invariant areas [Canceled]
Huaguo Huang (Beijing Forestry University)
Mauro Dalla Mura (GIPSA-lab Grenoble Institute of Technology), Andrea Garzelli (University of Siena), Gemine Vivone (University of Salerno)
HD-3 - UAV Hyperspectral Remote Sensing [Canceled]
Motoyuki Sato (Tohoku University)
Paolo de Matthaeis (NASA Goddard Space Flight Center, USA), Yan Soldo (NASA Goddard Space Flight Center, USA), Mingliang Tao (Northwestern Polytechnical University, China)
Koreen Millard, Sarah Banks, Amir Behnamian (Environment and Climate Change Canada)
Paolo Pasquali (sarmap s.a.)
Fabrizio Lombardini (University of Pisa)
Giuseppe Parrella, Konstantinos Papathanassiou and Matteo Pardini (German Aerospace Center, DLR)
Description:
This one-day tutorial aims to provide the basic knowledge and understanding of three state-of-the-art Synthetic Aperture Radar (SAR) remote sensing techniques: SAR Polarimetry (Pol-SAR), Polarimetric SAR Interferometry (Pol-InSAR), and Polarimetric SAR Tomography (Pol-TomoSAR). In particular, the fact that future spaceborne SAR configurations will be able to perform polarimetric, interferometric and tomographic measurements makes an understanding of these techniques and their synergies essential.
The three techniques are presented and discussed from their basic principles up to the definition and generation of information products. A common framework is used to bring out their interconnections and to develop the links between them. Emphasis is given to the information content of each dataset and its dependency on the acquisition configuration. Modelling and inversion approaches, signal processing techniques and applications in forestry, agriculture and the cryosphere are addressed for each of the three techniques in the context of future spaceborne missions. Emphasis is placed on highlighting the unique characteristics and fundamental limitations of each technique in order to work out their complementarities. For demonstration and validation, data and results from actual spaceborne and airborne campaigns and experiments at different frequencies are used.
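As a flavour of the kind of processing covered in the Pol-SAR part, the following minimal Python sketch computes the classical Pauli decomposition of a quad-pol scattering matrix on synthetic data (reciprocity assumed, so HV = VH); it is an illustration only, not tutorial material.

```python
# Minimal sketch: Pauli decomposition of quad-pol data (synthetic channels).
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
# Synthetic single-look complex channels (assumption: reciprocity, HV = VH)
s_hh = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
s_vv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
s_hv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Pauli components: odd-bounce (surface), even-bounce (dihedral), volume (cross-pol)
k1 = (s_hh + s_vv) / np.sqrt(2)
k2 = (s_hh - s_vv) / np.sqrt(2)
k3 = np.sqrt(2) * s_hv

# Pauli RGB composite (intensities in dB) commonly used for visualisation
rgb = np.stack([20 * np.log10(np.abs(k) + 1e-12) for k in (k2, k3, k1)], axis=-1)
print(rgb.shape)  # (64, 64, 3)
```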
Tutorial Learning Objectives:
The tutorial is fully interdisciplinary and aims to stimulate both novice and experienced remote sensing users to research and develop scientific concepts and applications relying on these techniques.
James L Garrison (Purdue University), Estel Cardellach (Institute of Space Sciences, ICE-CSIC, IEEC), Adriano Camps (Universitat Politecnica de Catalunya -BarcelonaTech, UPC)
Description:
Although originally designed for navigation, signals from Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo and COMPASS, exhibit strong reflections from the Earth's land and ocean surfaces. Rough-surface scattering modifies the properties of the reflected signals. Several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture.
Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to make measurements of sea state, sea level, soil moisture, ice and snow as well as inter-comparisons with microwave radiometry.
GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments. The power and mass of GNSS-R instruments can be made low enough to enable deployment on small satellites, balloons and UAVs. Early research sets of satellite-based GNSS-R data were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014) and the 8-satellite CYGNSS constellation (2016). Future mission proposals, such as GEROS-ISS (GNSS REflectometry, Radio Occultation and Scatterometry on the International Space Station) and the GNSS Transpolar Earth Reflectometry exploriNg System (G-TERN), will demonstrate new GNSS-R measurements of sea surface altimetry and sea ice cover, respectively. The availability of spaceborne GNSS-R data, and the development of new applications from these measurements, are expected to increase significantly following the launch of these new satellite missions and of other smaller ones in the coming three years (ESA's PRETTY and FSSCat; China's FY-3E; Taiwan's FS-7R).
Recently, methods of GNSS-R have been applied to satellite transmissions in other frequencies, ranging from P-band (230 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp.
This all-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms.
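As an illustration of the basic signal processing behind GNSS-R, the sketch below builds a toy delay-Doppler map by correlating a synthetic reflected signal with a local code replica. The sampling rate, the pseudo-random code and all signal parameters are assumptions chosen for readability, not values from any instrument described above.

```python
# Toy delay-Doppler correlation of a synthetic GNSS-like reflected signal.
import numpy as np

rng = np.random.default_rng(1)
fs = 4.092e6                      # sampling rate (assumption)
code_len, chip_rate = 1023, 1.023e6
n = int(fs * 1e-3)                # one code period of samples

prn = rng.choice([-1.0, 1.0], size=code_len)           # pseudo code (not a real C/A code)
idx = (np.arange(n) * chip_rate / fs).astype(int) % code_len
replica = prn[idx]

true_delay, true_doppler, amp = 500, 1250.0, 2.0        # samples, Hz, linear amplitude
t = np.arange(n) / fs
rx = amp * np.roll(replica, true_delay) * np.exp(2j * np.pi * true_doppler * t)
rx += rng.standard_normal(n) + 1j * rng.standard_normal(n)

delays = np.arange(480, 520)
dopplers = np.arange(0, 2500, 250.0)
ddm = np.empty((len(dopplers), len(delays)))
for i, fd in enumerate(dopplers):
    wiped = rx * np.exp(-2j * np.pi * fd * t)            # Doppler wipe-off
    for j, d in enumerate(delays):
        ddm[i, j] = np.abs(np.vdot(np.roll(replica, d), wiped))  # coherent correlation

i, j = np.unravel_index(ddm.argmax(), ddm.shape)
print("peak at delay", delays[j], "samples, Doppler", dopplers[i], "Hz")
```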
Tutorial Learning Objectives:
After attending this tutorial, participants should have an understanding of:
Materials and Requirements:
Copy of presentation.
Ronny Hänsch (Technische Universität Berlin), Yuliya Tarabalka (LuxCarta Technology, France), Devis Tuia (Wageningen University and Research), Bertrand Le Saux (ONERA)
Description:
Despite the wide and often successful application of machine learning techniques to analyse and interpret remotely sensed data, the complexity, special requirements, and selective applicability of these methods often prevent their use to full potential. The gap between sensor- and application-specific expertise on the one hand, and deep insight into and understanding of existing machine learning methods on the other, often leads to suboptimal results, unnecessary or even harmful optimizations, and biased evaluations. The aim of this tutorial is twofold. First, to spread good practices for data preparation: inform about common mistakes and how to avoid them (e.g. dataset bias, non-i.i.d. samples), provide recommendations on proper preprocessing and initialization (e.g. data normalization), and point to available sources of data and benchmarks. Second, to present efficient and advanced machine learning tools: give an overview of standard machine learning techniques and when to use them (e.g. standard regression and classification techniques, clustering, etc.), and introduce the most modern methods (such as random fields, ensemble learning, and deep learning).
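One of the data-preparation pitfalls mentioned above, leakage of test statistics into preprocessing, can be illustrated with a short hedged sketch; the feature set and labels are synthetic and the classifier choice is arbitrary.

```python
# Minimal sketch of a "good practice": fit normalisation on the training split
# only, so test statistics never leak into the model. Synthetic data throughout.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))                   # synthetic "pixel features"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic labels

# In remote sensing the split should also be spatially disjoint (non-i.i.d. samples).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)              # statistics from the training data only
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```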
Tutorial Learning Objectives:
Materials and Requirements:
Slides will be made available online before the tutorial. A website will be set up for downloads and additional resources.
The tutorial will collaborate with the Code Workshop where specific methods can be tested hands on.
Mihai Datcu (DLR), Feng Xu (Fudan University), Akira Hirose (The University of Tokyo)
Description:
In the big-data era of Earth observation, deep learning and other data mining technologies have become critical to successful end applications. Over the past several years, interest in deep learning techniques applied to remote sensing has grown exponentially, covering not only hyperspectral imagery but also synthetic aperture radar (SAR) imagery.
This tutorial has the following three parts.
The first part introduces the basic principles of machine learning and the evolution towards deep learning paradigms. It presents the methods of stochastic variational and Bayesian inference, focusing on the methods and algorithms of deep learning and generative adversarial networks. Since data sets are an organic part of the learning process, EO dataset biases pose new challenges. The tutorial addresses open questions on relative data bias and cross-dataset generalization for very specific EO cases such as multispectral and SAR observations with a large variability of imaging parameters and semantic content.
The second part introduces the theory of deep neural networks and the practice of deep learning-based remote sensing applications. It covers the major types of deep neural networks, the backpropagation algorithm, programming toolboxes, and several examples of deep learning-based remote sensing image processing. The last part focuses on the treatment of phase and polarization in SAR data and the corresponding applications. Since SAR is a coherent observation, its data have special properties that enable specific feature extraction and discovery. This part deals with deep learning in the complex-amplitude and polarization domains as well as the so-called data structuration required for such multimodal processing.
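As a toy illustration of the complex-amplitude processing addressed in the last part, the following numpy sketch implements a single complex-valued dense layer with a modReLU-style activation; the layer size, activation and data are assumptions made for illustration, not the networks presented in the tutorial.

```python
# Minimal complex-valued layer operating directly on complex-amplitude inputs.
import numpy as np

rng = np.random.default_rng(0)

def complex_dense(x, W, b):
    """Complex affine map followed by a modReLU-style activation (assumption)."""
    z = x @ W + b
    mag, phase = np.abs(z), np.angle(z)
    return np.maximum(mag - 0.1, 0.0) * np.exp(1j * phase)  # shrink magnitude, keep phase

x = rng.standard_normal(16) + 1j * rng.standard_normal(16)   # one complex input vector (e.g. SLC samples)
W = (rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))) / np.sqrt(16)
b = np.zeros(8, dtype=complex)
print(complex_dense(x, W, b).shape)  # (8,)
```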
Tutorial Learning Objectives:
The first part of the tutorial is expected to bring a joint understanding of "classical" machine learning and of generative adversarial networks, indicating integrated optimal solutions for complex EO applications, including the choice or generation of labeled data sets and the influence of biases on validation and benchmarking.
Through the second part of the tutorial, participants are expected to understand the basic theory of deep neural networks, including convolutional neural networks and the backpropagation algorithm, and to learn the relevant skills, including network design, hyper-parameter tuning, training algorithms, dataset preparation, toolbox usage, and result analysis and diagnosis.
The last part is dedicated in particular to complex-valued and quaternion neural networks for dealing with the coherent information that is important in InSAR and PolSAR systems.
Materials and Requirements:
Slides covering: machine learning, generative adversarial networks, training strategies and dataset biases; deep learning basic theory; deep learning practice with TensorFlow/MatConvNet examples using multispectral and SAR/InSAR data and code.
Rémi CRESSON (IRSTEA), Kenji OSE (UMR TETIS)
Description:
This tutorial explains how to use deep learning techniques on real-world remote sensing images with user-oriented, open-source software (no coding skills required). After a quick summary of deep learning techniques applied to image and signal processing, the tutorial presents how to sample images and ground truth, create and train deep networks, and use them to generate land cover maps.
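For readers who do want to look under the hood, the following numpy sketch illustrates the patch-sampling step in its simplest form on a synthetic image and ground truth; it is not the Orfeo ToolBox/OTBTF workflow itself, which requires no coding.

```python
# Illustrative patch sampling: extract image patches centred on labelled pixels.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 256, 256)).astype(np.float32)   # bands x rows x cols (synthetic)
labels = np.zeros((256, 256), dtype=np.int32)
labels[100:120, 50:80] = 1                             # synthetic ground truth polygon

half = 8                                               # 16x16 patches
rows, cols = np.nonzero(labels)
keep = (rows >= half) & (rows < 256 - half) & (cols >= half) & (cols < 256 - half)
rows, cols = rows[keep], cols[keep]

patches = np.stack([image[:, r - half:r + half, c - half:c + half]
                    for r, c in zip(rows, cols)])
print(patches.shape, labels[rows, cols].shape)         # (N, 4, 16, 16) (N,)
```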
Tutorial Learning Objectives:
Summarize the deep learning background, challenges and key concepts for enabling deep learning in remote sensing image processing
Materials and Requirements:
The software involved in this tutorial is open-source. We will use the Orfeo ToolBox with the OTBTF remote module to process remote sensing images. QGIS will be used to visualize geospatial data (alternatively, participants can use their favorite GIS software).
A virtual machine (running on VirtualBox) will be provided to all participants, including software, data, and exercise solutions. A docker image will also be provided.
Participants who want to follow the exercises should download and install the virtual environment before the tutorial. These files will be available for download soon, as well as the data involved in the exercises.
Ramona Pelich, Marco Chini (Luxembourg Institute of Science and Technology), Wataru Takeuchi (University of Tokyo), Young-Joo Kwak (NILIM, Ministry of Land, Infrastructure, Transport and Tourism Japan), Vitaliy Yurchenko (iGeo AS)
Description:
In recent years, natural disasters, i.e., hydro-geo-meteorological hazards and risks, have been frequently experienced by both developing and developed countries. For instance, in July 2018, Japan was affected by a typhoon associated with torrential rainfall that triggered cascading and interacting hazards such as catastrophic landslides and flash floods in south-west Japan. Timely assessment of the utility and limitations of Earth Observation (EO) data when a disaster occurs is therefore essential for early-stage emergency response. In this framework, interpreting and visualizing EO data, along with proposing algorithms that can systematically extract meaningful information in a timely manner, are necessary requirements for EO-based disaster and hazard monitoring. With this aim, the tutorial provides theoretical and practical knowledge for mapping hazards and managing natural disasters using advanced satellite EO data, including both Synthetic Aperture Radar (SAR) and optical data. It gives a comprehensive understanding of the algorithms and methods applied for mapping changes by means of EO data available immediately after a disaster occurs. Several lectures focused on floods and landslides, together with a hands-on session, will give all participants the opportunity to learn more about the practical EO tools available for rapid-response information as well as about advanced EO-based algorithms.
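As a hedged illustration of the kind of change-detection logic discussed in the flood lectures, the sketch below applies a log-ratio between synthetic pre- and post-event SAR intensities, simple spatial averaging against speckle, and a fixed threshold; the threshold value and speckle model are assumptions, not the algorithms taught in the tutorial.

```python
# Toy SAR flood mapping: log-ratio change detection plus a fixed threshold.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
pre = rng.gamma(shape=4.0, scale=0.05, size=(200, 200))      # pre-event backscatter (with texture)
post = pre.copy()
post[80:140, 60:160] *= 0.1                                   # flooded area: strong backscatter drop
post *= rng.gamma(shape=4.0, scale=0.25, size=post.shape)     # multiplicative speckle (mean 1)

log_ratio = 10 * np.log10((post + 1e-6) / (pre + 1e-6))       # change image in dB
log_ratio = uniform_filter(log_ratio, size=5)                 # crude multilooking against speckle
flood_mask = log_ratio < -3.0                                 # fixed threshold (assumed value)
print("flooded fraction:", round(float(flood_mask.mean()), 3))
```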
Tutorial Learning Objectives:
The aim of this tutorial is to provide a series of substantial and balanced presentations on the use of EO data in disaster and hazard monitoring. A comprehensive introduction with several illustrative examples will show the use of both space-borne SAR and optical sensors for mapping various types of damage caused by different disasters. We will then focus on floods and landslides, two types of disasters with important consequences at a global scale. A detailed presentation on the use of space-borne SAR data for flood monitoring will cover both theoretical aspects and experimental results, with several illustrations based on Sentinel-1A/B and ALOS/ALOS-2 SAR images. The next lecture will present several methodologies employed for optical flood monitoring along with illustrative results. The landslide lecture will first present landslide types and then detail EO-based landslide monitoring methodologies along with experimental results including on-site data. In addition to the detailed lectures on floods and landslides, several EO-based platforms that allow rapid disaster mapping will be presented.
Materials and Requirements:
The use of a PC is optional and the participants do not need to install any software in advance since we plan to rely on online demos/platforms and illustrative slides for the hands-on part of the tutorial. However, if the participants would like to practice the demos in parallel with the presentation, they just require a PC with a Wi-Fi connection.
Xavier Pons (Universitat Autònoma de Barcelona), Jordi Cristobal (University of Alaska Fairbanks), Lluís Pesquer (CREAF)
Description:
At present, the GRUMETS research group (http://www.grumets.uab.cat/index_eng.htm) is carrying out a project to generate an extensive European (and larger) dataset of surface reflectance values over pseudo-invariant areas (PIA). These areas will allow comparing and calibrating images taken by different sensors, such as Sentinel-2A and Sentinel-2B, as well as processing highly coherent time series of remote sensing data from different sensors. With this aim, we propose a course focused on the radiometric correction (atmospheric and topographic) of Sentinel-2 images using pseudo-invariant areas. Concretely, the course will introduce the foundations of radiometric correction in the solar spectrum of remote sensing images, in particular Sentinel-2A and Sentinel-2B images. Subsequently, it will focus on explaining the automatic methodology proposed by Pons et al. (2014) and applied in more recent research (Padró et al. 2018). The course will deal with theoretical and practical issues, and students will be able to replicate the proposed practical exercises by themselves. Finally, the quality of the results and the performance and possibilities of the automatic process will be evaluated.
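A minimal sketch of the underlying idea, assuming a simple per-band linear relation between image digital numbers and surface reflectance over PIA pixels (synthetic values, not the Pons et al. 2014 method itself):

```python
# Fit gain/offset over pseudo-invariant areas, then normalise the whole band.
import numpy as np

rng = np.random.default_rng(7)
ref_reflectance = rng.uniform(0.02, 0.45, size=500)           # reference values over PIAs
gain_true, offset_true = 1.8e-4, 0.015                        # unknown sensor/atmosphere effects
image_dn = (ref_reflectance - offset_true) / gain_true + rng.normal(0, 30, size=500)

gain, offset = np.polyfit(image_dn, ref_reflectance, deg=1)   # least-squares fit over PIA pixels
band = rng.uniform(0, 3000, size=(100, 100))                  # synthetic full band (DN)
band_reflectance = gain * band + offset                       # corrected reflectance band
print(f"estimated gain {gain:.3e}, offset {offset:.4f}")
```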
Tutorial Learning Objectives:
Materials and Requirements:
Huaguo Huang (Beijing Forestry University)
Description:
Many radiative transfer models have been successfully developed for the optical, thermal, lidar and microwave regions. However, they have generally been developed in isolation, and a visible gap has existed for decades between optical and microwave modellers. Filling this gap will lead to a significantly better understanding of data fusion. The aim of this tutorial is twofold. First, I will introduce the gaps and bridges in concepts, equations and implementations between optical and microwave models. Second, I will introduce how to use the 3D model RAPID to simulate optical BRDF, directional temperature, lidar waveforms, point clouds and microwave backscattering.
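As a toy example of one concept shared by optical and microwave canopy models, the sketch below evaluates Beer-Lambert style attenuation through a vegetation layer; all parameter values are assumptions and the snippet is unrelated to the RAPID code itself.

```python
# Beer-Lambert attenuation: gap fraction (optical) and canopy transmissivity (microwave).
import numpy as np

lai = 3.0                        # leaf area index (assumed)
G = 0.5                          # projection function for a spherical leaf angle distribution
theta = np.deg2rad(30.0)         # view/incidence angle

gap_fraction = np.exp(-G * lai / np.cos(theta))               # optical gap probability
kappa_e = 0.4                    # microwave extinction coefficient (Np/m, assumed)
height = 10.0                    # canopy height (m, assumed)
two_way_loss = np.exp(-2 * kappa_e * height / np.cos(theta))  # two-way canopy attenuation

print(f"optical gap fraction: {gap_fraction:.3f}, two-way microwave transmissivity: {two_way_loss:.4f}")
```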
Tutorial Learning Objectives:
Materials and Requirements:
Mauro Dalla Mura (GIPSA-lab Grenoble Institute of Technology), Andrea Garzelli (University of Siena), Gemine Vivone (University of Salerno)
Description:
Pansharpening aims at fusing a multispectral and a panchromatic image in order to obtain an image with the spectral resolution of the former and the spatial resolution of the latter. Pansharpening constitutes an important preliminary step for several remote sensing tasks (for instance, change detection and visual image analysis) and is crucial in applications such as disaster management and environmental monitoring. In recent decades, many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, of available image data sets for benchmarking, and of standardized implementations of the algorithms makes a thorough evaluation and comparison of the different pansharpening techniques difficult to achieve. The recent paper in [1], co-authored by the proponents of this tutorial, attempts to fill this gap by providing a critical description and extensive comparisons of some of the state-of-the-art pansharpening methods. This tutorial will be mainly based on this work and on the related pansharpening MATLAB toolbox [2].
The tutorial will be organized in five sections. First, an introduction to the pansharpening problem will be provided, together with a classification of pansharpening techniques. The second section will be devoted to the description of the main algorithms, with details related to their implementation. Algorithms belonging to the two main pansharpening classes (i.e., component substitution and multi-resolution analysis) will be considered together with some more recent instances of the so-called "third generation" (e.g., compressive sensing). Afterwards, quality assessment will be presented: the two main protocols for assessing pansharpening products will be introduced and compared, pointing out their pros and cons. Then, a critical comparison among the described pansharpening approaches will be performed, exploiting the quality-assessment practices introduced earlier in the tutorial. The reproducibility of the presented experimental analysis will be a key aspect of this tutorial. The experiments will be carried out using an updated version of the MATLAB toolbox in [2], which will be distributed to the participants. Finally, hints about the extension of the classical pansharpening problem to the fusion of hyperspectral data, and new perspectives on this very challenging task, will be presented to the audience. The targets of this tutorial are scientists, remote sensing practitioners and students who either want to approach the pansharpening problem or want to improve their knowledge of this research field from both a theoretical and a practical point of view. A basic background in image and signal processing will be useful for fruitfully attending the tutorial.
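For orientation, the following numpy sketch shows the component-substitution idea in its simplest intensity-substitution form; it is a didactic stand-in, not one of the toolbox algorithms compared in [1].

```python
# Simplified component substitution: inject PAN detail into the upsampled MS intensity.
import numpy as np

rng = np.random.default_rng(5)
ms_up = rng.random((4, 128, 128))              # multispectral bands upsampled to the PAN grid
pan = rng.random((128, 128))                   # panchromatic image

intensity = ms_up.mean(axis=0)                 # crude intensity component (equal band weights)
pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()  # histogram matching
fused = ms_up + (pan_matched - intensity)[None, :, :]   # additive detail injection
print(fused.shape)                             # (4, 128, 128)
```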
References
[1] G. Vivone, L. Alparone, J. Chanussot, M. D. Mura, A. Garzelli, G. A. Licciardi, R. Restaino, and L. Wald, “A critical comparison among pansharpening algorithms”, IEEE Trans. Geosci. Remote Sens., vol. 53, no. 5, pp. 2565–2586, May 2015.
[2] “Open Remote Sensing”, https://openremotesensing.net/knowledgebase/a-critical-comparison-among-pansharpening-algorithms/ (accessed on 25 September 2018).
Tutorial Learning Objectives:
The main goals of this tutorial can be summarized as follows:
Materials and Requirements:
A MATLAB toolbox will be distributed to the audience to aid the reproducibility of the experimental analysis performed during this tutorial. Participants should therefore bring their own laptop with MATLAB (R2017b or later) installed, including the Image Processing, Optimization, Signal Processing, and Statistics and Machine Learning Toolboxes.
Motoyuki Sato (Tohoku University)
Description:
Near-range radar and Ground Penetrating Radar (GPR) are specialized forms of radar that have been used for subsurface sensing and other imaging. GPR has widely been used for the detection of buried utilities such as pipes and cables. The technique is also highly sensitive to soil water content, which makes GPR very suitable for environmental studies as well. Recently, Ultra Wide Band (UWB) technology has attracted interest; however, its frequency range has long been used in GPR, and the two share many aspects. A typical application of UWB radar is the detection of objects in air, yet most of the fundamental signal acquisition and signal processing schemes are almost the same as those used in GPR. In this tutorial course, we will introduce the fundamental principles of GPR and UWB radar technologies to potential users, students and researchers. We will then discuss more advanced and recent topics related to GPR, including MIMO radar. We will also cover GB-SAR (Ground-Based Synthetic Aperture Radar).
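As a small taste of the signal processing covered in the course, the sketch below performs background removal by mean-trace subtraction on a synthetic GPR B-scan; the geometry and amplitudes are arbitrary assumptions.

```python
# Background removal on a synthetic B-scan: subtract the mean trace so that the
# flat ground-surface reflection is suppressed and the target hyperbola stands out.
import numpy as np

rng = np.random.default_rng(2)
n_traces, n_samples = 200, 512
t = np.arange(n_samples)

bscan = np.tile(np.exp(-((t - 60) / 8.0) ** 2), (n_traces, 1))   # constant surface reflection
x = np.arange(n_traces)
hyperbola = 300 + 0.02 * (x - 100) ** 2                          # simple hyperbolic target response
for i in range(n_traces):
    k = int(min(hyperbola[i], n_samples - 1))
    bscan[i, k] += 0.3                                           # buried-object echo
bscan += 0.02 * rng.standard_normal(bscan.shape)                 # measurement noise

background = bscan.mean(axis=0, keepdims=True)                   # mean trace = horizontal banding
bscan_clean = bscan - background
print(bscan_clean.shape)                                         # (200, 512)
```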
Tutorial Learning Objectives:
The course will cover electromagnetic wave propagation and reflection in materials, antennas for GPR, GPR systems, GPR survey design, signal processing, and image reconstruction. We will then introduce examples of applications of GPR and UWB radars, including our recent activities in humanitarian demining with ALIS. GPR technology is closely related to electromagnetic wave theory and signal processing; therefore, this course is also very useful for researchers and students who are familiar with electrical communications and signal processing theory.
Materials and Requirements:
A PPT handout will be provided. Signal processing demonstrations with sample software on a PC are planned.
Paolo de Matthaeis (NASA Goddard Space Flight Center, USA), Yan Soldo (NASA Goddard Space Flight Center, USA), Mingliang Tao (Northwestern Polytechnical University, China)
Description:
The use of the electromagnetic spectrum for different applications, including for example telecommunications and radiolocation, is continually increasing. As a result, microwave remote sensing instruments are experiencing Radio Frequency Interference (RFI) more and more often. This happens even in frequency ranges allocated exclusively to passive services, such as microwave radiometry, due to illegal transmitters and out-of-band emissions from systems operating in adjacent bands. The presence of RFI is always detrimental to scientific missions. When detected, RFI causes information loss and reduces measurement accuracy; when not detected, it produces errors in measurements that are not recognized as such, therefore potentially leading to wrong conclusions. In some cases, the presence of RFI can entirely jeopardize the objectives of a mission. RFI represents a significant threat to microwave remote sensing sensors and requires proper attention in all future missions.
This tutorial will provide an overall review of spectrum management definitions and processes, with particular attention to those relevant to microwave remote sensing, including frequency allocations and the enforcement of the Radio Regulations. It will then shift its focus to RFI affecting microwave sensors and illustrate the techniques employed to detect and reduce the impact of RFI in both passive and active instruments. This tutorial can be very useful for anyone interested in learning about RFI, from recently graduated engineers seeking career development in the remote sensing community to mission managers and scientists who are looking for possible ways to deal with the presence of RFI.
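As an illustration of one widely used detection principle for radiometers, the kurtosis test, the hedged sketch below flags synthetic pulsed interference added to Gaussian thermal noise; the decision threshold is an assumption, not a mission setting.

```python
# Kurtosis-based RFI flagging: thermal noise is Gaussian (kurtosis 3), pulsed RFI is not.
import numpy as np

rng = np.random.default_rng(11)

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

noise = rng.standard_normal(100_000)                       # RFI-free radiometer samples
rfi = noise.copy()
rfi[::1000] += 20.0 * np.sign(rng.standard_normal(100))    # sparse pulsed interference

for name, x in [("clean", noise), ("with RFI", rfi)]:
    k = kurtosis(x)
    print(f"{name}: kurtosis = {k:.2f}", "-> flagged" if abs(k - 3.0) > 0.1 else "-> passes")
```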
Tutorial Learning Objectives:
Attendees will become familiar with the RFI issues encountered by remote sensing instruments. They will learn basic spectrum management processes and principles, such as frequency allocations and band-sharing criteria to avoid interference between services, with particular focus on remote sensing. They will be given an overview of the various types of RFI affecting passive and active microwave sensors and be introduced to the most important techniques used to detect interference and reduce its impact on the measurements.
Materials and Requirements:
General knowledge of microwave remote sensing principles and basic engineering background.
Koreen Millard, Sarah Banks, Amir Behnamian (Environment and Climate Change Canada)
Description:
This tutorial focuses on providing technical guidelines for those who are interested in using multi-sensor multi-temporal remote sensing data streams for land cover classification and monitoring with random forests in the following aspects:
Tutorial Learning Objectives:
To achieve effective land cover classification with the Random Forest algorithm, users need to address challenges related to training/validation data and predictor variable selection. Given the recent availability of free temporal data streams from different sensors (e.g., optical and/or SAR), users can generate a large number of predictor variables for a given classification problem. However, recent studies have shown that only a few of the generated variables, drawn from complementary sensors with the correct temporal combination, are needed to generate products with high user's and producer's accuracies. This is mainly because the separability of the classes in a given problem depends on the selection of the correct predictor variables. For example, previous studies have shown that variable importance measures, such as the Gini index or the mean decrease in accuracy, might be biased in the presence of correlated variables, making variable selection a challenging task in operational settings.
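The correlated-variable effect described above can be reproduced with a short sketch using scikit-learn on synthetic data: adding a near-duplicate of an informative predictor dilutes its importance score, which is one reason variable selection is difficult in practice.

```python
# Dilution of Gini importance when a correlated copy of a predictor is added.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
informative = rng.normal(size=n)                              # useful predictor
noise = rng.normal(size=(n, 1))                               # irrelevant predictor
y = (informative + 0.3 * rng.normal(size=n) > 0).astype(int)  # synthetic class labels

X_plain = np.column_stack([informative, noise])
X_corr = np.column_stack([informative, informative + 0.05 * rng.normal(size=n), noise])

for name, X in [("without duplicate", X_plain), ("with correlated duplicate", X_corr)]:
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(name, "-> Gini importances:", rf.feature_importances_.round(2))
```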
Materials and Requirements:
Some papers related to this tutorial will be delivered.
It is recommended that attendees have a laptop with a working version of Python (ver.2.7) or R. The following modules and packages are recommended to be installed on the laptops. Sample datasets and sample Python and R scripts that are going to be used by the speakers during the tutorial will be provided by the instructors using a secure FTP site.
Fabrizio Lombardini (University of Pisa)
Description:
Thanks to its capability of providing direct physical measurements, synthetic aperture radar (SAR) Interferometry, which allows the generation of digital elevation models and the monitoring of displacements down to the mm/year level, is one of the techniques that have most pushed the applications of SAR into a wide range of scientific, institutional and commercial areas, and it has provided significant returns to society in terms of improved risk monitoring. SAR images of the same scene suitable for interferometric processing are today available for most of the Earth, and their number is growing exponentially. Archives associated with spaceborne SAR sensors are filled with data collected with time and observation-angle diversity (multipass-multibaseline data); moreover, current system trends in the SAR field involve clusters of cooperative formation-flying satellites capable of multiple simultaneous acquisitions (tandem SAR systems), airborne systems with single-pass multibaseline acquisition capability are also available, and unmanned aerial vehicles capable of differential monitoring of rapid phenomena are being experimented with.
In parallel, processing techniques have been developed as evolutions of the powerful SAR Interferometry, aimed at fully exploiting the information lying in such huge amounts of multipass-multibaseline data to produce new and/or more accurate measuring and information extraction functionalities. The focus of this tutorial is on processing methods that, by coherently combining multiple SAR images at the complex (phase and amplitude) data level, differently from phase-only Interferometry, allow improved or extended imaging and differential monitoring capabilities in terms of accuracy and unambiguous interpretation of the measurements.
The tutorial will cover in particular interrelated techniques that have shaped in recent years an emerging branch of SAR interferometric remote sensing, Tomographic SAR Imaging and Information Extraction. This branch is playing an important role in the development of the next generation of SAR products and will enhance the application spectrum of SAR systems in Earth observation, in particular for the analysis and monitoring of complex scenarios such as urban areas, critical infrastructure and forests, or more generally volumetric scenes.
After briefly recalling the basic concepts of SAR Interferometry, multibaseline/multipass Tomographic SAR techniques will be framed, presented, and discussed with respect to their specific applications. These techniques are: 1) Multibaseline 3D Tomography, which furnishes the functionality of layover scatterer elevation separation, i.e. locating different scatterers that interfere in the same pixel in complex surface geometries of man-made structures and cause signal garbling in high-frequency SARs, and the functionality of full 3D imaging of volumetric scatterers, i.e. profiling the scattering distribution also along the elevation direction for unambiguous extraction of physical and geometrical parameters in geophysical structures with vertical stratification, sensed by low-frequency SARs; 2) Multipass 4D (3D+Time) and higher-order Differential Tomography of multiple layover scatterers with slow deformation motions, a more recent and very promising Multidimensional Imaging mode that crosses the bridge between Differential Interferometry and Multibaseline Tomography.
Basic concepts, signal models and the most widespread processing techniques for 3D/4D Tomographic SAR Imaging will be described in the array beamforming, i.e. spatial spectral estimation, framework, both Fourier-based and of the super-resolution kind (adaptive and model-based). A number of experimental results obtained with real data, multibaseline single-pass and multipass airborne, and multipass spaceborne, in X-, C-, L-, and P-band (AER-II, E-SAR, ERS-1/2, COSMO-SkyMed, TerraSAR-X), over infrastructure, urban, and forest areas, will be presented to show current achievements in real cases and the important application potential of these emerging techniques. Recent trends in the area will finally be discussed, including hints of compressive sensing Tomography, and of concepts of higher-order ("5D") Tomography robust to temporal decorrelation and of Differential Tomography of non-uniform deformation motions.
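As a minimal illustration of the Fourier beamforming framework mentioned above, the sketch below focuses a synthetic multibaseline pixel stack along elevation; the geometry, baselines and scatterer parameters are assumptions chosen for readability, not data from the campaigns listed.

```python
# Fourier beamforming view of SAR tomography on a synthetic multibaseline pixel stack.
import numpy as np

wavelength, r0, theta = 0.21, 5000.0, np.deg2rad(35.0)           # L-band-like geometry (assumed)
baselines = np.linspace(-60, 60, 17)                              # 17 perpendicular baselines (m)
kz = 4 * np.pi * baselines / (wavelength * r0 * np.sin(theta))    # elevation (vertical) wavenumbers

# Two point-like scatterers in layover within the same range-azimuth pixel
z_true, amp = np.array([0.0, 18.0]), np.array([1.0, 0.7])
y = (amp * np.exp(1j * np.outer(kz, z_true))).sum(axis=1)         # multibaseline data vector
rng = np.random.default_rng(0)
y += 0.05 * (rng.standard_normal(len(kz)) + 1j * rng.standard_normal(len(kz)))

z_axis = np.linspace(-10.0, 30.0, 401)
steering = np.exp(1j * np.outer(kz, z_axis))                      # candidate elevation responses
profile = np.abs(steering.conj().T @ y) / len(kz)                 # Fourier beamforming spectrum
print("strongest response at z =", round(float(z_axis[profile.argmax()]), 1), "m")
```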
Tutorial Learning Objectives:
The objective of the tutorial, which is the sequel to a series of 10 tutorials successfully presented over the last decade, mostly at IEEE IGARSS, is to give the attendees a broad overview of the emerging area of synthetic aperture radar Tomography. Various aspects will be tackled, spanning motivations, physical principles, data models, basic processing algorithms and performance-limiting factors, concepts of advanced algorithms, real airborne and spaceborne data examples of both urban and forest applications, and hints of recent trends in the area. The tutorial thus intends to allow the attendees to easily enter this nowadays rather complex and large technical area, to begin related studies, orient their research activities, or assess interest and potential for possible industrial/agency activities.
Materials and Requirements:
Handout of about 70 viewgraphs, with concepts, models, processing algorithms, real data results, and list of references.