Link to OceaniX calendar with all scheduled events and meetings: here

Webinars

In the context of the AI Chair OceaniX and the AI4OAC project, we organize a webinar on the third Wednesday of each month, starting in October 2020. When possible, a link to a video recording of the webinar is provided.

List of previous Webinars

Big Data, Big Computation, and Machine Learning in Numerical Weather Prediction


2021/02/17 (1:30 PM, Paris Time)

At RIKEN, we have been exploring a fusion of big data and big computation, now extended with AI techniques and machine learning (ML). Japan’s new flagship supercomputer “Fugaku” is designed to be efficient for both double-precision big simulations and reduced-precision machine learning applications, aiming to play a pivotal role in creating the super-smart “Society 5.0.” Our group at RIKEN has been pushing the limits of numerical weather prediction (NWP) with computations two orders of magnitude larger, using Japan’s previous flagship, the K computer. We achieved real-time, 30-second-refresh predictions of sudden downpours up to 30 minutes in advance by fully exploiting big data from a novel phased-array weather radar. Now, with the new Fugaku, we have been exploring ideas for fusing Big Data Assimilation and AI. The data produced by NWP models are becoming larger, and moving them to other computers for ML may not be feasible. Having a next-generation computer like Fugaku, suited to both big NWP computation and ML, may bring a breakthrough toward a new methodology fusing data-driven (inductive) and process-driven (deductive) approaches in meteorology. This presentation will introduce the most recent results from data assimilation and NWP experiments, followed by perspectives on future developments and challenges of DA-AI fusion.

Takemasa Miyoshi (RIKEN)

journals.ametsoc.org

Webinar video

Slides

Combining data assimilation and machine learning to infer unresolved scale parametrisation

2020/12/16 (1:30 PM, Paris Time)

In recent years, machine learning (ML) has been proposed to devise data-driven parametrisations of unresolved processes in dynamical numerical models. In most cases, the ML training leverages high-resolution simulations to provide a dense, noiseless target state. Our goal is to go beyond the use of high-resolution simulations and train ML-based parametrisations on direct data, in the realistic scenario of noisy and sparse observations. The algorithm proposed in this work is a two-step process. First, data assimilation (DA) techniques are applied to estimate the full state of the system from a truncated model; the unresolved part of the truncated model is viewed as a model error in the DA system. In a second step, ML is used to emulate the unresolved part, i.e., to learn a predictor of the model error given the state of the system. Finally, the ML-based parametrisation is added to the physical core of the truncated model to produce a hybrid model.

J. Brajard (Nansen Environmental and Remote Sensing Center)

Webinar video
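
To make the two-step scheme described in this abstract more concrete, here is a minimal sketch under simplifying assumptions: a toy truncated model (a Lorenz-96-like tendency with the subgrid term missing), a stand-in array of DA analysis states assumed to be produced beforehand, and a small off-the-shelf regressor for the model error. The names truncated_step, x_analysis and ml_correction are illustrative placeholders, not the speaker's code.

    # Sketch: learn an ML correction for a truncated model from DA analyses,
    # then build a hybrid (physical core + ML parametrisation) model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def truncated_step(x, dt=0.05):
        # Placeholder truncated dynamics (Lorenz-96 advection + forcing,
        # with the unresolved/subgrid term deliberately missing).
        F = 8.0
        dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
        return x + dt * dxdt

    rng = np.random.default_rng(0)
    x_analysis = rng.normal(size=(1000, 40))          # stand-in for DA analysis states

    # Step 1 (after DA): model error = analysis at t+1 minus truncated forecast from t.
    forecasts = np.array([truncated_step(x) for x in x_analysis[:-1]])
    model_error = x_analysis[1:] - forecasts

    # Step 2: emulate the unresolved part, i.e. predict the model error from the state.
    ml_correction = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    ml_correction.fit(x_analysis[:-1], model_error)

    def hybrid_step(x):
        # Hybrid model = physical core (truncated model) + ML-based parametrisation.
        return truncated_step(x) + ml_correction.predict(x[None, :])[0]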

RESSTE (November 18) & SFdS Risk day (November 20)

2020/11/18, 2020/11/20

Because two seminars are already organized in the French statistics community, the OceaniX webinars are taking a break in November and will be back in December with Julien Brajard’s talk on December 16. The two seminars mentioned above take place in the framework of the RESSTE network (RESeau Statistiques pour données Spatio-TEmporelles) and of the SFdS (Société Française de Statistique), respectively. Both deal with risk, extremes and climate, and are thus directly related to the first OceaniX webinar. Registration is free but mandatory for the SFdS meeting (https://www.sfds.asso.fr/fr/environnement_et_statistique/658-journee_risque/), while a Zoom link is available for the RESSTE seminar (https://informatique-mia.inrae.fr/reseau-resste/extremes-climat); see below.

RESSTE & SFdS

Coupling machine learning approaches and rare event algorithms to compute extreme events in climate dynamics.

2020/10/21

Many key problems in climate dynamics require a huge computational effort. For instance, the study of extreme or rare events, the study of precursors of abrupt transitions to different climates, and probabilistic prediction at the predictability margin are three examples for which computing the relevant statistical quantities in comprehensive climate models is impossible with reasonable computational resources. I will present several examples of new approaches we have developed, for instance using rare event algorithms and machine learning, with which we have solved these computational bottlenecks using concepts from statistical mechanics and dynamical systems. The first application is the study of extreme heat waves using IPCC-class climate models. For these models we have demonstrated a gain of several hundreds in the numerical cost of simulating extremely rare heat waves, and we were able to compute return time plots for extreme heat waves with return times up to 10,000 years. Using the hundreds of simulated heat waves, we have exhibited global teleconnection patterns for extreme heat waves. The second application is the computation of abrupt climate changes for Jupiter’s troposphere, and of transitions to superrotating states in the Earth’s atmospheric dynamics. The third application is preliminary work on a simple model of El Niño to illustrate prediction at the predictability margin (prediction of interannual variability, beyond the Lyapunov time scale).

F. Bouchet (ENS de Lyon and CNRS)

Webinar video
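
To give a flavour of the rare-event-algorithm idea mentioned in this abstract, here is a minimal adaptive-multilevel-splitting sketch on a toy Ornstein-Uhlenbeck process. It is purely illustrative: the talk applies far more elaborate rare-event algorithms to full climate models, and all parameter values below are arbitrary.

    # Estimate P(max_t X_t > threshold) for a toy OU process, an event far too
    # rare for brute-force Monte Carlo, by iteratively discarding the "least
    # extreme" trajectory and branching a survivor.
    import numpy as np

    rng = np.random.default_rng(1)
    n_traj, n_steps, dt, threshold = 100, 500, 0.01, 3.0

    def simulate(x0, start, traj):
        # Continue an Ornstein-Uhlenbeck path dX = -X dt + dW from index `start`.
        traj = traj.copy()
        traj[start] = x0
        for t in range(start, n_steps - 1):
            traj[t + 1] = traj[t] - traj[t] * dt + np.sqrt(dt) * rng.normal()
        return traj

    ensemble = np.array([simulate(0.0, 0, np.zeros(n_steps)) for _ in range(n_traj)])
    weight = 1.0

    for _ in range(5000):                          # splitting iterations
        scores = ensemble.max(axis=1)              # score = running maximum of each path
        worst = int(scores.argmin())
        level = scores[worst]
        if level >= threshold:                     # every member is now an "extreme" path
            break
        weight *= 1.0 - 1.0 / n_traj               # unbiased weight for each discarded path
        # Clone a surviving path up to its first crossing of the discarded level,
        # then resimulate the remainder of the trajectory.
        donor = int(rng.choice([i for i in range(n_traj) if i != worst]))
        t_cross = int(np.argmax(ensemble[donor] > level))
        ensemble[worst] = simulate(ensemble[donor, t_cross], t_cross, ensemble[donor])

    p_hat = weight * np.mean(ensemble.max(axis=1) > threshold)
    print(f"estimated P(max_t X_t > {threshold}) ~ {p_hat:.2e}")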

Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations

2021/03/24 (1:30 PM, Paris Time)

We propose a generalized space-time domain decomposition approach for the physics-informed neural networks (PINNs) to solve nonlinear partial differential equations (PDEs) on arbitrary complex-geometry domains. The proposed framework, named eXtended PINNs (XPINNs), further pushes the boundaries of both PINNs as well as conservative PINNs (cPINNs), which is a recently proposed domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has large representation and parallelization capacity due to the inherent property of deployment of multiple neural networks in the smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDEs. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. In each subdomain, a separate neural network is employed with optimally selected hyperparameters, e.g., depth/width of the network, number and location of residual points, activation function, optimization method, etc. A deep network can be employed in a subdomain with complex solutions, whereas a shallow neural network can be used in a subdomain with relatively simple and smooth solutions. We demonstrate the versatility of XPINN by solving both forward and inverse PDE problems, ranging from one-dimensional to three-dimensional problems, from time-dependent to time-independent problems, and from continuous to discontinuous problems, which clearly shows that the XPINN method is promising in many practical problems. The proposed XPINN method is the generalization of PINN and cPINN methods, both in terms of applicability as well as domain decomposition approach, which efficiently lends itself to parallelized computation.

Ameya Jagtap (Brown University)

Webinar video
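
For readers unfamiliar with the method, the composite objective can be sketched as follows (schematic notation only; relative weights between terms are omitted). Each subdomain $\Omega_k$ has its own network $u_{\theta_k}$, and its loss combines a data term, a PDE-residual term, and interface terms enforcing continuity of the averaged solution and of the PDE residual with neighbouring subdomains:

    \mathcal{L}_k(\theta_k) =
        \frac{1}{N_k^{u}} \sum_i \big| u_{\theta_k}(x_i^{u}) - u_i \big|^2
      + \frac{1}{N_k^{r}} \sum_j \big| \mathcal{F}\big[u_{\theta_k}\big](x_j^{r}) \big|^2
      + \sum_{k' \in \mathcal{N}(k)} \frac{1}{N_{kk'}^{I}} \sum_m
          \Big( \big| u_{\theta_k}(x_m^{I}) - \bar{u}(x_m^{I}) \big|^2
              + \big| \mathcal{F}[u_{\theta_k}](x_m^{I}) - \mathcal{F}[u_{\theta_{k'}}](x_m^{I}) \big|^2 \Big)

Here $\mathcal{F}$ denotes the PDE operator, $x^{u}$, $x^{r}$ and $x^{I}$ the data/boundary, residual and interface points, and $\bar{u}$ the average of the neighbouring subdomain networks on the interface. Each loss $\mathcal{L}_k$ can be minimised on its own device, which is the source of the parallelism described above.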

Extreme events in fluid flows and water waves: prediction and statistical quantification

2021/04/21 (1:30 PM, Paris Time)

For many natural and engineering systems, extreme events, corresponding to large excursions, have significant consequences and are important to predict. Examples include extreme environmental events such as rogue waves in the ocean, flooding events and climate transitions, as well as extreme events in engineering systems (unsteady flow separation, large ship motions, and dangerous structural loads). Therefore, predicting and understanding extreme events is an essential task for reliability assessment and design, characterization of operational capabilities, and control and suppression of extreme transitions, to mention a few. Despite their importance, understanding extreme events in chaotic systems with intrinsically high-dimensional attractors has been a formidable problem, due to the stochastic, nonlinear, and essentially transient character of the underlying dynamics. Here we discuss two themes in contemporary, equation-assisted, data-driven modeling of dynamical systems related to extreme events: the prediction problem and the statistical quantification problem. For the first theme, a major challenge is the computation of low-energy patterns or signals that systematically precede the occurrence of these extreme transient responses. We develop a variational framework for probing conditions that trigger intermittent extreme events in high-dimensional nonlinear dynamical systems. The algorithms exploit, in a combined manner, physical properties of the chaotic attractor as well as finite-time stability properties of the governing equations. In the second part of the talk we develop a method for evaluating extreme event statistics of nonlinear dynamical systems from a small number of samples. From an initial dataset of design points, we formulate a sequential strategy that provides the next-best data point (set of parameters) which, when evaluated, results in improved estimates of the probability density function (pdf) of any scalar quantity of interest. The approach combines machine learning and optimization methods to determine the next-best design point that maximally reduces uncertainty between the estimated bounds of the pdf prediction. We assess the performance of the derived schemes through direct numerical simulations on realistic problems, and we discuss limitations of other methods such as Large Deviations Theory. Applications are presented for many different areas, including the prediction of extreme events in turbulent fluid flows and ocean waves, and the probabilistic quantification of extreme events in fluid-structure interactions and ship motions.

Themistoklis Sapsis (MIT)

Webinar video
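
The sequential-sampling idea in this abstract can be illustrated with a deliberately simplified sketch: a Gaussian-process surrogate of an expensive scalar map is refined by repeatedly adding the candidate input where the surrogate is most uncertain, weighted by the input density. The acquisition rule below is only a stand-in for the pdf-bound-reduction criterion described in the talk, and expensive_map is a hypothetical placeholder.

    # Simplified active-sampling loop for extreme-event statistics.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_map(x):                       # placeholder quantity of interest
        return np.sin(3 * x) + 0.5 * x ** 2

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 1))                 # initial design points
    y = expensive_map(X).ravel()

    candidates = np.linspace(-4, 4, 400).reshape(-1, 1)
    for _ in range(20):                         # sequential acquisition loop
        gp = GaussianProcessRegressor().fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        acquisition = std * norm.pdf(candidates.ravel())   # uncertainty x input density
        x_next = candidates[np.argmax(acquisition)].reshape(1, 1)
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_map(x_next).ravel())

    # Cheap Monte-Carlo estimate of the pdf of the quantity of interest from the surrogate.
    gp = GaussianProcessRegressor().fit(X, y)
    samples = gp.predict(rng.normal(size=(100000, 1)))
    hist, edges = np.histogram(samples, bins=100, density=True)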

Physics-aware Interpretable Machine learning in the Earth sciences

2021/05/19 (1:30 PM, Paris Time)

Most problems in Earth sciences aim to make inferences about the system; accurate predictions are just a tiny part of the whole problem. Inference means understanding relations between variables and deriving models that are physically interpretable, simple and parsimonious, and mathematically tractable. Machine learning models alone are excellent approximators, but very often they do not respect the most elementary laws of physics, like mass or energy conservation, so consistency and confidence are compromised. I will review the main challenges ahead in the field and introduce several ways of working at the interplay of physics and machine learning. Interpretable and physics-aware machine learning models are just a step towards understanding the data-generating process, for which causality promises great advances; I will review some recent methodologies to cope with this too. This is a collective long-term AI agenda towards developing and applying algorithms capable of discovering knowledge in the Earth system.

Gustau Camps-Valls (València University)

Webinar video

Deep Gaussian Markov Random Fields

2021/06/16 (1:30 PM, Paris Time)

Gaussian Markov random fields (GMRFs) are probabilistic graphical models widely used in spatial statistics and related fields to model dependencies over spatial structures. In this talk I present a formal connection between GMRFs and convolutional neural networks (CNNs). Common GMRFs are shown to be special cases of a generative model where the inverse mapping from data to latent variables is given by a 1-layer linear CNN. This connection allows us to generalize GMRFs to multi-layer CNN architectures, effectively increasing the order of the corresponding GMRF in a way which has favorable computational scaling. Furthermore, well-established tools, such as autodiff and variational inference, can be used for simple and efficient inference and learning of the deep GMRF. I demonstrate the flexibility of the proposed model and show that it outperforms the state-of-the-art on a dataset of satellite temperatures, in terms of prediction and predictive uncertainty.

Fredrik Lindsten (Linköping University)

arxiv
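
The connection described in the abstract can be illustrated with a minimal sketch: if the latent variables z = g * x (a convolution with, e.g., a Laplacian-like stencil) are modelled as iid standard normal, then x is a GMRF with precision Q = GᵀG, and the map from data to latent variables is exactly one linear convolutional layer; stacking several such layers yields the deep GMRF. The stencil and dimensions below are illustrative choices, not those of the paper.

    # GMRF <-> 1-layer linear CNN: the unnormalised log-density of x is -0.5 ||G x||^2.
    import numpy as np
    from scipy.signal import convolve2d

    stencil = np.array([[0., -1., 0.],
                        [-1., 4.2, -1.],
                        [0., -1., 0.]])        # diagonal > 4 so the prior is proper

    def to_latent(x):
        # "Encoder": a single linear convolution, z = g * x.
        return convolve2d(x, stencil, mode="same", boundary="symm")

    def gmrf_log_density_unnorm(x):
        # log p(x) = -0.5 * x^T Q x + const, with Q = G^T G, i.e. -0.5 * ||G x||^2.
        z = to_latent(x)
        return -0.5 * np.sum(z ** 2)

    x = np.random.default_rng(0).normal(size=(32, 32))
    print(gmrf_log_density_unnorm(x))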

Data Assimilation networks beat Ensemble Kalman filters

2021/09/15 (1:30 PM, Paris Time)

Data assimilation algorithms aim at forecasting the state of a dynamical system by combining a mathematical representation of the system with observations thereof. Exploring connections between typical data assimilation algorithms and recurrent Elman networks, we propose a fully data-driven deep learning architecture that provably reaches the same prediction goals as data assimilation algorithms. In numerical experiments based on the well-known Lorenz system, and when suitably trained using snapshots of the system trajectory (i.e., batches of state trajectories) and observations, our architecture successfully reconstructs both the system dynamics and the observation operator. It also outperforms a state-of-the-art data assimilation system on a prediction exercise.

Serge Gratton (ENSEEIHT)

Causal inference in Earth system sciences

2021/10/20 (1:30 PM, Paris Time)

The heart of the scientific enterprise is a rational effort to understand the causes behind the phenomena we observe. In disciplines dealing with complex dynamical systems, such as the Earth system, replicated real experiments are rarely feasible. However, a rapidly increasing amount of observational and simulated data opens up the use of novel data-driven causal inference methods beyond the commonly adopted correlation techniques. In this talk I will present an overview of causal inference and identify key tasks and major challenges where causal methods have the potential to advance the state-of-the-art in Earth system sciences.

Jakob Runge (German Aerospace Center)

Webinar video

Integrating Deep Learning and Physics for Modeling Dynamical Systems

2021/12/11

The talk will present a short introduction to a recent topic concerning the use of neural networks and their integration with physics for modeling dynamical systems. The classical tools for modeling dynamics in physics rely on differential equations; they are used for simulating complex phenomena in domains as diverse as climate, graphics, or aeronautics. The increasing availability of data makes it possible to consider hybridizing numerical differential equation solvers, which encode prior physics knowledge, with machine learning, which extracts complementary knowledge from the data. This is currently motivating a large amount of exploratory work in different communities. We will introduce some of the ideas and challenges underlying these new approaches.

Patrick Gallinari

Webinar video

Bayesian Learning of High-Dimensional Dynamical Models

2022/01/19

We review the results of our MSEAS group on the joint Bayesian and generative learning of state variables, parameters, parameterizations, constitutive relations, and differential equations of high-dimensional dynamical models. Using sparse observations, our rigorous PDE-based Bayesian learning framework combines dynamically orthogonal (DO) evolution equations for reduced dimension probabilistic evolution with the Gaussian mixture model DO filtering and smoothing for nonlinear reduced-dimension Bayesian inference. The Bayesian learning machines can discover new model functions or identify the correct and active parameterizations, while estimating the parameter values and state variable fields at the same time. Deep learning schemes that leverage our knowledge of dynamical constraints are also developed. Results are showcased for several ocean non-hydrostatic physical, biogeochemical, and ecosystem dynamics. This work is in collaboration with our MSEAS group at MIT.

Pierre Lermusiaux (MIT)

https://doi.org/10.1016/j.physd.2021.133003

Optimal Transport for Graph processing

2022/02/16

In this talk I will discuss how a variant of the classical optimal transport problem, known as the Gromov-Wasserstein distance, can help in designing learning tasks over graphs, and how it allows transposing classical signal processing and data analysis tools, such as dictionary learning or online change detection, to learning over these types of structured objects. Both theoretical and practical aspects will be discussed.

Nicolas Courty (IRISA)

Webinar video
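
As a small illustration of the Gromov-Wasserstein distance mentioned in this abstract, here is a sketch using the POT and NetworkX libraries (the exact POT signature may vary slightly between versions). Each graph is represented only by its internal structure, here shortest-path matrices, so no common feature space between the two graphs is needed.

    # Gromov-Wasserstein coupling between two small graphs.
    import networkx as nx
    import numpy as np
    import ot

    g1, g2 = nx.cycle_graph(10), nx.path_graph(12)
    C1 = np.asarray(nx.floyd_warshall_numpy(g1))     # intra-graph distance matrices
    C2 = np.asarray(nx.floyd_warshall_numpy(g2))
    p = ot.unif(C1.shape[0])                         # uniform weights on the nodes
    q = ot.unif(C2.shape[0])

    # Coupling T minimising sum_{i,j,k,l} |C1[i,k] - C2[j,l]|^2 T[i,j] T[k,l].
    T, log = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss', log=True)
    print("GW distance (squared loss):", log['gw_dist'])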

Machine Learning for Weather Predictions

2022/03/16

This talk provides an overview of the machine learning efforts at the European Centre for Medium-Range Weather Forecasts (ECMWF) and outlines how machine learning, and in particular deep learning, could help to improve weather predictions in the coming years. The talk will name challenges for the use of machine learning and suggest developments (research/software/hardware) that should enable the Earth system modelling community to make quick progress.

Peter Dueben (ECMWF)

Hybrid Dynamical Systems: Augmenting Physics with Machine Learning

2022/04/20

This talk addresses the problem of combining machine learning (ML) and model-based (MB) approaches for complex dynamic forecasting. We especially tackle the situation where prior physical knowledge, formalized through ODEs/PDEs, is available but incomplete, in the sense that it cannot fully describe the observed dynamics. PhyDNet is a two-branch recurrent neural network: one branch models the physical dynamics in the latent space, while the other captures the complementary information required for accurate prediction. Experiments on various video datasets show the relevance of the approach. To study the cooperation between ML and MB models in more depth, I concentrate on the decomposition between the two components. I introduce a principled learning framework, called APHYNITY, which minimizes the norm of the data-driven complement under the constraint of perfect prediction of the augmented model. I provide a theoretical analysis of the decomposition and show that existence and uniqueness of the decomposition can be guaranteed under mild conditions. I will present several applications in which ODE/PDE dynamics augmented with APHYNITY achieve better forecasting and parameter identification performance than MB or ML models alone, and than competing MB-ML hybrid methods.

Nicolas Thome (CNAM)

Webinar video
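
The decomposition principle described in this abstract can be written schematically as follows (generic notation: F_p denotes the physical model and F_a the data-driven augmentation; this is a sketch, not the exact formulation of the paper):

    \min_{F_a \in \mathcal{A}} \; \| F_a \|
    \quad \text{subject to} \quad
    \frac{dX_t}{dt} = F_p(X_t) + F_a(X_t) \;\; \text{reproducing the observed trajectories.}

Among all augmentations that make the hybrid model fit the data exactly, the one with the smallest norm is selected, so that the physical component explains as much of the dynamics as possible; this is what underlies the existence and uniqueness guarantees mentioned above.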

Representation learning for atmospheric data

2022/05/18

Representation learning, which aims to obtain a domain-specific but task-independent neural network model, has so far received little attention in scientific machine learning and in the Earth system sciences. In the talk, I will explain why spatio-temporal representation learning provides a promising direction for data-driven modeling of dynamical systems, in particular highly complex ones such as the atmosphere or the oceans. I will also discuss why transformers are a natural choice as the underlying neural network model. Results are presented for representation learning of ideal hyperbolic systems and fluid flows. For real atmospheric data, I will present how representation learning can lead to data-driven loss functions that improve training for applications such as machine-learned downscaling.

Christian Lessig (Otto von Guericke Universität Magdeburg)

Webinar video

Filling the gaps in Ocean observation with BGC-Argo…and machine learning

2022/06/15

With the increasing use of new autonomous observation systems, we are progressively leaving an era of chronic under-sampling of the ocean, especially in its biogeochemical dimension. For the open ocean, BGC-Argo is the bridgehead of these systems. It allows us to start filling our observational gaps for a number of key variables. In parallel, the influx of data generated by BGC-Argo now offers the possibility of using these measurements as predictors of biogeochemical variables that are not yet amenable to autonomous measurement (carbonate system, some nutrients), through machine learning methods developed thanks to high-quality historical databases (e.g. GLODAPv2). In addition, BGC-Argo brings a vertical dimension to ocean color satellite observations, and the use of machine learning also allows the development of merging applications for three-dimensional reconstructions of some biogeochemical or optical fields (Chla, POC, light). Finally, and again thanks to the progressive densification of observations allowed by BGC-Argo, the first bioregionalizations of the global ocean can now encompass the entire water column, opening perspectives for the study of the biological pump and carbon sequestration.

Hervé Claustre (Laboratoire d’Océanographie de Villefranche)

Webinar video

Multi-band optical imaging: from fusion to change detection

2022/09/21

Fusing two multi-band images of complementary spatial and spectral resolutions, acquired at the same time instant, aims at recovering an image of high spatial and high spectral resolution. In the first part of this talk, we will formulate this task as an inverse problem which turns out to admit a closed-form solution under some generic assumptions. In the second part of the talk, we will address the problem of detecting changes between multi-band images of different resolutions acquired at different time instants. We will show that this challenging task can be generically and efficiently tackled from a fusion perspective.

Nicolas Dobigeon (INP-ENSEEIHT)
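
As background, a common formulation of this fusion inverse problem (generic notation, not necessarily the exact one used in the talk) writes the two observed images as degraded versions of the sought image X and estimates X by

    \mathbf{Y}_{\mathrm{H}} = \mathbf{X}\mathbf{B}\mathbf{S} + \mathbf{N}_{\mathrm{H}}, \qquad
    \mathbf{Y}_{\mathrm{M}} = \mathbf{R}\mathbf{X} + \mathbf{N}_{\mathrm{M}},

    \hat{\mathbf{X}} = \arg\min_{\mathbf{X}}
        \; \| \mathbf{Y}_{\mathrm{H}} - \mathbf{X}\mathbf{B}\mathbf{S} \|_F^2
        + \| \mathbf{Y}_{\mathrm{M}} - \mathbf{R}\mathbf{X} \|_F^2
        + \lambda\, \phi(\mathbf{X}),

where B and S model spatial blurring and downsampling, R the spectral response of the low-spectral-resolution sensor, and φ a regularizer. With a quadratic regularizer, this problem reduces under generic assumptions to a Sylvester matrix equation, which is the kind of closed-form solution referred to in the abstract.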

Robust and Explainable Classification using Optimal Transport and 1-Lipschitz Neural Networks

2023/03/15

The lack of robustness and explainability of neural networks is closely related to the Lipschitz constant of deep models, which can be arbitrarily high. It has been demonstrated that constraining the Lipschitz constant can improve these properties, but it can make learning with classical loss functions more difficult. In this presentation, we will explain how to control this constant and show that learning such networks requires defining specific loss functions. We propose a loss function based on optimal transport, which provides certification of robustness and can turn adversarial attacks into provable counterfactual examples.

Mathieu Serrurier (IRIT)

arXiv
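
To illustrate what "controlling the Lipschitz constant" means in practice, here is a generic sketch: dividing a weight matrix by its largest singular value (estimated by power iteration) makes the corresponding linear layer 1-Lipschitz. This is a widely used device, not necessarily the construction used in the talk, which relies on dedicated 1-Lipschitz architectures and an optimal-transport-based loss.

    # Spectral normalisation of a dense layer's weights.
    import numpy as np

    def spectral_normalize(W, n_iter=50):
        # Power iteration to estimate the largest singular value of W.
        u = np.random.default_rng(0).normal(size=W.shape[0])
        for _ in range(n_iter):
            v = W.T @ u
            v /= np.linalg.norm(v)
            u = W @ v
            u /= np.linalg.norm(u)
        sigma = u @ W @ v
        return W / sigma                        # now the spectral norm of W/sigma is ~1

    W = np.random.default_rng(1).normal(size=(64, 32))
    W_sn = spectral_normalize(W)
    print(np.linalg.svd(W_sn, compute_uv=False)[0])   # ~1.0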

Machine learning for acoustical oceanography: automatic source localization and environmental inversion using a single hydrophone

2023/11/15

Machine learning (ML), and more recently deep learning, have revolutionized computer science. However, the impact of ML on acoustical oceanography (AO) remains limited. This is largely due to two factors inherent to the ocean acoustics context: large datasets with reliable annotations are usually not available, and the signal degradation due to propagation and noise is more severe than in other classical ML applications. In this talk, we will show how traditional AO problems can be revisited using ML. The presentation will cover both the forward problem (simulating underwater sound propagation) and the inverse problem (source localization and environmental characterization). A common point between these test cases is that they are solved using neural networks relying on training datasets that include both simulated and experimental marine data. The presentation will demonstrate how this enables automating and accelerating their resolution, which in turn makes it possible to process big AO datasets and opens the door to new oceanographic discoveries. Two practical examples on marine data will be presented: North Atlantic right whale monitoring in Cape Cod Bay, and characterization of the spatial variability of the seafloor of the New England mud patch.

Julien Bonnel (WHOI)

Webinar video

Incorporating Omics Knowledge Into Earth System Modeling

2024/01/17

Over the last two decades, Earth System science has faced the challenge of incorporating organisms and their metabolisms into a global biogeochemical context. Bridging this gap would open up new ways to integrate the enormous wealth of omics data to address the links between biodiversity, microbial activity, and ecosystem functions, e.g., related to interactions between biological processes and the climate system. However, the complexity of solving several hundred equations at each grid point on Earth has so far precluded significant advances. Thus, Earth System Models (ESMs) highly simplify their representation of biological processes, leading to major uncertainty in climate change impacts and in how changing phytoplankton physiology affects the production of key metabolites. Here we embed a genome-scale model within a state-of-the-art ESM to deliver an integrated understanding of how gradients in resource stress modulate metabolic reactions and molecular physiology. In particular, we show how the production of two carbon storage compounds (lipids and glycogen) in the prevalent marine cyanobacterium Prochlorococcus is associated with different acclimation strategies under different substrate availabilities. Accounting for all metabolic capacities allows us to explicitly predict where carbon storage is used, and to map zones producing around 50 metabolites critical for the carbon cycle (e.g., DMSP). It also allows us to decipher "hot spots" for the carbon cycle associated with emblematic plankton. More generally, this new framework will stimulate the future development of a new generation of trait-based models that can comfortably incorporate the complexities of cellular physiology.

Damien Eveillard (Nantes University, LS2N)

Webinar video

Bridging Classical Data Assimilation and Optimal Transport

2024/02/21

Because optimal transport acts as displacement interpolation in physical space rather than as interpolation in value space, it can potentially avoid double-penalty errors. As such, it provides a very attractive metric for comparing non-negative physical fields, the Wasserstein distance, which could further be used in data assimilation for the geosciences. However, its theoretical formulation and implementation within typical data assimilation problems face conceptual challenges. We formulate the problem in a way that offers a unified view of both classical data assimilation and optimal transport. The resulting OTDA framework accounts for both classical sources of prior error, background and observation, together with a Wasserstein barycentre between the corresponding states. We show that the OTDA analysis can be decomposed into a simpler OTDA problem involving a single Wasserstein distance, followed by a Wasserstein barycentre problem which ignores the prior errors and can be seen as a McCann interpolant. We also propose a less enlightening but straightforward solution to the full OTDA problem, which includes the derivation of its analysis error covariance matrix. Thanks to these theoretical developments, we are able to extend the classical 3D-Var paradigm at the core of most classical data assimilation schemes. I will illustrate this talk with simple one- and two-dimensional examples that show the richness of the new types of analysis offered by this unification.

Marc Bocquet (ENPC)

Webinar video
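
As a small numerical illustration of the Wasserstein barycentre underlying the OTDA idea described above, the sketch below interpolates between a "background" field and an "observation" field by displacement rather than by value, using the POT library on 1D histograms. The fields, weights and regularisation value are illustrative choices, not those of the talk.

    # Entropic Wasserstein barycentre between two 1D histograms.
    import numpy as np
    import ot

    n = 100
    x = np.linspace(0, 1, n)
    M = ot.dist(x.reshape(-1, 1))                # squared-Euclidean cost matrix
    M /= M.max()

    def gaussian_hist(mean, std):
        h = np.exp(-0.5 * ((x - mean) / std) ** 2)
        return h / h.sum()

    background = gaussian_hist(0.3, 0.05)        # prior field (feature in the wrong place)
    observation = gaussian_hist(0.6, 0.05)       # observed field

    A = np.vstack([background, observation]).T   # one histogram per column
    weights = np.array([0.5, 0.5])               # relative trust in each source
    analysis = ot.bregman.barycenter(A, M, reg=1e-2, weights=weights)
    # The barycentre concentrates mass near x = 0.45 instead of averaging two bumps:
    # this is the displacement-interpolation behaviour mentioned in the abstract.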

 

List of upcoming Webinars