Computational Mathematics Workshop

Organisers: Jianfeng Lu (Duke), Gitta Kutyniok (LMU Munich), Steve Brunton (UW), Afonso Bandeira (ETH), Benjamin Peherstorfer (NYU), Paris Perdikaris (UPenn)

Date: August 17th

The purpose of this workshop is to showcase key contributions from the applied and computational mathematics community related to deep learning, with an emphasis on mathematical and foundational questions as well as applications to fundamental science.

ET               CET          GMT+8        Speaker
9:00am-9:40am    15:00-15:40  21:00-21:40  Andrew Stuart: Learning Solution Operators For PDEs
9:50am-10:30am   15:50-16:30  21:50-22:30  Sid Mishra: On Physics-Informed Neural Networks (PINNs) for computing PDEs
10:30am-10:50am  16:30-16:50  22:30-22:50  Break/Gathertown
10:50am-11:30am  16:50-17:30  22:50-23:30  Gitta Kutyniok: The Impact of Artificial Intelligence on Parametric Partial Differential Equations
11:40am-12:20pm  17:40-18:20  23:40-00:20  Michael Brenner: Machine Learning for Partial Differential Equations
12:30pm-1:00pm   18:30-19:00  00:30-01:00  Round Table Discussion

Abstracts

Andrew Stuart (Caltech): Learning Solution Operators For PDEs

Neural networks have shown great success at learning function approximators between spaces X and Y, in the setting where X is a finite dimensional Euclidean space and where Y is either a finite dimensional Euclidean space (regression) or a set of finite cardinality (classification); the neural networks learn the approximator from N data pairs {x_n, y_n}. In many problems arising in physics it is desirable to learn maps between spaces of functions X and Y; this may be either for the purposes of scientific discovery, or to provide cheap surrogate models which accelerate computations. New ideas are needed to successfully address this learning problem in a scalable, efficient manner.

In this talk I will describe recent work aimed at addressing the problem of learning operators which map between spaces of functions. I will describe the methodological approaches being adopted; highlight basic theoretical results that support them; and describe numerical experiments concerned with learning solution operators. In particular, the numerical experiments will focus on mapping the initial condition to the solution at a positive time for the Burgers equation, the Kuramoto-Sivashinsky equation and the 2D incompressible Navier-Stokes equations (Kolmogorov flows), as well as on learning a homogenized model of crystal plasticity.
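
For readers less familiar with this area, the following is a schematic sketch of one architecture in the operator-learning family, a DeepONet-style branch/trunk network; the sizes, names, and training setup are illustrative assumptions, not the speaker's implementation. The input function is sampled at m fixed sensor points, the query location x is fed to a second network, and the two outputs are combined by an inner product to approximate (G a)(x).

    import torch

    m, p = 100, 64   # number of input-function sensors, size of the shared latent basis

    branch = torch.nn.Sequential(          # encodes the sampled input function a(s_1), ..., a(s_m)
        torch.nn.Linear(m, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, p),
    )
    trunk = torch.nn.Sequential(           # encodes the query coordinate x
        torch.nn.Linear(1, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, p),
    )

    def G_hat(a_samples, x):
        # a_samples: (batch, m) sensor values of the input function;
        # x: (batch, 1) query points. Returns an approximation of (G a)(x).
        return (branch(a_samples) * trunk(x)).sum(dim=-1, keepdim=True)

    # Training (sketch): given pairs (a_n, u_n) of input functions and PDE solutions on a grid,
    # minimise the mean squared error of G_hat against u_n at the grid points.

Other parameterizations of the operator (for example, acting on the input function in Fourier space) differ in architecture but share the goal of learning a map that can be evaluated at arbitrary points of the output function.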

Sid Mishra (ETH): On Physics-Informed Neural Networks (PINNs) for computing PDEs

We will describe PINNs and illustrate several examples of using PINNs to solve PDEs. Our aim is to elucidate the mechanisms that underpin the success of PINNs in approximating classical solutions to PDEs by deriving rigorous bounds on the resulting error. Examples of a variety of linear and nonlinear PDEs will be provided, including PDEs in high dimensions and inverse problems for PDEs.
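
To fix ideas, here is a minimal PINN sketch for a toy boundary value problem; the problem, network size, and training settings are assumptions for illustration, not material from the talk. A small network u_theta is trained so that the PDE residual, computed by automatic differentiation, is small at random collocation points while the boundary values are penalized.

    import torch

    torch.manual_seed(0)

    # small fully connected network u_theta: R -> R
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

    def pde_residual(x):
        # residual r(x) = -u''(x) - pi^2 sin(pi x), for -u'' = pi^2 sin(pi x) on [0, 1]
        x = x.requires_grad_(True)
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        return -d2u - (torch.pi ** 2) * torch.sin(torch.pi * x)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x_bdry = torch.tensor([[0.0], [1.0]])            # boundary points where u = 0
    for step in range(5000):
        x_col = torch.rand(128, 1)                   # interior collocation points
        loss = pde_residual(x_col).pow(2).mean() + net(x_bdry).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # after training, net(x) should be close to the exact solution sin(pi * x)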

Gitta Kutyniok (LMU): The Impact of Artificial Intelligence on Parametric Partial Differential Equations

High-dimensional parametric partial differential equations (PDEs) appear in various contexts including control and optimization problems, inverse problems, risk assessment, and uncertainty quantification. In most such scenarios the set of all admissible solutions associated with the parameter space is inherently low dimensional. This fact forms the foundation for the so-called reduced basis method. Recently, numerical experiments demonstrated the remarkable efficiency of using deep neural networks to solve parametric problems. In this talk, after an introduction to deep learning, we will present a theoretical justification for this class of approaches. More precisely, we will derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric PDEs. In fact, without any knowledge of its concrete shape, we use the inherent low-dimensionality of the solution manifold to obtain approximation rates which are significantly superior to those provided by classical approximation results. We use this low-dimensionality to guarantee the existence of a reduced basis. Then, for a large variety of parametric PDEs, we construct neural networks that yield approximations of the parametric maps not suffering from a curse of dimensionality and essentially only depending on the size of the reduced basis. Finally, we present a comprehensive numerical study of the effect of approximation-theoretical results for neural networks on practical learning problems in the context of parametric partial differential equations. These experiments strongly support the hypothesis that approximation-theoretical effects heavily influence the practical behavior of learning problems in numerical analysis. This is joint work with M. Geist, P. Petersen, M. Raslan, and R. Schneider.
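
The objects in this analysis can be pictured with a short sketch; the sizes, the stand-in basis, and the training setup below are placeholders rather than the authors' construction. A ReLU network maps the PDE parameter mu to coefficients with respect to a precomputed reduced basis V, so the network's size is governed by the reduced-basis dimension d rather than by the full discretization.

    import torch

    n_param, d, n_grid = 10, 20, 10_000             # parameter dimension, reduced-basis size, full grid size
    V = torch.linalg.qr(torch.randn(n_grid, d)).Q   # stand-in for a precomputed reduced basis

    coeff_net = torch.nn.Sequential(                # ReLU network: parameter mu -> reduced coefficients
        torch.nn.Linear(n_param, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, d),
    )

    def u_hat(mu):
        # approximate parametric solution map: mu of shape (batch, n_param) -> (batch, n_grid)
        return coeff_net(mu) @ V.T

    # trained by regressing against high-fidelity solutions u(mu_n) for sampled parameters mu_n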

Michael Brenner (Harvard): Machine Learning for Partial Differential Equations

Our understanding of, and ability to compute, the solutions to nonlinear partial differential equations have been strongly curtailed by our inability to effectively parameterize the inertial manifold of their solutions. I will discuss our ongoing efforts to use machine learning to advance the state of the art, both for developing a qualitative understanding of “turbulent” solutions and for efficient computational approaches. We aim to learn parameterizations of the solutions that give more insight into the dynamics and/or increase computational efficiency. I will touch on three topics: (i) using machine learning to develop models of the small-scale behavior of spatio-temporally complex solutions, with the goal of maintaining accuracy at a greatly reduced computational cost relative to a full simulation; (ii) “larger scale” efforts to classify and understand patterns in nonlinear PDEs, relating them to invariant (but unstable) solutions of the underlying equations; and (iii) using these ideas to simplify and accelerate experimental measurements of complex fluid flows.
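
As a purely illustrative sketch of item (i), one common pattern is to augment a cheap coarse-grid time step with a small learned correction; the 1D viscous Burgers step and the convolutional closure below are assumptions made for illustration, not the group's method.

    import torch

    class CorrectedStep(torch.nn.Module):
        def __init__(self, dx, dt, nu=0.01):
            super().__init__()
            self.dx, self.dt, self.nu = dx, dt, nu
            # small CNN acting on the coarse field; its weights would be fit to fine-grained simulations
            self.closure = torch.nn.Sequential(
                torch.nn.Conv1d(1, 16, kernel_size=5, padding=2), torch.nn.ReLU(),
                torch.nn.Conv1d(16, 1, kernel_size=5, padding=2),
            )

        def forward(self, u):
            # u: (batch, 1, n) periodic coarse field; one explicit step of viscous Burgers
            # plus the learned sub-grid correction
            u_x = (torch.roll(u, -1, dims=-1) - torch.roll(u, 1, dims=-1)) / (2 * self.dx)
            u_xx = (torch.roll(u, -1, dims=-1) - 2 * u + torch.roll(u, 1, dims=-1)) / self.dx ** 2
            coarse = u + self.dt * (-u * u_x + self.nu * u_xx)
            return coarse + self.dt * self.closure(u)

    # trained so that repeated corrected coarse steps track a filtered high-resolution simulation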