Open access
Author
Date
2023
Type
- Doctoral Thesis
ETH Bibliography
yes
Abstract
Physics-informed neural networks (PINNs) have been widely used for the robust and accurate approximation of solutions to partial differential equations (PDEs). In the present thesis, we provide upper bounds on the generalization error of PINNs approximating solutions to the forward and inverse problems for PDEs. Specifically, we focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems. An abstract formalism is introduced, and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error in terms of the training error and the number of training samples. This abstract framework is illustrated with several examples of PDEs, and numerical examples validating the proposed theory are also presented. The derived estimates show two relevant facts: (1) PINNs require regularity of solutions to the underlying PDE to guarantee accurate approximation. Consequently, they may fail to approximate discontinuous solutions of PDEs, such as nonlinear hyperbolic equations. We then propose a novel variant of PINNs, termed weak PINNs (wPINNs), for accurate approximation of entropy solutions of scalar conservation laws. wPINNs are based on approximating the solution of a min-max optimization problem for a residual, defined in terms of Kruzhkov entropies, to determine the parameters of the neural networks approximating both the entropy solution and the test functions. Moreover, (2) with a suitable quadrature rule, i.e., Monte Carlo quadrature, PINNs may potentially overcome the curse of dimensionality. Hence, we employ PINNs to solve the forward and inverse problems for a broad range of high-dimensional PDEs, including the radiative transfer equation and financial equations. We present a suite of numerical experiments demonstrating that PINNs provide very accurate solutions for both the forward and inverse problems at low computational cost without incurring the curse of dimensionality.
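The residual-minimization setup summarized above can be made concrete with a small sketch. The following is an illustrative example only, not code from the thesis: it assumes a hypothetical 1D heat equation u_t = u_xx on the unit space-time square, uses PyTorch, and draws the collocation points by Monte Carlo sampling (the quadrature rule mentioned in the abstract); boundary and initial terms of the full PINN loss are omitted for brevity.

```python
import torch

# Illustrative PINN sketch (not the thesis code): residual loss for the
# hypothetical 1D heat equation u_t = u_xx, trained on Monte Carlo collocation
# points. Boundary/initial loss terms are omitted for brevity.
torch.manual_seed(0)

# Small fully connected network u_theta(t, x).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(tx):
    """PDE residual r = u_t - u_xx at collocation points tx of shape (N, 2)."""
    tx = tx.clone().requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, tx, torch.ones_like(u_x), create_graph=True)[0][:, 1:]
    return u_t - u_xx

# Random (Monte Carlo) training samples in (0,1) x (0,1).
tx_interior = torch.rand(1024, 2)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = pde_residual(tx_interior).pow(2).mean()  # empirical training error
    loss.backward()
    opt.step()
```

The mean-squared residual over the random collocation points plays the role of the training error whose relation to the generalization error the thesis quantifies.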
In the final part of the thesis, we transition to the operator learning framework and consider a class of inverse problems for PDEs that are only well-defined as mappings from operators to functions. Existing operator learning architectures map functions to functions and need to be modified to learn inverse maps from data. We propose a novel architecture termed Neural Inverse Operators (NIOs) to solve these PDE inverse problems. Motivated by the underlying mathematical structure, NIO is based on a suitable composition of DeepONets and FNOs to approximate mappings from operators to functions. A variety of experiments demonstrates that NIOs significantly outperform existing baselines and solve PDE inverse problems robustly and accurately. Moreover, they are several orders of magnitude faster than existing direct and PDE-constrained optimization methods.
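To illustrate the compositional idea behind NIOs at the level of a toy example, the sketch below composes a DeepONet-style branch/trunk block with a single FNO-style spectral convolution, mapping sampled operator data to a function on a 1D grid. This is purely hypothetical (invented layer sizes, a single Fourier layer, a made-up input format) and is not the NIO architecture from the thesis.

```python
import torch

# Toy sketch of the "DeepONet followed by FNO" composition idea; not the NIO
# architecture from the thesis. Input: sampled operator data (e.g. a discretized
# measurement operator), output: a function on a 1D grid. All sizes are invented.
class ToyNIO(torch.nn.Module):
    def __init__(self, n_sensors=64, p=16, n_grid=128, n_modes=12):
        super().__init__()
        # Branch encodes the input samples, trunk encodes the output grid points;
        # their inner product yields a latent function on the grid (DeepONet part).
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(n_sensors, 64), torch.nn.GELU(), torch.nn.Linear(64, p))
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.GELU(), torch.nn.Linear(64, p))
        # Learnable complex multipliers on the lowest Fourier modes (FNO part).
        self.n_modes = n_modes
        self.weights = torch.nn.Parameter(0.1 * torch.randn(n_modes, dtype=torch.cfloat))
        self.register_buffer("grid", torch.linspace(0.0, 1.0, n_grid).unsqueeze(-1))

    def forward(self, samples):                    # samples: (batch, n_sensors)
        b = self.branch(samples)                   # (batch, p)
        t = self.trunk(self.grid)                  # (n_grid, p)
        v = b @ t.T                                # (batch, n_grid) latent function
        v_hat = torch.fft.rfft(v, dim=-1)          # spectral convolution on the grid
        out_hat = torch.zeros_like(v_hat)
        out_hat[:, : self.n_modes] = v_hat[:, : self.n_modes] * self.weights
        return torch.fft.irfft(out_hat, n=v.shape[-1], dim=-1)

model = ToyNIO()
reconstruction = model(torch.randn(8, 64))         # (8, 128) output function
```

The design choice the sketch is meant to convey is the ordering: the branch/trunk block first turns operator-valued data into a latent function on the output grid, and the spectral layer then acts on that function, which mirrors the function-to-function setting FNOs are built for.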
Persistent link
https://doi.org/10.3929/ethz-b-000646749
Publication status
published
External links
Search for a print copy via ETH Library
Publisher
ETH Zurich
Subject
Machine learning; Computational science; Differential equations
Organisational unit
03851 - Mishra, Siddhartha / Mishra, Siddhartha