Applications of Deep Learning to Scientific Computing
dc.contributor.author
Molinaro, Roberto
dc.contributor.supervisor
Mishra, Siddhartha
dc.contributor.supervisor
Karniadakis, George
dc.contributor.supervisor
Perdikaris, Paris
dc.date.accessioned
2024-03-14T06:35:51Z
dc.date.available
2023-12-11T11:49:22Z
dc.date.available
2023-12-12T11:07:41Z
dc.date.available
2024-03-13T14:40:29Z
dc.date.available
2024-03-13T14:42:15Z
dc.date.available
2024-03-14T06:35:51Z
dc.date.issued
2023
dc.identifier.uri
http://hdl.handle.net/20.500.11850/646749
dc.identifier.doi
10.3929/ethz-b-000646749
dc.description.abstract
Physics-informed neural networks (PINNs) have been widely used for the robust and accurate approximation of partial differential equations (PDEs). In this thesis, we provide upper bounds on the generalization error of PINNs approximating solutions to the forward and inverse problems for PDEs. Specifically, we focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems. An abstract formalism is introduced, and stability properties of the underlying PDE are leveraged to derive an estimate of the generalization error in terms of the training error and the number of training samples. This abstract framework is illustrated with several examples of PDEs, and numerical experiments validating the proposed theory are presented. The derived estimates reveal two relevant facts.
First, PINNs require regularity of solutions to the underlying PDE to guarantee accurate approximation, and may consequently fail to approximate discontinuous solutions of PDEs such as nonlinear hyperbolic equations. We therefore propose a novel variant of PINNs, termed weak PINNs (wPINNs), for the accurate approximation of entropy solutions of scalar conservation laws. wPINNs solve a min-max optimization problem for a residual defined in terms of Kruzhkov entropies, determining the parameters of the neural networks that approximate both the entropy solution and the test functions.
Second, with a suitable quadrature rule, namely Monte Carlo quadrature, PINNs may overcome the curse of dimensionality. We therefore employ PINNs to solve forward and inverse problems for a broad range of high-dimensional PDEs, including the radiative transfer equation and equations arising in finance. A suite of numerical experiments demonstrates that PINNs provide very accurate solutions to both the forward and inverse problems at low computational cost, without incurring the curse of dimensionality.
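The generalization estimate above bounds the true error by the empirical PDE residual evaluated at randomly drawn collocation points, i.e., a Monte Carlo quadrature of the residual norm. The following is a minimal, purely illustrative sketch of such a training loop, assuming PyTorch, a stand-in 1D heat equation u_t = u_xx, and arbitrary network sizes and hyperparameters; it is not the thesis implementation, and boundary and initial terms are omitted for brevity.

import torch

torch.manual_seed(0)

# Small fully connected network u_theta(t, x); the architecture is an
# arbitrary illustrative choice, not the thesis' actual setup.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(tx):
    # Pointwise residual r_theta = u_t - u_xx for the stand-in heat
    # equation, computed with automatic differentiation.
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:2]
    return u_t - u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    # Monte Carlo quadrature: the training error is the mean squared
    # residual over N i.i.d. uniform collocation points; the generalization
    # estimates bound the true error by this quantity plus a term that
    # decays with the number of samples N.
    tx = torch.rand(1024, 2)
    loss = pde_residual(tx).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()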
In the final part of the thesis, we transition to the operator learning framework and consider a class of inverse problems for PDEs that are well-defined only as mappings from operators to functions. Existing operator learning architectures map functions to functions and must be modified to learn such inverse maps from data. We propose a novel architecture, termed Neural Inverse Operators (NIOs), to solve these PDE inverse problems. Motivated by the underlying mathematical structure, NIOs are based on a suitable composition of DeepONets and FNOs to approximate mappings from operators to functions. A variety of experiments demonstrate that NIOs significantly outperform baselines, solving PDE inverse problems robustly and accurately while running several orders of magnitude faster than existing direct and PDE-constrained optimization methods.
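To make the composition concrete, the following is a minimal, hypothetical PyTorch sketch of an NIO-style model: a DeepONet-like branch network encodes sampled input/output pairs of the measured forward operator into a permutation-invariant embedding, which an FNO-style spectral layer then decodes into the unknown coefficient function on a grid. All class names, shapes, and sizes here are illustrative assumptions and do not reproduce the thesis architecture.

import torch

class SpectralConv1d(torch.nn.Module):
    # Core FNO ingredient: a learned linear map acting on the lowest
    # Fourier modes of the input signal.
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weights = torch.nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                        # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))

class NIOSketch(torch.nn.Module):
    # DeepONet-style branch over measured input/output function pairs,
    # composed with an FNO-style decoder that returns the unknown
    # coefficient function on the grid.
    def __init__(self, grid=64, width=32, modes=12):
        super().__init__()
        self.branch = torch.nn.Sequential(       # encodes one (input, output) pair
            torch.nn.Linear(2 * grid, width), torch.nn.GELU(),
            torch.nn.Linear(width, width))
        self.spectral = SpectralConv1d(width, modes)
        self.proj = torch.nn.Linear(width, 1)

    def forward(self, f_in, f_out):              # each: (batch, pairs, grid)
        z = self.branch(torch.cat([f_in, f_out], dim=-1))   # (batch, pairs, width)
        z = z.mean(dim=1)                        # permutation-invariant pooling over pairs
        h = z.unsqueeze(-1).expand(-1, -1, f_in.size(-1))   # lift to (batch, width, grid)
        h = torch.nn.functional.gelu(self.spectral(h) + h)  # one Fourier layer with skip
        return self.proj(h.transpose(1, 2)).squeeze(-1)     # (batch, grid)

model = NIOSketch()
f_in, f_out = torch.randn(8, 16, 64), torch.randn(8, 16, 64)  # 16 measured pairs
coeff = model(f_in, f_out)                       # recovered coefficient, shape (8, 64)

Pooling over the measurement pairs keeps the model invariant to their ordering, and the output lives on the full spatial grid, matching the operator-to-function structure of the inverse problems described above.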
en_US
dc.format
application/pdf
en_US
dc.language.iso
en
en_US
dc.publisher
ETH Zurich
en_US
dc.rights.uri
http://rightsstatements.org/page/InC-NC/1.0/
dc.subject
Machine learning
en_US
dc.subject
Computational science
en_US
dc.subject
Differential equations
en_US
dc.title
Applications of Deep Learning to Scientific Computing
en_US
dc.type
Doctoral Thesis
dc.rights.license
In Copyright - Non-Commercial Use Permitted
dc.date.published
2023-12-12
ethz.size
181 p.
en_US
ethz.code.ddc
DDC - DDC::5 - Science::510 - Mathematics
en_US
ethz.identifier.diss
29849
en_US
ethz.publication.place
Zurich
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02000 - Dep. Mathematik / Dep. of Mathematics::02501 - Seminar für Angewandte Mathematik / Seminar for Applied Mathematics::03851 - Mishra, Siddhartha / Mishra, Siddhartha
en_US
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02000 - Dep. Mathematik / Dep. of Mathematics::02501 - Seminar für Angewandte Mathematik / Seminar for Applied Mathematics::03851 - Mishra, Siddhartha / Mishra, Siddhartha
en_US
ethz.date.deposited
2023-12-11T11:49:23Z
ethz.source
FORM
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2023-12-12T11:07:43Z
ethz.rosetta.lastUpdated
2024-02-03T07:59:01Z
ethz.rosetta.exportRequired
true
ethz.rosetta.versionExported
true