# Session 3: *A Deep Learning Strategy for Solving Physics-based Bayesian Inference Problems*

## Presentation Type

Invited

## Student

No

## Track

Other

## Abstract

Inverse problems arise in numerous science and engineering applications, such as medical imaging, weather forecasting, and predicting the spread of wildfires. Bayesian inference provides a principled approach to solving inverse problems by casting them in a statistical framework, which is particularly useful when the measurement (the output of the forward problem) is corrupted by noise. However, Bayesian inference algorithms can be challenging to implement when the inferred field is high-dimensional or when the prior information is complex.

In this talk, we will see how conditional Wasserstein generative adversarial networks (cWGANs) can be designed to learn and sample from conditional distributions in Bayesian inference problems. The proposed approach modifies earlier variants of the architecture proposed by Adler et al. (2018) and Ray et al. (2022) in two fundamental ways: (i) the gradient penalty term in the GAN loss uses gradients with respect to all input variables of the critic, and (ii) once trained, samples are drawn from the posterior by considering an open ball around the measurement. Both modifications are motivated by a convergence proof ensuring that the learned conditional distribution weakly approximates the true conditional distribution governing the data. Through simple examples, we show that this leads to a more robust training process, and we demonstrate that the approach can be used to solve complex real-world inverse problems.
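The two modifications above can be sketched in a few lines. The snippet below is a minimal illustration, not the speakers' implementation: it assumes a toy critic whose input gradient is available in closed form, and all names, dimensions, and the ball-sampling scheme are hypothetical stand-ins chosen only to make the two ideas concrete, namely a gradient penalty over *all* critic inputs and posterior sampling from an open ball around the measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for the inferred field x and the measurement y.
DIM_X, DIM_Y = 3, 2

# Illustrative critic D(x, y) = tanh(wx.x + wy.y): nonlinear, but simple
# enough that its gradient with respect to the inputs is known in closed form.
wx = rng.normal(size=DIM_X)
wy = rng.normal(size=DIM_Y)

def critic(x, y):
    return np.tanh(x @ wx + y @ wy)

def critic_input_grad(x, y):
    """Gradient of the critic w.r.t. ALL of its inputs (x AND y)."""
    g = 1.0 - np.tanh(x @ wx + y @ wy) ** 2
    return g * wx, g * wy

def full_gradient_penalty(x, y, lam=10.0):
    """Modification (i): penalize the JOINT input-gradient norm toward 1,
    i.e. a 1-Lipschitz constraint in (x, y) rather than in x alone."""
    gx, gy = critic_input_grad(x, y)
    norm = np.linalg.norm(np.concatenate([gx, gy]))
    return lam * (norm - 1.0) ** 2

def sample_ball(y_meas, radius, n):
    """Modification (ii), sketched: perturb the measurement inside an
    open ball of given radius before conditioning the trained generator."""
    ys = []
    for _ in range(n):
        d = rng.normal(size=y_meas.size)
        d *= rng.uniform(0.0, radius) / np.linalg.norm(d)
        ys.append(y_meas + d)
    return ys

x = rng.normal(size=DIM_X)       # a generated sample of the inferred field
y_meas = rng.normal(size=DIM_Y)  # the noisy measurement
penalty = full_gradient_penalty(x, y_meas)
ys_ball = sample_ball(y_meas, radius=0.1, n=5)
print(penalty >= 0.0, len(ys_ball))  # → True 5
```

In a real cWGAN the closed-form gradient would be replaced by automatic differentiation through the critic network, and each perturbed measurement in the ball would be fed, together with a latent sample, to the trained generator to produce one posterior sample.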

## Start Date

2-6-2024 9:50 AM

## End Date

2-6-2024 10:50 AM


## Location

Pasque 255
