Paper Title
Deep Bootstrap for Bayesian Inference
Paper Authors

Paper Abstract
For a Bayesian, the task of defining the likelihood can be as perplexing as the task of defining the prior. We focus on situations where the parameter of interest has been emancipated from the likelihood and is linked to data directly through a loss function. We survey existing work on both Bayesian parametric inference with Gibbs posteriors and Bayesian non-parametric inference. We then highlight recent bootstrap computational approaches to approximating loss-driven posteriors. In particular, we focus on implicit bootstrap distributions defined through an underlying push-forward mapping. We investigate iid samplers from approximate posteriors that pass random bootstrap weights through a trained generative network. After training the deep-learning mapping, the simulation cost of such iid samplers is negligible. We compare the performance of these deep bootstrap samplers with the exact bootstrap as well as MCMC on several examples (including support vector machines and quantile regression). We also provide theoretical insights into bootstrap posteriors by drawing upon connections to model misspecification.
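To make the idea concrete, the following is a minimal sketch (in Python/PyTorch, not the authors' implementation) of a deep bootstrap sampler for the quantile-regression example mentioned in the abstract. It trains a generative network to map random Dirichlet bootstrap weights to parameter values by minimizing the expected weighted check loss; afterwards, iid posterior draws are cheap forward passes on fresh weights. The network architecture, the Dirichlet(1, ..., 1) weighting scheme, the training schedule, and the toy data are all illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Toy data: y = 2 + 3x + noise; we target the conditional median (tau = 0.5).
n = 200
x = rng.uniform(-1, 1, size=(n, 1))
y = 2.0 + 3.0 * x + rng.standard_normal((n, 1))
X = torch.tensor(np.hstack([np.ones((n, 1)), x]), dtype=torch.float32)  # (n, 2)
Y = torch.tensor(y, dtype=torch.float32)                                # (n, 1)
tau = 0.5

def check_loss(res, tau):
    # Pinball (check) loss for quantile regression.
    return torch.maximum(tau * res, (tau - 1.0) * res)

# Generator G: maps an n-vector of bootstrap weights to a parameter vector.
# This is the push-forward mapping approximated by deep learning.
G = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(2000):
    # Dirichlet(1,...,1) weights rescaled to sum to n (weighted-loss bootstrap).
    w = torch.tensor(rng.dirichlet(np.ones(n), size=32) * n, dtype=torch.float32)
    theta = G(w)                       # (32, 2): one parameter per weight draw
    res = Y.T - theta @ X.T            # (32, n): residuals under each draw
    # Expected weighted loss over weight draws; its minimizer approximates
    # the map from weights to the weighted-loss argmin.
    loss = (w * check_loss(res, tau)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, iid draws from the approximate bootstrap posterior are
# just forward passes on freshly sampled weights.
with torch.no_grad():
    w_new = torch.tensor(rng.dirichlet(np.ones(n), size=1000) * n,
                         dtype=torch.float32)
    samples = G(w_new).numpy()
print("posterior mean:", samples.mean(axis=0))

In this sketch, the "exact bootstrap" baseline would instead solve the weighted quantile-regression problem from scratch for every weight draw; the trained network amortizes that cost, which is why simulation after training is described as negligible.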