Paper Title
Empirical Bayes estimation: When does $g$-modeling beat $f$-modeling in theory (and in practice)?
Paper Authors
Paper Abstract
Empirical Bayes (EB) is a popular framework for large-scale inference that aims to find data-driven estimators to compete with the Bayesian oracle that knows the true prior. Two principled approaches to EB estimation have emerged over the years: $f$-modeling, which constructs an approximate Bayes rule by estimating the marginal distribution of the data, and $g$-modeling, which estimates the prior from the data and then applies the learned Bayes rule. For the Poisson model, the prototypical examples are the celebrated Robbins estimator and the nonparametric MLE (NPMLE), respectively. It has long been recognized in practice that the Robbins estimator, while conceptually appealing and computationally simple, lacks robustness and can be easily derailed by ``outliers'', unlike the NPMLE, which provides a more stable and interpretable fit thanks to its Bayes form. On the other hand, not only do the existing theories shed little light on this phenomenon, but they all point to the opposite, as both methods have recently been shown to be optimal in terms of regret (excess over the Bayes risk) for compactly supported and subexponential priors. In this paper we provide a theoretical justification for the superiority of $g$-modeling over $f$-modeling for heavy-tailed data by considering priors with a bounded $p$-th moment for some $p>1$. We show that with mild regularization, any $g$-modeling method that is Hellinger rate-optimal in density estimation achieves an optimal total regret $\tilde{\Theta}(n^{\frac{3}{2p+1}})$; in particular, the special case of the NPMLE succeeds without regularization. In contrast, there exists an $f$-modeling estimator whose density estimation rate is optimal but whose EB regret is suboptimal by a polynomial factor. These results show that the proper Bayes form provides a ``general recipe of success'' for optimal EB estimation that applies to all $g$-modeling (but not $f$-modeling) methods.
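
For concreteness, the two prototypical estimators contrasted in the abstract admit simple closed forms in the Poisson model. The display below is the standard formulation (Robbins' 1956 estimator and the Kiefer--Wolfowitz NPMLE); it is a sketch supplied here for orientation, not spelled out in the abstract itself. With $X_i \mid \theta_i \sim \mathrm{Poi}(\theta_i)$ and $\theta_i \sim G$, the mixture density and the Bayes rule are
\[
f_G(x) = \int \frac{\theta^x e^{-\theta}}{x!}\, dG(\theta),
\qquad
\theta_G(x) := \mathbb{E}_G[\theta \mid X = x] = (x+1)\,\frac{f_G(x+1)}{f_G(x)}.
\]
The $f$-modeling route (Robbins) plugs the empirical counts $N(x) := \#\{i : X_i = x\}$ directly into this ratio,
\[
\hat\theta_{\mathrm{Robbins}}(x) = (x+1)\,\frac{N(x+1)}{N(x)},
\]
while the $g$-modeling route first fits the prior by maximum likelihood and then applies the exact Bayes rule of the fitted prior:
\[
\hat G \in \operatorname*{arg\,max}_{G} \sum_{i=1}^n \log f_G(X_i),
\qquad
\hat\theta_{\mathrm{NPMLE}}(x) = (x+1)\,\frac{f_{\hat G}(x+1)}{f_{\hat G}(x)}.
\]
Written this way, the robustness gap the abstract describes is visible in the formulas: at a stray large observation $x$, the Robbins ratio $N(x+1)/N(x)$ is a ratio of counts that are typically $0$ or $1$ and hence erratic, whereas $f_{\hat G}$ is a genuine Poisson mixture density, so the $g$-modeling ratio is always the posterior mean under some prior and inherits its stability from that Bayes form.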