Title
Information in probability: Another information-theoretic proof of a finite de Finetti theorem
Authors
Abstract
We recall some of the history of the information-theoretic approach to deriving core results in probability theory, and indicate parts of the recent resurgence of interest in this area, with current progress along several interesting directions. Then we give a new information-theoretic proof of a finite version of de Finetti's classical representation theorem for finite-valued random variables. We derive an upper bound on the relative entropy between the distribution of the first $k$ random variables in a sequence of $n$ exchangeable random variables, and an appropriate mixture over product distributions. The mixing measure is characterised as the law of the empirical measure of the original sequence, and de Finetti's result is recovered as a corollary. The proof is nicely motivated by the Gibbs conditioning principle in connection with statistical mechanics, and it proceeds along an appealing sequence of steps. The technical estimates required for these steps are obtained via the use of a collection of combinatorial tools known within information theory as `the method of types.'
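To make the objects in the abstract concrete, the following is a schematic LaTeX sketch, not the paper's exact statement: the first display is the classical de Finetti representation for an infinite exchangeable sequence over a finite alphabet, and the second is the relative entropy quantity the finite version bounds. The notation ($A$ for the alphabet, $\mathcal{P}(A)$ for the simplex, $\mu_n$ for the law of the empirical measure) is our own choice and need not match the paper's.

```latex
% Classical de Finetti: for an infinite exchangeable sequence of
% A-valued random variables (A finite) there is a mixing measure mu
% on the simplex P(A) such that, for every k,
\[
  \mathbb{P}(X_1 = x_1, \dots, X_k = x_k)
    = \int_{\mathcal{P}(A)} \prod_{i=1}^{k} \theta(x_i)\, \mu(d\theta).
\]
% The finite version instead bounds, for an exchangeable X_1, ..., X_n,
% the relative entropy between the law of the first k variables and a
% mixture of product distributions:
\[
  D\!\left( P_{X_1^k} \,\middle\|\,
     \int_{\mathcal{P}(A)} \theta^{\otimes k}\, \mu_n(d\theta) \right),
\]
% where mu_n is the law of the empirical measure
% \hat{P}_n = (1/n) \sum_{i=1}^{n} \delta_{X_i},
% and D(. || .) denotes relative entropy (Kullback-Leibler divergence).
```

As the abstract notes, the mixing measure in the finite bound is exactly the law of the empirical measure, and letting $n \to \infty$ recovers the classical representation as a corollary.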