Paper Title

A Partial Information Decomposition Based on Causal Tensors

Paper Authors

Sigtermans, David

Paper Abstract

We propose a partial information decomposition based on the newly introduced framework of causal tensors, i.e., multilinear stochastic maps that transform source data into destination data. This framework enables us to express an indirect association in terms of the constituting direct associations. This is not possible when using average measures like mutual information or transfer entropy. From this, an intuitive definition of redundant and unique information arises. The proposed redundancy satisfies the three axioms introduced by Williams and Beer. The symmetry and self-redundancy properties follow directly from our definition. The Data Processing Inequality ensures that the monotonicity axiom is satisfied. Additionally, two other proposed axioms are satisfied: the identity property and the left monotonicity axiom. Because causal tensors can describe both mutual information and transfer entropy, the proposed partial information decomposition applies to both measures. Results show that the decomposition closely resembles the decomposition of another approach that expresses associations in terms of mutual information a posteriori. It is furthermore demonstrated that negative contributions can arise when our assumptions about the completeness of the data set, or about what should be included as a source, are incorrect.
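
For context only: the decomposition described above builds on the two-source partial information decomposition framework of Williams and Beer. The sketch below is not the paper's causal-tensor construction; it is a minimal Python illustration of the standard Williams-Beer decomposition using their I_min redundancy, applied to a hypothetical XOR toy distribution. All function names and the example distribution are assumptions chosen purely for illustration.

```python
# Minimal sketch of the two-source Williams-Beer partial information
# decomposition (redundancy via I_min), NOT the causal-tensor method of the
# paper. The joint distribution p(s1, s2, t) is a hypothetical XOR toy example.
from collections import defaultdict
from math import log2

# T = S1 XOR S2 with independent, uniform binary sources.
p = {(s1, s2, s1 ^ s2): 0.25 for s1 in (0, 1) for s2 in (0, 1)}

def marginal(dist, axes):
    """Marginalize the joint distribution onto the given variable indices."""
    out = defaultdict(float)
    for k, v in dist.items():
        out[tuple(k[a] for a in axes)] += v
    return out

def mutual_info(dist, x_axes, y_axes):
    """Mutual information I(X; Y) in bits between two groups of variables."""
    px, py = marginal(dist, x_axes), marginal(dist, y_axes)
    pxy = marginal(dist, x_axes + y_axes)
    return sum(v * log2(v / (px[k[:len(x_axes)]] * py[k[len(x_axes):]]))
               for k, v in pxy.items() if v > 0)

def specific_info(dist, s_axes, t):
    """I(T = t; S): information the source provides about the target value t."""
    pt = marginal(dist, (2,))
    pst = marginal(dist, s_axes + (2,))
    ps = marginal(dist, s_axes)
    total = 0.0
    for k, v in pst.items():
        if k[-1] != t or v == 0:
            continue
        s = k[:-1]
        total += (v / pt[(t,)]) * log2((v / ps[s]) / pt[(t,)])
    return total

def i_min(dist):
    """Williams-Beer redundancy I_min(T; {S1}, {S2})."""
    pt = marginal(dist, (2,))
    return sum(pt[(t,)] * min(specific_info(dist, (0,), t),
                              specific_info(dist, (1,), t))
               for (t,) in pt)

R = i_min(p)
U1 = mutual_info(p, (0,), (2,)) - R       # unique information of S1
U2 = mutual_info(p, (1,), (2,)) - R       # unique information of S2
S = mutual_info(p, (0, 1), (2,)) - R - U1 - U2  # synergy closes the decomposition
print(f"redundancy={R:.3f}, unique1={U1:.3f}, unique2={U2:.3f}, synergy={S:.3f}")
# For the XOR example this prints zeros except synergy = 1 bit.
```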
