Paper title
Purely Bayesian counterfactuals versus Newcomb's paradox
Paper authors
Paper abstract
This paper proposes a careful separation between an entity's epistemic system and its decision system. Crucially, Bayesian counterfactuals are estimated by the epistemic system, not by the decision system. Based on this remark, I prove the existence of Newcomb-like problems for which an epistemic system necessarily expects the entity to make a counterfactually bad decision. I then address (a slight generalization of) Newcomb's paradox. I solve the specific case where the player believes that the predictor applies Bayes' rule to a superset of all the data available to the player. I prove that the counterfactual optimality of the 1-Box strategy depends on the player's prior on the predictor's additional data. If these additional data are not expected to sufficiently reduce the predictor's uncertainty about the player's decision, then the player's epistemic system will counterfactually prefer to 2-Box. But if the predictor's additional data are believed to make it quasi-omniscient, then 1-Box will be counterfactually preferred. Implications of the analysis are then discussed. More generally, I argue that, to better understand or design an entity, it is useful to clearly separate the entity's epistemic and decision systems, as well as its data collection, reward, and maintenance systems, whether the entity is human, algorithmic, or institutional.
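To make the dependence on the predictor's believed accuracy concrete, here is a minimal illustrative sketch. It uses the standard Newcomb payoffs ($1,000 in the transparent box; $1,000,000 in the opaque box if 1-Box was predicted) and a single accuracy parameter p standing in for how much the predictor's additional data is believed to pin down the player's decision. The payoffs, the parameter p, and the expected_rewards helper are illustrative assumptions; they are not the paper's notation, and this standard expected-value computation is not the paper's purely Bayesian counterfactual analysis.

```python
# Illustrative sketch only: standard Newcomb payoffs, not the paper's model.
# The opaque box holds 1,000,000 if the predictor forecast 1-Box, else 0;
# the transparent box always holds 1,000.
# p is the probability, under the player's beliefs, that the predictor
# correctly anticipates the player's decision. Intuitively, the more
# informative the predictor's additional data, the larger p.

def expected_rewards(p: float) -> tuple[float, float]:
    """Return (E[1-Box], E[2-Box]) given predictor accuracy p."""
    one_box = 1_000_000 * p                 # million paid only if 1-Box was predicted
    two_box = 1_000 + 1_000_000 * (1 - p)   # 1,000 plus the million if mispredicted
    return one_box, two_box

if __name__ == "__main__":
    for p in (0.5, 0.5005, 0.9, 0.99):
        e1, e2 = expected_rewards(p)
        better = "1-Box" if e1 > e2 else "2-Box"
        print(f"p={p:6.4f}  E[1-Box]={e1:10.1f}  E[2-Box]={e2:10.1f}  -> {better}")
    # 1-Box overtakes 2-Box once p exceeds (1_000 + 1_000_000) / (2 * 1_000_000),
    # i.e. about 0.5005: only when the predictor is believed to be close to
    # quasi-omniscient about the decision does 1-Box come out ahead.
```

With these toy numbers, a predictor believed to be barely better than chance already tips the comparison; the paper's point is that whether such confidence in the predictor is warranted depends on the player's prior over the predictor's additional data.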