Paper Title
Learning Branching Heuristics for Propositional Model Counting
Paper Authors
Paper Abstract
Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial-size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It is able to achieve these results on a number of different problem families having very different structures. In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.