Paper Title

Sampling from a $k$-DPP without looking at all items

Paper Authors

Daniele Calandriello, Michał Dereziński, Michal Valko

Paper Abstract

Determinantal point processes (DPPs) are a useful probabilistic model for selecting a small diverse subset out of a large collection of items, with applications in summarization, stochastic optimization, active learning and more. Given a kernel function and a subset size $k$, our goal is to sample $k$ out of $n$ items with probability proportional to the determinant of the kernel matrix induced by the subset (a.k.a. $k$-DPP). Existing $k$-DPP sampling algorithms require an expensive preprocessing step which involves multiple passes over all $n$ items, making them infeasible for large datasets. A naïve heuristic addressing this problem is to uniformly subsample a fraction of the data and perform $k$-DPP sampling only on those items; however, this method offers no guarantee that the produced sample will even approximately resemble the target distribution over the original dataset. In this paper, we develop an algorithm which adaptively builds a sufficiently large uniform sample of data that is then used to efficiently generate a smaller set of $k$ items, while ensuring that this set is drawn exactly from the target distribution defined on all $n$ items. We show empirically that our algorithm produces a $k$-DPP sample after observing only a small fraction of all elements, leading to several orders of magnitude faster performance compared to the state-of-the-art.
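To make the target distribution concrete, the sketch below is a minimal brute-force reference sampler, not the paper's adaptive algorithm: given a kernel matrix $K$, it draws a size-$k$ subset $S$ with probability proportional to $\det(K_S)$ by enumerating all subsets, which is only feasible for tiny $n$. The function name naive_kdpp_sample and the RBF kernel used in the example are illustrative assumptions, not part of the paper.

import itertools
import numpy as np

def naive_kdpp_sample(K, k, rng=None):
    """Draw one exact k-DPP sample by brute force.

    Enumerates every size-k subset S of {0, ..., n-1} and samples S with
    probability proportional to det(K[S, S]). Exponential in n, so this is
    only a reference implementation of the target distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = K.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    weights = np.array([np.linalg.det(K[np.ix_(s, s)]) for s in subsets])
    probs = weights / weights.sum()
    return subsets[rng.choice(len(subsets), p=probs)]

# Example (assumed setup): an RBF kernel over 8 random points in the plane, k = 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)
print(naive_kdpp_sample(K, k=3, rng=rng))

The paper's contribution is to obtain samples from this same exact distribution without ever touching most of the $n$ items, by adaptively growing a uniform subsample until it is large enough to support exact $k$-DPP sampling.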
