Paper Title

Linear Separation via Optimism

Authors

Rafael Hanashiro, Jacob Abernethy

Abstract

Binary linear classification has been explored since the very early days of the machine learning literature. Perhaps the most classical algorithm is the Perceptron, where a weight vector used to classify examples is maintained, and additive updates are made as incorrect examples are discovered. The Perceptron has been thoroughly studied and several versions have been proposed over many decades. The key theoretical fact about the Perceptron is that, so long as a perfect linear classifier exists with some margin $\gamma > 0$, the number of required updates to find such a perfect linear separator is bounded by $\frac{1}{\gamma^2}$. What has never been fully addressed is: does there exist an algorithm that can achieve this with fewer updates? In this paper we answer this in the affirmative: we propose the Optimistic Perceptron algorithm, a simple procedure that finds a separating hyperplane in no more than $\frac{1}{\gamma}$ updates. We also show experimentally that this procedure can significantly outperform Perceptron.
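
For reference, below is a minimal Python/NumPy sketch of the classical Perceptron update described in the abstract (a weight vector with mistake-driven additive updates). The function name perceptron, the max_updates cap, and the toy data are illustrative assumptions only; the paper's Optimistic Perceptron algorithm is not reproduced here.

import numpy as np

def perceptron(X, y, max_updates=10_000):
    # Classical Perceptron: keep a weight vector w and, whenever an example
    # is misclassified (y_i * <w, x_i> <= 0), add y_i * x_i to w.  With
    # unit-norm examples and a separator of margin gamma > 0, the number of
    # updates is at most 1 / gamma^2.
    n, d = X.shape
    w = np.zeros(d)
    updates = 0
    while updates < max_updates:
        made_mistake = False
        for i in range(n):
            if y[i] * X[i].dot(w) <= 0:   # misclassified (or zero margin)
                w += y[i] * X[i]          # additive update
                updates += 1
                made_mistake = True
        if not made_mistake:              # all examples classified correctly
            return w, updates
    return w, updates

# Toy usage (illustrative data): unit-norm points in 2D, labeled by a
# unit-norm separator w_star, keeping only points with margin at least 0.1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)
w_star = np.array([1.0, 2.0]) / np.linalg.norm([1.0, 2.0])
margins = X @ w_star
keep = np.abs(margins) > 0.1
X, y = X[keep], np.sign(margins[keep])
w, num_updates = perceptron(X, y)
print(f"separating hyperplane found after {num_updates} updates")

On this toy data the classical mistake bound guarantees termination after at most $\frac{1}{0.1^2} = 100$ updates, whereas the paper's Optimistic Perceptron is claimed to need no more than $\frac{1}{\gamma}$ updates.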
