Paper Title

A Two-Stage Active Learning Algorithm for $k$-Nearest Neighbors

Paper Authors

Nick Rittler, Kamalika Chaudhuri

Paper Abstract

$k$-nearest neighbor classification is a popular non-parametric method because of desirable properties like automatic adaptation to distributional scale changes. Unfortunately, it has thus far proved difficult to design active learning strategies for the training of local voting-based classifiers that naturally retain these desirable properties, and hence active learning strategies for $k$-nearest neighbor classification have been conspicuously missing from the literature. In this work, we introduce a simple and intuitive active learning algorithm for the training of $k$-nearest neighbor classifiers, the first in the literature that retains the concept of the $k$-nearest neighbor vote at prediction time. We provide consistency guarantees for a modified $k$-nearest neighbors classifier trained on samples acquired via our scheme, and show that when the conditional probability function $\mathbb{P}(Y=y|X=x)$ is sufficiently smooth and the Tsybakov noise condition holds, our actively trained classifiers converge to the Bayes optimal classifier at a faster asymptotic rate than passively trained $k$-nearest neighbor classifiers.
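For context, the Tsybakov noise condition referenced in the abstract is commonly stated as follows for binary classification, with $\eta(x) = \mathbb{P}(Y=1 \mid X=x)$; this is the standard formulation from the literature, and the paper's exact parameterization may differ:

$$
\mathbb{P}_X\!\left(0 < \left|\eta(X) - \tfrac{1}{2}\right| \le t\right) \le C\, t^{\beta} \qquad \text{for all } t > 0,
$$

for some constants $C > 0$ and $\beta \ge 0$. Larger $\beta$ means the mass near the decision boundary value $1/2$ is thinner, which is what makes faster convergence rates possible.

To make the "$k$-nearest neighbor vote at prediction time" concrete, below is a minimal sketch of the standard passive $k$-NN vote that the paper's classifier retains. This is the textbook baseline, not the paper's two-stage active sampling scheme, and all names in it are illustrative only:

```python
import numpy as np

def knn_vote(X_train, y_train, x_query, k=5):
    """Predict a binary label by majority vote among the k nearest neighbors.

    Illustrates the standard (passive) k-NN vote the abstract refers to;
    the paper's two-stage *active* sampling scheme is not implemented here.
    """
    # Euclidean distance from the query point to every training point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training points.
    nn_idx = np.argsort(dists)[:k]
    # Majority vote over the neighbors' binary labels in {0, 1}.
    return int(y_train[nn_idx].sum() * 2 >= k)

# Toy usage: labels determined by the sign of the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
print(knn_vote(X, y, np.array([0.5, -0.2]), k=7))  # expected: 1
```

The point of the sketch is only the prediction rule: whatever sampling strategy produced `X_train` and `y_train`, the final classifier still decides by a local vote, which is the property the abstract says prior active learning approaches failed to preserve.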
