The one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth achieves an optimal in-expectation risk bound in the standard PAC classification setup. In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high-probability bound on the risk, and hence is also an optimal PAC algorithm. We refute this conjecture in the strongest sense: for any practically interesting Vapnik-Chervonenkis class, we provide an in-expectation optimal one-inclusion graph algorithm whose high-probability risk bound cannot go beyond that implied by Markov's inequality. Our construction of these poorly performing one-inclusion graph algorithms uses Varshamov-Tenengolts error-correcting codes. Our negative result has several implications. First, it shows that the same poor high-probability performance is inherited by several recent prediction strategies based on generalizations of the one-inclusion graph algorithm. Second, our analysis shows yet another statistical problem that admits an estimator that is provably optimal in expectation via a leave-one-out argument, yet fails in the high-probability regime. This discrepancy occurs despite the boundedness of the binary loss, for which arguments based on concentration inequalities often provide sharp high-probability risk bounds.
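To make the coding-theoretic ingredient concrete, here is a minimal Python sketch (ours, not the authors' construction) that enumerates a Varshamov-Tenengolts code VT_a(n) by brute force; the function name and the enumeration strategy are illustrative assumptions, while the defining checksum condition is the standard one.

```python
# Sketch: enumerate the Varshamov-Tenengolts code
#   VT_a(n) = { x in {0,1}^n : sum_{i=1}^n i * x_i == a (mod n+1) }.
# These binary codes correct a single deletion or insertion and are the
# combinatorial ingredient in the paper's lower-bound construction.
from itertools import product

def vt_code(n: int, a: int = 0) -> list[tuple[int, ...]]:
    """Return all length-n binary words whose VT checksum equals a (mod n+1)."""
    return [
        x for x in product((0, 1), repeat=n)
        if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a
    ]

if __name__ == "__main__":
    # For small n the codewords can be listed directly.
    for word in vt_code(5):
        print(word)
```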
Anomaly and outlier detection is a long-standing problem in machine learning. In some cases anomaly detection is easy, such as when data are drawn from a well-characterized distribution like a Gaussian. However, when data occupy a high-dimensional space, anomaly detection becomes far more difficult. We present CLAM (Clustered Learning of Approximate Manifolds), a manifold-mapping technique for any metric space. CLAM begins with a fast hierarchical clustering technique and then induces graphs from the cluster tree, based on overlapping clusters selected using several geometric and topological features. Using these graphs, we implement CHAODA (Clustered Hierarchical Anomaly and Outlier Detection Algorithms), which explores various properties of the graphs and their constituent clusters to find outliers. CHAODA employs a form of transfer learning based on a training set of datasets and applies this knowledge to separate test datasets of different cardinalities, dimensionalities, and domains. On 24 publicly available datasets, we compare CHAODA (as measured by ROC AUC) against a variety of state-of-the-art unsupervised anomaly-detection algorithms. Six of the datasets are used for training; CHAODA outperforms the other methods on 16 of the remaining 18 datasets. CLAM and CHAODA scale to large, high-dimensional "big data" anomaly-detection problems and generalize across datasets and distance functions. The source code for CLAM and CHAODA is freely available on GitHub at https://github.com/uri-abd/clam.
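As a rough illustration of the kind of divisive hierarchical clustering in a generic metric space that CLAM starts from, here is a short Python sketch; this is our own approximation under stated assumptions, not the released CLAM code. The pole-picking heuristic and helper names are hypothetical, and the graph-induction step that CHAODA relies on is omitted.

```python
# Sketch (not the authors' implementation): divisive clustering in a metric
# space. Each cluster is split around two approximately maximally separated
# "poles"; points go to their nearer pole. CLAM additionally induces graphs
# from overlapping clusters, which is not shown here.
import random

def split(points, dist):
    """Split a cluster around two approximately maximally separated poles."""
    seed = random.choice(points)
    left = max(points, key=lambda p: dist(seed, p))   # far from a random seed
    right = max(points, key=lambda p: dist(left, p))  # far from the first pole
    a = [p for p in points if dist(p, left) <= dist(p, right)]
    b = [p for p in points if dist(p, left) > dist(p, right)]
    return a, b

def cluster_tree(points, dist, min_size=2):
    """Recursively build a cluster tree, returned as nested lists of points."""
    if len(points) <= min_size:
        return points
    a, b = split(points, dist)
    if not a or not b:  # degenerate split (e.g., duplicate points)
        return points
    return [cluster_tree(a, dist, min_size), cluster_tree(b, dist, min_size)]

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(32)]
    euclid = lambda p, q: ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5
    print(cluster_tree(pts, euclid))
```

Because the sketch only ever calls the supplied distance function, it works unchanged for any metric, which mirrors the paper's claim that CLAM operates in arbitrary metric spaces.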