Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules-of-thumb" from an "easy-to-learn class" (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle complexity: how many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega(1/\gamma^2)$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}(1/\gamma)$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: which tasks can be learned by boosting weak hypotheses from a class of bounded VC dimension? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters that capture the expressivity of boosting. As a corollary, we provide an affirmative answer to the second question for well-studied classes, including halfspaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
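For concreteness, the quantities being compared can be written out as follows (standard weak-learning notation, ours rather than the abstract's): a $\gamma$-weak hypothesis $h_t$ returned on the reweighted distribution $D_t$ satisfies
$$\Pr_{(x,y)\sim D_t}\big[h_t(x)\neq y\big] \;\le\; \tfrac{1}{2} - \gamma,$$
and the number of such weak hypotheses that must be aggregated is $\Omega(1/\gamma^2)$ for majority-vote boosting (the Freund–Schapire lower bound) versus $\tilde{O}(1/\gamma)$ under the bounded-VC-dimension assumption studied here.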
The classical algorithm AdaBoost allows to convert a weak learner, that is, an algorithm producing a hypothesis which is slightly better than chance, into a strong learner, achieving arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner, but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classical problem of constructing a strong learner from a weak learner.
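For readers who want the baseline in front of them, here is a minimal sketch of classical AdaBoost with decision stumps as the weak learner (labels in $\{-1,+1\}$); it illustrates only the weak-to-strong template the abstract refers to, not the new sample-optimal algorithm:

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Decision stump: sign * (+1 if X[:, feat] > thresh, else -1)."""
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def fit_stump(X, y, w):
    """Weak learner: exhaustively pick the stump with the smallest weighted error."""
    n, d = X.shape
    best, best_err = (0, 0.0, 1.0), np.inf
    for feat in range(d):
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                err = np.sum(w * (stump_predict(X, feat, thresh, sign) != y))
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def adaboost(X, y, T=50):
    """Classic AdaBoost: reweight examples, aggregate stumps by a weighted majority vote."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        (feat, thresh, sign), err = fit_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)        # vote weight of this weak hypothesis
        margin = y * stump_predict(X, feat, thresh, sign)
        w *= np.exp(-alpha * margin)                 # up-weight mistakes, down-weight correct points
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(score)
```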
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions such as uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line black-box reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or with general losses, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g., noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
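The abstract does not spell the reduction out; the sketch below shows the folklore "relabel-and-select" pattern that black-box realizable-to-agnostic reductions of this kind typically follow, for a finite (or enumerable-on-a-block) hypothesis class. All names, signatures, and the exact selection step are illustrative assumptions, not the paper's construction:

```python
from typing import Callable, Iterable, List, Sequence, Tuple

Hypothesis = Callable[[object], int]          # maps a point to a label in {0, 1}

def realizable_to_agnostic(
    realizable_learner: Callable[[Sequence[Tuple[object, int]]], Hypothesis],
    hypothesis_class: Iterable[Hypothesis],
    unlabeled_block: Sequence[object],
    holdout: Sequence[Tuple[object, int]],
) -> Hypothesis:
    """Hedged sketch of a relabel-and-select reduction.

    1. Relabel the unlabeled block by every behaviour the class induces on it.
    2. Feed each (now realizable) labeled block to the realizable learner.
    3. Return the candidate with the smallest error on a fresh holdout.
    """
    candidates: List[Hypothesis] = []
    seen_behaviours = set()
    for h in hypothesis_class:                          # enumerate class behaviours on the block
        behaviour = tuple(h(x) for x in unlabeled_block)
        if behaviour in seen_behaviours:
            continue
        seen_behaviours.add(behaviour)
        labeled = list(zip(unlabeled_block, behaviour))  # realizable by construction
        candidates.append(realizable_learner(labeled))

    def holdout_error(g: Hypothesis) -> float:
        return sum(g(x) != y for x, y in holdout) / max(len(holdout), 1)

    return min(candidates, key=holdout_error)
```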
A classical result in learning theory shows the equivalence of PAC learnability of binary hypothesis classes and the finiteness of VC dimension. Extending this to the multiclass setting was an open problem, which was settled in a recent breakthrough result characterizing multiclass PAC learnability via the DS dimension introduced earlier by Daniely and Shalev-Shwartz. In this work we consider list PAC learning where the goal is to output a list of $k$ predictions. List learning algorithms have been developed in several settings before and indeed, list learning played an important role in the recent characterization of multiclass learnability. In this work we ask: when is it possible to $k$-list learn a hypothesis class? We completely characterize $k$-list learnability in terms of a generalization of DS dimension that we call the $k$-DS dimension. Generalizing the recent characterization of multiclass learnability, we show that a hypothesis class is $k$-list learnable if and only if the $k$-DS dimension is finite.
We present a minimax-optimal learner for the problem of learning predictors robust to adversarial examples at test time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro (2019), as well as of a broader family of learners that we identify as local learners. Our results are enabled by adopting a global perspective, through a key technical contribution: the global one-inclusion graph, which may be of independent interest and which generalizes the classical one-inclusion graph due to Haussler, Littlestone, and Warmuth (1994). Finally, as a byproduct, we identify a dimension characterizing, qualitatively and quantitatively, which classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem due to Montasser et al. (2019) and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
All famous machine learning algorithms that comprise both supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be rebuilt from newly collected data, which for some applications may be costly or impossible to obtain. It has therefore become necessary to develop approaches that reduce the need and effort of obtaining new labeled samples, by exploiting data available in related areas and using them further in similar fields. This has given rise to a new machine learning framework called transfer learning: a learning setting inspired by the human ability to extrapolate knowledge across tasks in order to learn more efficiently. Despite the large variety of transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning called domain adaptation. In this sub-field, the data distribution is assumed to change across the training and test data while the learning task remains the same. We provide a first up-to-date description of existing results related to the domain adaptation problem, covering learning bounds based on different statistical learning frameworks.
We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathfrak{C}$ be a class of polynomial-size concepts, and suppose that $\mathfrak{C}$ can be learned from membership queries under the uniform distribution with error $1/2 - \gamma$ by a time-$T$ quantum algorithm. We prove that if $\gamma^2 \cdot T \ll 2^n/n$, then $\mathsf{BQE} \nsubseteq \mathfrak{C}$, where $\mathsf{BQE} = \mathsf{BQTIME}[2^{O(n)}]$ is an exponential-time analogue of $\mathsf{BQP}$. This result is optimal in both $\gamma$ and $T$, since it is not hard to learn any class of functions in (classical) time $T = 2^n$ (with no error), or in quantum time $T = \mathsf{poly}(n)$ with error at most $1/2 - \Omega(2^{-n/2})$ via Fourier sampling. In other words, even a marginal improvement on these generic learning algorithms would lead to major consequences in complexity theory. Our proof builds on several works in learning theory, pseudorandomness, and computational complexity, and crucially, on the connection between non-trivial classical learning algorithms and circuit lower bounds established by Oliveira and Santhanam (CCC 2017). Extending their approach to quantum learning algorithms turns out to pose significant challenges. To that end, we show, among other results, how pseudorandom generators imply learning-to-lower-bound connections in a generic fashion, construct the first conditional pseudorandom generator secure against uniform quantum computations, and extend the local list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets, and Wigderson (SICOMP 2010) to quantum circuits via a delicate analysis. We believe these contributions are of independent interest and may find other applications.
We study the problem of PAC learning halfspaces with Massart noise under the Gaussian distribution. In the Massart model, an adversary is allowed to flip the label of each point $\mathbf{x}$ with unknown probability $\eta(\mathbf{x}) \leq \eta$, for some parameter $\eta \in [0, 1/2]$. The goal is to find a hypothesis with misclassification error of $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the error of the target halfspace. This problem had previously been studied under two assumptions: (i) the target halfspace is homogeneous (i.e., the separating hyperplane goes through the origin), and (ii) the parameter $\eta$ is strictly smaller than $1/2$. Prior to this work, no nontrivial bounds were known when either of these assumptions is removed. We study the general problem and establish the following: For $\eta < 1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $d^{O_{\eta}(\log(1/\gamma))}\,\mathrm{poly}(1/\epsilon)$, where $\gamma = \max\{\epsilon, \min\{\mathbf{Pr}[f(\mathbf{x})=1], \mathbf{Pr}[f(\mathbf{x})=-1]\}\}$ is the bias of the target halfspace $f$. Existing efficient algorithms could only handle the special case of $\gamma = 1/2$. Interestingly, we establish a qualitatively matching lower bound of $d^{\Omega(\log(1/\gamma))}$ on the complexity of any Statistical Query (SQ) algorithm. For $\eta = 1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $O_\epsilon(1)\,d^{O(\log(1/\epsilon))}$. This result is new even for the subclass of homogeneous halfspaces; prior algorithms for homogeneous Massart halfspaces provide vacuous guarantees for $\eta = 1/2$. We complement our upper bound with a nearly matching SQ lower bound of $d^{\Omega(\log(1/\epsilon))}$, which holds even for the special case of homogeneous halfspaces.
We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests--based on a collection of losses and hypothesis class--a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
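As background on the multiaccuracy ingredient mentioned above, here is a hedged sketch of the generic multiaccuracy post-processing loop from the multi-group fairness literature (iteratively correcting a predictor against auditors drawn from a class $C$). Parameter names and the additive update are ours; this is not the paper's specific algorithm:

```python
import numpy as np

def multiaccuracy_boost(p0, auditors, X, y, alpha=0.01, eta=0.1, max_iter=100):
    """Generic multiaccuracy post-processing (sketch).

    p0       : initial predictions in [0, 1] for the points in X
    auditors : list of functions c(X) -> real-valued scores (the auditing class C)
    Stops when no auditor detects correlation larger than alpha with the residual y - p.
    """
    p = np.clip(np.array(p0, dtype=float), 0.0, 1.0)
    for _ in range(max_iter):
        residual = y - p
        # audit step: find a c in C that correlates with the residual
        scores = [(c, np.mean(c(X) * residual)) for c in auditors]
        c_star, corr = max(scores, key=lambda t: abs(t[1]))
        if abs(corr) <= alpha:
            break                                   # p is alpha-multiaccurate w.r.t. C
        # corrective step: move predictions in the direction the auditor indicates
        p = np.clip(p + eta * np.sign(corr) * c_star(X), 0.0, 1.0)
    return p
```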
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.
We consider a model of robust learning in an adversarial environment. The learner gets uncorrupted training data with access to the possible corruptions that may be applied by an adversary at test time. The learner's goal is to build a robust classifier that will be tested on future adversarial examples. The adversary is limited to $k$ possible corruptions for each input. We model the learner-adversary interaction as a zero-sum game. This model is closely related to the adversarial examples model of Schmidt et al. (2018) and Madry et al. (2017). Our main results consist of generalization bounds for binary and multiclass classification, as well as for the real-valued case (regression). For the binary classification setting, we both tighten the generalization bound of Feige et al. (2015) and are able to handle infinite hypothesis classes. The sample complexity is improved from $O(\frac{1}{\epsilon^4}\log(\frac{|H|}{\delta}))$ to $O\big(\frac{1}{\epsilon^2}(k\,\mathrm{VC}(H)\log^{\frac{3}{2}+\alpha}(k\,\mathrm{VC}(H)) + \log(\frac{1}{\delta}))\big)$ for any $\alpha > 0$. In addition, we extend the algorithm and generalization bounds from the binary to the multiclass and real-valued cases. Along the way, we obtain results on the fat-shattering dimension and Rademacher complexity of $k$-fold maxima of function classes; these may be of independent interest. For binary classification, the algorithm of Feige et al. (2015) uses a regret-minimization algorithm together with an ERM oracle as a black box; we adapt it to the multiclass and regression settings. The algorithm provides us with near-optimal policies for the players on a given training sample.
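The regret-minimization-plus-ERM-oracle approach mentioned at the end follows the standard recipe for approximately solving a zero-sum game: one player runs a no-regret (multiplicative-weights) algorithm while the other best-responds through the oracle. A hedged, generic sketch follows (all names and simplifications are ours, not the paper's exact procedure):

```python
import math

def robust_game_solver(corruptions, labels, erm_oracle, T=100, eta=0.1):
    """Approximately solve the learner-vs-corruption zero-sum game (generic sketch).

    corruptions : corruptions[i] is the list of the k corrupted versions of example i
    labels      : labels[i] is the true label of example i
    erm_oracle  : maps a weighted labeled sample [(x, y, weight), ...] to a hypothesis h(x) -> label
    Returns the hypotheses produced along the way; predict with their majority vote.
    """
    n = len(labels)
    k = len(corruptions[0])
    w = [[1.0] * k for _ in range(n)]          # MW weights of the corruption player, per example
    hypotheses = []
    for _ in range(T):
        # corruption player's mixed strategy induces a weighted corrupted sample
        sample = []
        for i in range(n):
            total = sum(w[i])
            for j in range(k):
                sample.append((corruptions[i][j], labels[i], w[i][j] / total))
        h = erm_oracle(sample)                 # learner best-responds via the ERM oracle
        hypotheses.append(h)
        # multiplicative update: reward a corruption when it fools the current hypothesis
        for i in range(n):
            for j in range(k):
                if h(corruptions[i][j]) != labels[i]:
                    w[i][j] *= math.exp(eta)
    return hypotheses
```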
We investigate the computational efficiency of multitask learning of Boolean functions over the $d$-dimensional hypercube that are related by means of a feature representation of size $k \ll d$ shared across all tasks. We present a polynomial-time multitask learning algorithm for concept classes learnable with margin $\gamma$, which is based on a simultaneous boosting technique and requires only $\textrm{poly}(k/\gamma)$ samples per task and $\textrm{poly}(k\log(d)/\gamma)$ samples in total. In addition, we prove a computational separation, showing that, assuming there exists a concept class that cannot be learned in the attribute-efficient model, we can construct another concept class that can be learned in the attribute-efficient model but cannot be multitask learned efficiently: multitask learning of this concept class either requires super-polynomial time complexity or a much larger total number of samples.
The one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth achieves an optimal in-expectation risk bound in the standard PAC classification setup. In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high probability bound on the risk, and hence is also an optimal PAC algorithm. We refute this conjecture in the strongest sense: for any practically interesting Vapnik-Chervonenkis class, we provide an in-expectation optimal one-inclusion graph algorithm whose high probability risk bound cannot go beyond that implied by Markov's inequality. Our construction of these poorly performing one-inclusion graph algorithms uses Varshamov-Tenengolts error correcting codes. Our negative result has several implications. First, it shows that the same poor high-probability performance is inherited by several recent prediction strategies based on generalizations of the one-inclusion graph algorithm. Second, our analysis shows yet another statistical problem that enjoys an estimator that is provably optimal in expectation via a leave-one-out argument, but fails in the high-probability regime. This discrepancy occurs despite the boundedness of the binary loss for which arguments based on concentration inequalities often provide sharp high probability risk bounds.
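For context, the in-expectation guarantee usually cited for the one-inclusion graph algorithm is the following (standard statement in our notation, not quoted from the abstract): for a class of VC dimension $d$, a target $h^*$ in the class, and $n$ i.i.d. realizable examples $S \sim D^n$, the prediction strategy $\hat{h}_S$ satisfies
$$\mathbb{E}_{S \sim D^n}\Big[\Pr_{x \sim D}\big[\hat{h}_S(x) \neq h^*(x)\big]\Big] \;\le\; \frac{d}{n+1},$$
which is the in-expectation optimality that the abstract contrasts with the strategy's behavior in the high-probability regime.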
We consider the sample complexity of learning with adversarial robustness. Most existing theoretical results for this problem consider a setting in which different classes of the data are close together or overlapping. Motivated by some practical applications, we consider, in contrast, the well-separated case, where there exists a classifier with perfect accuracy and robustness, and show that the sample complexity tells an entirely different story. Specifically, for linear classifiers, we exhibit a large class of well-separated distributions for which the expected robust loss of any algorithm is at least $\Omega(\frac{d}{n})$, whereas the max-margin algorithm has expected standard loss $O(\frac{1}{n})$. This shows a gap between the standard and robust losses that cannot be obtained via existing techniques. In addition, we present an algorithm which, given an instance where the robustness radius is much smaller than the gap between the classes, outputs a solution with expected robust loss $O(\frac{1}{n})$. This shows that for very well-separated data, convergence rates of $O(\frac{1}{n})$ are achievable, which is not the case otherwise. Our results apply to robustness measured in any $\ell_p$ norm with $p > 1$ (including $p = \infty$).
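To ground the linear case discussed above, the $\ell_p$-robust loss of a linear classifier has a standard closed form (a worked background identity in our notation, not a result of the paper): for the classifier $x \mapsto \mathrm{sign}(\langle w, x\rangle + b)$, perturbation radius $r$, and dual exponent $q$ with $1/p + 1/q = 1$,
$$\mathbb{1}\Big[\exists\, \delta,\ \|\delta\|_p \le r:\ y\big(\langle w, x+\delta\rangle + b\big) \le 0\Big] \;=\; \mathbb{1}\big[\, y(\langle w, x\rangle + b) \le r\,\|w\|_q \,\big],$$
so robustness at radius $r$ is exactly a margin requirement of $r\|w\|_q$, which is why the max-margin algorithm is the natural baseline in this setting.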
We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
Introduced as a notion of algorithmic fairness, multicalibration has proved to be a powerful and versatile concept, with implications far beyond its original intent. This stringent notion -- that predictions be well calibrated over a rich class of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexity of learning multicalibrated predictors are high, and grow exponentially with the number of class labels. In contrast, the relaxed notion of multiaccuracy can be achieved more efficiently, yet many of the most desirable properties of multicalibration cannot be guaranteed assuming multiaccuracy alone. This tension raises a key question: can we learn predictors with multicalibration-style guarantees at a cost commensurate with that of multiaccuracy? In this work, we define and initiate the study of Low-Degree Multicalibration. Low-Degree Multicalibration defines a hierarchy of increasingly powerful multi-group fairness notions that spans multiaccuracy and the original formulation of multicalibration at the extremes. Our main technical contribution demonstrates that key properties of multicalibration, related to fairness and accuracy, in fact manifest as low-degree properties. Importantly, we show that low-degree multicalibration can be significantly more efficient than full multicalibration. In the multi-class setting, the sample complexity to achieve low-degree multicalibration improves exponentially (in the number of classes) over full multicalibration. Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees.
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
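The generic private agnostic learner with sample complexity logarithmic in the size of the concept class alluded to above is based on the exponential mechanism; a minimal sketch for a finite class follows (utility is negative empirical error, which has sensitivity $1/n$; parameter names are ours):

```python
import math
import random

def exponential_mechanism_learner(concepts, sample, epsilon):
    """Select a concept with probability proportional to exp(-epsilon * n * err / 2).

    concepts : finite list of functions c(x) -> label
    sample   : list of (x, y) pairs
    epsilon  : differential-privacy parameter
    Since the empirical error has sensitivity 1/n, this selection is epsilon-DP.
    """
    n = len(sample)
    errs = [sum(c(x) != y for x, y in sample) / n for c in concepts]
    weights = [math.exp(-epsilon * n * e / 2.0) for e in errs]
    r = random.random() * sum(weights)
    acc = 0.0
    for c, w in zip(concepts, weights):
        acc += w
        if r <= acc:
            return c
    return concepts[-1]
```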
Given a learning task in which the data is distributed among several parties, communication is one of the fundamental resources that the parties would like to minimize. We present a distributed boosting algorithm which is resilient to a limited amount of noise. Our algorithm is similar to classical boosting algorithms, although it is equipped with a new component, inspired by Impagliazzo's hard-core lemma \cite{Impagliazzo1995hard}, that adds a robustness quality to the algorithm. We also complement this result by showing that resilience to any asymptotically larger amount of noise is not achievable by a communication-efficient algorithm.
In this work, we investigate the expressiveness of the "conditional mutual information" (CMI) framework of Steinke and Zakynthinou (2020) and the prospect of using it to provide a unified framework for proving generalization bounds in the realizable setting. We first demonstrate that this framework can be used to express non-trivial (but sub-optimal) bounds for any learning algorithm that outputs hypotheses from a class of bounded VC dimension. We then prove that the CMI framework yields the optimal bound on the expected risk for learning halfspaces. This result is an application of our general result showing that stable compression schemes (Bousquet et al., 2020) of size $k$ have uniformly bounded CMI of order $O(k)$. We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, and this implies a negative resolution of an open problem of Steinke and Zakynthinou (2020). We further study the CMI of empirical risk minimizers (ERMs) of a class $H$ and show that it is possible to output all consistent classifiers (the version space) with bounded CMI if and only if $H$ has a bounded star number (Hanneke and Yang, 2015). Moreover, we prove a general reduction showing that "leave-one-out" analysis is expressible via the CMI framework. As a corollary, we investigate the CMI of the one-inclusion graph algorithm of Haussler et al. (1994). More generally, we show that the CMI framework is universal, in the sense that for every consistent algorithm and data distribution, the expected risk vanishes as the number of samples diverges if and only if its evaluated CMI has sublinear growth with the number of samples.
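For reference, the central quantity in the Steinke--Zakynthinou framework is usually defined as follows (standard formulation in our notation): draw a supersample $\tilde{Z} \in \mathcal{Z}^{n\times 2}$ of $2n$ i.i.d. points from the data distribution $D$ and independent uniform selectors $J \in \{0,1\}^n$ that pick one point from each pair to form the training set $\tilde{Z}_J$; then
$$\mathrm{CMI}_{D}(A) \;=\; I\big(A(\tilde{Z}_J)\,;\,J \,\big|\, \tilde{Z}\big),$$
and the framework's generalization bounds scale roughly as $\sqrt{\mathrm{CMI}_D(A)/n}$ in general, and linearly in $\mathrm{CMI}_D(A)/n$ in the realizable regime discussed above.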
We study the problem of PAC learning halfspaces with Massart noise. Given labeled samples $(x, y)$ from a distribution $D$ on $\mathbb{R}^{d} \times \{\pm 1\}$ such that the marginal distribution on $x$ is arbitrary and the label $y$ of $x$ is generated by the target halfspace corrupted by a Massart adversary with flip probability $\eta(x) \leq \eta \leq 1/2$, the goal is to compute a hypothesis with small misclassification error. The best known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve error of $\eta + \epsilon$, which can be far from the optimal bound of $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x}[\eta(x)]$. While it is known that achieving error of $\mathrm{OPT} + o(1)$ requires super-polynomial time in the Statistical Query model, a large gap remains between the known upper and lower bounds. In this work, we essentially characterize the efficient learnability of Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show that no efficient SQ algorithm for learning Massart halfspaces on $\mathbb{R}^d$ can achieve error better than $\Omega(\eta)$, even if $\mathrm{OPT} = 2^{-\log^{c}(d)}$, for any universal constant $c \in (0, 1)$. Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to $0$ as $\eta$ approaches $1/2$. Our results provide strong evidence that known learning algorithms for Massart halfspaces are nearly best possible, thereby resolving a long-standing open problem in learning theory.