Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework. Freund and Schapire designed the Gödel Prize-winning algorithm AdaBoost, which can boost learners that output binary hypotheses. Recently, Arunachalam and Maity gave the first quantum boosting algorithm with similar theoretical guarantees. Their algorithm, which we refer to as QAdaBoost, is a quantum adaptation of AdaBoost and works only in the binary-hypothesis case. QAdaBoost is quadratically faster than AdaBoost in terms of the VC dimension of the weak learner's hypothesis class, but polynomially worse in the bias of the weak learner. Izdebski et al. posed the open question of whether we can boost quantum weak learners that output non-binary hypotheses. In this work, we resolve this open question by developing the QRealBoost algorithm, motivated by the classical RealBoost algorithm. The main technical challenge is to provide provable guarantees on convergence, generalization bounds, and quantum speedup, given that the quantum subroutines involved are noisy and probabilistic. We prove that QRealBoost retains QAdaBoost's quadratic speedup over AdaBoost, and further achieves a polynomial speedup over QAdaBoost in terms of both the learner's bias and the time the learner takes to learn the target concept class. Finally, we evaluate QRealBoost empirically on quantum simulators, benchmarking its convergence against QAdaBoost, AdaBoost, and RealBoost on subsets of the MNIST dataset and the Breast Cancer Wisconsin dataset.
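As background for the boosting loop being quantized here, the following is a minimal sketch of classical confidence-rated boosting in the RealBoost style, where each round fits a partition of the domain and assigns a real-valued vote to each cell. The `weak_learner` interface and the smoothing constant `eps` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def realboost(X, y, weak_learner, T, eps=1e-10):
    """Minimal classical RealBoost-style sketch. Labels y must be +/-1.
    weak_learner(X, y, w) is assumed to return a function mapping inputs
    to a partition-cell index (e.g., the two leaves of a decision stump)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # example weights D_t
    rounds = []
    for _ in range(T):
        cell_fn = weak_learner(X, y, w)
        cells = cell_fn(X)
        conf = {}
        for c in np.unique(cells):
            mask = cells == c
            wp = w[mask][y[mask] == 1].sum()     # weighted positives in this cell
            wm = w[mask][y[mask] == -1].sum()    # weighted negatives in this cell
            conf[c] = 0.5 * np.log((wp + eps) / (wm + eps))  # real-valued vote
        def h(Z, cell_fn=cell_fn, conf=conf):
            return np.array([conf.get(c, 0.0) for c in cell_fn(Z)])
        w *= np.exp(-y * h(X))                   # misclassified examples gain mass
        w /= w.sum()
        rounds.append(h)
    return lambda Z: np.sign(sum(h(Z) for h in rounds))
```

QRealBoost replaces the reweighting and confidence-estimation steps with quantum subroutines, which, being noisy and probabilistic, are what make the convergence analysis delicate.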
The classical algorithm AdaBoost allows one to convert a weak learner, that is, an algorithm producing hypotheses that perform slightly better than chance, into a strong learner that achieves arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner, but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. This work therefore settles the sample complexity of the classical problem of constructing a strong learner from a weak learner.
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
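To make the local model concrete, here is a minimal sketch of randomized response, the canonical local ($\varepsilon$-differentially private) primitive the abstract alludes to; the function names are ours, not from the paper.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (1 + e^eps),
    otherwise flip it. This is the classic eps-DP local randomizer."""
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the bits from noisy reports:
    E[report] = (1 - p) + b * (2p - 1), so invert that affine map."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)
```

Each individual's report on its own reveals little about their bit, yet the debiased aggregate converges to the true population mean.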
We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathfrak{C}$ be a class of polynomial-size concepts, and suppose that $\mathfrak{C}$ can be learned under the uniform distribution with membership queries, with error $1/2 - \gamma$, by a time-$T$ quantum algorithm. We prove that if $\gamma^2 \cdot T \ll 2^n/n$, then $\mathsf{BQE} \nsubseteq \mathfrak{C}$, where $\mathsf{BQE} = \mathsf{BQTIME}[2^{O(n)}]$ is an exponential-time analogue of $\mathsf{BQP}$. This result is optimal in both $\gamma$ and $T$, since it is not hard to learn any class of functions in (classical) time $T = 2^n$ (with no error), or in quantum time $T = \mathsf{poly}(n)$ with error at most $1/2 - \Omega(2^{-n/2})$ via Fourier sampling. In other words, even marginal improvements on these generic learning algorithms would lead to major consequences in complexity theory. Our proof builds on several works in learning theory, pseudorandomness, and computational complexity and, crucially, on a connection between non-trivial classical learning algorithms and circuit lower bounds established by Oliveira and Santhanam (CCC 2017). Extending their approach to quantum learning algorithms raises significant challenges. To that end, we show how pseudorandom generators imply learning-to-lower-bound connections in a generic fashion, construct the first conditional pseudorandom generator secure against uniform quantum computations, and extend the local list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets, and Wigderson (SICOMP 2010) to quantum circuits via a delicate analysis. We believe these contributions are of independent interest and may find other applications.
Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class" (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has bounded VC dimension. We focus on two main questions: (i) Oracle complexity: how many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega(1/\gamma^2)$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}(1/\gamma)$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: which tasks can be learned by boosting weak hypotheses from a class of bounded VC dimension? Can complex concepts that are "far away" from the class be learned? Towards answering the first question, we introduce combinatorial-geometric parameters that capture the expressivity of boosting. As a corollary, we provide an affirmative answer to the second question for well-studied classes, including halfspaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
Introduced as a notion of algorithmic fairness, multicalibration has proved to be a powerful and versatile concept with implications far beyond its original intent. This stringent notion, that predictions be well calibrated across a rich class of intersecting subpopulations, provides its strong guarantees at a cost: the computational and sample complexity of learning multicalibrated predictors is high, and grows exponentially with the number of class labels. In contrast, the relaxed notion of multiaccuracy can be achieved more efficiently, yet many of the most desirable properties of multicalibration cannot be guaranteed assuming multiaccuracy alone. This tension raises a key question: can we learn predictors with multicalibration-style guarantees at a cost commensurate with multiaccuracy? In this work, we define and initiate the study of low-degree multicalibration. Low-degree multicalibration defines a hierarchy of increasingly powerful multi-group fairness notions that spans multiaccuracy and the original formulation of multicalibration at its extremes. Our main technical contribution demonstrates that key properties of multicalibration related to fairness and accuracy actually manifest as low-degree properties. Importantly, we show that low-degree multicalibration can be significantly more efficient than full multicalibration. In the multi-class setting, the sample complexity needed to achieve low-degree multicalibration improves exponentially (in the number of classes) over full multicalibration. Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees.
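As a reference point for the hierarchy described above, the following is a minimal auditing sketch of multiaccuracy, the weakest end of the spectrum: for each subgroup, it checks that the prediction residuals are uncorrelated with group membership up to a tolerance $\alpha$. The interface is illustrative; roughly speaking, low-degree multicalibration strengthens this test by additionally reweighting the residual with low-degree functions of the predicted value.

```python
import numpy as np

def multiaccuracy_violations(p, y, groups, alpha):
    """Audit sketch: the multiaccuracy condition requires, for every
    subgroup indicator g, that |E[g(x) * (y - p(x))]| <= alpha.

    p, y: numpy arrays of predictions and binary outcomes.
    groups: dict mapping a group name to a boolean membership array.
    Returns the groups (and residual correlations) violating the bound."""
    violations = {}
    for name, g in groups.items():
        residual_corr = np.mean(g * (y - p))   # residual/group correlation
        if abs(residual_corr) > alpha:
            violations[name] = residual_corr
    return violations
```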
The well-known machine learning algorithms that make up supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be rebuilt from newly collected data, which for some applications may be expensive or impossible to obtain. It has therefore become necessary to develop methods that exploit data available in related domains and apply them in similar fields, thereby reducing the need and effort to obtain new labeled samples. This has given rise to a new machine learning framework called transfer learning: a learning setting inspired by the human ability to extrapolate knowledge across tasks in order to learn more efficiently. Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning known as domain adaptation. In this sub-field, the data distribution is assumed to change across the training and test data, while the learning task remains the same. We provide the first up-to-date description of existing results related to the domain adaptation problem, covering learning bounds based on different statistical learning frameworks.
Multi-group agnostic learning is a formal learning criterion concerned with the conditional risks of predictors within subgroups of a population. The criterion addresses recent practical concerns such as subgroup fairness and hidden stratification. This paper studies the structure of solutions to the multi-group learning problem and provides simple and near-optimal algorithms for it.
We study the problems of quantum tomography and shadow tomography using measurements performed on individual, identical copies of an unknown $d$-dimensional state. We first revisit a known lower bound due to Haah et al. (2017) for quantum tomography with accuracy $\epsilon$ in trace distance, when the measurement choices are independent of previously observed outcomes (i.e., they are nonadaptive), and give a succinct proof of this result. This leads to stronger lower bounds when the learner uses measurements with a constant number of outcomes; in particular, it rigorously establishes the optimality of the folklore "Pauli tomography" algorithm in terms of its sample complexity. We also derive novel bounds of $\Omega(r^2 d/\epsilon^2)$ and $\Omega(r^2 d^2/\epsilon^2)$ for learning rank-$r$ states using arbitrary and constant-outcome measurements, respectively, in the nonadaptive case. Besides sample complexity, a resource of practical significance for learning quantum states is the number of distinct measurements used by an algorithm. We extend our lower bounds to the case where the learner performs possibly adaptive measurements drawn from a fixed set of $\exp(O(d))$ measurements. This implies, in particular, that adaptivity gives us no advantage when using efficiently implementable single-copy measurements. We obtain similar bounds in the setting where the goal is to predict the expectation values of a given sequence of observables, a task known as shadow tomography. Finally, in the case of adaptive single-copy measurements implementable with polynomial-size circuits, we prove that a straightforward strategy based on computing sample means of the given observables is optimal.
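To illustrate the final claim, here is a classically simulated sketch of the sample-mean strategy for estimating a single observable's expectation value from single-copy measurements; the names and interface are ours, for illustration only.

```python
import numpy as np

def sample_mean_estimate(rho, observable, n_copies, rng=None):
    """Estimate Tr(O rho) by measuring O in its eigenbasis on n_copies
    single copies and averaging the eigenvalue outcomes. The quantum
    measurement is simulated classically here via the Born rule."""
    rng = rng if rng is not None else np.random.default_rng()
    evals, evecs = np.linalg.eigh(observable)
    # Born-rule outcome probabilities: p_i = <v_i| rho |v_i>
    probs = np.real(np.einsum('ij,jk,ki->i', evecs.conj().T, rho, evecs))
    probs = np.clip(probs, 0, None)
    probs /= probs.sum()
    outcomes = rng.choice(evals, size=n_copies, p=probs)
    return outcomes.mean()                     # the sample-mean estimator
```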
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions such as uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or general losses, as well as many other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g., noise tolerance, privacy, stability) that can be satisfied over finitely many hypotheses extends (possibly in some variation) to any learnable hypothesis class.
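To convey the flavor of such a blackbox reduction (this is our paraphrase for intuition, not the paper's exact three-line construction), one can run the realizable learner on every subsample of a first batch and then select among the finitely many returned hypotheses by empirical risk on a second batch:

```python
from itertools import combinations

def agnostic_from_realizable(realizable_learner, batch1, batch2):
    """Blackbox-reduction sketch (our paraphrase, for intuition only):
      1. run the realizable learner on every subsample of batch1;
      2. return the candidate with lowest empirical error on batch2.
    Some subsample agrees with the best hypothesis in the class, so the
    realizable guarantee applies to at least one candidate. Enumeration
    is exponential in |batch1|: the point is sample complexity, not time."""
    candidates = []
    for r in range(1, len(batch1) + 1):
        for sub in combinations(batch1, r):
            candidates.append(realizable_learner(list(sub)))
    def emp_error(h):
        return sum(h(x) != y for x, y in batch2) / len(batch2)
    return min(candidates, key=emp_error)
```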
We investigate the computational efficiency of multitask learning of Boolean functions over the $d$-dimensional hypercube that are related by means of a feature representation of size $k \ll d$ shared across all tasks. We present a polynomial-time multitask learning algorithm for the concept class of halfspaces with margin $\gamma$, which is based on a simultaneous boosting technique and requires only $\textrm{poly}(k/\gamma)$ samples per task and $\textrm{poly}(k\log(d)/\gamma)$ samples in total. Furthermore, we prove a computational separation, showing that, assuming there exists a concept class that cannot be learned in the attribute-efficient model, we can construct another concept class that can be learned in the attribute-efficient model but cannot be multitask learned efficiently: multitask learning this concept class requires either super-polynomial time complexity or a much larger total number of samples.
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.
We present a minimax-optimal learner for the problem of learning predictors that are robust to adversarial examples at test time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro (2019), as well as of a broader family of learners that we identify as local learners. Our results are enabled by adopting a global perspective through a key technical contribution: the global one-inclusion graph, which may be of independent interest and which generalizes the classical one-inclusion graph due to Haussler, Littlestone, and Warmuth (1994). Finally, as a byproduct, we identify a dimension that characterizes, qualitatively and quantitatively, which classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem due to Montasser et al. (2019) and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
Determining the optimal sample complexity of PAC learning in the realizable setting was a central open problem in learning theory for decades. Finally, the seminal work by Hanneke (2016) gave an algorithm with a provably optimal sample complexity. His algorithm is based on a careful and structured sub-sampling of the training data and then returning a majority vote among hypotheses trained on each of the sub-samples. While being a very exciting theoretical result, it has not had much impact in practice, in part due to inefficiency, since it constructs a polynomial number of sub-samples of the training data, each of linear size. In this work, we prove the surprising result that the practical and classic heuristic bagging (a.k.a. bootstrap aggregation), due to Breiman (1996), is in fact also an optimal PAC learner. Bagging pre-dates Hanneke's algorithm by twenty years and is taught in most undergraduate machine learning courses. Moreover, we show that it only requires a logarithmic number of sub-samples to reach optimality.
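For reference, a minimal sketch of the bagging procedure the result concerns; the `base_learner` interface is an illustrative assumption.

```python
import numpy as np
from collections import Counter

def bagging_predictor(base_learner, X, y, n_bags, rng=None):
    """Classic bagging (Breiman, 1996): train base_learner on bootstrap
    resamples of the data and return the majority vote. X, y are numpy
    arrays; base_learner(X, y) returns a callable hypothesis."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(y)
    models = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)       # sample n points with replacement
        models.append(base_learner(X[idx], y[idx]))
    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]      # majority vote
    return predict
```

Per the abstract, `n_bags` need only grow logarithmically for the resulting learner to be sample-optimal.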
We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests--based on a collection of losses and hypothesis class--a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
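For concreteness, the omniprediction guarantee discussed above can be stated as follows (our paraphrase of the standard definition, with $k_\ell$ denoting a loss-specific post-processing of the predictions): a predictor $p$ is an $(\mathcal{L}, \mathcal{H})$-omnipredictor if, for every loss $\ell \in \mathcal{L}$,
$$\mathbb{E}_{(x,y)}\bigl[\ell\bigl(y,\, k_{\ell}(p(x))\bigr)\bigr] \;\le\; \min_{h \in \mathcal{H}} \mathbb{E}_{(x,y)}\bigl[\ell\bigl(y,\, h(x)\bigr)\bigr] + \varepsilon.$$
Loss OI implies this guarantee directly, since a predictor that is indistinguishable from Nature's true outcome probabilities under the loss-based tests cannot be noticeably outperformed by any $h \in \mathcal{H}$.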
We present two new results about exact learning by quantum computers. First, we show how to exactly learn a $k$-Fourier-sparse $n$-bit Boolean function from $O(k^{1.5}(\log k)^2)$ uniform quantum examples for that function. This improves over the bound of $\widetilde{\Theta}(kn)$ uniformly random classical examples (Haviv and Regev, CCC'15). Additionally, we provide a possible direction for improving our $\widetilde{O}(k^{1.5})$ upper bound, by proving an improvement of Chang's lemma for $k$-Fourier-sparse Boolean functions. Second, we show that if a concept class $\mathcal{C}$ can be exactly learned using $Q$ quantum membership queries, then it can also be learned using $O\left(\frac{Q^2}{\log Q}\log|\mathcal{C}|\right)$ classical membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a $\log Q$-factor.
Lasso and Ridge are important minimization problems in machine learning and statistics. They are versions of linear regression with squared loss where the vector $\theta \in \mathbb{R}^d$ of coefficients is constrained in either $\ell_1$-norm (for Lasso) or $\ell_2$-norm (for Ridge). We study the complexity of quantum algorithms for finding $\varepsilon$-minimizers of these minimization problems. We show that for Lasso we can get a quadratic quantum speedup in terms of $d$ by speeding up the cost per iteration of the Frank-Wolfe algorithm, while for Ridge the best quantum algorithms are linear in $d$, as are the best classical algorithms. As a byproduct of our quantum lower bound for Lasso, we also prove the first classical lower bound for Lasso that is tight up to polylog factors.
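For context, here is a classical Frank-Wolfe sketch for the Lasso formulation above. The loss normalization and step-size schedule are standard choices of ours, and the coordinate-selection step marked below is, roughly, the per-iteration cost that a quantum subroutine could speed up in $d$.

```python
import numpy as np

def frank_wolfe_lasso(X, y, radius, T):
    """Frank-Wolfe sketch for min ||X theta - y||^2 / n over the l1 ball
    {theta : ||theta||_1 <= radius}. Each iteration only needs the
    largest-magnitude gradient coordinate."""
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(T):
        grad = 2.0 * X.T @ (X @ theta - y) / n
        i = np.argmax(np.abs(grad))            # linear optimization over the l1 ball
        s = np.zeros(d)
        s[i] = -radius * np.sign(grad[i])      # best vertex of the scaled l1 ball
        gamma = 2.0 / (t + 2.0)                # standard FW step size
        theta = (1 - gamma) * theta + gamma * s
    return theta
```

The design point worth noting is that the iterates stay feasible by construction, since each update is a convex combination of the current iterate and a vertex of the constraint set.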
The fast-spreading adoption of machine learning (ML) by companies across industries poses significant regulatory challenges. One such challenge is scalability: how can regulatory bodies efficiently audit these ML models to ensure they are fair? In this paper, we initiate the study of query-based auditing algorithms that can estimate the demographic parity of ML models in a query-efficient manner. We propose an optimal deterministic algorithm, as well as a practical randomized, oracle-efficient algorithm with comparable guarantees. Furthermore, we make progress toward understanding the optimal query complexity of randomized active fairness estimation algorithms. This first exploration of active fairness estimation aims to put AI governance on firmer theoretical ground.
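The quantity being audited, demographic parity, can be estimated naively from black-box query access as below; the interface is illustrative, and the paper's contribution is achieving such estimates query-efficiently, which this naive sketch makes no attempt at.

```python
import numpy as np

def demographic_parity_gap(model, X, group):
    """Naive audit sketch: estimate the demographic parity gap
    |P[f(x)=1 | g=0] - P[f(x)=1 | g=1]| of a black-box model.
    X: audit sample; group: binary group attribute (numpy arrays)."""
    preds = np.array([model(x) for x in X])    # one query per audit point
    rate0 = preds[group == 0].mean()           # positive rate in group 0
    rate1 = preds[group == 1].mean()           # positive rate in group 1
    return abs(rate0 - rate1)
```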
A classical result in learning theory shows the equivalence of PAC learnability of binary hypothesis classes and the finiteness of VC dimension. Extending this to the multiclass setting was an open problem, which was settled in a recent breakthrough result characterizing multiclass PAC learnability via the DS dimension introduced earlier by Daniely and Shalev-Shwartz. In this work we consider list PAC learning where the goal is to output a list of $k$ predictions. List learning algorithms have been developed in several settings before and indeed, list learning played an important role in the recent characterization of multiclass learnability. In this work we ask: when is it possible to $k$-list learn a hypothesis class? We completely characterize $k$-list learnability in terms of a generalization of DS dimension that we call the $k$-DS dimension. Generalizing the recent characterization of multiclass learnability, we show that a hypothesis class is $k$-list learnable if and only if the $k$-DS dimension is finite.
Quantum technology has the potential to revolutionize how we acquire and process experimental data to learn about the physical world. An experimental setup that transduces data from a physical system into a stable quantum memory, and processes that data using a quantum computer, could have significant advantages over conventional experiments in which the physical system is measured and the outcomes are processed using a classical computer. We prove that, in various tasks, quantum machines can learn from exponentially fewer experiments than are required by conventional experiments. The exponential advantage holds in predicting properties of physical systems, in performing quantum principal component analysis on noisy states, and in learning approximate models of physical dynamics. In some tasks, the quantum processing needed to achieve the exponential advantage can be quite modest; for example, one can simultaneously learn about many noncommuting observables by processing only two copies of the system. Conducting experiments with up to 40 superconducting qubits and 1300 quantum gates, we show that a substantial quantum advantage can be realized using today's relatively noisy quantum processors. Our results highlight how quantum technology can enable powerful new strategies to learn about nature.