We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset. Specifically, we are given a set $T$ of $n$ points in $\mathbb{R}^d$ and a parameter $0 < \alpha < \frac{1}{2}$ such that an $\alpha$-fraction of the points in $T$ are i.i.d. samples from a well-behaved distribution $\mathcal{D}$ and the remaining $(1-\alpha)$-fraction are arbitrary. The goal is to output a small list of vectors, at least one of which is close to the mean of $\mathcal{D}$. We develop new algorithms for list-decodable mean estimation that achieve nearly-optimal statistical guarantees, with runtime $O(n^{1+\epsilon_0} d)$ for any fixed $\epsilon_0 > 0$. All prior algorithms for this problem had additional polynomial factors in $\frac{1}{\alpha}$. We leverage this result, together with additional techniques, to obtain the first almost-linear time algorithms for clustering mixtures of $k$ separated well-behaved distributions, nearly matching the statistical guarantees of spectral methods. Prior clustering algorithms inherently relied on an application of $k$-PCA, thereby incurring runtimes of $\Omega(ndk)$. This marks the first runtime improvement for this basic statistical problem in nearly two decades. The starting point of our approach is a novel and simpler near-linear time robust mean estimation algorithm in the $\alpha \to 1$ regime, based on a one-shot matrix multiplicative weights-inspired potential reduction. We crucially leverage this new algorithmic framework in the context of the iterative multi-filtering technique of Diakonikolas et al. '18, '20, providing a method to simultaneously cluster and downsample points using one-dimensional projections, thus bypassing the $k$-PCA subroutines required by prior algorithms.
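A minimal sketch of the one-dimensional-projection filtering idea, in Python with numpy: project onto random directions, keep the densest window that could plausibly hold the $\alpha$-fraction of inliers, and average what survives. This is a hypothetical toy, not the paper's algorithm; a true list-decodable version would branch on every dense window to build a list rather than keep a single candidate.

```python
import numpy as np

def one_dim_filter(points, alpha, n_rounds=20, rng=None):
    """Toy filter in the spirit of multi-filtering with 1-D projections.

    Repeatedly projects the data onto a random direction and keeps points
    near the densest window that could contain the alpha-fraction of
    inliers. Illustrative sketch only, not the paper's algorithm.
    """
    rng = np.random.default_rng(rng)
    pts = points.copy()
    for _ in range(n_rounds):
        v = rng.standard_normal(pts.shape[1])
        v /= np.linalg.norm(v)
        proj = pts @ v
        # Narrowest window on the line holding ~alpha fraction of points.
        k = max(2, int(alpha * len(proj)))
        order = np.sort(proj)
        widths = order[k - 1:] - order[:len(order) - k + 1]
        start = int(np.argmin(widths))
        lo, hi = order[start], order[start + k - 1]
        # Keep points inside a slightly inflated window.
        slack = 3.0 * (hi - lo + 1e-12)
        mask = (proj >= lo - slack) & (proj <= hi + slack)
        if mask.sum() < k:  # never prune below the putative inlier count
            continue
        pts = pts[mask]
    return pts.mean(axis=0)
```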
We give the first polynomial-time algorithm for \emph{list-decodable covariance estimation}. For any $\alpha > 0$, our algorithm takes as input a sample $Y \subseteq \mathbb{R}^d$ of size $n \geq d^{\mathsf{poly}(1/\alpha)}$ obtained by adversarially corrupting $(1-\alpha)n$ points in an i.i.d. sample $X$ of size $n$ from a Gaussian distribution with unknown mean $\mu_*$ and covariance $\Sigma_*$. In $n^{\mathsf{poly}(1/\alpha)}$ time, it outputs a constant-size list of $k = k(\alpha) = (1/\alpha)^{\mathsf{poly}(1/\alpha)}$ candidate parameters that, with high probability, contains a pair $(\hat{\mu}, \hat{\Sigma})$ such that the total variation distance $TV(\mathcal{N}(\mu_*, \Sigma_*), \mathcal{N}(\hat{\mu}, \hat{\Sigma})) < 1 - O_{\alpha}(1)$. This is the statistically strongest notion of distance and implies multiplicative spectral and relative Frobenius distance approximations for the parameters with dimension-independent error. Our algorithm works more generally for $(1-\alpha)$-corruptions of any distribution $D$ that possesses low-degree sum-of-squares certificates of two natural analytic properties: 1) anti-concentration of one-dimensional marginals, and 2) hypercontractivity of degree-2 polynomials. Prior to our work, the only known results for estimating covariance in the list-decodable setting were for the special cases of list-decodable linear regression and subspace recovery, due to Karmarkar, Klivans, and Kothari (2019), Raghavendra and Yau (2019 and 2020), and Bakshi and Kothari (2020). These results require super-polynomial time to obtain any sub-constant error in the underlying dimension. Our result implies the first polynomial-time \emph{exact} algorithms for list-decodable linear regression and subspace recovery, allowing, in particular, error $2^{-\mathsf{poly}(d)}$ in polynomial time. Our result also implies improved algorithms for clustering non-spherical mixtures.
Robust mean estimation is one of the most important problems in statistics: given a set of samples in $\mathbb{R}^d$ where an $\alpha$ fraction are drawn from some distribution $D$ and the rest are adversarially corrupted, we aim to estimate the mean of $D$. A surge of recent research interest has focused on the list-decodable setting where $\alpha \in (0, \frac12]$, and the goal is to output a finite number of estimates among which at least one approximates the target mean. In this paper, we consider the setting where the underlying distribution $D$ is Gaussian with a $k$-sparse mean. Our main contribution is the first polynomial-time algorithm that enjoys sample complexity $O\big(\mathrm{poly}(k, \log d)\big)$, i.e. poly-logarithmic in the dimension. One of our core algorithmic ingredients is using low-degree sparse polynomials to filter outliers, which may find further applications.
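As a toy illustration of the sparse-polynomial-filter ingredient, the hedged sketch below scores points with a degree-2 polynomial supported on few coordinates and trims the largest scores. The actual algorithm's choice of polynomials and thresholds is far more delicate; this is only a sketch under simplified assumptions.

```python
import numpy as np

def sparse_quadratic_filter(X, k, trim=0.1):
    """Hedged sketch: filter outliers with a sparse degree-2 polynomial.

    Scores each point by a quadratic form supported on the k coordinates
    with the largest robust spread, then trims the highest scores. A toy
    instance of "low-degree sparse polynomials as filters", not the
    paper's certified procedure.
    """
    med = np.median(X, axis=0)
    spread = np.median(np.abs(X - med), axis=0)   # per-coordinate MAD
    support = np.argsort(spread)[-k:]             # k most-spread coords
    scores = ((X[:, support] - med[support]) ** 2).sum(axis=1)
    cutoff = np.quantile(scores, 1.0 - trim)
    return X[scores <= cutoff]
```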
We consider the problem of clustering mixtures of mean-separated Gaussians in high dimensions. We are given samples from a mixture of $k$ identity-covariance Gaussians, such that the minimum pairwise distance between any two means is at least $\Delta$, for some parameter $\Delta > 0$, and the goal is to recover the ground-truth clustering of these samples. It is folklore that separation $\Delta = \Theta(\sqrt{\log k})$ is both necessary and sufficient, information-theoretically, to recover a good clustering. However, the estimators that achieve this guarantee are inefficient. We give the first algorithm that runs in polynomial time and almost matches this guarantee. More precisely, we give an algorithm that takes polynomially many samples and time, and successfully recovers a good clustering so long as the separation is $\Delta = \Omega(\log^{1/2+c} k)$, for any $c > 0$. Previously, polynomial-time algorithms were known for this problem only when the separation was polynomial in $k$, and all algorithms that could tolerate $\textsf{poly}(\log k)$ separation required quasi-polynomial time. We also extend our result to mixtures of translations of a distribution satisfying the Poincar\'{e} inequality, under additional mild assumptions. Our main technical tool, which we believe is of independent interest, is a novel way to implicitly represent and estimate high-degree moments of a distribution, which allows us to extract important information about those moments without ever writing down the full moment tensors explicitly.
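The implicit-moments idea admits a one-line instance: the degree-$t$ moment tensor evaluated at a direction $v$ is just an average of powers of inner products, so it can be estimated in $O(nd)$ time per direction without materializing the $d^t$ tensor. A minimal sketch (the paper's machinery is substantially more general):

```python
import numpy as np

def implicit_moment(X, v, t):
    """Evaluate the t-th moment tensor against v^{(x)t} without forming
    the d^t tensor: <M_t, v^{(x)t}> = E[(x . v)^t]. A one-line instance
    of the implicit high-degree moment idea; O(n d) per direction
    instead of O(d^t) storage."""
    return np.mean((X @ v) ** t)
```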
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers. Prior work obtained sample- and computationally-efficient algorithms for this task for identity-covariance subgaussian distributions. In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance. For distributions on $\mathbb{R}^d$ with ``certifiably bounded'' $t$-th moments and sufficiently light tails, our algorithm achieves error $O(\epsilon^{1-1/t})$ with sample complexity $m = (k \log(d))^{O(t)}/\epsilon^{2-2/t}$. For the special case of the Gaussian distribution, our algorithm achieves near-optimal error $\tilde{O}(\epsilon)$ with sample complexity $m = O(k^4 \mathrm{polylog}(d))/\epsilon^2$. Our algorithms follow the Sum-of-Squares based, proofs-to-algorithms approach. We complement our upper bounds with Statistical Query and low-degree polynomial testing lower bounds, providing evidence that the sample-time-error tradeoffs achieved by our algorithms are qualitatively the best possible.
In this work, we give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models with optimal dependence on the dimension in the sample complexity. In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2 \log \kappa)$ samples while tolerating a constant fraction of adversarial outliers. Here, $\kappa$ is the condition number of the target covariance matrix. The sample bound matches the best non-private estimators in the dependence on the dimension (up to a polylogarithmic factor). We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight. Prior to our work, only identifiability results (yielding inefficient super-polynomial time algorithms) were known for the problem. In the approximate DP setting, we give an efficient algorithm to estimate an unknown Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2)$ samples while tolerating a constant fraction of adversarial outliers. Prior to our work, all efficient approximate DP algorithms incurred a super-quadratic sample cost or were not outlier-robust. For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde O(d)$, improving on a $\widetilde O(d^{1.5})$ bound from prior work. Our pure DP algorithm relies on a recursive private preconditioning subroutine that utilizes the recent work on private mean estimation [Hopkins et al., 2022]. Our approximate DP algorithms are based on a substantial upgrade of the method of stabilizing convex relaxations introduced in [Kothari et al., 2022].
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector $b \in \mathbb{R}^n$ and output a near-minimizer of $\min_{x \in \mathbb{R}^d} \|Ax - b\|$, where $A \in \mathbb{R}^{n \times d}$ is a design matrix and $\|\cdot\|$ is some loss function. For $\ell_p$ norm regression with any $0 < p < \infty$, we give an algorithm based on Lewis weight sampling that outputs a $(1+\epsilon)$-approximate solution using just $\tilde{O}(d^{\max(1, p/2)}/\mathrm{poly}(\epsilon))$ queries to $b$. We show that this dependence on $d$ is optimal, up to logarithmic factors. Our result resolves a recent open question of Chen and Derezi\'{n}ski, who gave near-optimal bounds for the $\ell_1$ norm, and suboptimal bounds for $\ell_p$ regression with $p \in (1,2)$. We also provide the first total sensitivity upper bound of $O(d^{\max\{1, p/2\}} \log^2 n)$ for loss functions with at most degree-$p$ polynomial growth. This improves a recent result of Tukan, Maalouf, and Feldman. By combining this with our techniques for the $\ell_p$ regression result, we obtain an active regression algorithm making $\tilde{O}(d^{1+\max\{1, p/2\}}/\mathrm{poly}(\epsilon))$ queries, answering another open question of Chen and Derezi\'{n}ski. For the important special case of the Huber loss, we further improve our bound to an active sample complexity of $\tilde{O}(d^{(1+\sqrt{2})/2}/\epsilon^c)$ and a non-active sample complexity of $\tilde{O}(d^{4-2\sqrt{2}}/\epsilon^c)$, improving a previous $d^4$ bound for Huber regression due to Clarkson and Woodruff. Our sensitivity bounds have further implications, improving a variety of previous results using sensitivity sampling, including Orlicz norm subspace embeddings and robust subspace approximation. Finally, our active sampling results give the first sublinear time algorithms under every $\ell_p$ norm.
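A hedged Python illustration of the sampling primitive: the classic fixed-point iteration for approximating $\ell_p$ Lewis weights (known to converge for $p < 4$), together with a toy active-regression loop. The name `b_oracle` is a hypothetical stand-in for query access to $b$, and the reweighted least-squares solve stands in for the actual $\ell_p$ regression step; neither is the paper's algorithm.

```python
import numpy as np

def lewis_weights(A, p, n_iter=30):
    """Approximate l_p Lewis weights via the fixed-point iteration
    w_i <- ( a_i^T (A^T W^{1-2/p} A)^{-1} a_i )^{p/2},
    which converges for p < 4. Sketch for illustration only."""
    n, _ = A.shape
    w = np.ones(n)
    for _ in range(n_iter):
        W = w ** (1.0 - 2.0 / p)
        M = A.T @ (A * W[:, None])                  # A^T diag(W) A
        Minv = np.linalg.inv(M)
        lev = np.einsum('ij,jk,ik->i', A, Minv, A)  # a_i^T M^{-1} a_i
        w = np.clip(lev, 1e-12, None) ** (p / 2.0)
    return w

def active_lp_regression(A, b_oracle, p, m, rng=None):
    """Query only m entries of b, sampled by Lewis weights, then solve a
    reweighted least-squares surrogate. Hypothetical usage sketch;
    b_oracle(i) returns entry b_i on demand."""
    rng = np.random.default_rng(rng)
    w = lewis_weights(A, p)
    prob = w / w.sum()
    idx = rng.choice(len(prob), size=m, replace=True, p=prob)
    bs = np.array([b_oracle(i) for i in idx])
    # Importance-weighted least squares as a stand-in for the l_p solve.
    s = 1.0 / np.sqrt(prob[idx] * m)
    x, *_ = np.linalg.lstsq(A[idx] * s[:, None], bs * s, rcond=None)
    return x
```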
We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics. We give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs among sample complexity, accuracy, and privacy for a wide range of fundamental high-dimensional parameter estimation problems, including mean and covariance estimation. We show that this reduction can be implemented in polynomial time in some important special cases. In particular, using nearly-optimal polynomial-time robust estimators for the mean and covariance of high-dimensional Gaussians which are based on the Sum-of-Squares method, we design the first polynomial-time private estimators for these problems with nearly-optimal samples-accuracy-privacy tradeoffs. Our algorithms are also robust to a constant fraction of adversarially-corrupted samples.
We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity. Specifically, given a small number of corrupted samples from a high-dimensional heavy-tailed distribution whose mean $\mu$ is guaranteed to be sparse, the goal is to efficiently compute a hypothesis that accurately approximates $\mu$ with high probability. Prior work had obtained efficient algorithms for robust sparse mean estimation of light-tailed distributions. In this work, we give the first sample-efficient and polynomial-time robust sparse mean estimator for heavy-tailed distributions under mild moment assumptions. Our algorithm achieves the optimal asymptotic error using a number of samples scaling logarithmically with the ambient dimension. Importantly, the sample complexity of our method is optimal as a function of the failure probability $\tau$, having an additive $\log(1/\tau)$ dependence. Our algorithm leverages the stability-based approach from the algorithmic robust statistics literature, with crucial (and necessary) adaptations required in our setting. Our analysis may be of independent interest, involving the delicate design of a (non-spectral) decomposition for positive semi-definite matrices satisfying certain sparsity properties.
We study the problem of list-decodable sparse mean estimation. Specifically, for a parameter $\alpha \in (0, 1/2)$, we are given $m$ points in $\mathbb{R}^n$, $\lfloor \alpha m \rfloor$ of which are i.i.d. samples from a distribution $D$ with unknown $k$-sparse mean $\mu$. No assumptions are made on the remaining points, which form the majority of the dataset. The goal is to return a small list of candidates containing a vector $\widehat{\mu}$ such that $\|\widehat{\mu} - \mu\|_2$ is small. Prior work had studied the problem of list-decodable mean estimation in the dense setting. In this work, we develop a novel, conceptually simpler technique for list-decodable mean estimation. As the main application of our approach, we provide the first sample- and computationally-efficient algorithm for list-decodable sparse mean estimation. In particular, for distributions with ``certifiably bounded'' $t$-th moments in $k$-sparse directions and sufficiently light tails, our algorithm achieves error $(1/\alpha)^{O(1/t)}$ with sample complexity $m = (k \log(n))^{O(t)}/\alpha$ and running time $\mathrm{poly}(m n^t)$. For the special case of Gaussian inliers, our algorithm achieves the optimal error guarantee of $\Theta(\sqrt{\log(1/\alpha)})$ with quasi-polynomial sample and computational complexity. We complement our upper bounds with nearly-matching statistical query and low-degree polynomial testing lower bounds.
The convergence rates of iterative methods for solving a linear system $\mathbf{A} x = b$ typically depend on the condition number of the matrix $\mathbf{A}$. Preconditioning is a common way of speeding up these methods by reducing that condition number in a computationally inexpensive way. In this paper, we revisit the decades-old problem of how best to improve $\mathbf{A}$'s condition number by left or right diagonal rescaling. We make progress on this problem in several directions. First, we provide new bounds for the classic heuristic of scaling $\mathbf{A}$ by its diagonal values (a.k.a. Jacobi preconditioning). We prove that this approach reduces $\mathbf{A}$'s condition number to within a quadratic factor of the best possible scaling. Second, we give a solver for structured mixed packing and covering semidefinite programs (MPC SDPs) which computes a constant-factor optimal scaling for $\mathbf{A}$ in $\widetilde{O}(\text{nnz}(\mathbf{A}) \cdot \text{poly}(\kappa^\star))$ time; this matches the cost of solving the linear system after scaling, up to $\widetilde{O}(\text{poly}(\kappa^\star))$ factors. Third, we demonstrate that a sufficiently general width-independent MPC SDP solver would imply near-optimal runtimes for the scaling problems we consider, as well as for natural variants concerned with measures of average conditioning. Finally, we highlight connections between our preconditioning techniques and semi-random noise models, as well as applications in reducing risk in several statistical regression models.
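The Jacobi heuristic the abstract analyzes is easy to demonstrate. Below is a minimal sketch comparing the condition number of a PSD matrix before and after the rescaling $D^{-1/2} \mathbf{A} D^{-1/2}$ with $D = \mathrm{diag}(\mathbf{A})$; the test matrix and its hidden row scaling are made up for the demo.

```python
import numpy as np

def jacobi_condition_numbers(A):
    """Condition number of a PSD matrix A before and after the classic
    Jacobi rescaling D^{-1/2} A D^{-1/2}, with D = diag(A)."""
    d = np.sqrt(np.diag(A))
    S = A / np.outer(d, d)                  # D^{-1/2} A D^{-1/2}
    return np.linalg.cond(A), np.linalg.cond(S)

# Demo: a PSD matrix with wildly different (hidden) symmetric row scales.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 1e-3 * np.eye(50)
s = 10.0 ** rng.uniform(-3, 3, 50)
A = A * np.outer(s, s)                      # hide the scaling
print(jacobi_condition_numbers(A))          # rescaling shrinks cond(A)
```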
In statistical learning and analysis from shared data, which is increasingly widely adopted in platforms such as federated learning and meta-learning, there are two major concerns: privacy and robustness. Each participating individual should be able to contribute without fear of leaking their sensitive information. At the same time, the system should be robust in the presence of malicious participants inserting corrupted data. Recent algorithmic advances in learning from shared data focus on only one of these threats, leaving the system vulnerable to the other. We bridge this gap for the canonical problem of estimating the mean from i.i.d. samples. We introduce PRIME, the first efficient algorithm that achieves both privacy and robustness for a wide range of distributions. We further complement this result with a novel exponential-time algorithm that improves the sample complexity of PRIME, achieving a near-optimal guarantee and matching a known lower bound for (non-robust) private mean estimation. This proves that there is no extra statistical cost to simultaneously guaranteeing privacy and robustness.
We initiate the study of differentially private (DP) estimation with access to a small amount of public data. For private estimation of $d$-dimensional Gaussians, we assume that the public data come from a Gaussian that may have vanishing similarity in total variation distance to the underlying Gaussian of the private data. We show that, under the constraints of pure or concentrated DP, $d+1$ public data samples are sufficient to remove any dependence on the range parameters of the private data distribution from the private sample complexity, which is known to be otherwise necessary without public data. For separated Gaussian mixtures, we assume that the underlying public and private distributions are the same, and we consider two settings: (1) when given a dimension-independent amount of public data, the private sample complexity can be improved polynomially in terms of the number of mixture components, and any dependence on the range parameters of the distribution can be removed in the approximate DP case; (2) when given an amount of public data linear in the dimension, the private sample complexity can be made independent of the range parameters even under concentrated DP, and additional improvements can be made to the overall sample complexity.
We give the first polynomial-time algorithm to estimate the mean of a $d$-variate probability distribution with bounded covariance from $\tilde{O}(d)$ independent samples subject to pure differential privacy. Prior algorithms for this problem either incur exponential running time, require $\Omega(d^{1.5})$ samples, or satisfy only the weaker concentrated or approximate differential privacy conditions. In particular, all prior polynomial-time algorithms require $d^{1+\Omega(1)}$ samples to guarantee small privacy loss with ``cryptographically'' high probability, $1 - 2^{-d^{\Omega(1)}}$, while our algorithm retains $\tilde{O}(d)$ sample complexity even in this stringent setting. Our main technique is a new approach for using the powerful Sum of Squares (SoS) method to design differentially private algorithms. The proofs-to-algorithms paradigm is a key theme in numerous recent works in high-dimensional algorithmic statistics: estimators which apparently require exponential running time, but whose analysis can be captured by low-degree Sum of Squares proofs, can be automatically turned into polynomial-time algorithms with the same provable guarantees. We demonstrate a similar proofs-to-private-algorithms phenomenon: instances of the workhorse exponential mechanism which apparently require exponential time, but which can be analyzed with low-degree SoS proofs, can be automatically turned into polynomial-time differentially private algorithms. We prove a meta-theorem capturing this phenomenon, which we expect to be of broad use in private algorithm design. Our techniques also draw new connections between differentially private and robust statistics in high dimensions. In particular, viewed through our proofs-to-private-algorithms lens, several well-studied SoS proofs from recent works in algorithmic robust statistics directly yield key components of our differentially private mean estimation algorithm.
The Forster transform is a method of regularizing a dataset by placing it in {\em radial isotropic position} while maintaining some of its essential properties. Forster transforms have played a key role in a diverse range of settings spanning computer science and functional analysis. Prior work had given {\em weakly} polynomial time algorithms for computing Forster transforms, when they exist. Our main result is the first {\em strongly polynomial time} algorithm to compute an approximate Forster transform of a given dataset or certify that no such transformation exists. By leveraging our strongly polynomial Forster algorithm, we obtain the first strongly polynomial time algorithm for {\em distribution-free} PAC learning of halfspaces. This learning result is surprising because {\em proper} PAC learning of halfspaces is {\em equivalent} to linear programming. Our learning approach extends to give a strongly polynomial halfspace learner in the presence of random classification noise and, more generally, Massart noise.
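The paper's contribution is a strongly polynomial algorithm; for intuition only, here is a hedged Python sketch of the classical alternating heuristic for approaching radial isotropic position (normalize points to the sphere, then whiten). It is a simple heuristic, not the paper's algorithm, and it can fail to converge precisely in the degenerate cases the paper must certify.

```python
import numpy as np

def approx_forster(X, n_iter=200, tol=1e-8):
    """Heuristic approximate Forster transform: alternately normalize the
    (nonzero) points to the unit sphere and whiten by the inverse square
    root of their second-moment matrix, aiming for radial isotropic
    position (second moment = I/d after normalization). Assumes the
    points span R^d; may fail when no Forster transform exists."""
    n, d = X.shape
    A = np.eye(d)
    for _ in range(n_iter):
        Y = X @ A.T
        Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
        M = (Y.T @ Y) / n          # second moment of normalized points
        if np.linalg.norm(M - np.eye(d) / d) < tol:
            break
        # Whiten: push the second moment toward I/d.
        evals, evecs = np.linalg.eigh(M)
        Minvhalf = evecs @ np.diag(evals ** -0.5) @ evecs.T
        A = (Minvhalf / np.sqrt(d)) @ A
    return A
```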
We investigate the problem of testing whether a discrete probability distribution over an ordered domain is a histogram on a specified number of bins. One of the most common tools for the succinct approximation of data, $k$-histograms over $[n]$ are probability distributions that are piecewise constant over a set of $k$ intervals. The histogram testing problem is the following: given samples from an unknown distribution $\mathbf{p}$ on $[n]$, we want to distinguish between the cases that $\mathbf{p}$ is a $k$-histogram versus $\varepsilon$-far from any $k$-histogram in total variation distance. Our main result is a sample near-optimal and computationally efficient algorithm for this testing problem, together with a nearly-matching (within logarithmic factors) sample complexity lower bound. Specifically, we show that the histogram testing problem has sample complexity $\widetilde{\Theta}(\sqrt{nk}/\varepsilon + k/\varepsilon^2 + \sqrt{n}/\varepsilon^2)$.
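For intuition about the target class, the sketch below computes the $\ell_1$ distance from an explicitly given pmf to its best $k$-piecewise-constant approximant by dynamic programming, dropping the constraint that the approximant itself be a pmf (a hedged simplification). The paper's contribution is the much harder sample-based tester; this only illustrates what "far from every $k$-histogram" measures.

```python
import numpy as np

def dist_to_k_piecewise(p, k):
    """l1 distance from a vector p on [n] to the closest function that is
    piecewise constant on k intervals (no renormalization constraint).
    The best single constant on an interval is the median of its values.
    Naive O(n^3 + n^2 k) dynamic program, for illustration only."""
    n = len(p)
    cost = np.zeros((n + 1, n + 1))     # cost[i][j]: best fit of p[i:j]
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg = np.asarray(p[i:j])
            cost[i][j] = np.abs(seg - np.median(seg)).sum()
    best = np.full((k + 1, n + 1), np.inf)
    best[0][0] = 0.0
    for pieces in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(j):
                best[pieces][j] = min(best[pieces][j],
                                      best[pieces - 1][i] + cost[i][j])
    return best[k][n]
```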
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
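One of these reductions is easy to sketch: weighted vertex sampling draws vertex $i$ with probability proportional to its kernel-matrix row sum, which a KDE oracle can estimate in subquadratic time. The hedged Python sketch below uses a dense $O(n^2)$ fallback in place of a real KDE data structure, purely for illustration; the `kde_row_sum` argument is a hypothetical hook for such an oracle.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Dense Gaussian kernel matrix (illustration only; O(n^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def sample_vertices(X, m, kde_row_sum=None, sigma=1.0, rng=None):
    """Weighted vertex sampling on a kernel graph: vertex i is drawn
    with probability proportional to its (estimated) row sum. The
    kde_row_sum array stands in for a subquadratic KDE oracle."""
    rng = np.random.default_rng(rng)
    if kde_row_sum is None:
        kde_row_sum = gaussian_kernel(X, X, sigma).sum(axis=1)
    p = kde_row_sum / kde_row_sum.sum()
    return rng.choice(len(X), size=m, p=p), p
```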
We establish a simple connection between robust and differentially-private algorithms: private mechanisms which perform well with very high probability are automatically robust in the sense that they retain accuracy even if a constant fraction of the samples they receive are adversarially corrupted. Since optimal mechanisms typically achieve these high success probabilities, our results imply that optimal private mechanisms for many basic statistics problems are robust. We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems. Assuming the Brennan-Bresler secret-leakage planted clique conjecture, we demonstrate a fundamental tradeoff between computational efficiency, privacy leakage, and success probability for sparse mean estimation. Private algorithms which match this tradeoff are not yet known -- we achieve that (up to polylogarithmic factors) in a polynomially-large range of parameters via the Sum-of-Squares method. To establish an information-computation gap for private sparse mean estimation, we also design new (exponential-time) mechanisms using fewer samples than efficient algorithms must use. Finally, we give evidence for privacy-induced information-computation gaps for several other statistics and learning problems, including PAC learning parity functions and estimation of the mean of a multivariate Gaussian.
We introduce a universal framework for characterizing the statistical efficiency of statistical estimation problems with differential privacy guarantees. Our framework, which we call High-dimensional Propose-Test-Release (HPTR), builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism. Gluing all these together is the concept of resilience, which is central to robust statistical estimation. Resilience guides the design of the algorithm, the sensitivity analysis, and the success-probability analysis of the test step. The key insight is that if we design an exponential mechanism that accesses the data only via one-dimensional robust statistics, then the resulting local sensitivity can be dramatically reduced. Using resilience, we can provide tight local sensitivity bounds. These tight bounds readily translate into near-optimal utility guarantees in several cases. We give a general recipe for applying HPTR to a given instance of a statistical estimation problem and demonstrate it on the canonical problems of mean estimation, linear regression, covariance estimation, and principal component analysis. We introduce a general utility analysis technique showing that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
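The key insight, an exponential mechanism that touches the data only through one-dimensional robust statistics, can be illustrated with the textbook DP median; this is a standard construction, not HPTR itself. The rank-based utility has sensitivity 1, since changing one sample shifts any candidate's rank by at most 1.

```python
import numpy as np

def dp_median(x, eps, lo, hi, grid=1000, rng=None):
    """Exponential mechanism for a 1-D robust statistic (the median).
    Utility of candidate c is -|rank(c) - n/2|, with sensitivity 1.
    Standard textbook mechanism, shown here as a toy illustration."""
    rng = np.random.default_rng(rng)
    cands = np.linspace(lo, hi, grid)
    x = np.sort(x)
    ranks = np.searchsorted(x, cands)
    utility = -np.abs(ranks - len(x) / 2.0)
    logits = eps * utility / 2.0        # exp(eps * u / (2 * sensitivity))
    logits -= logits.max()              # for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(cands, p=probs)
```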
In this work, we study the problem of robustly learning a Mallows model. We give an algorithm that can accurately estimate the central ranking even when a constant fraction of its samples are arbitrarily corrupted. Moreover, our robustness guarantees are dimension-independent, in the sense that our overall accuracy does not depend on the number of alternatives being ranked. Our work can be thought of as a natural infusion of perspectives from algorithmic robust statistics into one of the central inference problems in voting and information aggregation. Specifically, our voting rule is efficiently computable and its outcome cannot be changed by much by a large group of colluding voters.
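For flavor, here is a toy robust voting rule in the same spirit: order items by pairwise majorities, which a small coalition of corrupted rankings cannot easily flip when the inlier margins are large. This Copeland-style tally is a heuristic stand-in, not the paper's estimator.

```python
import numpy as np

def robust_central_ranking(rankings):
    """Toy robust estimate of a central ranking. Input: array of shape
    (m, k) where rankings[i, j] is the position of item j in ranking i
    (0 = first). Items are ordered by how many other items they beat in
    a majority of the sample; a heuristic, not the paper's algorithm."""
    R = np.asarray(rankings)
    m, k = R.shape
    wins = np.zeros(k)
    for a in range(k):
        for b in range(k):
            if a != b and np.mean(R[:, a] < R[:, b]) > 0.5:
                wins[a] += 1           # a beats b in a majority of votes
    return np.argsort(-wins)           # items sorted by pairwise wins
```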