The Monge map refers to the optimal transport map between two probability distributions, and provides a principled approach for transforming one distribution into another. Despite rapid developments in numerical methods for optimal transport problems, computing Monge maps remains challenging, especially for high-dimensional problems. In this paper, we present a scalable algorithm for computing the Monge map between two probability distributions. Our algorithm is based on a weak form of the optimal transport problem, so it only requires samples from the marginals rather than their analytic expressions, and it can accommodate optimal transport between two distributions with different dimensions. Our algorithm applies to general cost functions, in contrast to other existing methods for estimating Monge maps from samples, which are typically restricted to quadratic costs. The performance of our algorithm is demonstrated through a series of experiments with both synthetic and real data.
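As a rough illustration of how such a weak, sample-based formulation can be optimized, the sketch below alternates between a transport map T and a dual potential f under a max-min objective. The networks, cost function, step sizes, and data are illustrative assumptions, not the paper's exact construction; in practice the potential would also be constrained or regularized.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: max-min training of a transport map T against a dual
# potential f, in the general spirit of weak-form OT objectives
#   min_T max_f  E[c(X, T(X))] + E[f(Y)] - E[f(T(X))].
def cost(x, y):
    # a general cost function; squared Euclidean is used here as an example
    return 0.5 * ((x - y) ** 2).sum(dim=1)

dim_x, dim_y = 2, 2
T = nn.Sequential(nn.Linear(dim_x, 64), nn.ReLU(), nn.Linear(64, dim_y))
f = nn.Sequential(nn.Linear(dim_y, 64), nn.ReLU(), nn.Linear(64, 1))
opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)

for step in range(1000):
    x = torch.randn(256, dim_x)        # samples from the source marginal
    y = torch.randn(256, dim_y) + 2.0  # samples from the target marginal
    # inner step: maximize E[f(Y)] - E[f(T(X))] over f
    loss_f = f(T(x).detach()).mean() - f(y).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    # outer step: minimize E[c(X, T(X))] - E[f(T(X))] over T
    loss_T = cost(x, T(x)).mean() - f(T(x)).mean()
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()
```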
The Wasserstein barycenter is a principled approach to representing the weighted mean of a given set of probability distributions, utilizing the geometry induced by optimal transport. In this work, we present a novel scalable algorithm to approximate the Wasserstein barycenter, aimed at high-dimensional applications in machine learning. Our proposed algorithm is based on the Kantorovich dual formulation of the Wasserstein-2 distance as well as a recent neural network architecture, the input convex neural network, which is known to parameterize convex functions. The distinguishing features of our method are: i) it only requires samples from the marginal distributions; ii) unlike existing approaches, it represents the barycenter with a generative model and can thus generate unlimited samples from the barycenter without querying the marginal distributions; iii) it works analogously to generative adversarial networks in the one-marginal case. We demonstrate the efficacy of our algorithm by comparing it with state-of-the-art methods in multiple experiments.
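The input convex neural network mentioned above can be sketched as follows: the output is convex in the input because the hidden-to-hidden weights are constrained to be non-negative and the activations are convex and non-decreasing. Layer sizes and the weight-clamping scheme are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of an input convex neural network (ICNN).
class ICNN(nn.Module):
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # clamp the z-path weights so the output stays convex in x
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0))

phi = ICNN(dim=2)
x = torch.randn(16, 2, requires_grad=True)
# In Wasserstein-2 duality, optimal transport maps are gradients of convex
# potentials (Brenier); the map is recovered by differentiating the ICNN.
grad = torch.autograd.grad(phi(x).sum(), x)[0]
print(grad.shape)  # torch.Size([16, 2])
```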
Wasserstein barycenter, built on the theory of optimal transport, provides a powerful framework to aggregate probability distributions, and it has increasingly attracted great attention within the machine learning community. However, it suffers from severe computational burden, especially for high dimensional and continuous settings. To this end, we develop a novel continuous approximation method for the Wasserstein barycenter problem given sample access to the input distributions. The basic idea is to introduce a variational distribution as the approximation of the true continuous barycenter, so as to frame the barycenter computation problem as an optimization problem, where parameters of the variational distribution adjust the proxy distribution to be similar to the barycenter. Leveraging the variational distribution, we construct a tractable dual formulation for the regularized Wasserstein barycenter problem with c-cyclical monotonicity, which can be efficiently solved by stochastic optimization. We provide theoretical analysis on convergence and demonstrate the practical effectiveness of our method on the real application of subset posterior aggregation as well as on synthetic data.
This work studies the distributionally robust evaluation of expected function values over temporal data. A set of alternative measures is characterized by causal optimal transport. We prove strong duality and recast the causality constraint as minimization over an infinite-dimensional space of test functions. We approximate the test functions with neural networks and establish sample complexity bounds via Rademacher complexity. Moreover, when structural information is available to further restrict the ambiguity set, we prove a dual formulation and provide efficient optimization methods. Empirical analysis of realized volatility and stock indices demonstrates that our framework offers an attractive alternative to the classical optimal transport formulation.
We consider the constrained sampling problem where the goal is to sample from a distribution $\pi(x)\propto e^{-f(x)}$ and $x$ is constrained on a convex body $\mathcal{C}\subset \mathbb{R}^d$. Motivated by penalty methods from optimization, we propose penalized Langevin Dynamics (PLD) and penalized Hamiltonian Monte Carlo (PHMC) that convert the constrained sampling problem into an unconstrained one by introducing a penalty function for constraint violations. When $f$ is smooth and the gradient is available, we show $\tilde{\mathcal{O}}(d/\varepsilon^{10})$ iteration complexity for PLD to sample the target up to an $\varepsilon$-error where the error is measured in terms of the total variation distance and $\tilde{\mathcal{O}}(\cdot)$ hides some logarithmic factors. For PHMC, we improve this result to $\tilde{\mathcal{O}}(\sqrt{d}/\varepsilon^{7})$ when the Hessian of $f$ is Lipschitz and the boundary of $\mathcal{C}$ is sufficiently smooth. To our knowledge, these are the first convergence rate results for Hamiltonian Monte Carlo methods in the constrained sampling setting that can handle non-convex $f$ and can provide guarantees with the best dimension dependency among existing methods with deterministic gradients. We then consider the setting where unbiased stochastic gradients are available. We propose PSGLD and PSGHMC that can handle stochastic gradients without Metropolis-Hastings correction steps. When $f$ is strongly convex and smooth, we obtain an iteration complexity of $\tilde{\mathcal{O}}(d/\varepsilon^{18})$ and $\tilde{\mathcal{O}}(d\sqrt{d}/\varepsilon^{39})$ respectively in the 2-Wasserstein distance. For the more general case, when $f$ is smooth and non-convex, we also provide finite-time performance bounds and iteration complexity results. Finally, we test our algorithms on Bayesian LASSO regression and Bayesian constrained deep learning problems.
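A minimal sketch of the penalized Langevin dynamics idea, assuming a unit-ball constraint and a quadratic penalty; the target $f$, penalty weight, and step size are illustrative assumptions.

```python
import numpy as np

# Sketch of penalized Langevin dynamics (PLD) for sampling from
# pi(x) ∝ exp(-f(x)) restricted to the unit ball: the constraint is replaced
# by a smooth penalty whose gradient is added to the Langevin drift.
rng = np.random.default_rng(0)

def grad_f(x):
    return x - 2.0  # f(x) = 0.5 * ||x - 2||^2, a simple illustrative target

def grad_penalty(x, rho=100.0):
    # gradient of (rho / 2) * max(||x|| - 1, 0)^2, penalizing violations
    r = np.linalg.norm(x)
    if r <= 1.0:
        return np.zeros_like(x)
    return rho * (r - 1.0) * x / r

d, eta, n_steps = 2, 1e-3, 50_000
x = np.zeros(d)
samples = []
for t in range(n_steps):
    drift = grad_f(x) + grad_penalty(x)
    x = x - eta * drift + np.sqrt(2 * eta) * rng.standard_normal(d)
    samples.append(x.copy())
print(np.mean(samples[10_000:], axis=0))  # empirical mean, near the boundary
```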
We study the convergence of divergence-regularized optimal transport as the regularization parameter vanishes. Sharp rates are obtained for general divergences, including relative entropy and $L^{p}$ regularization, general transport costs, and multi-marginal problems. A novel methodology based on quantization and martingale couplings is suitable for non-compact marginals and yields, in particular, the sharp leading-order term of the entropically regularized 2-Wasserstein distance for all marginals with finite $(2+\delta)$-moments.
Gradient flows on the space of probability densities with respect to the Wasserstein metric often exhibit nice properties and have been used in several machine learning applications. The standard approach to compute Wasserstein gradient flows is finite difference, which discretizes the underlying space on a grid and is not scalable. In this work, we propose a scalable proximal gradient type algorithm for Wasserstein gradient flows. The key to our method is a variational formulation of the objective function, which makes it possible to realize the JKO proximal map through primal-dual optimization. This primal-dual problem can be solved efficiently by alternately updating the parameters in the inner and outer loops. Our framework covers all the classical Wasserstein gradient flows, including the heat equation and the porous medium equation. We demonstrate the performance and scalability of our algorithm with several numerical examples.
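For reference, the JKO proximal map realized by the primal-dual scheme is the minimizing-movement step of the Wasserstein gradient flow of a functional $F$ with step size $\tau$:
\[
  \rho_{k+1} \;=\; \operatorname*{arg\,min}_{\rho}\; F(\rho) \;+\; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k).
\]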
This paper introduces a new simulation-based inference procedure for modeling and sampling from multi-dimensional probability distributions given access to i.i.d.\ samples, circumventing the usual approaches of explicitly modeling the density function or designing a Markov chain Monte Carlo scheme. Motivated by notions of distance and isomorphism, we propose a new concept called the Reversible Gromov-Monge (RGM) distance and study how RGM can be used to design new transform samplers for simulation-based inference. Our RGM sampler can also estimate optimal alignments between two heterogeneous metric measure spaces $(\mathcal{X}, \mu, c_{\mathcal{X}})$ and $(\mathcal{Y}, \nu, c_{\mathcal{Y}})$ from empirical data sets, with the estimated map approximately pushing one measure $\mu$ forward to the other $\nu$, and vice versa. We study the analytical properties of the RGM distance and derive that, under mild conditions, RGM equals the classical Gromov-Wasserstein distance. Curiously, drawing a connection to Brenier's polar factorization, we show that the RGM sampler induces bias towards strong isomorphism with proper choices of $c_{\mathcal{X}}$ and $c_{\mathcal{Y}}$. Statistical rates of convergence, representation, and optimization questions regarding the induced sampler are studied. Synthetic and real-world examples demonstrating the effectiveness of the RGM sampler are also presented.
Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance. Recent work suggests that this quantity is more robust than the standard Wasserstein distance, in particular when comparing probability measures in high dimensions. However, it has been ruled out for practical application because the optimization model is essentially non-convex and non-smooth, which makes the computation intractable. Our contribution in this paper is to revisit the original motivation behind WPP/PRW, but take the hard route of showing that, despite its non-convexity and non-smoothness, and even despite some hardness results proved by~\citet{Niles-2019-Estimation} in a minimax sense, the original formulation for PRW/WPP \textit{can} be efficiently computed in practice using Riemannian optimization, yielding in relevant cases better behavior than its convex relaxation. More specifically, we provide three simple algorithms with solid theoretical guarantees on their complexity bounds (one in the appendix), and demonstrate their effectiveness and efficiency by conducting extensive experiments on synthetic and real data. This paper provides a first step into a computational theory of the PRW distance and provides the links between optimal transport and Riemannian optimization.
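To make the Riemannian viewpoint concrete, the sketch below runs Riemannian gradient ascent for the $k = 1$ case (projection pursuit on the unit sphere), where the one-dimensional Wasserstein distance has a closed form via sorting. The data, step size, and iteration count are illustrative, and the paper's algorithms for general $k$ on the Stiefel manifold are more elaborate.

```python
import numpy as np

# Hypothetical sketch: Wasserstein projection pursuit with k = 1 via
# Riemannian gradient ascent on the unit sphere. For a fixed direction theta,
# the optimal 1-D matching comes from sorting the projections, and a
# Danskin-type gradient is taken with that matching held fixed.
rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, d))
Y[:, 0] += 3.0                        # the two samples differ along axis 0

theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)
for it in range(200):
    px, py = X @ theta, Y @ theta
    ix, iy = np.argsort(px), np.argsort(py)
    diff = X[ix] - Y[iy]              # optimal 1-D matching for this theta
    w2_sq = np.mean((px[ix] - py[iy]) ** 2)
    grad = 2.0 * (diff * (diff @ theta)[:, None]).mean(axis=0)
    grad -= (grad @ theta) * theta    # project onto the sphere's tangent space
    theta += 0.1 * grad               # Riemannian ascent step
    theta /= np.linalg.norm(theta)    # retraction back to the sphere

print(w2_sq, np.abs(theta[0]))        # theta aligns with the separating axis
```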
We study distributionally robust optimization (DRO) with Sinkhorn distance -- a variant of Wasserstein distance based on entropic regularization. We provide a convex programming dual reformulation for a general nominal distribution. Compared with Wasserstein DRO, it is computationally tractable for a larger class of loss functions, and its worst-case distribution is more reasonable. We propose an efficient first-order algorithm with bisection search to solve the dual reformulation. We demonstrate that our proposed algorithm finds a $\delta$-optimal solution of the new DRO formulation with computation cost $\tilde{O}(\delta^{-3})$ and memory cost $\tilde{O}(\delta^{-2})$, and the computation cost further improves to $\tilde{O}(\delta^{-2})$ when the loss function is smooth. Finally, we provide various numerical examples using both synthetic and real data to demonstrate its competitive performance and low computational cost.
Many variants of the Wasserstein distance have been introduced to alleviate its original computational burden. In particular, the Sliced-Wasserstein distance (SW) leverages one-dimensional projections, for which a closed-form solution of the Wasserstein distance is available. However, it is restricted to data living in Euclidean spaces, while the Wasserstein distance has been studied and used recently on manifolds. We focus more specifically on the sphere, for which we define a novel SW discrepancy, which we call spherical Sliced-Wasserstein, as a first step towards defining SW discrepancies on manifolds. Our construction is notably based on the closed-form solution of the Wasserstein distance on the circle, together with a new spherical Radon transform. Along with efficient algorithms and the corresponding implementation, we illustrate its properties in several machine learning use cases where spherical representations of data are at stake: density estimation on the sphere, variational inference, and hyperspherical auto-encoders.
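For context, the Euclidean sliced-Wasserstein distance that the spherical construction generalizes can be estimated as below, combining random one-dimensional projections with the closed-form 1-D Wasserstein distance obtained by sorting; the sample data and number of projections are illustrative.

```python
import numpy as np

# Minimal sketch of the Euclidean sliced-Wasserstein distance between two
# equal-size empirical measures with uniform weights.
rng = np.random.default_rng(0)

def sliced_wasserstein(X, Y, n_projections=100, p=2):
    d = X.shape[1]
    thetas = rng.standard_normal((n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # random directions
    total = 0.0
    for theta in thetas:
        # sorting gives the optimal 1-D coupling in closed form
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean(np.abs(px - py) ** p)
    return (total / n_projections) ** (1 / p)

X = rng.standard_normal((1000, 3))
Y = rng.standard_normal((1000, 3)) + 1.0
print(sliced_wasserstein(X, Y))
```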
Motivated by the problem of online canonical correlation analysis, we propose the \emph{Stochastic Scaled-Gradient Descent} (SSGD) algorithm for minimizing the expectation of a stochastic function over a generic Riemannian manifold. SSGD generalizes the idea of projected stochastic gradient descent and allows the use of scaled stochastic gradients instead of stochastic gradients. In the special case of a spherical constraint, which arises in generalized eigenvector problems, we establish a nonasymptotic finite-sample bound of $\sqrt{1/T}$, and show that this rate is minimax optimal, up to a polylogarithmic factor of relevant parameters. On the asymptotic side, a novel trajectory-averaging argument allows us to achieve local asymptotic normality with a rate that matches that of Ruppert-Polyak-Juditsky averaging. We bring these ideas together in an application to online canonical correlation analysis, deriving, for the first time in the literature, an optimal one-time-scale algorithm with local asymptotic convergence to normality. Numerical studies of canonical correlation analysis with synthetic data are also provided.
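The following sketch illustrates the scaled-gradient idea in the spherical-constraint special case, using a stochastic power-method-like iteration for a generalized eigenvector problem; the matrices, step sizes, and normalization are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

# Hypothetical sketch: maximize x' A x subject to x' B x = 1 using scaled
# stochastic gradients B^{-1} (z z') x, where E[z z'] = A, with projection
# back onto the constraint set after each step.
rng = np.random.default_rng(0)
d = 5
Q = rng.standard_normal((d, d))
A = Q @ Q.T                                      # symmetric PSD "covariance"
B = np.eye(d) + 0.1 * np.diag(rng.random(d))     # SPD scaling matrix
B_inv = np.linalg.inv(B)

x = rng.standard_normal(d)
x /= np.sqrt(x @ B @ x)                          # start on the B-sphere
for t in range(1, 20_000):
    z = rng.multivariate_normal(np.zeros(d), A)  # stochastic sample, E[zz'] = A
    g = B_inv @ (np.outer(z, z) @ x)             # scaled stochastic gradient
    x = x + (1.0 / t) * g                        # Robbins-Monro step size
    x /= np.sqrt(x @ B @ x)                      # project back to x' B x = 1

# compare with the exact top generalized eigenvector of (A, B)
w, V = np.linalg.eig(np.linalg.solve(B, A))
v = np.real(V[:, np.argmax(np.real(w))])
print(abs(x @ B @ v) / np.sqrt(v @ B @ v))       # alignment close to 1
```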
Comparing probability distributions is at the crux of many machine learning algorithms. Maximum Mean Discrepancies (MMD) and Optimal Transport distances (OT) are two classes of distances between probability measures that have attracted abundant attention in past years. This paper establishes some conditions under which the Wasserstein distance can be controlled by MMD norms. Our work is motivated by the compressive statistical learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single vector (called a sketch) that captures the information relevant to the considered learning task. Inspired by existing results in CSL, we introduce the Hölder Lower Restricted Isometric Property (Hölder LRIP) and show that this property comes with interesting guarantees for compressive statistical learning. Based on the relations between the MMD and the Wasserstein distance, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein learnability of the learning task, that is, when some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
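For concreteness, the (biased) kernel estimator of the MMD that such bounds involve can be computed as follows; the Gaussian kernel, bandwidth, and data are illustrative.

```python
import numpy as np

# Minimal sketch of the biased (V-statistic) MMD estimator with a Gaussian
# kernel: MMD^2 = E[k(X,X')] + E[k(Y,Y')] - 2 E[k(X,Y)].
rng = np.random.default_rng(0)

def mmd_sq(X, Y, sigma=1.0):
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

X = rng.standard_normal((300, 2))
Y = rng.standard_normal((300, 2)) + 0.5
print(np.sqrt(max(mmd_sq(X, Y), 0.0)))
```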
The Sinkhorn algorithm (arXiv:1306.0895) is the state-of-the-art to compute approximations of optimal transport distances between discrete probability distributions, making use of an entropically regularized formulation of the problem. The algorithm is guaranteed to converge, no matter its initialization. This led to little attention being paid to initializing it, and simple starting vectors like the n-dimensional one-vector are common choices. We train a neural network to compute initializations for the algorithm, which significantly outperform standard initializations. The network predicts a potential of the optimal transport dual problem, where training is conducted in an adversarial fashion using a second, generating network. The network is universal in the sense that it is able to generalize to any pair of distributions of fixed dimension. Furthermore, we show that for certain applications the network can be used independently.
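A minimal sketch of the Sinkhorn iteration showing where an initialization enters: the scaling vector u below is the standard one-vector start, and a learned dual potential g would replace it via u = exp(g / eps). All problem data are illustrative.

```python
import numpy as np

# Minimal Sinkhorn iteration for entropic OT between two uniform empirical
# measures; u and v are the diagonal scalings of the Gibbs kernel K.
rng = np.random.default_rng(0)
n, eps = 50, 0.1
x, y = rng.standard_normal((n, 2)), rng.standard_normal((n, 2))
C = ((x[:, None] - y[None, :]) ** 2).sum(-1)  # squared-Euclidean cost matrix
K = np.exp(-C / eps)                          # Gibbs kernel
a = b = np.full(n, 1.0 / n)                   # uniform marginals

u = np.ones(n)   # standard init; a predicted dual potential g would give exp(g / eps)
for _ in range(200):
    v = b / (K.T @ u)
    u = a / (K @ v)
P = u[:, None] * K * v[None, :]               # entropic transport plan
print((P * C).sum())                          # regularized OT cost
```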
We present a framework of Nesterov's accelerated gradient flows in probability space to design efficient mean-field Markov chain Monte Carlo (MCMC) algorithms for Bayesian inverse problems. Here, four examples of information metrics are considered, including the Fisher-Rao metric, the Wasserstein-2 metric, the Kalman-Wasserstein metric, and the Stein metric. For both the Fisher-Rao and Wasserstein-2 metrics, we prove convergence properties of the accelerated gradient flows. In implementation, we propose sampling-efficient discrete-time algorithms for the Wasserstein-2, Kalman-Wasserstein, and Stein accelerated gradient flows with restart techniques. We also formulate a kernel bandwidth selection method, which learns the gradient of the log-density from Brownian-motion samples. Numerical experiments, including Bayesian logistic regression and Bayesian neural networks, show the strength of the proposed methods compared with state-of-the-art algorithms.
In this paper, we propose Wasserstein Isometric Mapping (Wassmap), a nonlinear dimensionality reduction technique that provides solutions to some drawbacks in existing global nonlinear dimensionality reduction algorithms in imaging applications. Wassmap represents images via probability measures in Wasserstein space, then uses pairwise Wasserstein distances between the associated measures to produce a low-dimensional, approximately isometric embedding. We show that the algorithm is able to exactly recover parameters of some image manifolds, including those generated by translations or dilations of a fixed generating measure. Additionally, we show that a discrete version of the algorithm retrieves parameters from manifolds generated from discrete measures by providing a theoretical bridge to transfer recovery results from functional data to discrete data. Testing of the proposed algorithms on various image data manifolds shows that Wassmap yields good embeddings compared with other global and local techniques.
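A toy version of the pipeline for one-dimensional measures, where pairwise $W_2$ distances have a closed form via sorting and the embedding is obtained by classical MDS; the translated-measure family is an illustrative example of a manifold the method recovers exactly.

```python
import numpy as np

# Hypothetical sketch of the Wassmap pipeline on 1-D measures: build the
# pairwise squared-W_2 matrix, then classical multidimensional scaling (MDS)
# yields the low-dimensional embedding.
rng = np.random.default_rng(0)
base = rng.standard_normal(200)
shifts = np.linspace(0.0, 5.0, 20)
measures = [np.sort(base + s) for s in shifts]   # translates of one measure

m = len(measures)
D2 = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        # for sorted 1-D samples, W_2^2 is the mean squared quantile gap
        D2[i, j] = np.mean((measures[i] - measures[j]) ** 2)

J = np.eye(m) - np.ones((m, m)) / m              # centering matrix
G = -0.5 * J @ D2 @ J                            # Gram matrix for classical MDS
w, V = np.linalg.eigh(G)
emb = V[:, -1] * np.sqrt(max(w[-1], 0.0))        # top-1 MDS coordinate
print(np.round(emb - emb[0], 3))                 # recovers shifts up to sign/offset
```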
We study a family of adversarial multiclass classification problems and provide equivalent reformulations in terms of: 1) a family of generalized barycenter problems introduced in the paper and 2) a family of multimarginal optimal transport problems where the number of marginals is equal to the number of classes in the original classification problem. These new theoretical results reveal a rich geometric structure of adversarial learning problems in multiclass classification and extend recent results restricted to the binary classification setting. A direct computational implication of our results is that by solving either the barycenter problem and its dual, or the MOT problem and its dual, we can recover the optimal robust classification rule and the optimal adversarial strategy for the original adversarial problem. Examples with synthetic and real data illustrate our results.
It was recently shown that, under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds. However, rather than the distance itself, the object of interest for applications such as generative modeling is the underlying optimal transport map. Hence, computational and statistical guarantees need to be obtained for the estimated maps themselves. In this paper, we propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation. Our method is based on solving the semi-dual formulation of optimal transport with an infinite-dimensional sum-of-squares reformulation, and leads to an algorithm with dimension-free polynomial rates in the number of samples, with potentially exponentially dimension-dependent constants.
The Wasserstein distance, rooted in optimal transport (OT) theory, is a popular discrepancy measure between probability distributions with various applications in statistics and machine learning. Despite its rich structure and utility, the Wasserstein distance is sensitive to outliers in the considered distributions, which hinders applicability in practice. Inspired by the Huber contamination model, we propose a new outlier-robust Wasserstein distance $\mathsf{W}_p^\varepsilon$, which allows for $\varepsilon$ outlier mass to be removed from each contaminated distribution. Compared with previously considered frameworks, our formulation amounts to a highly regular optimization problem that lends itself better to analysis. Leveraging this, we conduct a thorough theoretical study of $\mathsf{W}_p^\varepsilon$, encompassing the characterization of optimal perturbations, regularity, duality, and statistical estimation and robustness results. In particular, by decoupling the optimization variables, we arrive at a simple dual form for $\mathsf{W}_p^\varepsilon$ that can be implemented via an elementary modification to standard, duality-based OT solvers. We illustrate the benefits of our framework via applications to generative modeling with contaminated datasets.
We present a method for solving the optimal transport (OT) problem between two continuous distributions $\pi_0$ and $\pi_1$: minimizing a transport cost $\mathbb{E}[c(X_1 - X_0)]$ over the set of couplings $(X_0, X_1)$ whose marginal distributions on $X_0$ and $X_1$ equal $\pi_0$ and $\pi_1$, respectively, where $c$ is a cost function. Our method iteratively constructs a sequence of neural ordinary differentiable equations (ODEs), each learned by solving a simple unconstrained regression problem, which monotonically reduces the transport cost while automatically preserving the marginal constraints. This yields a monotonic interior approach that traverses inside the set of valid couplings to decrease the transport cost, which distinguishes itself from most existing approaches that enforce the coupling constraints from the outside. The main idea of the method draws from rectified flow, a recent approach that simultaneously decreases the whole family of transport costs induced by convex functions $c$ (and is hence multi-objective in nature), but is not tailored to minimize a specific transport cost. Our method is a single-objective variant of rectified flow that is guaranteed to solve the OT problem for a fixed, user-specified convex cost function $c$.
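A minimal sketch of a single rectified-flow step, the building block the method iterates: a velocity field v(x, t) is regressed onto $X_1 - X_0$ along the linear interpolation $X_t = t X_1 + (1 - t) X_0$, and samples are transported by integrating the resulting ODE. The networks, data, and Euler integrator are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of one rectified-flow step: fit v(x, t) by unconstrained regression.
v = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for step in range(2000):
    x0 = torch.randn(256, 2)                    # samples from pi_0
    x1 = torch.randn(256, 2) + 3.0              # samples from pi_1
    t = torch.rand(256, 1)
    xt = t * x1 + (1 - t) * x0                  # linear interpolation X_t
    pred = v(torch.cat([xt, t], dim=1))
    loss = ((pred - (x1 - x0)) ** 2).mean()     # simple unconstrained regression
    opt.zero_grad(); loss.backward(); opt.step()

# Transport samples by integrating dx/dt = v(x, t) with forward Euler steps.
x = torch.randn(5, 2)
with torch.no_grad():
    for k in range(100):
        t = torch.full((5, 1), k / 100)
        x = x + 0.01 * v(torch.cat([x, t], dim=1))
print(x.mean(dim=0))                            # pushed toward pi_1's mean (~3, 3)
```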