We develop new statistics for robustly filtering corrupted keypoint matches in the structure-from-motion pipeline. The statistics are based on consistency constraints that arise within the clustered structure of the graph of keypoint matches. The statistics are designed to give smaller values to corrupted matches than to uncorrupted ones. These new statistics are combined with an iterative reweighting scheme to filter keypoints, which can then be fed into any standard structure-from-motion pipeline. The filtering method can be implemented efficiently and scales to massive datasets, as it only requires sparse matrix multiplication. We demonstrate the efficacy of this method on synthetic and real structure-from-motion datasets and show that it achieves state-of-the-art accuracy and speed in these tasks.
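The abstract notes that the filtering statistics need only sparse matrix multiplication. The sketch below shows one simple cycle-consistency statistic of that flavor, a triangle-support score on the match graph; the exact score and its normalization are illustrative assumptions, not the authors' statistic.

```python
import numpy as np
from scipy.sparse import csr_matrix

def cycle_consistency_scores(A):
    """Score each edge of a sparse match graph by how many triangles
    (length-3 cycles) support it, normalized by a degree-based maximum.
    A: symmetric binary adjacency matrix of the keypoint-match graph."""
    A = csr_matrix(A)
    T = A @ A                        # T[i,j] = number of common neighbors of i and j
    supp = T.multiply(A).tocsr()     # keep triangle counts only on existing edges
    deg = np.asarray(A.sum(axis=1)).ravel()
    scores = {}
    rows, cols = A.nonzero()
    for i, j in zip(rows, cols):
        if i < j:
            denom = max(min(deg[i], deg[j]) - 1, 1)
            scores[(i, j)] = supp[i, j] / denom
    return scores

# Edges inside a clique receive high scores; a spurious dangling edge scores 0.
A = np.zeros((5, 5))
for i in range(4):                   # clique on nodes {0,1,2,3}
    for j in range(i + 1, 4):
        A[i, j] = A[j, i] = 1
A[3, 4] = A[4, 3] = 1                # a "corrupted" edge to node 4
s = cycle_consistency_scores(A)
print(s[(0, 1)], s[(3, 4)])
```

Because the score is built from one sparse matrix product, it scales to large match graphs, which matches the efficiency claim in the abstract.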
translated by Google Translate
We propose a novel quadratic programming formulation for estimating corruption levels in group synchronization, and use these estimates to solve this problem. Our objective function exploits the cycle consistency of the group, and we thus refer to our method as Detection and Estimation of Structural Consistency (DESC). This general framework can be extended to other algebraic and geometric structures. Our formulation has the following advantages: it can tolerate corruption as high as the information-theoretic bound, it does not require a good initialization of the estimates of the group elements, it has a simple interpretation, and, under some mild conditions, the global minimum of our objective function exactly recovers the corruption levels. We demonstrate the competitive accuracy of our method on both synthetic and real data experiments of rotation averaging.
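As a rough illustration of estimating per-edge corruption levels from cycle consistency, the sketch below solves a nonnegative least-squares problem over 3-cycles for the simplest group, Z2 synchronization. This formulation is a hypothetical stand-in for the paper's quadratic program, not DESC itself.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

def estimate_corruption(edges, meas, n):
    """Estimate per-edge corruption levels in Z2 synchronization from
    3-cycle consistency: an inconsistent cycle must contain at least one
    corrupted edge, which we encode as a nonnegative least-squares system.
    meas[e] in {+1, -1} is the relative measurement on edge e."""
    eidx = {e: i for i, e in enumerate(edges)}
    rows, rhs = [], []
    for i, j, k in combinations(range(n), 3):
        tri = [(i, j), (j, k), (i, k)]
        if all(t in eidx for t in tri):
            row = np.zeros(len(edges))
            for t in tri:
                row[eidx[t]] = 1.0
            parity = meas[tri[0]] * meas[tri[1]] * meas[tri[2]]
            rhs.append(0.0 if parity == 1 else 1.0)
            rows.append(row)
    s, _ = nnls(np.array(rows), np.array(rhs))
    return {e: s[eidx[e]] for e in edges}

# Complete graph on 8 nodes, ground truth z_i = +1, one corrupted edge.
n = 8
edges = list(combinations(range(n), 2))
meas = {e: 1 for e in edges}
meas[(0, 1)] = -1                    # the corrupted measurement
s = estimate_corruption(edges, meas, n)
worst = max(s, key=s.get)
print(worst, s[worst])
```

The corrupted edge is the only one appearing in inconsistent cycles, so it receives the largest estimated corruption level; all other edges are driven to zero by the consistent cycles they belong to.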
The stochastic block model (SBM) is a random graph model in which different groups of vertices connect differently. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the information-theoretic and computational tradeoffs that arise in combinatorial statistics and, more generally, data science. This monograph surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational regimes, and for various recovery requirements such as exact, partial, and weak recovery. The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal SNR-mutual information tradeoff for partial recovery, and the gap between information-theoretic and computational thresholds. The monograph gives principled derivations of the main algorithms developed in the quest for achieving the limits, in particular via graph-splitting, semidefinite programming, (linearized) belief propagation, classical and nonbacktracking spectral methods, and graph powering. Extensions to other block models, such as geometric block models, and a few open problems are also discussed.
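A minimal sketch of the model itself, with a basic spectral partition standing in for the refined algorithms surveyed in the monograph (parameters and the sign-split rule here are illustrative choices):

```python
import numpy as np

def sample_sbm(n, p, q, rng):
    """Sample a two-community SBM: n nodes, within-community edge
    probability p, across-community probability q; communities are the
    first and second halves of the vertex set."""
    labels = np.array([0] * (n // 2) + [1] * (n - n // 2))
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = rng.random((n, n)) < probs
    A = np.triu(upper, 1)
    return (A + A.T).astype(float), labels

def spectral_partition(A):
    """Partition by the sign of the eigenvector of the second-largest
    eigenvalue of the adjacency matrix (a basic spectral method)."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs[:, -2] > 0).astype(int)

rng = np.random.default_rng(0)
A, labels = sample_sbm(400, 0.20, 0.02, rng)
pred = spectral_partition(A)
acc = max(np.mean(pred == labels), np.mean(pred != labels))  # up to label flip
print(acc)
```

Well above the recovery thresholds discussed in the monograph, as in this parameter regime, even this plain spectral step recovers the communities almost perfectly.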
We consider a category-level perception problem, where one is given 2D or 3D sensor data picturing an object of a given category (e.g., a car), and has to reconstruct the 3D pose and shape of the object despite intra-class variability (i.e., different car models have different shapes). We consider an active shape model, where, for an object category, we are given a library of potential CAD models describing objects in that category, and we adopt a standard formulation where pose and shape are estimated from 2D or 3D keypoints via non-convex optimization. Our first contribution is to develop PACE3D* and PACE2D*, the first certifiably optimal solvers for pose and shape estimation using 3D and 2D keypoints, respectively. Both solvers rely on the design of tight (i.e., exact) semidefinite relaxations. Our second contribution is to develop outlier-robust versions of both solvers, named PACE3D# and PACE2D#. Towards this goal, we propose ROBIN, a general graph-theoretic framework to prune outliers, which uses compatibility hypergraphs to model the compatibility of the measurements. We show that, in category-level perception problems, these hypergraphs can be built from the keypoints (in 2D) or their convex hulls (in 3D), and that many outliers can be pruned via maximum hyperclique computation. The last contribution is an extensive experimental evaluation. Besides providing an ablation study on simulated datasets and on the PASCAL dataset, we combine our solvers with deep keypoint detectors, and show that PACE3D# improves over the state of the art in vehicle pose estimation on the ApolloScape dataset, and that its runtime is compatible with practical applications.
We propose an efficient graph matching algorithm based on a similarity score constructed from counting a certain family of weighted trees rooted at each vertex. For two Erdős–Rényi graphs $\mathcal{G}(n,q)$ whose edges are correlated through a latent vertex correspondence, we show that this algorithm correctly matches all but a vanishing fraction of the vertices with high probability, provided that $nq \to \infty$ and the edge correlation coefficient $\rho$ satisfies $\rho^2 > \alpha \approx 0.338$, where $\alpha$ is Otter's tree-counting constant. Moreover, under an extra condition that is theoretically necessary, exact matching can be achieved. This is the first polynomial-time graph matching algorithm that succeeds at an explicit constant correlation, and it applies to both sparse and dense graphs. In contrast, previous methods either require $\rho = 1 - o(1)$ or are restricted to sparse graphs. The crux of the algorithm is a carefully curated family of rooted trees called chandeliers, which can effectively extract the graph correlation from the counts of the same tree, while suppressing the undesirable correlation between different trees.
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the $\ell_1$ norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
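The convex program described above can be solved by a standard ADMM scheme built from soft thresholding and singular-value thresholding. The sketch below uses the weight $\lambda = 1/\sqrt{\max(m,n)}$ suggested by the theory; the step-size heuristic and iteration budget are assumptions.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, iters=500, tol=1e-7):
    """Principal Component Pursuit by a standard ADMM scheme:
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # weight suggested by the theory
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())  # common heuristic step size
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

rng = np.random.default_rng(1)
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))  # rank-5 part
S0 = np.zeros((60, 60))
mask = rng.random((60, 60)) < 0.05                                # 5% corruptions
S0[mask] = 10 * rng.standard_normal(mask.sum())
L, S = pcp(L0 + S0)
rel = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(rel)
```

With a rank-5 component and 5% arbitrary corruptions, the low-rank part is recovered to small relative error, in line with the exact-recovery guarantee.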
Networks are often used to encode the architecture of interactions between entities in complex systems, with applications across the physical, biological, social, and information sciences. To study the large-scale behavior of complex systems, it is useful to examine mesoscale structures in networks as building blocks that influence such behavior. We propose a new approach to describe low-rank mesoscale structure in networks, and we illustrate our approach using several synthetic network models and empirical friendship, collaboration, and protein–protein interaction (PPI) networks. We find that these networks possess a relatively small number of 'latent motifs' that can successfully approximate most subgraphs of a network at a fixed mesoscale. We use an algorithm called 'network dictionary learning' (NDL), which combines a network-sampling method with nonnegative matrix factorization, to learn the latent motifs of a given network. The ability to encode a network using a set of latent motifs has a wide range of applications to network analysis tasks, such as comparison, denoising, and edge inference. Additionally, using our new network denoising and reconstruction (NDR) algorithm, we demonstrate how to denoise a corrupted network using latent motifs that are learned directly from the corrupted network.
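A toy sketch of the dictionary-learning idea: sample small subgraph patches (here via a simple random walk, a stand-in for the paper's sampling method) and factorize them with plain multiplicative-update NMF, so that columns of W play the role of latent motifs. All parameter choices below are illustrative.

```python
import numpy as np

def sample_patches(A, k, n_samples, rng):
    """Collect k-node subgraph patches by short random walks."""
    n = A.shape[0]
    patches = []
    while len(patches) < n_samples:
        cur = rng.integers(n)
        nodes = [cur]
        for _ in range(20 * k):              # cap the walk length
            nbrs = np.flatnonzero(A[cur])
            if nbrs.size == 0:
                break
            cur = rng.choice(nbrs)
            if cur not in nodes:
                nodes.append(cur)
            if len(nodes) == k:
                break
        if len(nodes) == k:
            patches.append(A[np.ix_(nodes, nodes)].ravel())
    return np.array(patches).T               # (k*k) x n_samples data matrix

def nmf(X, r, iters=200, rng=None):
    """Plain multiplicative-update NMF: X ≈ W @ H with W, H >= 0."""
    rng = rng or np.random.default_rng()
    W = rng.random((X.shape[0], r)); H = rng.random((r, X.shape[1]))
    eps = 1e-10
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(6)
n = 100
A = np.triu(rng.random((n, n)) < 0.08, 1).astype(float); A = A + A.T
X = sample_patches(A, 5, 300, rng)
W, H = nmf(X, r=4, rng=rng)
rel = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(W.shape, rel)     # 4 nonnegative "motifs" over 5x5 patches
```

Each column of W is a nonnegative 5x5 pattern that, combined through H, approximates the sampled patches, which is the core of what NDL does at scale.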
Community detection and orthogonal group synchronization are fundamental problems with a variety of important applications in science and engineering. In this work, we consider the joint problem of community detection and orthogonal group synchronization, which aims to recover the communities and perform the synchronization simultaneously. To this end, we propose a simple algorithm that consists of a spectral decomposition step followed by a column-pivoted QR factorization (CPQR). The proposed algorithm is efficient and scales linearly with the number of data points. We also leverage the recently developed 'leave-one-out' technique to establish a near-optimal guarantee for the exact recovery of the cluster memberships and the stable recovery of the orthogonal transforms. Numerical experiments demonstrate the efficiency and efficacy of our algorithm, and confirm our theoretical characterization.
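A simplified sketch of the spectral-plus-CPQR idea for the community detection part alone (the synchronization step is omitted, and the assignment rule below is an assumption made for illustration):

```python
import numpy as np
from scipy.linalg import qr

def spectral_cpqr_cluster(A, k):
    """Community detection via a spectral step followed by column-pivoted QR:
    CPQR on the top-k eigenvector matrix selects one well-separated
    representative row per cluster; nodes are then assigned to the
    representative their embedding aligns with most."""
    vals, vecs = np.linalg.eigh(A)
    V = vecs[:, -k:]                        # n x k spectral embedding
    _, _, piv = qr(V.T, pivoting=True)      # CPQR picks k independent columns of V^T
    reps = V[piv[:k]]                       # k representative embeddings
    return np.argmax(np.abs(V @ reps.T), axis=1)

# Two planted clusters: a noisy block-structured adjacency matrix.
rng = np.random.default_rng(2)
n = 200
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.3, 0.03)
A = np.triu(rng.random((n, n)) < P, 1).astype(float)
A = A + A.T
pred = spectral_cpqr_cluster(A, 2)
agree = np.mean(pred == labels)
acc = max(agree, 1 - agree)                 # clusters recovered up to permutation
print(acc)
```

The appeal of CPQR here is that it avoids an iterative k-means step: the pivoting itself identifies well-conditioned representatives, one per cluster.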
Outlier-robust estimation is a fundamental problem that has been extensively investigated by both statisticians and practitioners. In the last few years, there has been a convergence across research fields towards 'algorithmic robust statistics', which focuses on developing tractable outlier-robust techniques for high-dimensional estimation problems. Despite this convergence, the research efforts across fields have remained mostly disconnected from one another. This paper bridges recent work on certifiable outlier-robust estimation for geometric perception in robotics and computer vision with parallel work in robust statistics. In particular, we adapt and extend recent results on robust linear regression (applicable to the low-outlier regime with << 50% outliers) and list-decodable regression (applicable to the high-outlier regime with >> 50% outliers) to the setups commonly found in robotics and vision, where (i) the variables (e.g., rotations, poses) belong to non-convex domains, (ii) the measurements are vector-valued, and (iii) the number of outliers is not known a priori. The emphasis here is on performance guarantees: rather than proposing new algorithms, we provide conditions on the input measurements under which modern estimation algorithms are guaranteed to recover an estimate close to the ground truth in the presence of outliers. These conditions are what we call 'estimation contracts'. Besides the proposed extensions of existing results, we believe the main contributions of this paper are (i) to unify parallel research lines by pointing out commonalities and differences, (ii) to introduce advanced material (e.g., sum-of-squares proofs) in an accessible and self-contained presentation for the practitioner, and (iii) to point out a few immediate opportunities and open questions in outlier-robust geometric perception.
We propose a goodness-of-fit test for degree-corrected stochastic block models (DCSBM). The test is based on an adjusted chi-square statistic for measuring equality among groups of $n$ multinomial distributions with $d_1, \dots, d_n$ observations. In the context of network models, the number of multinomials ($n$) grows much faster than the number of observations ($d_i$), the latter corresponding to the degree of node $i$, hence the setting deviates from classical asymptotics. We show that the statistic achieves its limiting distribution under the null as long as the harmonic mean of $\{d_i\}$ grows to infinity. When applied sequentially, the test can also be used to determine the number of communities. The test operates on a compressed version of the adjacency matrix, conditional on the degrees, and as a result is highly scalable to large sparse networks. We incorporate a novel idea of compressing the rows based on a $(K+1)$-community assignment when testing for $K$ communities. This approach increases the power in sequential applications without sacrificing computational efficiency, and we prove its consistency in recovering the number of communities. Since the test statistic does not rely on a specific alternative, its utility goes beyond sequential testing, and it can be used to simultaneously test against a wide range of alternatives outside the DCSBM family. In particular, we prove that the test is consistent against a general family of latent-variable network models with community structure.
The estimation of the camera poses associated with a set of images typically relies on feature matches between the images. In contrast, we are the first to address this challenge by using object regions to guide the pose estimation problem rather than explicit semantic object detections. We propose Pose Refiner Network (PoserNet), a lightweight graph neural network, to refine approximate pairwise relative camera poses. PoserNet exploits associations between object regions (concisely expressed as bounding boxes) across multiple views to globally refine sparsely connected view graphs. We evaluate on the 7-Scenes dataset across graphs of different sizes, and show how this process can be beneficial to optimization-based motion averaging algorithms, improving the median error on the rotation by 62 degrees with respect to the initial estimates obtained based on bounding boxes. Code and data are available at https://github.com/iit-pavis/posernet.
Graph kernels have attracted a lot of attention during the last decade, and have evolved into a rapidly developing branch of learning on structured data. The considerable research activity that occurred in the field during the past 20 years has resulted in the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been applied successfully to a wide range of domains, ranging from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we present an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of those kernels on publicly available datasets, and provide a comparative study. Finally, we discuss key applications of graph kernels, and outline some challenges that remain to be addressed.
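For concreteness, one of the simplest members of the family is the vertex-histogram kernel, which compares graphs only through their node-label counts:

```python
from collections import Counter

def vertex_histogram_kernel(labels_g1, labels_g2):
    """Represent each node-labeled graph by the histogram of its node labels
    and take the dot product of the two histograms. Illustrative only;
    structure-aware kernels (e.g., Weisfeiler-Lehman subtree kernels) are
    far more expressive."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[l] * h2[l] for l in set(h1) | set(h2))

# Two small graphs represented just by their node-label multisets.
k = vertex_histogram_kernel(["C", "C", "O", "H"], ["C", "O", "O"])
print(k)  # 2*1 (C) + 1*2 (O) = 4
```

Every graph kernel in the survey follows this template, an implicit or explicit feature map followed by an inner product, but with features that capture progressively richer structural properties.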
Many challenges from the natural world can be formulated as a graph matching problem. Previous deep learning-based methods mainly consider a full two-graph matching setting. In this work, we study the more general partial matching problem with multi-graph cycle consistency guarantees. Building on recent progress in deep learning on graphs, we propose a novel data-driven method (URL) for partial multi-graph matching, which uses an object-to-universe formulation and learns latent representations of abstract universe points. The proposed approach advances the state of the art in the semantic keypoint matching problem, evaluated on the Pascal VOC, CUB, and Willow datasets. Moreover, a set of controlled experiments on a synthetic graph matching dataset demonstrates the scalability of our method to graphs with a large number of nodes and its robustness to high partiality.
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently, and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets of different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been the subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation, and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
Tensor Kronecker products, the natural generalization of the matrix Kronecker product, arose independently in multiple research communities. Like their matrix counterparts, the tensor generalizations give structure to implicit multiplication and factorization theorems. We present a theorem that decomposes a principal eigenvector of a tensor Kronecker product, which is a rare generalization from matrix theory to tensor eigenvectors. This theorem implies that low-rank structure should be present in the iterates of a tensor power method applied to a Kronecker product. We investigate the low-rank structure in the network alignment algorithm TAME, a power-method heuristic. Using the low-rank structure directly, or via a new heuristic embedding approach, we produce new algorithms that are faster while improving or maintaining accuracy, and that scale to problems that cannot be realistically handled with existing techniques.
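The matrix analogue of the eigenvector decomposition mentioned above can be checked numerically in a few lines; the paper's theorem extends this factored structure to tensor eigenvectors.

```python
import numpy as np

# Matrix fact: if A v = λ v and B w = μ w, then
#   (A ⊗ B)(v ⊗ w) = (λ μ)(v ⊗ w),
# i.e., eigenvectors of a Kronecker product factor as Kronecker products
# of the factors' eigenvectors. We verify this numerically.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)); A = A + A.T   # symmetric => real eigenpairs
B = rng.standard_normal((3, 3)); B = B + B.T
la, Va = np.linalg.eigh(A)
lb, Vb = np.linalg.eigh(B)
v, w = Va[:, 0], Vb[:, 0]
lhs = np.kron(A, B) @ np.kron(v, w)
rhs = (la[0] * lb[0]) * np.kron(v, w)
err = np.linalg.norm(lhs - rhs)
print(err)
```

It is exactly this factored structure, carried over to tensor eigenvectors, that explains the low-rank iterates observed in TAME's power method.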
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
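The two-stage framework described above (random sampling to capture the range, then a small deterministic factorization) can be sketched as follows; the oversampling parameter is a standard choice, not prescribed by this abstract.

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Randomized low-rank SVD in the modular style surveyed here:
    (1) random sampling finds a subspace capturing most of A's action,
    (2) A is compressed to that subspace,
    (3) a small deterministic SVD finishes the factorization."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix, p = oversampling
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sampled range
    B = Q.T @ A                               # (k+p) x n compressed matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(4)
A = rng.standard_normal((300, 20)) @ rng.standard_normal((20, 200))  # rank 20
U, s, Vt = randomized_svd(A, 20, rng=rng)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(err)
```

For an exactly rank-k input, the sampled subspace captures the range almost surely and the reconstruction error is at machine precision; for noisy inputs, oversampling and power iterations control the error, as analyzed in the survey.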
This article explores and analyzes the unsupervised clustering of large partially observed graphs. We propose a scalable and provable randomized framework for clustering graphs generated from the stochastic block model. The clustering is first applied to a sub-matrix of the graph's adjacency matrix associated with a reduced graph sketch constructed using random sampling. Then, the clusters of the full graph are inferred based on the clusters extracted from the sketch using a correlation-based retrieval step. Uniform random node sampling is shown to improve the computational complexity over clustering of the full graph when the cluster sizes are balanced. A new random degree-based node sampling algorithm is presented which significantly improves upon the performance of the clustering algorithm even when clusters are unbalanced. This framework improves the phase transitions for matrix-decomposition-based clustering with regard to computational complexity and minimum cluster size, which are shown to be nearly dimension-free in the low inter-cluster connectivity regime. A third sampling technique is shown to improve balance by randomly sampling nodes based on spatial distribution. We provide analysis and numerical results using a convex clustering algorithm based on matrix completion.
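A toy version of the sketch-and-infer pipeline for two balanced clusters, with a basic spectral split standing in for the convex clustering program analyzed in the article (sampling fraction and parameters are illustrative):

```python
import numpy as np

def sketch_cluster(A, sample_frac, rng):
    """Sketch-and-infer clustering:
    (1) uniformly sample nodes and cluster the induced sub-adjacency matrix
        (a basic spectral split stands in for the convex program), then
    (2) assign every remaining node to the sampled cluster it connects to
        most (a simple correlation/retrieval step)."""
    n = A.shape[0]
    m = int(sample_frac * n)
    S = rng.choice(n, size=m, replace=False)
    sub = A[np.ix_(S, S)]
    vals, vecs = np.linalg.eigh(sub)
    pred_S = (vecs[:, -2] > 0).astype(int)     # 2-way spectral split of the sketch
    labels = np.empty(n, dtype=int)
    labels[S] = pred_S
    for i in np.setdiff1d(np.arange(n), S):    # average connectivity to each sampled cluster
        conn = [A[i, S[pred_S == c]].mean() for c in (0, 1)]
        labels[i] = int(np.argmax(conn))
    return labels

rng = np.random.default_rng(5)
n = 400
truth = np.repeat([0, 1], n // 2)
P = np.where(truth[:, None] == truth[None, :], 0.25, 0.03)
A = np.triu(rng.random((n, n)) < P, 1).astype(float); A = A + A.T
pred = sketch_cluster(A, 0.25, rng)
acc = max(np.mean(pred == truth), np.mean(pred != truth))
print(acc)
```

Only the 25% sketch is clustered with an eigendecomposition; the remaining 75% of the nodes are labeled by a cheap retrieval step, which is the source of the computational savings the article quantifies.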
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we give a comprehensive overview of the algorithm's use in a machine-learning setting, focusing on the supervised regime. We discuss the theoretical background, show how to use it for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
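A minimal implementation of 1-dimensional Weisfeiler-Leman color refinement, the iteration underlying the kernels and the expressiveness results for graph neural networks mentioned above:

```python
from collections import Counter

def wl_refine(adj, rounds=3):
    """1-WL color refinement: repeatedly relabel each node by its current
    color together with the multiset of its neighbors' colors. Returns the
    histogram of final colors; differing histograms prove non-isomorphism
    (equal histograms are inconclusive)."""
    colors = {v: 0 for v in adj}                 # uniform initial coloring
    for _ in range(rounds):
        signature = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                     for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        colors = {v: palette[signature[v]] for v in adj}
    return Counter(colors.values())

# A triangle and a 3-node path: all nodes of the triangle stay one color,
# while the path splits into endpoint and midpoint colors, so 1-WL
# distinguishes the two graphs.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_refine(triangle), wl_refine(path))
```

Message-passing graph neural networks aggregate neighbor information in exactly this pattern, which is why their distinguishing power is bounded by 1-WL, one of the key connections the overview develops.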