In this paper we present a new dynamical-systems algorithm for clustering in hyperspectral images. The main idea of the algorithm is that data points are "pushed" in the direction of increasing density, and groups of pixels that end up in the same dense region belong to the same class. In essence, this is a numerical solution of the differential equation defined by the gradient of the density of data points on the data manifold. The number of classes is determined automatically, and the resulting clustering can be remarkably accurate. Beyond providing an accurate clustering, the algorithm also offers a new tool for understanding hyperspectral data in high dimensions. We evaluate the algorithm on the Urban scene (available at www.tec.army.mil/hypercube/), comparing its performance against the k-means algorithm with pre-identified classes of materials as ground truth.
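As a rough illustration of the density-gradient flow described above, the sketch below pushes points uphill on a Gaussian kernel-density estimate with damped mean-shift steps and merges points that converge to the same mode. The bandwidth, step size, and merge tolerance are illustrative assumptions, not the authors' settings.

```python
# A minimal sketch of density-gradient clustering: points follow Euler steps
# uphill on a Gaussian KDE, and points reaching the same mode share a label.
import numpy as np

def density_gradient_clustering(X, bandwidth=0.5, step=0.1, n_steps=200, merge_tol=0.5):
    Z = X.astype(float).copy()
    for _ in range(n_steps):
        diff = X[None, :, :] - Z[:, None, :]                        # (n, n, d)
        w = np.exp(-np.sum(diff**2, axis=2) / (2 * bandwidth**2))   # Gaussian weights
        grad = np.einsum('ij,ijk->ik', w, diff)                     # unnormalized density gradient
        Z += step * grad / (w.sum(axis=1, keepdims=True) + 1e-12)   # damped mean-shift step
    # Greedily merge converged points that landed near the same mode.
    labels, modes = -np.ones(len(Z), dtype=int), []
    for i, z in enumerate(Z):
        for k, m in enumerate(modes):
            if np.linalg.norm(z - m) < merge_tol:
                labels[i] = k
                break
        else:
            modes.append(z)
            labels[i] = len(modes) - 1
    return labels  # the number of clusters emerges from the data
```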
We review clustering as an analysis tool and the underlying concepts from an introductory perspective. What is clustering and how can clusterings be realised programmatically? How can data be represented and prepared for a clustering task? And how can clustering results be validated? Connectivity-based versus prototype-based approaches are reflected in the context of several popular methods: single-linkage, spectral embedding, k-means, and Gaussian mixtures are discussed as well as the density-based protocols (H)DBSCAN, Jarvis-Patrick, CommonNN, and density-peaks.
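As a minimal illustration of the connectivity/density-based versus prototype-based contrast reviewed above (assuming scikit-learn is available): k-means imposes convex clusters, while DBSCAN recovers the two non-convex half-moons.

```python
# Prototype-based (k-means) vs. density-based (DBSCAN) on the two-moons data.
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, DBSCAN

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)  # -1 marks noise points
print("k-means clusters:", set(km_labels), "DBSCAN clusters:", set(db_labels))
```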
Clustering is an unsupervised machine-learning method in which unlabeled elements/objects are grouped together, with the aim of constructing well-formed clusters that classify their elements according to similarity. The goal of this process is to provide researchers with a useful aid for identifying patterns in their data. When dealing with large databases, such patterns may not be easily detectable without the contribution of a clustering algorithm. This paper gives an in-depth description of the most widely used clustering methods, accompanied by useful guidance on suitable parameter selection and initialization. At the same time, the paper is not merely a review highlighting the main elements of the examined clustering techniques: it also compares their clustering efficiency on three datasets, revealing their weaknesses and strengths in terms of accuracy and complexity on both discrete and continuous observations. The results allow us to draw valuable conclusions about the suitability of the examined clustering techniques as a function of dataset size.
There has recently been intense activity around the embedding of very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. The first part covers nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods and diffusion maps, kernel-based methods, and random projections. The second part is concerned with topological embedding methods, in particular the mapping of topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set experiencing huge growth is very high-dimensional network data. The task considered in the third part is how to embed such data into a vector space of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine-learning approaches and statistical modeling (so-called stochastic block modeling) is sharpest; we discuss the pros and cons of both approaches. The final part of the survey deals with embedding into $\mathbb{R}^2$, that is, visualization. Three methods are presented, based on the material of the first, second, and third parts respectively: $t$-SNE, UMAP, and LargeVis. They are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy Ranunculoid curves, the other of networks of increasing complexity generated with stochastic block models and two types of nodes.
Invoking the manifold hypothesis in machine learning requires knowledge of the manifold's geometry and dimension, and theory dictates how many samples are needed. In application data, however, sampling may not be uniform, the manifold's properties are unknown, and it is (possibly) non-pure; this implies that neighborhoods must adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally conservative neighborhood (Gabriel) graph, we sparsify it iteratively according to a weighted counterpart. At each step, a linear program yields minimal neighborhoods globally, and a volumetric statistic reveals neighbor outliers likely to violate manifold geometry. We apply adaptive neighborhoods to nonlinear dimensionality reduction, geodesic computation, and dimension estimation. Comparisons against standard algorithms, e.g., those based on k-nearest neighbors, demonstrate their usefulness.
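For concreteness, the sketch below builds the Gabriel graph used above as the starting neighborhood graph; the iterative LP-based sparsification from the paper is not reproduced. An edge (i, j) is kept only if no third point lies inside the ball whose diameter is the segment from i to j, equivalently $d(i,k)^2 + d(j,k)^2 \ge d(i,j)^2$ for all k.

```python
# Brute-force O(n^3) Gabriel graph in plain NumPy; fine for small point sets.
import numpy as np

def gabriel_graph(X):
    n = len(X)
    D2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)  # squared distances
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # Point k blocks edge (i, j) if it lies strictly inside the
            # diametral ball of i and j.
            blocked = np.any(np.delete(D2[i] + D2[j], [i, j]) < D2[i, j])
            if not blocked:
                edges.append((i, j))
    return edges
```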
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
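A compact sketch of the standard normalized spectral clustering recipe such tutorials derive (the Ng-Jordan-Weiss variant): Gaussian affinities, symmetric normalized Laplacian, bottom-k eigenvectors, row normalization, then k-means. The values of sigma and k here are illustrative choices.

```python
# Normalized spectral clustering from scratch with NumPy/SciPy/scikit-learn.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(X, k=2, sigma=1.0):
    D2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
    W = np.exp(-D2 / (2 * sigma**2))                       # fully connected similarity graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L_sym = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))   # I - D^{-1/2} W D^{-1/2}
    _, U = eigh(L_sym, subset_by_index=[0, k - 1])         # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12  # row normalization
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
```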
We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
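A minimal usage sketch (assuming scikit-learn's implementation) of t-SNE as described above: embed the 64-dimensional digits data into a 2-D map. Perplexity is the main tuning knob; the value here is illustrative.

```python
# Embed the digits dataset into two dimensions with t-SNE.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)  # (1797, 2): one 2-D map location per datapoint
```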
In this paper, we propose an unsupervised method for hyperspectral remote sensing image segmentation. The method exploits the mean-shift clustering algorithm, which takes as input a preliminary hyperspectral superpixel segmentation together with the spectral pixel information. The proposed method does not require the number of segmentation classes as an input parameter, nor does it exploit any a-priori knowledge about the type of land cover or land use to be segmented (e.g., water, vegetation, buildings, etc.). Experiments on the Salinas, SalinasA, Pavia Center and Pavia University datasets are carried out. Performance is measured in terms of normalized mutual information, adjusted Rand index and F1 score. The results demonstrate the effectiveness of the proposed method compared to the state of the art.
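As a stand-in for the paper's pipeline, the sketch below uses scikit-learn's MeanShift to cluster spectral features without specifying the number of classes; here each row would be a superpixel's mean spectrum. The feature matrix and quantile are placeholder assumptions.

```python
# Mean-shift clustering of (placeholder) superpixel spectra; the number of
# segments is discovered rather than given.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
features = rng.random((200, 10))               # placeholder: 200 superpixels x 10 bands
bw = estimate_bandwidth(features, quantile=0.2)
labels = MeanShift(bandwidth=bw).fit_predict(features)
print("number of segments found:", len(set(labels)))
```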
Methods based on nonlinear attraction-repulsion forces (including t-SNE, UMAP, ForceAtlas2, graphviz, and others) dominate modern approaches to dimensionality reduction. The purpose of this paper is to demonstrate that all such methods, by design, come with an additional feature that is computed automatically along the way, namely the vector field associated with these forces. We show how this vector field provides additional high-quality information and propose a general refinement strategy based on ideas from Morse theory. The efficiency of these ideas is illustrated specifically using t-SNE on synthetic and real-life data sets.
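To make the "free" vector field concrete, the sketch below (a simplified rendering, not the authors' code) evaluates the per-point gradient of the t-SNE objective at a given embedding Y, with the high-dimensional affinity matrix P assumed precomputed; this gradient is exactly the net attraction-repulsion force acting on each embedded point.

```python
# The t-SNE force field: gradient of KL(P||Q) at embedding Y, per point.
import numpy as np

def tsne_force_field(Y, P):
    D2 = np.sum((Y[:, None, :] - Y[None, :, :])**2, axis=2)
    inv = 1.0 / (1.0 + D2)                    # Student-t kernel
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()                       # low-dimensional affinities
    diff = Y[:, None, :] - Y[None, :, :]      # (n, n, dim)
    # Attraction where P > Q, repulsion where Q > P.
    return 4.0 * np.einsum('ij,ijk->ik', (P - Q) * inv, diff)
```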
Density peaks clustering has become a nova among clustering algorithms because of its simplicity and practicality. However, it has one major drawback: it is time-consuming due to its high computational complexity. Here, a density peaks clustering algorithm with sparse search and k-d tree is developed to address this problem. First, a sparse distance matrix is computed using a k-d tree, replacing the original full-rank distance matrix, so as to accelerate the computation of local density. Second, a sparse search strategy is proposed to accelerate the computation of the relative separation between each data point and its set of $k$ nearest neighbors. Furthermore, a second-order difference method on the decision values is adopted to determine the cluster centers adaptively. Finally, experiments are carried out on datasets with different distribution characteristics, in comparison with six other state-of-the-art clustering algorithms. It is shown that the algorithm effectively reduces the computational complexity of the original DPC from $O(n^2 k)$ to $O(n(n^{1-1/k}+k))$. Especially for larger datasets, the gain in efficiency is even more pronounced. Moreover, clustering accuracy is also improved to a certain extent. It can therefore be concluded that the overall performance of the newly proposed algorithm is excellent.
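The sketch below computes the two quantities at the heart of density peaks clustering, with the density query done through a k-d tree in the spirit of the paper: rho (local density within a cutoff radius) and delta (distance to the nearest point of higher density); gamma = rho * delta gives the decision values from which centers are picked. The cutoff radius is illustrative, and the paper's sparse-search acceleration of delta is not reproduced.

```python
# Density peaks statistics with a k-d tree for the density counts.
import numpy as np
from scipy.spatial import cKDTree

def density_peaks_stats(X, r_cut=0.5):
    tree = cKDTree(X)
    rho = np.array([len(tree.query_ball_point(x, r_cut)) for x in X])
    delta = np.full(len(X), np.inf)
    order = np.argsort(-rho)                   # indices in descending density
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]                  # all points denser than i
        delta[i] = np.min(np.linalg.norm(X[higher] - X[i], axis=1))
    delta[order[0]] = delta[np.isfinite(delta)].max()  # convention for the global peak
    return rho, delta, rho * delta             # decision values gamma = rho * delta
```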
Nonlinear reformulations of spectral clustering methods have recently gained attention due to their increased numerical benefits and solid mathematical background. We present a new direct multiway spectral clustering algorithm in the $p$-norm, for $p \in (1, 2]$. The problem of computing multiple eigenvectors of the graph $p$-Laplacian, a nonlinear generalization of the standard graph Laplacian, is recast as an unconstrained minimization problem on a Grassmann manifold. The value of $p$ is reduced in a pseudo-continuous manner, promoting sparser solution vectors that correspond to optimal graph cuts as $p$ approaches 1. Monitoring the monotonic decrease of the balanced graph cuts guarantees that we obtain the best available solution from the $p$-levels considered. We demonstrate the effectiveness and accuracy of our algorithm on various artificial test cases. Our numerical examples and comparative results with various state-of-the-art clustering methods indicate that the proposed method obtains high-quality clusters both in terms of balanced graph cut metrics and in terms of the accuracy of the label assignment. Furthermore, we conduct studies on the classification of facial images and handwritten characters to demonstrate the applicability to real-world data sets.
Most dimensionality reduction methods employ frequency-domain representations obtained from matrix diagonalization, and may not be efficient for large datasets with relatively high intrinsic dimension. To address this challenge, Correlated Clustering and Projection (CCP) offers a novel data-domain strategy that does not need to solve any matrix. CCP partitions high-dimensional features into correlated clusters and then projects the features in each cluster into a one-dimensional representation based on sample correlations. Residue-Similarity (R-S) scores and indexes, the shape of data in Riemannian manifolds, and algebraic-topology-based persistent Laplacians are introduced for visualization and analysis. The proposed methods are validated on benchmark datasets associated with various machine learning algorithms.
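A rough sketch of the CCP idea under stated assumptions follows: features are grouped by correlation, and each feature cluster is collapsed to one dimension. Note that the paper's specific correlation-based projection is replaced here by a per-cluster first principal component as an illustrative stand-in, and the partition count is a free parameter.

```python
# Feature clustering by correlation, then one 1-D projection per cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

def ccp_sketch(X, n_partitions=8):
    corr = np.corrcoef(X, rowvar=False)                   # feature-feature correlation
    dist = 1.0 - np.abs(corr)                             # strongly correlated => close
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
    groups = fcluster(Z, t=n_partitions, criterion='maxclust')
    cols = [PCA(n_components=1).fit_transform(X[:, groups == g])
            for g in np.unique(groups)]                   # stand-in for CCP's projection
    return np.hstack(cols)                                # (n_samples, <= n_partitions)
```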
Regionalization is the act of breaking a dataset into contiguous homogeneous regions that are heterogeneous with respect to each other. Many different algorithms exist for performing regionalization; however, applying them to large real-world datasets has only become feasible in recent years with increased computational power. Comparisons of different regionalization methods do exist, but there is a lack of research analyzing memory use, scalability, geographic metrics, and large-scale real-world applications. This study compares state-of-the-art regionalization methods, namely agglomerative clustering, SKATER, REDCAP, AZP, and max-p-regions, using real-world social determinants of health (SDOH) data. With real-world SDOH data at scales of up to one million data points, this study not only compares the algorithms on different datasets but also provides a stress test for each regionalization algorithm, most of which have never been run at such scales before. We use several novel geographic metrics to compare the algorithms and perform a comparative memory analysis. The prevailing regionalization methods are then compared with unconstrained k-means clustering on their ability to separate real health data in Virginia and Washington, D.C.
Both clustering and outlier detection play an important role for meteorological measurements. We present the AWT algorithm, a clustering algorithm for time series data that also performs implicit outlier detection during the clustering. AWT integrates ideas of several well-known K-Means clustering algorithms. It chooses the number of clusters automatically based on a user-defined threshold parameter, and it can be used for heterogeneous meteorological input data as well as for data sets that exceed the available memory size. We apply AWT to crowd-sourced 2-m temperature data with an hourly resolution from the city of Vienna to detect outliers and to investigate if the final clusters show general similarities and similarities with urban land-use characteristics. It is shown that both outlier detection and the implicit mapping to land-use characteristics are possible with AWT, which opens new fields of application, specifically in the rapidly evolving field of urban climate and urban weather.
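As a toy sketch of the threshold mechanism described above (in the spirit of sequential/leader clustering, not the authors' exact AWT): a point farther than `threshold` from every existing centroid opens a new cluster, so the number of clusters is chosen by the data and a single parameter.

```python
# Threshold-driven sequential k-means-style clustering (single pass).
import numpy as np

def threshold_kmeans(X, threshold):
    centroids, counts, labels = [], [], []
    for x in X:
        if centroids:
            d = np.linalg.norm(np.asarray(centroids) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= threshold:
                counts[j] += 1
                centroids[j] += (x - centroids[j]) / counts[j]  # running mean update
                labels.append(j)
                continue
        centroids.append(x.astype(float).copy())                # open a new cluster
        counts.append(1)
        labels.append(len(centroids) - 1)
    return np.asarray(centroids), np.asarray(labels)
```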
Topology-based dimensionality reduction methods such as t-SNE and UMAP have seen great success and popularity for high-dimensional data. These methods have strong mathematical foundations and are based on the intuition that the topology in low dimensions should be close to that in high dimensions. Given that the initial topological structure is a precursor to the success of the algorithm, this naturally raises the question: what makes a "good" topological structure for dimensionality reduction? Insight into this would enable us to design better algorithms that take into account both local and global structure. In this paper, focusing on UMAP, we study the effects of node connectivity (k-nearest neighbors vs. mutual k-nearest neighbors) and relative neighborhood (adjacencies via path neighbors) on dimensionality reduction. We explore these concepts through extensive ablation studies on four standard image and text datasets (MNIST, FMNIST, 20NG, AG), reducing to 2 and 64 dimensions. Our findings indicate that a flexible method of connecting local neighborhoods (path neighbors) can refine the stricter notion of connectivity (mutual k-nearest neighbors) and can achieve better representations than default UMAP, as measured by downstream clustering performance.
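A small sketch of the mutual k-nearest-neighbor graph discussed above, built with scikit-learn: an edge (i, j) survives only if i is among j's k nearest neighbors and j is among i's. The path-neighbor refinement from the paper is not reproduced here.

```python
# Mutual kNN graph as a sparse matrix: keep only symmetric (mutual) edges.
from sklearn.neighbors import kneighbors_graph

def mutual_knn_graph(X, k=10):
    A = kneighbors_graph(X, n_neighbors=k, mode='connectivity')
    return A.multiply(A.T)   # elementwise AND of the 0/1 adjacency and its transpose
```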
The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.
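A didactic sketch of the LMNN objective follows; it replaces the paper's semidefinite program with plain gradient descent on the linear transform L, so it is an approximation for illustration only. The fixed target-neighbor choice, the impostor set (all other-class points), the learning rate, and the epoch count are all simplifying assumptions.

```python
# Gradient-descent sketch of the LMNN loss: pull k same-class target
# neighbors close, push differently-labeled impostors outside a unit margin.
import numpy as np

def lmnn_sketch(X, y, k=3, c=1.0, lr=1e-3, epochs=100):
    n, d = X.shape
    L = np.eye(d)
    # Fixed target neighbors: k nearest same-class points in the input metric.
    targets = []
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        dists = np.linalg.norm(X[same] - X[i], axis=1)
        targets.append(same[np.argsort(dists)[:k]])
    for _ in range(epochs):
        G = np.zeros((d, d))
        Z = X @ L.T                                       # transformed points
        for i in range(n):
            for j in targets[i]:
                dij = Z[i] - Z[j]
                G += np.outer(dij, X[i] - X[j])           # pull-term gradient
                for l in np.where(y != y[i])[0]:          # impostor candidates
                    dil = Z[i] - Z[l]
                    if 1 + dij @ dij - dil @ dil > 0:     # hinge margin violated
                        G += c * (np.outer(dij, X[i] - X[j])
                                  - np.outer(dil, X[i] - X[l]))
        L -= lr * 2 * G
    return L  # the learned Mahalanobis metric is M = L.T @ L
```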
We present a novel clustering algorithm, visClust, that is based on lower dimensional data representations and visual interpretation. Thereto, we design a transformation that allows the data to be represented by a binary integer array enabling the further use of image processing methods to select a partition. Qualitative and quantitative analyses show that the algorithm obtains high accuracy (measured with an adjusted one-sided Rand-Index) and requires low runtime and RAM. We compare the results to 6 state-of-the-art algorithms, confirming the quality of visClust by outperforming in most experiments. Moreover, the algorithm asks for just one obligatory input parameter while allowing optimization via optional parameters. The code is made available on GitHub.
Experimental sciences have come to depend heavily on our ability to organize, interpret and analyze high-dimensional datasets produced from observations of a large number of variables governed by natural processes. Natural laws, conservation principles, and dynamical structure introduce intricate inter-dependencies among these observed variables, which in turn yield geometric structure, with fewer degrees of freedom, on the dataset. We show how fine-scale features of this structure in data can be extracted from \emph{discrete} approximations to quantum mechanical processes given by data-driven graph Laplacians and localized wavepackets. This data-driven quantization procedure leads to a novel, yet natural uncertainty principle for data analysis induced by limited data. We illustrate the new approach with algorithms and several applications to real-world data, including the learning of patterns and anomalies in social distancing and mobility behavior during the COVID-19 pandemic.
Anomaly and outlier detection is a long-standing problem in machine learning. In some cases, anomaly detection is easy, such as when data are drawn from well-characterized distributions such as the Gaussian. However, when data occupy high-dimensional spaces, anomaly detection becomes more difficult. We present CLAM (Clustered Learning of Approximate Manifolds), a manifold-mapping technique in any metric space. CLAM begins with a fast hierarchical clustering technique and then induces a graph from the cluster tree, based on overlapping clusters selected using several geometric and topological features. Using these graphs, we implement CHAODA (Clustered Hierarchical Anomaly and Outlier Detection Algorithms), exploring various properties of the graphs and their constituent clusters to find outliers. CHAODA employs a form of transfer learning based on a training set of datasets, and applies that knowledge to a separate test set of datasets of different cardinalities, dimensionalities, and domains. On 24 publicly available datasets, we compare CHAODA (by measure of ROC AUC) to a variety of state-of-the-art unsupervised anomaly-detection algorithms. Six datasets are used for training. CHAODA outperforms the other approaches on 16 of the remaining 18 datasets. CLAM and CHAODA scale to large, high-dimensional "big data" anomaly-detection problems, and generalize across datasets and distance functions. The source code for CLAM and CHAODA is freely available on GitHub at https://github.com/uri-abd/clam.
Standard agglomerative clustering establishes a new reliable linkage at every step. However, in order to provide adaptive, density-consistent and flexible solutions, we study extracting all the reliable linkages at each step, instead of only the smallest one. Such a strategy can be applied with all common criteria for agglomerative hierarchical clustering. We also show that this strategy with the single-linkage criterion yields a minimum spanning tree algorithm. We perform experiments on several real-world datasets to demonstrate the performance of this strategy compared to the standard alternative.
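A small illustration (using SciPy; an assumption-laden sketch rather than the authors' method) of the single-linkage/minimum-spanning-tree connection noted above: cutting the MST's heaviest edges reproduces single-linkage clusters.

```python
# Single-linkage clusters via the minimum spanning tree of the distance graph.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_single_linkage(X, n_clusters=2):
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    # Remove the (n_clusters - 1) heaviest MST edges, then read off components.
    cut = np.sort(mst[mst > 0])[-(n_clusters - 1):].min() if n_clusters > 1 else np.inf
    mst[mst >= cut] = 0
    return connected_components(mst, directed=False)[1]  # cluster labels
```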