It is crucial to evaluate the quality and determine the optimal number of clusters in cluster analysis. In this paper, a multi-granularity characterization of the data set is carried out to obtain hyper-balls. A cluster internal evaluation index based on hyper-balls (HCVI) is defined. Moreover, a general method for determining the optimal number of clusters based on HCVI is proposed. The proposed methods can evaluate the clustering results produced by several classic methods and determine the optimal cluster number for data sets containing noise and clusters of arbitrary shape. The experimental results on synthetic and real data sets indicate that the new index outperforms existing ones.
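A minimal sketch of the selection procedure the abstract describes: fit candidate cluster numbers and keep the one that maximizes an internal validity index. HCVI itself is not defined in the abstract, so the widely used silhouette score stands in for it here; only the selection loop mirrors the paper.

```python
# Choose k by maximizing an internal validity index (silhouette as a stand-in
# for HCVI, whose definition is not reproduced in this abstract).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

scores = {}
for k in range(2, 10):                       # candidate cluster numbers
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # placeholder internal index
best_k = max(scores, key=scores.get)         # the index is maximized here
print(best_k, scores[best_k])
```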
Fine-grained sketch-based image retrieval (FG-SBIR) addresses the problem of retrieving a particular photo from a given query sketch. However, its widespread applicability is limited by the fact that drawing a complete sketch is difficult for most people, and the drawing process often takes time. In this study, we aim to retrieve the target photo with the fewest possible strokes (an incomplete sketch), termed on-the-fly FG-SBIR (Bhunia et al. 2020), which begins retrieval at every stroke as soon as drawing starts. We argue that there is significant correlation among the incomplete sketches in the sketch-drawing episode of each photo. To learn a more effective joint embedding space shared between a photo and its incomplete sketches, we propose a multi-granularity association learning framework that further optimizes the embedding space of all incomplete sketches. Specifically, based on the completeness of a sketch, we divide a complete sketch episode into several stages, each of which corresponds to a simple linear mapping layer. Moreover, our framework guides the vector-space representation of the current sketch to approximate that of its later sketches, so that the retrieval performance of a sketch with fewer strokes approaches that of a sketch with more strokes. In experiments, we pose a more realistic challenge, and our method achieves superior early retrieval efficiency over state-of-the-art methods and alternative baselines on two publicly available fine-grained sketch retrieval datasets.
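A hedged PyTorch sketch of the two mechanisms named in the abstract: one simple linear mapping per sketch-completeness stage, and a loss pulling the embedding of a less complete sketch toward that of a more complete one from the same drawing episode. The dimensions, stage count, and the use of an MSE objective are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StagewiseMapper(nn.Module):
    def __init__(self, dim=256, n_stages=4):
        super().__init__()
        # one simple linear mapping layer per completeness stage
        self.stages = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_stages)])

    def forward(self, feat, stage):
        return F.normalize(self.stages[stage](feat), dim=-1)

mapper = StagewiseMapper()
early = torch.randn(8, 256)   # backbone features of an incomplete sketch
late = torch.randn(8, 256)    # features of the same episode with more strokes

z_early = mapper(early, stage=0)
z_late = mapper(late, stage=3).detach()   # the later stage acts as the target
assoc_loss = F.mse_loss(z_early, z_late)  # pull the early embedding toward it
assoc_loss.backward()
```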
Granular-ball computing is an efficient, robust and scalable learning method in granular computing. The basis of granular-ball computing is the granular-ball generation method. This paper proposes a method that accelerates granular-ball generation by using division to replace $k$-means. It can significantly improve the efficiency of granular-ball generation while ensuring accuracy similar to that of the existing method. In addition, considering the elimination of overlaps between granular balls and some other factors, a new adaptive method for granular-ball generation is proposed. This makes the granular-ball generation process parameter-free and fully adaptive in the true sense. Furthermore, this paper presents, for the first time, a mathematical model for granular-ball covering. Experimental results on several real data sets show that the two proposed granular-ball generation methods achieve accuracy similar to the existing method while realizing adaptiveness or acceleration.
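A toy numpy sketch of granular-ball generation by recursive splitting: a ball whose label purity is too low is divided into child balls until every ball is small or pure enough. The split rule used here (assigning points to the nearer of the two largest classes' centroids) is a simplified stand-in, not the paper's exact division strategy or its adaptive overlap handling.

```python
import numpy as np

def purity(yb):
    _, counts = np.unique(yb, return_counts=True)
    return counts.max() / len(yb)

def as_ball(Xb, yb):
    center = Xb.mean(axis=0)
    radius = np.linalg.norm(Xb - center, axis=1).mean()
    return center, radius, yb

def split(Xb, yb):
    labels, counts = np.unique(yb, return_counts=True)
    a, b = labels[np.argsort(counts)[-2:]]             # two largest classes
    ca, cb = Xb[yb == a].mean(axis=0), Xb[yb == b].mean(axis=0)
    to_a = np.linalg.norm(Xb - ca, axis=1) <= np.linalg.norm(Xb - cb, axis=1)
    return [(Xb[to_a], yb[to_a]), (Xb[~to_a], yb[~to_a])]

def generate_balls(X, y, min_purity=0.95, min_size=4):
    queue, balls = [(X, y)], []
    while queue:
        Xb, yb = queue.pop()
        if len(yb) <= min_size or purity(yb) >= min_purity:
            balls.append(as_ball(Xb, yb))
            continue
        children = split(Xb, yb)
        if any(len(cy) == 0 for _, cy in children):     # degenerate split: stop
            balls.append(as_ball(Xb, yb))
        else:
            queue.extend(children)
    return balls   # each ball: (center, mean radius, member labels)
```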
The Pawlak rough set and the neighborhood rough set are the two most common rough set theory models. The Pawlak rough set can use equivalence classes to represent knowledge but cannot process continuous data; the neighborhood rough set can process continuous data but loses the ability to represent knowledge with equivalence classes. To this end, this paper presents granular-ball rough sets based on granular-ball computing. Granular-ball rough sets can simultaneously represent both Pawlak rough sets and neighborhood rough sets, achieving a unified representation of the two. As a result, granular-ball rough sets can not only process continuous data but also use equivalence classes for knowledge representation. In addition, we propose an implementation algorithm for granular-ball rough sets. Experiments on benchmark data sets demonstrate that, owing to the combination of the robustness and adaptability of granular-ball computing, the learning accuracy of granular-ball rough sets is greatly improved compared with the Pawlak rough set and the traditional neighborhood rough set. Granular-ball rough sets also outperform nine popular or state-of-the-art feature selection methods.
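A hedged sketch of how lower and upper approximations look when granular balls play the role of granules: a ball is in the lower approximation of a decision class if all of its members carry that label, and in the upper approximation if any member does. This follows the classical rough-set scheme, reusing the ball format from the generation sketch above; the paper's exact construction may differ in details.

```python
import numpy as np

def approximations(balls, target_label):
    """balls: list of (center, radius, member_labels) as produced above."""
    lower, upper = [], []
    for i, (_, _, labels) in enumerate(balls):
        if np.all(labels == target_label):
            lower.append(i)    # pure ball: certainly inside the class
        if np.any(labels == target_label):
            upper.append(i)    # mixed ball: possibly inside the class
    return lower, upper        # boundary region = upper minus lower
```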
This paper proposes a robust rough-set-based data mining method that can simultaneously achieve feature selection, classification, and knowledge representation. The rough set has good interpretability and is a popular feature selection method, but its low efficiency and low accuracy are the main shortcomings limiting its applicability. In this paper, with respect to accuracy, we first identify the ineffectiveness of the rough set caused by overfitting, especially when processing noisy attributes, and propose a robust measure for attributes, called relative importance. We further propose the concept of a "rough concept tree" for knowledge representation and classification. Experimental results on public benchmark data sets show that the proposed framework achieves higher accuracy than seven popular or state-of-the-art feature selection methods.
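For background, a small sketch of the classical rough-set dependency degree, which is the usual starting point for attribute-importance measures. The paper's "relative importance" is proposed as a robust alternative whose exact formula is not given in this abstract, so it is not reproduced here.

```python
import numpy as np

def dependency_degree(X, y, attrs):
    """Classical gamma_B(D) = |POS_B(D)| / |U| for categorical attributes B = attrs."""
    keys = [tuple(row) for row in X[:, attrs]]
    consistent = 0
    for key in set(keys):
        idx = [i for i, k in enumerate(keys) if k == key]   # equivalence class
        if len(set(y[i] for i in idx)) == 1:                # single decision value
            consistent += len(idx)                          # in the positive region
    return consistent / len(y)

# One classical significance notion (not the paper's relative importance):
# sigma(a) = dependency_degree(X, y, all_attrs) - dependency_degree(X, y, all_but_a)
```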
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
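A hedged sketch of the grounding idea only: align embeddings of object nouns parsed from a caption with mask-query embeddings from a Mask-Transformer-style segmenter via a contrastive loss. The temperature, dimensions, and one-to-one noun/query pairing are illustrative assumptions, not the CGG specifics, and the caption generation head is omitted.

```python
import torch
import torch.nn.functional as F

def grounding_loss(noun_emb, query_emb, tau=0.07):
    """noun_emb: (N, D) text features of caption nouns;
    query_emb: (N, D) mask-query features matched to those nouns."""
    noun_emb = F.normalize(noun_emb, dim=-1)
    query_emb = F.normalize(query_emb, dim=-1)
    logits = noun_emb @ query_emb.t() / tau    # (N, N) similarity matrix
    targets = torch.arange(noun_emb.size(0))   # matched pairs sit on the diagonal
    # symmetric InfoNCE: nouns -> queries and queries -> nouns
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = grounding_loss(torch.randn(6, 256), torch.randn(6, 256))
```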
Recent studies have shown that using an external Language Model (LM) benefits end-to-end Automatic Speech Recognition (ASR). However, predicting tokens that appear less frequently in the training set is still quite challenging. Long-tail prediction problems have been widely studied in many applications, but have only been addressed by a few studies for ASR and LMs. In this paper, we propose a new memory-augmented, lookup-dictionary-based Transformer architecture for the LM. The newly introduced lookup dictionary incorporates rich contextual information from the training set, which is vital to correctly predicting long-tail tokens. Through extensive experiments on Chinese and English data sets, our proposed method is shown to outperform the baseline Transformer LM by a large margin on both word/character error rate and tail-token error rate. This is achieved without impact on decoding efficiency. Overall, we demonstrate the effectiveness of our proposed method in boosting ASR decoding performance, especially for long-tail tokens.
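A hedged sketch of augmenting an LM with a lookup memory: store hidden states of training contexts together with their next tokens, retrieve the nearest entries at test time, and interpolate the retrieved distribution with the LM's own softmax. This kNN-style scheme only illustrates the idea; the paper's dictionary construction and how it is fused with the Transformer may differ.

```python
import torch
import torch.nn.functional as F

class LookupMemory:
    def __init__(self, keys, values, vocab_size):
        # keys: (M, D) stored context states; values: (M,) next-token ids
        self.keys, self.values, self.vocab_size = keys, values, vocab_size

    def retrieve(self, query, k=8, tau=1.0):
        dist = torch.cdist(query.unsqueeze(0), self.keys).squeeze(0)  # (M,)
        top = torch.topk(-dist, k)                  # k closest stored contexts
        weights = F.softmax(top.values / tau, dim=0)
        probs = torch.zeros(self.vocab_size)
        probs.index_add_(0, self.values[top.indices], weights)
        return probs                                # distribution over tokens

def fused_probs(lm_logits, memory, hidden, lam=0.3):
    return (1 - lam) * F.softmax(lm_logits, dim=-1) + lam * memory.retrieve(hidden)

# usage with random stand-ins: 1000 stored contexts, vocabulary of 5000 tokens
mem = LookupMemory(torch.randn(1000, 256), torch.randint(0, 5000, (1000,)), 5000)
p = fused_probs(torch.randn(5000), mem, torch.randn(256))
```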
Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress in generalization through synthetic forgery data augmentation. In this work, we explore another path to improving generalization. Our goal is to reduce the features that are easy to learn in the training phase, so as to reduce the risk of overfitting to specific forgery types. Specifically, in our method, a teacher network takes face images as input and generates an attention map over the deep features with a diverse multi-head attention ViT. The attention map is used to guide a student network to focus on low-attended features by suppressing the highly attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method achieves promising performance on unseen forgeries and highly compressed data.
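A hedged sketch of the two ideas in the abstract: use a teacher's attention map to down-weight the most-attended deep features before they reach the student, and mix two samples' deep features to synthesize forgeries in the feature domain. The quantile thresholding and the Beta-weighted mixup form are illustrative assumptions.

```python
import torch

def suppress_high_attention(features, attention, quantile=0.8):
    """features: (B, N, D) token features; attention: (B, N) teacher attention."""
    thresh = torch.quantile(attention, quantile, dim=1, keepdim=True)
    keep = (attention < thresh).float().unsqueeze(-1)   # drop the top-attended tokens
    return features * keep                              # the student sees the rest

def feature_mixup(feat_a, feat_b, alpha=0.5):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * feat_a + (1 - lam) * feat_b            # feature-domain forgery

masked = suppress_high_attention(torch.randn(2, 196, 768), torch.rand(2, 196))
```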
This paper presents a novel framework for planning in unknown and occluded urban spaces. We specifically focus on turns and intersections where occlusions significantly impact navigability. Our approach uses an inpainting model to fill in a sparse, occluded, semantic lidar point cloud and plans dynamically feasible paths for a vehicle to traverse through the open and inpainted spaces. We demonstrate our approach on a car's lidar data with real-time occlusions, and show that by inpainting occluded areas we can plan longer paths with more turn options than without inpainting; in addition, our approach follows paths derived from a planner with no occlusions (the ground truth) more closely than other state-of-the-art approaches.
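A toy sketch of the pipeline shape only: mark occluded cells in a bird's-eye occupancy grid, "inpaint" them, then plan over the resulting free space. The paper uses a learned inpainting model on semantic lidar and a dynamically feasible vehicle planner; both are replaced here by trivial stand-ins (treat unknown cells as free, 4-connected BFS).

```python
from collections import deque
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def inpaint(grid):
    filled = grid.copy()
    filled[filled == UNKNOWN] = FREE   # placeholder for the learned inpainting model
    return filled

def bfs_path(grid, start, goal):
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == FREE and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None

grid = np.full((20, 20), FREE)
grid[5:15, 8:10] = UNKNOWN                      # an occluded band, e.g. behind a corner
path = bfs_path(inpaint(grid), start=(0, 0), goal=(19, 19))
```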
In this work, we investigate improving the generalizability of GAN-generated image detectors by performing data augmentation in the fingerprint domain. Specifically, we first separate the fingerprints and contents of the GAN-generated images using an autoencoder based GAN fingerprint extractor, followed by random perturbations of the fingerprints. Then the original fingerprints are substituted with the perturbed fingerprints and added to the original contents, to produce images that are visually invariant but with distinct fingerprints. The perturbed images can successfully imitate images generated by different GANs to improve the generalization of the detectors, which is demonstrated by the spectra visualization. To our knowledge, we are the first to conduct data augmentation in the fingerprint domain. Our work explores a novel prospect that is distinct from previous works on spatial and frequency domain augmentation. Extensive cross-GAN experiments demonstrate the effectiveness of our method compared to the state-of-the-art methods in detecting fake images generated by unknown GANs.
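A hedged sketch of fingerprint-domain augmentation: decompose a GAN-generated image into content (an autoencoder reconstruction) and fingerprint (the residual), perturb only the fingerprint, then recombine. Any trained autoencoder can play the role of `autoencoder` here, and the Gaussian scaling-plus-noise perturbation is an illustrative assumption rather than the paper's exact scheme.

```python
import torch

def augment_fingerprint(images, autoencoder, noise_std=0.02):
    """images: (B, C, H, W) in [0, 1]; autoencoder: any reconstruction model."""
    with torch.no_grad():
        content = autoencoder(images)             # content estimate
    fingerprint = images - content                # residual as the fingerprint estimate
    scale = 1.0 + noise_std * torch.randn(images.size(0), 1, 1, 1)
    perturbed = scale * fingerprint + noise_std * torch.randn_like(fingerprint)
    return (content + perturbed).clamp(0.0, 1.0)  # visually similar, new fingerprint
```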