Classifiers have been widely implemented in practice, while how to evaluate them properly remains a problem. Two commonly used types of metrics, based respectively on the confusion matrix and the loss function, have different advantages in flexibility and mathematical completeness, yet they struggle with different dilemmas, such as insensitivity to slight improvements or lack of customizability across tasks. In this paper, we propose a novel metric named Meta Pattern Concern Score, based on an abstract representation of probabilistic predictions, with targeted designs for handling negative classes in multi-classification and for reducing the discreteness of the metric value, so as to achieve the advantages of both kinds of metrics while avoiding their weaknesses. Our metric provides customizability to pick out models for specific requirements in different practices, and ensures that the selected model also performs well under traditional metrics. Evaluations on four kinds of models and six datasets demonstrate the effectiveness and efficiency of our metric, and a case study shows it can select a model that reduces dangerous misclassifications by 0.53% at the cost of only 0.04% of training accuracy.
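The abstract does not spell out the metric's formula; a minimal sketch of the general idea — scoring full probability vectors with a user-supplied penalty matrix for specific confusions, so the score stays continuous and customizable — might look as follows (the function and weighting scheme are illustrative assumptions, not the authors' definition):

```python
import numpy as np

def concern_score(probs, labels, concern):
    """Illustrative customizable metric over probabilistic predictions.

    probs:   (n_samples, n_classes) softmax outputs
    labels:  (n_samples,) true class indices
    concern: (n_classes, n_classes) matrix; concern[i, j] is the cost of
             probability mass placed on class j when the true class is i
             (the diagonal is 0, so correct mass is never penalized).
    """
    penalty = 0.0
    for p, y in zip(probs, labels):
        # Weight the probability assigned to each negative class by how
        # much we "care" about that particular confusion; using the raw
        # probabilities (not the argmax) keeps the metric continuous.
        penalty += np.dot(concern[y], p)
    return 1.0 - penalty / len(labels)

# Toy usage: confusing class 0 with class 2 is considered dangerous.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
concern = np.array([[0.0, 0.5, 1.0],
                    [0.5, 0.0, 0.5],
                    [1.0, 0.5, 0.0]])
print(concern_score(probs, labels, concern))
```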
Deep neural network (DNN) classifiers are vulnerable to adversarial attacks. Although existing gradient-based attacks achieve strong performance on feed-forward models and image recognition tasks, their extension to time series classification with recurrent neural networks (RNN) remains a dilemma, because the cyclical structure of the RNN prevents direct model differentiation, and the visual sensitivity of time series data to perturbations challenges the traditional local optimization objective of minimizing perturbation. In this paper, we propose an efficient and widely applicable method, named TSFool, for crafting high-quality adversarial time series against RNN classifiers. We propose a novel global optimization objective named Camouflage Coefficient to account for how well adversarial samples are concealed within the class clusters, and accordingly redefine high-quality adversarial attack as a multi-objective optimization problem. We also propose a new idea of using an intervalized weighted finite automaton (IWFA) to capture deeply embedded vulnerable samples whose features deviate from the latent manifold, to guide the approximation of the optimization solution. Experiments on 22 UCR datasets confirm that TSFool is a widely effective, efficient and high-quality method, with 93.22% less local perturbation, 32.33% better global camouflage, and a 1.12× speedup over existing methods.
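The Camouflage Coefficient itself is only named here; one plausible toy reading — measuring how well an adversarial sample hides inside the target-class cluster relative to its original class — could be sketched like this (the distance-ratio formulation is an assumption for illustration):

```python
import numpy as np

def camouflage_coefficient(x_adv, X_orig_class, X_target_class):
    """Illustrative concealment measure for an adversarial sample.

    Compares how close x_adv sits to the target-class cluster versus
    its original class cluster: lower values mean the sample blends
    into the (wrong) target class and is harder to flag as anomalous.
    """
    d_target = np.linalg.norm(x_adv - X_target_class.mean(axis=0))
    d_orig = np.linalg.norm(x_adv - X_orig_class.mean(axis=0))
    return d_target / (d_target + d_orig)

rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, size=(50, 8))   # original class samples
X_b = rng.normal(3.0, 1.0, size=(50, 8))   # target class samples
x_adv = X_b.mean(axis=0) + 0.1             # a sample deep in the target cluster
print(camouflage_coefficient(x_adv, X_a, X_b))  # close to 0: well camouflaged
```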
Multi-view learning based on the Information Bottleneck (IB) provides an information theoretic principle for seeking the shared information contained in heterogeneous data descriptions. However, its success generally depends on estimating multivariate mutual information, which is intractable when the network becomes complicated. Moreover, the representation trade-offs, i.e., the prediction-compression trade-off and the sufficiency-consistency trade-off, make it hard for the IB to satisfy both requirements simultaneously. In this paper, we design several variational information bottlenecks to exploit two key characteristics (i.e., sufficiency and consistency) for multi-view representation learning. Specifically, we propose a Multi-View Variational Distillation (MV$^2$D) strategy that provides a scalable, flexible and analytical solution to fitting mutual information (MI) given arbitrary view inputs, but without explicitly estimating it. Under rigorous theoretical guarantees, our approach enables the IB to grasp the intrinsic correlation between observations and semantic labels, naturally producing predictive and compact representations. Also, our information-theoretic constraint can effectively neutralize the sensitivity to heterogeneous data by eliminating both task-irrelevant and view-specific information, preventing both trade-offs in multi-view cases. To verify our theoretically grounded strategies, we apply our approach to various benchmarks under three different applications. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our approach against state-of-the-art methods.
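MV$^2$D's objective is derived variationally and not given in the abstract; as a rough stand-in, a fused-teacher distillation loss conveys the idea of fitting MI without estimating it — every single-view predictor is pulled toward the fused multi-view predictor (the names and loss composition here are illustrative assumptions, not the paper's bound):

```python
import torch
import torch.nn.functional as F

def mv_distillation_loss(view_logits_list, fused_logits, labels, T=2.0):
    """Illustrative multi-view distillation objective: the fused
    multi-view predictor acts as teacher for every single view, pushing
    view-specific and task-irrelevant information out of each view's
    representation. Sufficiency comes from the CE term on the fused
    prediction; consistency from the per-view KL terms."""
    loss = F.cross_entropy(fused_logits, labels)
    for v in view_logits_list:
        loss = loss + F.kl_div(
            F.log_softmax(v / T, dim=-1),
            F.softmax(fused_logits.detach() / T, dim=-1),
            reduction="batchmean",
        ) * T ** 2
    return loss

views = [torch.randn(4, 10) for _ in range(2)]   # two single-view predictors
fused = torch.randn(4, 10)                       # fused multi-view predictor
labels = torch.randint(0, 10, (4,))
print(mv_distillation_loss(views, fused, labels))
```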
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotations. Intuitively, weakly supervised training is a direct solution to reduce the labeling cost. However, for weakly supervised large-scale point cloud semantic segmentation, too few annotations will inevitably lead to ineffective learning of the network. We propose an effective weakly supervised method containing two components to solve this problem. Firstly, we construct a pretext task, \textit{i.e.,} point cloud colorization, with self-supervised learning to transfer prior knowledge learned from a large amount of unlabeled point clouds to a weakly supervised network. In this way, the representation capability of the weakly supervised network can be improved by the guidance from a heterogeneous task. Besides, to generate pseudo labels for unlabeled data, a sparse label propagation mechanism is proposed with the help of generated class prototypes, which are used to measure the classification confidence of unlabeled points. Our method is evaluated on large-scale point cloud datasets with different scenarios, including indoor and outdoor. The experimental results show a large gain over existing weakly supervised methods and comparable results to fully supervised methods\footnote{Code based on mindspore: https://github.com/dmcv-ecnu/MindSpore\_ModelZoo/tree/main/WS3\_MindSpore}.
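A minimal sketch of prototype-based sparse label propagation as described — pseudo-labeling only those unlabeled points whose similarity to a class prototype is confidently high — might look like this (the thresholding rule and names are illustrative assumptions):

```python
import numpy as np

def propagate_labels(feats_unlabeled, prototypes, tau=0.9):
    """Illustrative prototype-based pseudo labeling.

    prototypes: (n_classes, d) class prototypes, e.g. mean features of
    the few labeled points per class. Unlabeled points whose cosine
    similarity to the nearest prototype exceeds tau receive a pseudo
    label; the rest stay unlabeled (-1), keeping the propagation sparse.
    """
    f = feats_unlabeled / np.linalg.norm(feats_unlabeled, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ p.T                       # (n_points, n_classes)
    conf = sim.max(axis=1)
    pseudo = sim.argmax(axis=1)
    pseudo[conf < tau] = -1             # low-confidence points get no label
    return pseudo

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 16))
protos = rng.normal(size=(3, 16))
print(propagate_labels(feats, protos, tau=0.2))
```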
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Weakly supervised point cloud semantic segmentation methods, which require 1% or fewer labels while hoping to achieve performance almost equal to fully supervised approaches, have recently attracted extensive research attention. A typical solution in this framework is to use self-training or pseudo labeling to mine supervision from the point cloud itself, while ignoring the critical information in images. In fact, cameras widely exist in LiDAR scenarios, and this complementary information seems highly important for 3D applications. In this paper, we propose a novel cross-modality weakly supervised method for 3D segmentation that incorporates complementary information from unlabeled images. Basically, we design a dual-branch network equipped with an active labeling strategy to maximize the power of the limited labels and to directly realize 2D-to-3D knowledge transfer. Afterwards, we establish a cross-modal self-training framework from an Expectation-Maximization (EM) perspective, which iterates between pseudo label estimation and parameter updates. In the M-step, we propose cross-modal association learning to mine complementary supervision from images by reinforcing the cycle-consistency between 3D points and 2D superpixels. In the E-step, a self-rectification mechanism for pseudo labels is derived to filter noisy labels, thus providing the network with more accurate labels for thorough training. Extensive experimental results show that our method even outperforms state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
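A toy skeleton of the EM-style alternation described above, with a nearest-centroid classifier standing in for the dual-branch network and the cross-modal terms omitted (everything here is an illustrative simplification, not the paper's pipeline):

```python
import numpy as np

def em_self_training(X_lab, y_lab, X_unlab, n_rounds=5, tau=0.8):
    """Toy EM-style self-training loop on labeled + unlabeled data."""
    X_train, y_train = X_lab, y_lab
    for _ in range(n_rounds):
        # M-step stand-in: "fit" the model, here class centroids.
        centroids = np.stack([X_train[y_train == c].mean(axis=0)
                              for c in np.unique(y_lab)])
        # E-step: estimate pseudo labels, then self-rectify by keeping
        # only confident ones (small distance to the nearest centroid).
        d = np.linalg.norm(X_unlab[:, None] - centroids[None], axis=-1)
        conf = d.min(axis=1) < tau * d.mean()
        pseudo = d.argmin(axis=1)
        X_train = np.concatenate([X_lab, X_unlab[conf]])
        y_train = np.concatenate([y_lab, pseudo[conf]])
    return centroids

rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (20, 2)); X1 = rng.normal(4, 1, (20, 2))
X_lab = np.vstack([X0[:2], X1[:2]]); y_lab = np.array([0, 0, 1, 1])
X_unlab = np.vstack([X0[2:], X1[2:]])
print(em_self_training(X_lab, y_lab, X_unlab))
```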
The Information Bottleneck (IB) provides an information theoretic principle for representation learning, by retaining all information relevant for predicting the label while minimizing the redundancy. Though the IB principle has been applied to a wide range of applications, its optimization remains a challenging problem that heavily relies on accurate estimation of mutual information. In this paper, we present a new strategy, Variational Self-Distillation (VSD), which provides a scalable, flexible and analytic solution to essentially fitting the mutual information without explicitly estimating it. Under rigorous theoretical guarantees, VSD enables the IB to grasp the intrinsic correlation between representation and label for supervised training. Furthermore, by extending VSD to multi-view learning, we introduce two other strategies, Variational Cross-Distillation (VCD) and Variational Mutual-Learning (VML), which significantly improve the robustness of the representation to view changes by eliminating view-specific and task-irrelevant information. To verify our theoretically grounded strategies, we apply our approaches to cross-modal person Re-ID and conduct extensive experiments, where superior performance against state-of-the-art methods is demonstrated. Our intriguing findings highlight the need to rethink the way we estimate mutual information.
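A simplified stand-in for the VML-style mutual learning mentioned above — two views' predictors teach each other through symmetric KL terms while both stay predictive of the label, discouraging view-specific information — might read as follows (not the paper's variational derivation):

```python
import torch
import torch.nn.functional as F

def mutual_learning_loss(logits_v1, logits_v2, labels):
    """Illustrative mutual-learning objective: symmetric KL terms pull
    the two views' predictive distributions together, while the CE
    terms keep both views predictive of the label."""
    ce = F.cross_entropy(logits_v1, labels) + F.cross_entropy(logits_v2, labels)
    kl = (F.kl_div(F.log_softmax(logits_v1, dim=-1),
                   F.softmax(logits_v2.detach(), dim=-1), reduction="batchmean")
          + F.kl_div(F.log_softmax(logits_v2, dim=-1),
                     F.softmax(logits_v1.detach(), dim=-1), reduction="batchmean"))
    return ce + kl

v1 = torch.randn(4, 6)   # predictions from view 1 (e.g., RGB)
v2 = torch.randn(4, 6)   # predictions from view 2 (e.g., infrared)
labels = torch.randint(0, 6, (4,))
print(mutual_learning_loss(v1, v2, labels))
```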
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
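A minimal sketch of the token-level idea — both modalities receive a positional term derived from 3D coordinates, so a single transformer can align them implicitly without explicit view transformation — assuming illustrative dimensions and a linear position encoding (not CMT's actual architecture):

```python
import torch
import torch.nn as nn

class TinyCrossModalEncoder(nn.Module):
    """Toy encoder over concatenated image and point-cloud tokens."""
    def __init__(self, dim=64, n_heads=4):
        super().__init__()
        self.pos = nn.Linear(3, dim)  # shared 3D-coordinate encoding
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, img_tokens, img_xyz, pts_tokens, pts_xyz):
        # Both modalities carry a positional term derived from 3D
        # coordinates, so attention can relate them implicitly.
        tokens = torch.cat([img_tokens + self.pos(img_xyz),
                            pts_tokens + self.pos(pts_xyz)], dim=1)
        return self.encoder(tokens)

m = TinyCrossModalEncoder()
out = m(torch.randn(2, 16, 64), torch.randn(2, 16, 3),
        torch.randn(2, 32, 64), torch.randn(2, 32, 3))
print(out.shape)  # torch.Size([2, 48, 64])
```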
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to tackle the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: Firstly, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Secondly, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
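A minimal sketch of the mask-based dynamic class centers and query re-weighting described above (masked average pooling and cosine re-weighting are plausible stand-ins, not the exact module):

```python
import torch

def dynamic_class_center(support_feats, support_masks):
    """Illustrative mask-based class center via masked average pooling.

    support_feats: (n_shot, c, h, w) support feature maps
    support_masks: (n_shot, h, w) binary masks of the novel class
    """
    m = support_masks.unsqueeze(1)                       # (n_shot, 1, h, w)
    center = (support_feats * m).sum(dim=(0, 2, 3)) / m.sum().clamp(min=1)
    return center                                        # (c,)

def reweight_query(query_feats, center):
    """Re-weight query features by their similarity to the class center."""
    c = center.view(1, -1, 1, 1)
    sim = torch.cosine_similarity(query_feats, c, dim=1, eps=1e-6)
    return query_feats * sim.unsqueeze(1)

feats_s = torch.randn(5, 32, 16, 16)                 # 5-shot support features
masks_s = (torch.rand(5, 16, 16) > 0.5).float()      # toy support masks
center = dynamic_class_center(feats_s, masks_s)
q = torch.randn(1, 32, 16, 16)                       # query feature map
print(reweight_query(q, center).shape)               # torch.Size([1, 32, 16, 16])
```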