The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of these three tracks and the solutions of the top-ranking teams in each track.
We propose LiDAL, a novel active learning method for 3D LiDAR semantic segmentation by exploiting inter-frame uncertainty among LiDAR frames. Our core idea is that a well-trained model should generate robust results irrespective of viewpoints for scene scanning and thus the inconsistencies in model predictions across frames provide a very reliable measure of uncertainty for active sample selection. To implement this uncertainty measure, we introduce new inter-frame divergence and entropy formulations, which serve as the metrics for active selection. Moreover, we demonstrate additional performance gains by predicting and incorporating pseudo-labels, which are also selected using the proposed inter-frame uncertainty measure. Experimental results validate the effectiveness of LiDAL: we achieve 95% of the performance of fully supervised learning with less than 5% of annotations on the SemanticKITTI and nuScenes datasets, outperforming state-of-the-art active learning methods. Code release: https://github.com/hzykent/LiDAL.
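As a rough illustration of the idea (a sketch, not the paper's exact inter-frame divergence and entropy formulations), the following computes two per-point uncertainty scores from the softmax predictions that two registered frames assign to the same 3D points:

```python
import numpy as np

def inter_frame_uncertainty(p_a, p_b, eps=1e-8):
    """Per-point uncertainty from two frames' predictions (a sketch).

    p_a, p_b: (N, C) softmax class probabilities for the same N points
    as predicted from two different LiDAR frames, assuming the frames
    are registered and point correspondences are established.
    """
    p_a = np.clip(p_a, eps, 1.0)
    p_b = np.clip(p_b, eps, 1.0)
    # Symmetric KL divergence: predictions that disagree across
    # viewpoints yield a high divergence score.
    kl_ab = np.sum(p_a * np.log(p_a / p_b), axis=1)
    kl_ba = np.sum(p_b * np.log(p_b / p_a), axis=1)
    divergence = 0.5 * (kl_ab + kl_ba)
    # Entropy of the averaged prediction: even when the frames agree,
    # a flat distribution still flags an uncertain region.
    p_mean = 0.5 * (p_a + p_b)
    entropy = -np.sum(p_mean * np.log(p_mean), axis=1)
    return divergence, entropy
```

Points or regions scoring high on either measure would then be the candidates forwarded for annotation or pseudo-labeling.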
Extracting contrast-filled vessels from X-ray coronary angiography (XCA) image sequences is of great clinical significance for intuitive diagnosis and therapy. In this study, the XCA image sequence O is regarded as a three-dimensional tensor input, with the vessel layer H as a sparse tensor and the background layer B as a low-rank tensor. A novel vessel layer extraction method based on tensor robust principal component analysis (TRPCA) with tensor nuclear norm (TNN) minimization is proposed. Furthermore, to account for the irregular motion of vessels and the dynamic interference of surrounding irrelevant tissues, a total variation (TV)-regularized spatio-temporal constraint is introduced to separate the dynamic background E. A two-stage region growing (TSRG) method is then used for vessel enhancement and segmentation: global threshold segmentation serves as preprocessing to obtain the main branches, a Radon-like feature (RLF) filter is used to enhance and connect broken minor segments, and the final vessel mask is constructed by combining the two intermediate results. We evaluated the visibility of foreground extraction by the TV-TRPCA algorithm and the accuracy of vessel segmentation by the TSRG algorithm on real clinical XCA image sequences and third-party databases. Both qualitative and quantitative results verify the superiority of the proposed methods over existing state-of-the-art approaches.
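For concreteness, one plausible form of the resulting decomposition objective is sketched below; the trade-off weights and the exact placement of the TV term are assumptions rather than the paper's formulation:

```latex
% A sketch of a TV-regularized TRPCA objective (weights and constraint
% structure are assumed, not taken from the paper):
\min_{\mathcal{B},\,\mathcal{H},\,\mathcal{E}}
    \|\mathcal{B}\|_{\mathrm{TNN}}
    + \lambda_1 \|\mathcal{H}\|_1
    + \lambda_2\,\mathrm{TV}(\mathcal{E})
\quad \text{s.t.} \quad
    \mathcal{O} = \mathcal{B} + \mathcal{H} + \mathcal{E}
```

Here the TNN term drives the background toward low tensor rank, the l1 term keeps the vessel layer sparse, and the TV term absorbs smoothly varying dynamic disturbances.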
Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment the positive/negative pair sampling in a contrastive learning manner. The proposed approach is demonstrated for automated fetal ultrasound imaging tasks, enabling positive pairs from the same or different ultrasound scans that are anatomically similar to be pulled together, thus improving representation learning. We empirically investigate the effect of incorporating anatomical information at coarse and fine granularity for contrastive learning and find that learning with fine-grained anatomical information, which preserves intra-class differences, is more effective than its counterpart. We also analyze the impact of the anatomy ratio on our AWCL framework and find that using more distinct but anatomically similar samples to compose positive pairs yields better-quality representations. Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective in learning representations that transfer well to three clinical downstream tasks, achieving superior performance compared to ImageNet-supervised and current state-of-the-art contrastive learning methods. In particular, AWCL outperforms the ImageNet-supervised method by 13.8% and the state-of-the-art contrastive-based method by 7.1% on a cross-domain segmentation task.
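To make the sampling idea concrete, here is a minimal SupCon-style sketch in which images sharing an anatomy label form the positive pairs; the loss shape and sampling details are assumptions and may differ from the paper's AWCL objective:

```python
import torch
import torch.nn.functional as F

def anatomy_aware_contrastive_loss(features, anatomy_labels, temperature=0.1):
    """Contrastive loss with anatomy-defined positives (a sketch).

    features: (N, D) image embeddings; anatomy_labels: (N,) coarse- or
    fine-grained anatomy category per image. Samples that share an
    anatomy label are pulled together; all others are pushed apart.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                    # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (anatomy_labels.unsqueeze(0) ==
                anatomy_labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-likelihood over the anatomy-matched positives.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    valid = pos_mask.any(dim=1)                      # anchors with a positive
    return loss[valid].mean()
```

Coarse- versus fine-grained anatomy information then simply corresponds to how finely `anatomy_labels` partitions the data.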
In recent years, thanks to powerful 3D CNNs, voxel-based methods have become the state of the art for 3D semantic segmentation of indoor scenes. However, voxel-based methods ignore the underlying geometry, suffer from ambiguous features on spatially close objects due to the lack of geodesic information, and struggle to handle complex and irregular geometries. In view of this, we propose the Voxel-Mesh Network (VMNet), a novel 3D deep architecture that operates on voxel and mesh representations and leverages both Euclidean and geodesic information. Intuitively, the Euclidean information extracted from voxels can provide contextual cues representing interactions between nearby objects, while the geodesic information extracted from meshes can help separate objects that are spatially close but disconnected on the surface. To incorporate such information from the two domains, we design an intra-domain attentive module for effective feature aggregation and an inter-domain attentive module for adaptive feature fusion. Experimental results validate the effectiveness of VMNet: specifically, on the challenging ScanNet dataset for large-scale indoor scene segmentation, it outperforms the state-of-the-art SparseConvNet and MinkowskiNet (74.6% vs 72.5% and 73.6%) with a simpler network structure (17M vs 30M and 38M parameters). Code release: https://github.com/hzykent/vmnet
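A minimal sketch of what the inter-domain fusion step could look like, assuming voxel features have already been interpolated to the mesh vertices (VMNet's actual module may differ in detail):

```python
import torch
import torch.nn as nn

class InterDomainAttentiveFusion(nn.Module):
    """Gated fusion of geodesic (mesh) and Euclidean (voxel) features.

    Mesh features act as queries and voxel features as keys/values, so
    each vertex adaptively decides how much voxel context to absorb.
    This is an assumed design, not VMNet's exact implementation.
    """
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, mesh_feat, voxel_feat):
        # mesh_feat, voxel_feat: (N, dim), aligned per mesh vertex.
        q, k, v = self.q(mesh_feat), self.k(voxel_feat), self.v(voxel_feat)
        # Per-vertex attention reduced to an elementwise gate.
        gate = torch.sigmoid((q * k).sum(dim=1, keepdim=True)
                             / q.size(1) ** 0.5)
        return mesh_feat + self.out(gate * v)
```

A residual connection keeps the geodesic stream intact when the voxel context is uninformative.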
Despite significant progress in object categorization in recent years, a number of important challenges remain: mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires a separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above challenges and address the problems of supervised, zero-shot, generalized zero-shot, and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. The distance constraints ensure that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K class vocabulary, on the Animals with Attributes and ImageNet datasets.
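One plausible instantiation of such a distance constraint is a margin-based loss over vocabulary prototypes (the paper's weighting scheme is omitted here): for a labeled sample x of class y with embedding g(x) and prototypes u_c spanning both supervised and unsupervised vocabulary atoms,

```latex
% An assumed margin-based form of the distance constraints:
\mathcal{L}(x, y) = \sum_{c \neq y}
    \max\!\bigl(0,\; m + \|g(x) - u_y\|_2^2 - \|g(x) - u_c\|_2^2\bigr)
```

so each labeled sample is pushed at least a margin m closer to its own prototype than to any other vocabulary atom, including atoms of classes never seen in training.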
A noisy training set usually degrades the generalization and robustness of neural networks. In this paper, we propose a novel, theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under certain conditions. In general scenarios, these conditions may no longer be satisfied, and some noisy data are falsely selected as clean. To solve this problem, we propose a data-adaptive method, Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) in the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
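A minimal sketch of the mean-shift idea behind SPR, using an alternating least-squares solver with group soft-thresholding (the paper's actual solver, penalty, and knockoff filtering are not reproduced here):

```python
import numpy as np

def spr_clean_selection(features, onehot_labels, lam=0.5, iters=50):
    """Flag clean samples via mean-shift penalized regression (a sketch).

    Models Y = X W + Gamma, where each row of Gamma is a per-sample
    mean-shift parameter; rows driven to zero by the sparsity penalty
    mark samples whose labels the linear model can already explain,
    i.e. likely-clean samples.
    """
    X, Y = features, onehot_labels
    Gamma = np.zeros_like(Y, dtype=float)
    X_pinv = np.linalg.pinv(X)                   # precomputed pseudo-inverse
    for _ in range(iters):
        W = X_pinv @ (Y - Gamma)                 # least squares in W
        R = Y - X @ W                            # residual left for Gamma
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
        Gamma = shrink * R                       # row-wise soft-thresholding
    return np.linalg.norm(Gamma, axis=1) < 1e-8  # True = selected as clean
```

Knockoffs-SPR augments this with knockoff filters so that the false-selection rate of the returned set is provably controlled, as stated in the abstract.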
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first use a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods overlook two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as its start and end timestamps, and the video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also bias the reasoning during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism can also supplement the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
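As a rough sketch of the sampling idea (the concrete siamese mechanism and its soft boundary labels in SSRN may differ), an auxiliary pass could draw contextual neighbors around each sparsely sampled index so that frames near potential boundaries are not lost:

```python
import random

def siamese_contextual_sampling(num_frames, sparse_indices, radius=2):
    """Draw one extra contextual frame near each sparse sample (a sketch).

    sparse_indices: frame indices from the ordinary sparse sampling pass.
    Returns additional indices inside a +/- radius window around each,
    which can recover boundary frames dropped by downsampling.
    """
    extra = set()
    for idx in sparse_indices:
        lo = max(0, idx - radius)
        hi = min(num_frames - 1, idx + radius)
        extra.add(random.randint(lo, hi))
    return sorted(extra - set(sparse_indices))

# Usage sketch: a 600-frame video sparsely sampled every 40 frames.
# extras = siamese_contextual_sampling(600, list(range(0, 600, 40)))
```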
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose CORGI-PM, a Chinese cOrpus foR Gender bIas Probing and Mitigation, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility and maintain low confidence in such black-box systems, a problem exacerbated by poor generalization on out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in clinical settings. Our work sheds light on safe clinical applications and explainable AI, and can contribute to trustworthiness in the medical domain.
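Subjective-logic uncertainty of this kind is commonly realized with an evidential head; the sketch below shows the standard evidential deep learning construction (Dirichlet concentration alpha = evidence + 1, vacuity u = K/S), although EvidenceCap's exact parameterization is not specified here:

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Per-pixel class probabilities and vacuity from evidence (a sketch).

    logits: (B, K, H, W) raw network outputs for K classes. Evidence
    e >= 0 defines a Dirichlet with alpha = e + 1; belief mass is
    b_k = e_k / S and the vacuity u = K / S is high wherever evidence
    is weak, e.g. on out-of-distribution inputs.
    """
    evidence = F.softplus(logits)              # non-negative evidence
    alpha = evidence + 1.0                     # Dirichlet concentration
    S = alpha.sum(dim=1, keepdim=True)         # Dirichlet strength
    prob = alpha / S                           # expected class probabilities
    uncertainty = logits.shape[1] / S          # vacuity in (0, 1]
    return prob, uncertainty
```

Thresholding the vacuity map gives exactly the kind of per-region "I don't know" signal that makes the segmentation output auditable by clinicians.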