The security of deep neural networks (DNNs) has attracted increasing attention due to their widespread use in various applications. Recently, deployed DNNs have been shown to be vulnerable to Trojan attacks, which manipulate model parameters via bit flips to inject a hidden behavior and activate it through a specific trigger pattern. However, all existing Trojan attacks adopt noticeable patch-based triggers (e.g., a square pattern), making them perceptible to humans and easy for machines to detect. In this paper, we propose a novel attack, namely the hardly perceptible Trojan attack (HPT). HPT crafts hardly perceptible Trojan images by utilizing additive noise and a per-pixel flow field to tweak the pixel values and positions of the original images, respectively. To achieve superior attack performance, we propose to jointly optimize the bit flips, additive noise, and flow field. Since the weight bits of DNNs are binary, this problem is difficult to solve. We handle the binary constraint with an equivalent replacement and provide an effective optimization algorithm. Extensive experiments on the CIFAR-10, SVHN, and ImageNet datasets show that the proposed HPT can generate hardly perceptible Trojan images while achieving comparable or better attack performance than state-of-the-art methods. The code is available at: https://github.com/jiawangbai/hpt.
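As a rough illustration of the two trigger components described above, the sketch below first warps pixel positions with a per-pixel flow field and then adds noise. This is a minimal NumPy sketch; the function names (`flow_warp`, `hpt_trigger`) and the bilinear-interpolation details are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def flow_warp(img, flow):
    """Warp a (H, W) image by a per-pixel flow field (H, W, 2)
    using bilinear interpolation of the source coordinates."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Each output pixel samples from (y + dy, x + dx), clipped to the image.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = sy - y0, sx - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def hpt_trigger(img, noise, flow):
    """Compose the two components: adjust positions, then pixel values."""
    return np.clip(flow_warp(img, flow) + noise, 0.0, 1.0)
```

With a zero flow field and zero noise, the trigger leaves the image unchanged, which is why small flows and noise magnitudes keep the Trojan image close to the original.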
Decision-based attacks pose a severe threat to real-world applications because they treat the target model as a black box and only access the hard prediction label. Great efforts have recently been made to reduce the number of queries; however, existing decision-based attacks still require thousands of queries to generate good-quality adversarial examples. In this work, we find that a benign sample, the current adversarial example, and the next adversarial example can naturally construct a triangle in a subspace for any iterative attack. Based on the law of sines, we propose a novel Triangle Attack (TA) to optimize the perturbation by utilizing the geometric information that the longer side is always opposite the larger angle in any triangle. However, directly applying such information on the input image is ineffective because it cannot thoroughly explore the neighborhood of the input sample in the high-dimensional space. To address this issue, TA optimizes the perturbation in the low-frequency space, achieving an effective dimensionality reduction owing to the generality of such geometric properties. Extensive evaluations on the ImageNet dataset demonstrate that TA achieves a much higher attack success rate within 1,000 queries and needs far fewer queries to achieve the same attack success rate under various perturbation budgets than existing decision-based attacks. With such high efficiency, we further demonstrate the applicability of TA on a real-world API, i.e., the Tencent Cloud API.
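The law-of-sines relation that TA exploits can be shown with a toy helper: given the triangle formed by the benign sample, the current adversarial example, and a candidate, the side lengths are tied to the opposite angles. The variable names (`d`, `alpha`, `beta`) and the exact angle bookkeeping below are assumptions for illustration, not the paper's implementation.

```python
import math

def next_distance(d, alpha, beta):
    """Law of sines in the attack triangle.

    d     : distance from the benign sample to the current adversarial
            example, opposite the angle alpha at the candidate vertex
    beta  : search angle at the benign sample
    returns the distance from the benign sample to the candidate,
            opposite the remaining angle pi - alpha - beta
    """
    gamma = math.pi - alpha - beta  # angle at the current adversarial example
    # d / sin(alpha) = d' / sin(gamma), so:
    return d * math.sin(gamma) / math.sin(alpha)
```

For example, with `alpha = pi/2` and `beta = pi/4`, the candidate's distance shrinks to about 0.71 of the current one: picking angles so that the candidate side faces a smaller angle is exactly how "the longer side is opposite the larger angle" turns into perturbation reduction.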
With the recent advancement of deep convolutional neural networks, significant progress has been made in general face recognition. However, state-of-the-art general face recognition models do not generalize well to occluded face images, which are exactly the common cases in real-world scenarios. The potential reasons are the absence of large-scale occluded face data for training and the lack of specific designs for tackling the corrupted features brought by occlusions. This paper presents a novel face recognition method that is robust to occlusions and based on a single end-to-end deep neural network. Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural network and clean them with dynamically learned masks. In addition, we construct massive occluded face images to train FROM effectively and efficiently. Compared to existing methods, which either rely on external detectors to discover the occlusions or employ shallow models that are less discriminative, FROM is simple yet powerful. Experimental results on the LFW, MegaFace Challenge 1, RMF2, and AR datasets, as well as other simulated occluded/masked datasets, confirm that FROM dramatically improves the accuracy under occlusions and generalizes well to general face recognition.
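The cleaning step described above amounts to down-weighting feature channels that an occlusion has corrupted. A minimal sketch of that element-wise operation follows; in FROM the mask is produced by a learned decoder, whereas here it is simply given, and the function name is an assumption for illustration.

```python
def mask_features(features, mask):
    """Element-wise cleaning sketch: multiply deep features by a mask in
    [0, 1] that suppresses channels predicted to be corrupted by occlusion.
    A mask value of 1 keeps the channel, 0 removes it entirely."""
    return [f * m for f, m in zip(features, mask)]
```

Because the mask is learned dynamically per input, the same network can keep clean channels intact for unoccluded faces while zeroing out damaged ones for occluded faces.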
With the recent success of deep neural networks, remarkable progress has been achieved on face recognition. However, collecting large-scale real-world training data for face recognition has been challenging, especially due to label noise and privacy issues. Meanwhile, existing face recognition datasets are usually collected from web images and lack detailed annotations on attributes (e.g., pose and expression), so the influences of different attributes on face recognition have been poorly investigated. In this paper, we address the above problems in face recognition using synthetic face images, i.e., SynFace. Specifically, we first explore the performance gap between recent state-of-the-art face recognition models trained with synthetic and real face images. We then analyze the underlying causes behind the performance gap, e.g., the poor intra-class variations and the domain gap between synthetic and real face images. Inspired by this, we devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the above performance gap, demonstrating the great potential of synthetic data for face recognition. Furthermore, with the controllable face synthesis model, we can easily manage different factors of synthetic face generation, including pose, expression, illumination, the number of identities, and the number of samples per identity. Therefore, we also perform a systematic empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
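The identity mixup (IM) idea can be sketched as a simple linear interpolation. Note that in SynFace the mixing is applied to the identity inputs of the controllable face synthesis model rather than to raw pixels; the names `z_a`, `z_b`, and `lam` below are illustrative assumptions.

```python
def identity_mixup(z_a, z_b, lam):
    """Identity Mixup sketch: interpolate the identity representations of
    two synthetic identities with coefficient lam in [0, 1], producing an
    intermediate identity that enriches intra-class variation."""
    return [lam * a + (1 - lam) * b for a, b in zip(z_a, z_b)]
```

Sweeping `lam` between 0 and 1 yields a continuum of faces between the two source identities, which is one way synthetic data can supply the intra-class variation that web-collected datasets provide naturally.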
Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2-normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, YouTube Faces (YTF), and Labeled Faces in the Wild (LFW). We achieve state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.
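The loss described above (normalize, subtract a margin from the target-class cosine, scale, then apply softmax cross-entropy) can be sketched directly in NumPy. The hyperparameter values `s=30.0` and `m=0.35` are typical assumptions, not prescribed by this abstract.

```python
import numpy as np

def lmcl_loss(features, weights, labels, s=30.0, m=0.35):
    """Large margin cosine loss sketch.
    features: (N, D) deep features; weights: (C, D) class weight vectors;
    labels: (N,) integer class indices."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                    # (N, C) cosine similarities
    idx = np.arange(len(labels))
    logits = s * cos
    logits[idx, labels] = s * (cos[idx, labels] - m)  # margin on the target class
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, labels].mean()             # softmax cross-entropy
```

Subtracting `m` from the target cosine makes the loss strictly larger than the plain normalized softmax on the same inputs, which is the pressure that forces features of the same class into a tighter angular region.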
Brain midline shift (MLS) is one of the most critical factors to be considered for clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods for MLS quantification not only require intensive labeling at millimeter-level measurement but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a great number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how the image differs from a non-MLS image, and the regularization plays an important role in the sparse-to-dense refinement of the deformation field. Our method achieves state-of-the-art performance on a real clinical brain hemorrhage dataset and can generate interpretable deformation fields.
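The sparse-to-dense refinement idea can be illustrated with a toy interpolation: from a few sparsely labeled slices, produce a value for every slice. The paper refines a full deformation field with diffusion-based regularization; the sketch below shows only the interpolation concept, and the function name and linear scheme are assumptions.

```python
def densify(labeled, num_slices):
    """Toy sparse-to-dense step: linearly interpolate a per-slice MLS
    deformation magnitude from a few labeled slices.
    labeled: dict mapping slice index -> labeled magnitude."""
    keys = sorted(labeled)
    dense = []
    for i in range(num_slices):
        if i <= keys[0]:
            dense.append(labeled[keys[0]])        # extend first label downward
        elif i >= keys[-1]:
            dense.append(labeled[keys[-1]])       # extend last label upward
        else:
            lo = max(k for k in keys if k <= i)   # nearest label below
            hi = min(k for k in keys if k >= i)   # nearest label above
            w = 0.0 if hi == lo else (i - lo) / (hi - lo)
            dense.append((1 - w) * labeled[lo] + w * labeled[hi])
    return dense
```

In the actual framework, the learned representation from unlabeled and non-MLS data replaces this naive linear rule, constraining the dense field to stay anatomically plausible between labels.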
Current mainstream object detection methods for large aerial images usually divide large images into patches and then exhaustively detect the objects of interest on all patches, no matter whether there exist objects or not. This paradigm, although effective, is inefficient because the detectors have to go through all patches, severely hindering the inference speed. This paper presents an Objectness Activation Network (OAN) to help detectors focus on fewer patches but achieve more efficient inference and more accurate results, enabling a simple and effective solution to object detection in large images. In brief, OAN is a light fully-convolutional network for judging whether each patch contains objects or not, which can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate our OAN with five advanced detectors. Using OAN, all five detectors acquire more than 30.0% speed-up on three large-scale aerial image datasets, meanwhile with consistent accuracy improvements. On extremely large Gaofen-2 images (29200$\times$27620 pixels), our OAN improves the detection speed by 70.5%. Moreover, we extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively, without sacrificing the accuracy. Code is available at https://github.com/Ranchosky/OAN.
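The gating behavior of OAN reduces to a simple filter: score every patch for objectness, then hand only the surviving patches to the heavy detector. The sketch below shows that filter in isolation; the function name and threshold are illustrative assumptions, and in OAN the scores come from a light fully-convolutional network trained jointly with the detector.

```python
def gate_patches(patches, objectness, threshold=0.5):
    """OAN-style gating sketch: keep only patches whose predicted
    objectness clears the threshold, so the detector skips empty ones."""
    return [p for p, s in zip(patches, objectness) if s >= threshold]
```

On sparse aerial scenes most patches contain no objects, so the detector runs on a small fraction of the image, which is where the reported speed-ups come from.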
We study the problem of semantic segmentation calibration. For image classification, many existing solutions have been proposed to alleviate model miscalibration of confidence. However, to date, confidence calibration research on semantic segmentation is still limited. We provide a systematic study on the calibration of semantic segmentation models and propose a simple yet effective approach. First, we find that model capacity, crop size, multi-scale testing, and prediction correctness have an impact on calibration. Among them, prediction correctness, especially misprediction, is more important to miscalibration due to over-confidence. Next, we propose a simple, unifying, and effective approach, namely selective scaling, which separates correct and incorrect predictions for scaling and focuses more on smoothing the logits of mispredictions. Then, we study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration. We conduct extensive experiments with a variety of benchmarks on both in-domain and domain-shift calibration, and show that selective scaling consistently outperforms other methods.
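The core of selective scaling is routing predictions through different temperatures so that mispredictions get smoothed more aggressively. The sketch below hard-codes that routing; in practice the correct/incorrect split must itself be predicted, and the temperature values here are illustrative assumptions.

```python
import math

def softmax(logits, T):
    """Temperature-scaled softmax: larger T flattens the distribution."""
    z = [l / T for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def selective_scaling(logits, is_correct, T_correct=1.0, T_incorrect=2.0):
    """Selective scaling sketch: apply a stronger smoothing temperature to
    predictions flagged as incorrect to curb their over-confidence."""
    T = T_correct if is_correct else T_incorrect
    return softmax(logits, T)
```

Routing an over-confident misprediction through the larger temperature lowers its top-class probability while leaving confident correct predictions essentially untouched.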
In this paper, we propose a large-scale language pre-training model for text GENeration using a dIffusion modEl, which is named GENIE. GENIE is a pre-trained sequence-to-sequence text generation model that combines a Transformer with diffusion. The diffusion model accepts latent information from the encoder, which is used to guide the denoising of the current time step. After multiple such denoising iterations, the diffusion model can restore the Gaussian noise to diverse output text controlled by the input text. Moreover, this architecture design also allows us to conduct large-scale pre-training of GENIE. We propose a novel pre-training method named continuous paragraph denoising based on the characteristics of the diffusion model. Extensive experiments on the XSum, CNN/DailyMail, and Gigaword benchmarks show that GENIE achieves comparable performance with various strong baselines; especially after pre-training, the generation quality of GENIE is greatly improved. We have also conducted extensive experiments on the generation diversity and parameter impact of GENIE. The code for GENIE will be made publicly available.
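The iterative denoising loop described above can be caricatured with a toy step function: start from noise and repeatedly move toward the state implied by the conditioning signal from the encoder. Everything below, including the `1/t` update rule, is a deliberately simplified stand-in for a learned denoiser, not GENIE's actual update.

```python
def toy_denoise(noise, cond, steps):
    """Toy reverse-diffusion loop: at step t, a stand-in 'denoiser' moves
    the state a fraction 1/t toward the conditioning vector, so the final
    step (t = 1) lands exactly on it."""
    x = list(noise)
    for t in range(steps, 0, -1):
        x = [xi + (ci - xi) / t for xi, ci in zip(x, cond)]
    return x
```

The structure mirrors the abstract's description: the conditioning latent guides every denoising step, and repeating the step transforms the initial Gaussian noise into output determined by the input text.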
Developing autonomous vehicles (AVs) helps improve the road safety and traffic efficiency of intelligent transportation systems (ITS). Accurately predicting the trajectories of traffic participants is essential to the decision-making and motion planning of AVs in interactive scenarios. Recently, learning-based trajectory predictors have shown state-of-the-art performance in highway or urban areas. However, most existing learning-based models trained with fixed datasets may perform poorly in continuously changing scenarios. Specifically, they may not perform well in previously learned scenarios after learning a new one. This phenomenon is called "catastrophic forgetting". Few studies investigate trajectory prediction in continuous scenarios, where catastrophic forgetting may happen. To handle this problem, first, a novel continual learning (CL) approach for vehicle trajectory prediction is proposed in this paper. Then, inspired by brain science, a dynamic memory mechanism is developed by utilizing the measurement of traffic divergence between scenarios, which balances the performance and training efficiency of the proposed CL approach. Finally, datasets collected from different locations are used to design continual training and testing methods in experiments. Experimental results show that the proposed approach achieves consistently high prediction accuracy in continuous scenarios without re-training, which mitigates catastrophic forgetting compared to non-CL approaches. The implementation of the proposed approach is publicly available at https://github.com/BIT-Jack/D-GSM
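The dynamic memory mechanism can be sketched as a divergence-gated buffer: a new scenario is stored only if it is sufficiently different from everything already in memory. The `divergence` callback and threshold `tau` below are illustrative stand-ins for the paper's traffic-divergence measurement, not its actual formulation.

```python
def update_memory(memory, new_scenario, divergence, tau=0.5):
    """Dynamic-memory sketch: append a new scenario only when its
    divergence from every stored scenario exceeds tau, trading scenario
    coverage against memory size and training cost."""
    if all(divergence(new_scenario, m) > tau for m in memory):
        memory.append(new_scenario)
    return memory
```

Gating on divergence keeps the replay memory small and non-redundant, which is how the mechanism balances prediction performance against training efficiency.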