In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta-generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information. Most existing attack methods aim to minimize the true positive rate, which often shows poor attack performance, as another sub-optimal bounding box may be detected around the attacked bounding box and become the new true positive one. To settle this challenge, we propose to minimize the true positive rate and maximize the false positive rate, which can encourage more false positive objects to block the generation of new true positive bounding boxes. It is modeled as a multi-objective optimization (MOP) problem, for which a genetic algorithm can search for Pareto-optimal solutions. However, our task has more than two million decision variables, leading to low search efficiency. Thus, we extend the standard genetic algorithm with random subset selection and divide-and-conquer, called GARSDC, which significantly improves the efficiency. Moreover, to alleviate the sensitivity to population quality in genetic algorithms, we generate a gradient-prior initial population by utilizing the transferability between different detectors with similar backbones. Compared with state-of-the-art attack methods, GARSDC decreases the mAP by 12.0 on average and the number of queries by about 1000 times in extensive experiments. Our code can be found at https://github.com/liangsiyuan21/GARSDC.
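The random-subset mutation at GARSDC's core can be sketched on a toy bit-string objective. This is a minimal illustration under assumptions, not the paper's attack: the real method queries a detector, adds divide-and-conquer splitting and a gradient-prior initial population, all omitted here, and every name and the `x.sum()` fitness are hypothetical.

```python
import numpy as np

def ga_random_subset(fitness, dim, pop_size=8, subset=16, gens=50, seed=0):
    """Tiny elitist genetic algorithm that mutates only a random
    coordinate subset per generation, keeping per-generation cost low
    even when `dim` is very large (as with millions of pixel variables)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, dim))
    for _ in range(gens):
        idx = rng.choice(dim, size=subset, replace=False)   # random subset
        children = pop.copy()
        flip = rng.random((pop_size, subset)) < 0.5
        children[:, idx] ^= flip.astype(children.dtype)     # mutate subset only
        both = np.vstack([pop, children])
        scores = np.array([fitness(x) for x in both])
        pop = both[np.argsort(scores)[-pop_size:]]          # elitist selection
    return pop[np.argmax([fitness(x) for x in pop])]

# Toy objective: maximize the number of ones in a 200-bit string.
best = ga_random_subset(lambda x: x.sum(), dim=200)
```

Because selection is elitist, the best fitness never decreases; the subset size trades off exploration per generation against cost.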
With the thriving of deep learning in processing point cloud data, recent works have shown that backdoor attacks pose a severe security threat to 3D vision applications. The attacker injects a backdoor into the 3D model by poisoning a few training samples with a trigger, such that the backdoored model performs well on clean samples but behaves maliciously when the trigger pattern appears. Existing attacks often insert some additional points into the point cloud, or use a linear transformation (e.g., rotation) to construct the poisoned point cloud. However, the effects of these poisoned samples may be weakened or even eliminated by some commonly used preprocessing techniques for 3D point clouds, e.g., outlier removal or rotation augmentation. In this paper, we propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge. We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations. Since there are several hyper-parameters and randomness in WLT, it is difficult to produce two similar transformations. Consequently, poisoned samples with unique transformations are likely to be resistant to the aforementioned preprocessing techniques. Besides, owing to the controllability and smoothness of the distortion caused by a fixed WLT, the generated poisoned samples are also imperceptible to human inspection. Extensive experiments on three benchmark datasets and four models show that IRBA achieves 80%+ ASR in most cases even with preprocessing techniques, which is significantly higher than previous state-of-the-art attacks.
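A distance-weighted local displacement field conveys the flavor of such a nonlinear, local, smooth transformation. This is a hypothetical numpy sketch, not the paper's WLT: the anchor-based Gaussian weighting, the function name, and all parameter values are assumptions for illustration only.

```python
import numpy as np

def weighted_local_transform(points, n_anchors=4, scale=0.05, seed=0):
    """Displace each point by a smooth, locally weighted field.

    Each random anchor carries its own small displacement; a point's
    shift is a distance-weighted mix of them, so the distortion is
    nonlinear, local, and smooth -- unlike a global rotation, and
    without inserting extra (outlier) points."""
    rng = np.random.default_rng(seed)
    anchors = rng.uniform(-1, 1, size=(n_anchors, 3))
    shifts = rng.normal(scale=scale, size=(n_anchors, 3))
    # Pairwise distances from every point to every anchor.
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)
    w = np.exp(-d)                          # closer anchors weigh more
    w = w / w.sum(axis=1, keepdims=True)    # convex combination per point
    return points + w @ shifts

cloud = np.random.default_rng(1).uniform(-1, 1, size=(1024, 3))
poisoned = weighted_local_transform(cloud)
```

Because the per-point shift is a convex combination of small anchor displacements, the distortion stays bounded and visually subtle, which is the property the attack relies on.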
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust accuracy against adversarial attacks suddenly and dramatically decreases. Although several FAT variants spare no effort to prevent overfitting, they sacrifice much computational cost. In this paper, we explore the difference between the training processes of SAT and FAT and observe that the attack success rate of adversarial examples (AEs) in FAT gradually becomes worse in the late training stage, resulting in overfitting. The AEs are generated by the fast gradient sign method (FGSM) with a zero or random initialization. Based on the observation, after investigating several initialization strategies, we propose a prior-guided FGSM initialization method to avoid overfitting, which improves the quality of the AEs during the whole training process. The initialization is formed by leveraging historically generated AEs without additional calculation cost. We further provide a theoretical analysis for the proposed initialization method. We also propose a simple yet effective regularizer based on the prior-guided initialization, i.e., the currently generated perturbation should not deviate too much from the prior-guided initialization. The regularizer adopts both historical and current adversarial perturbations to guide the model learning. Evaluations on four datasets demonstrate that the proposed method can prevent catastrophic overfitting and outperform state-of-the-art FAT methods. The code is released at https://github.com/jiaxiaojunqaq/FGSM-PGI.
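The prior-guided initialization can be sketched on a toy linear model: the perturbation produced in one epoch is reused as the FGSM starting point in the next, at no extra gradient cost. The logistic-loss model and all names below are illustrative assumptions, not the released implementation.

```python
import numpy as np

def grad_loss(w, x, y):
    # Gradient of the logistic loss w.r.t. the input x of a linear model w.
    z = np.dot(w, x)
    p = 1.0 / (1.0 + np.exp(-z))
    return (p - y) * w

def fgsm_with_prior(w, x, y, prior, eps=0.1, alpha=0.1):
    """One FGSM step started from the prior perturbation instead of zero."""
    g = grad_loss(w, x + prior, y)
    delta = prior + alpha * np.sign(g)   # single gradient-sign step
    return np.clip(delta, -eps, eps)     # project back into the eps-ball

# Toy loop: each epoch's perturbation becomes the next epoch's prior.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x, y = rng.normal(size=5), 1.0
prior = np.zeros(5)                      # epoch 0: zero initialization
for epoch in range(3):
    prior = fgsm_with_prior(w, x, y, prior)
```

The key point is that `prior` carries attack progress across epochs, so the single-step FGSM starts from an already-adversarial direction instead of zero.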
Integer programming (IP) is an important and challenging problem. Approximate methods have shown promising performance in both effectiveness and efficiency for solving the IP problem. However, we observe that a large fraction of the variables solved by some iterative approximate methods fluctuate around their final converged discrete states over very long iterations. Inspired by this observation, we aim to accelerate these approximate methods by early fixing these fluctuating variables to their converged states while not significantly harming the solution accuracy. To this end, we propose an early fixing framework alongside the approximate method. We formulate the whole early fixing process as a Markov decision process and train it using imitation learning. A policy network evaluates the posterior probability of each free variable's discrete candidate states in each block of iterations. Specifically, we adopt the powerful multi-headed attention mechanism in the policy network. Extensive experiments on our proposed early fixing framework are conducted on three different IP applications: constrained linear programming, MRF energy minimization, and sparse adversarial attack. The former is a linear IP problem, while the latter two are quadratic IP problems. We extend the problem scale from regular size to significantly large size. The extensive experiments reveal the competitiveness of our early fixing framework: the runtime is greatly reduced while the solution quality does not degrade much, and in some cases even better solutions are obtained. Our proposed early fixing framework can be regarded as an acceleration extension of ADMM methods for solving integer programming. The source code is available at \url{https://github.com/sclbd/accelerated-lpbox-admm}.
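The early-fixing idea can be illustrated with a hand-coded stability rule: once a relaxed variable's rounded value stops fluctuating for a few consecutive iterations, fix it and drop it from the remaining optimization. Note the paper learns this decision with an attention-based policy trained by imitation; the fixed window below is only an illustrative stand-in.

```python
def early_fix(trajectory, window=3):
    """Return (fixed_value, iteration) once the rounded value is stable
    for `window` consecutive iterations, else (None, None).

    `trajectory` is the sequence of continuous values one relaxed binary
    variable takes across iterations of an approximate IP solver."""
    rounded = [round(v) for v in trajectory]
    for t in range(window - 1, len(rounded)):
        if len(set(rounded[t - window + 1 : t + 1])) == 1:
            return rounded[t], t   # fix to the stable discrete state
    return None, None

# A variable fluctuating around 1: it can be fixed early instead of
# being carried through all remaining iterations.
traj = [0.4, 0.7, 0.9, 1.1, 0.95, 1.02, 0.98]
value, when = early_fix(traj)
```

Fixing at iteration `when` removes the variable from every later iteration, which is where the runtime savings come from.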
Backdoor learning is an emerging and important topic for studying the vulnerability of deep neural networks (DNNs). In the status of a rapid arms race, many pioneering backdoor attack and defense methods are being proposed successively or concurrently. However, we find that the evaluations of new methods are often unthorough in verifying their claims and actual performance, mainly due to the rapid development, diverse settings, and the difficulties of implementation and reproducibility. Without thorough evaluations and comparisons, it is difficult to track the current progress and design the future development roadmap of the literature. To alleviate this dilemma, we build a comprehensive benchmark of backdoor learning, called BackdoorBench. It consists of an extensible modular-based codebase (currently including implementations of 8 state-of-the-art (SOTA) attack and 9 SOTA defense algorithms), as well as a standardized protocol for the complete backdoor learning workflow. We also provide comprehensive evaluations of every pair of the 8 attacks against the 9 defenses, based on 5 models and 4 datasets, leading to 8,000 pairs of evaluations in total. We further present analyses of these 8,000 evaluations from different perspectives, studying the effects of defense algorithms, poisoning ratios, models, and datasets on backdoor learning. All code and evaluations of BackdoorBench are publicly available at \url{https://backdoorbench.github.io}.
Adversarial training (AT) has been demonstrated to effectively improve model robustness by leveraging adversarial examples for training. However, most AT methods face expensive time and computational cost for calculating gradients over multiple steps when generating adversarial examples. To boost training efficiency, the fast gradient sign method (FGSM) is adopted in fast AT methods by calculating the gradient only once. Unfortunately, the resulting robustness is far from satisfactory. One reason may arise from the initialization fashion. Existing fast AT generally uses a random sample-agnostic initialization, which facilitates efficiency yet hinders further robustness improvement. Up to now, the initialization in fast AT is still not widely explored. In this paper, we boost fast AT with a sample-dependent adversarial initialization, i.e., the output of a generative network conditioned on a benign image and its gradient information from the target network. As the generative network and the target network are optimized jointly in the training phase, the former can adaptively generate an effective initialization with respect to the latter, which motivates gradually improved robustness. Experimental evaluations on four benchmark databases demonstrate the superiority of our proposed method over state-of-the-art fast AT methods, as well as comparable robustness to advanced multi-step AT methods. The code is released at https://github.com/jiaxiaojunqaq/FGSM-SDI.
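A heavily simplified sketch of the joint scheme: a linear "generator" maps a sample and its input gradient to an initialization, one FGSM step follows, and the generator is nudged toward perturbations that worked. Everything here (the linear generator, its update rule, the logistic target model) is an illustrative assumption, not the paper's architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w_target = rng.normal(size=dim)     # stand-in "target network" weights
G = np.zeros((dim, 2 * dim))        # linear "generator": [x, grad] -> init

def input_grad(x, y):
    # Gradient of the logistic loss w.r.t. the input of a linear model.
    p = 1.0 / (1.0 + np.exp(-w_target @ x))
    return (p - y) * w_target

x, y, eps = rng.normal(size=dim), 1.0, 0.1
for step in range(5):
    g = input_grad(x, y)
    feat = np.concatenate([x, g])
    init = np.clip(G @ feat, -eps, eps)             # sample-dependent init
    delta = np.clip(init + eps * np.sign(input_grad(x + init, y)), -eps, eps)
    # Jointly nudge the generator toward the perturbation that worked
    # (a crude surrogate for the adversarial generator loss in the paper).
    G += 0.1 * np.outer(delta - init, feat)
```

The point of the sketch is the coupling: the generator's input includes the target model's gradient, and its parameters are updated alongside the attack, so the initialization adapts to the current target model.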
In this work, we propose TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions. The proposed method consists of three components: StyleGAN inversion module, visual-linguistic similarity learning, and instance-level optimization. The inversion module maps real images to the latent space of a well-trained StyleGAN. The visual-linguistic similarity learns the text-image matching by mapping the image and text into a common embedding space. The instance-level optimization is for identity preservation in manipulation. Our model can produce diverse and high-quality images with an unprecedented resolution at 1024^2. Using a control mechanism based on style-mixing, our TediGAN inherently supports image synthesis with multi-modal inputs, such as sketches or semantic labels, with or without instance guidance. To facilitate text-guided multi-modal synthesis, we propose the Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images and corresponding semantic segmentation map, sketch, and textual descriptions. Extensive experiments on the introduced dataset demonstrate the superior performance of our proposed method. Code and data are available at https://github.com/weihaox/TediGAN.
In this paper, we introduce a simple and novel framework for one-shot audio-driven talking head generation. Unlike prior works that require additional driving sources for controlled synthesis in a deterministic manner, we instead probabilistically sample all the holistic lip-irrelevant facial motions (i.e., pose, expression, blink, gaze, etc.) to semantically match the input audio while still maintaining both the photo-realism of audio-lip synchronization and the overall naturalness. This is achieved by our newly proposed audio-to-visual diffusion prior trained on top of the mapping between audio and disentangled non-lip facial representations. Thanks to the probabilistic nature of the diffusion prior, one big advantage of our framework is that it can synthesize diverse facial motion sequences given the same audio clip, which is quite user-friendly for many real applications. Through comprehensive evaluations on public benchmarks, we conclude that (1) our diffusion prior significantly outperforms the auto-regressive prior on almost all the concerned metrics; (2) our overall system is competitive with prior works in terms of audio-lip synchronization but can effectively sample rich and natural-looking lip-irrelevant facial motions while remaining semantically harmonized with the audio input.
Generating controllable and editable human motion sequences is a key challenge in 3D avatar generation. Generating and animating human motion had long been labor-intensive until learning-based approaches were developed and applied recently. However, these approaches are still task-specific or modality-specific \cite{ahuja2019language2pose}\cite{ghosh2021synthesis}\cite{ferreira2021learning}\cite{li2021ai}. In this paper, we propose ``UDE'', the first unified driving engine that enables generating human motion sequences from natural language or audio sequences (see Fig.~\ref{fig:teaser}). Specifically, UDE consists of the following key components: 1) a motion quantization module based on VQVAE that represents a continuous motion sequence as discrete latent codes \cite{van2017neural}, 2) a modality-agnostic transformer encoder \cite{vaswani2017attention} that learns to map modality-aware driving signals to a joint space, 3) a unified token transformer (GPT-like \cite{radford2019language}) network to predict the quantized latent code indices in an auto-regressive manner, and 4) a diffusion motion decoder that takes the motion tokens as input and decodes them into motion sequences with high diversity. We evaluate our method on the HumanML3D \cite{Guo_2022_CVPR} and AIST++ \cite{li2021learn} benchmarks, and the experimental results demonstrate that our method achieves state-of-the-art performance. Project website: \url{https://github.com/zixiangzhou916/UDE/}