Recent advances have shown that deep neural networks (DNNs) are vulnerable to adversarial perturbations, so it is necessary to evaluate the robustness of advanced DNNs with adversarial attacks. However, traditional physical attacks that use stickers as perturbations are more vulnerable than the recent light-based physical attacks. In this work, we propose a projector-based physical attack, called Adversarial Color Projection (AdvCP), which performs adversarial attacks by manipulating the physical parameters of the projected light. Experiments show the effectiveness of our method in both digital and physical environments. The experimental results also show that the proposed method has excellent attack transferability, which endows AdvCP with effective black-box attack capability. We discuss the threat that AdvCP poses to future vision-based systems and applications, and offer some ideas for light-based physical attacks.
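As a rough illustration of the idea (not the paper's actual algorithm), the sketch below simulates a color projection purely in the digital domain: a uniform colored "light" layer is additively blended with an image, and a small grid of colors and strengths is searched for one that flips a pretrained classifier's prediction. The image path, color grid, and blending model are placeholder assumptions.

```python
# Digital-domain sketch of a color-projection attack (illustrative only).
import itertools
import torch
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # values in [0, 1]
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

img = preprocess(Image.open("example.jpg").convert("RGB")).to(device)  # placeholder image

@torch.no_grad()
def predict(x):
    return model(normalize(x).unsqueeze(0)).argmax(1).item()

clean_label = predict(img)

def project_color(image, rgb, strength):
    """Crude additive model of colored light: blend a uniform color layer."""
    color = torch.tensor(rgb, dtype=torch.float32, device=image.device).view(3, 1, 1)
    return ((1 - strength) * image + strength * color).clamp(0, 1)

# Grid search over a few projection colors and strengths (placeholder grid).
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
for rgb, strength in itertools.product(colors, (0.2, 0.4, 0.6)):
    pred = predict(project_color(img, rgb, strength))
    if pred != clean_label:
        print(f"color={rgb}, strength={strength}: {clean_label} -> {pred}")
```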
It is well known that the performance of deep neural networks (DNNs) is vulnerable to subtle perturbations. So far, camera-based physical adversarial attacks have received little attention, which leaves a gap in physical attacks. In this paper, we propose a simple and effective camera-based physical attack, called Adversarial Color Film (AdvCF), which manipulates the physical parameters of a color film to carry out attacks. Carefully designed experiments show the effectiveness of the proposed method in both digital and physical environments. In addition, the experimental results show that the adversarial samples generated by AdvCF have excellent attack transferability, which makes AdvCF an effective black-box attack. Meanwhile, we give defense guidance against AdvCF through adversarial training. Finally, we investigate the threat of AdvCF to vision-based systems and offer some promising directions for camera-based physical attacks.
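In the same spirit, a color film in front of the lens can be crudely modeled as a per-channel transmittance that multiplies the image. The sketch below (an illustration, not AdvCF itself) optimizes three transmittance values by gradient ascent on the classification loss; the image path, transmittance range, and optimizer settings are assumptions.

```python
# Digital simulation of a color-film-style attack (illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval().to(device)
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
to_tensor = transforms.Compose([transforms.Resize(256),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

img = to_tensor(Image.open("example.jpg").convert("RGB")).to(device)  # placeholder image
with torch.no_grad():
    label = model(normalize(img).unsqueeze(0)).argmax(1)

# Per-channel transmittance kept in [0.3, 1.0] via a sigmoid parameterization.
theta = torch.zeros(3, device=device, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)

for step in range(100):
    transmittance = 0.3 + 0.7 * torch.sigmoid(theta)              # (3,)
    filtered = (img * transmittance.view(3, 1, 1)).clamp(0, 1)
    logits = model(normalize(filtered).unsqueeze(0))
    loss = -F.cross_entropy(logits, label)                        # untargeted: push away from clean label
    opt.zero_grad(); loss.backward(); opt.step()
    if logits.argmax(1) != label:
        print(f"step {step}: prediction flipped to {logits.argmax(1).item()}")
        break
```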
Deep neural networks (DNNs) have been widely used in computer vision tasks such as image classification, object detection, and segmentation, yet recent studies have shown their vulnerability to manual digital perturbations or distortions of the input images. The accuracy of such networks is heavily influenced by the data distribution of the training dataset, and scaling the raw images creates out-of-distribution data, which makes scaling a form of adversarial attack that can fool the networks. In this work, we propose a scaled-image dataset, ImageNet-CS, built by scaling a subset of the ImageNet Challenge dataset by different multiples. The goal of our work is to study the effect of scaled images on the performance of advanced DNNs. We run experiments with several state-of-the-art deep neural network architectures on the proposed ImageNet-CS, and the results show a significant positive correlation between scaling factor and accuracy degradation. In addition, based on the ResNet50 architecture, we present tests of recently proposed robust training techniques and strategies, such as AugMix, on ImageNet-CS. The experimental results show that these robust training techniques can improve the robustness of networks to scaling transformations.
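A minimal version of the described evaluation can be scripted as below: enlarge images by different multiples (zoom into the center) and measure how the top-1 accuracy of a pretrained ResNet50 drops as the scaling factor grows. The dataset path and the concrete zoom implementation are placeholders; the paper's exact construction of ImageNet-CS may differ.

```python
# Sketch: accuracy of a pretrained ResNet50 under increasing image scaling.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval().to(device)

def scaled_loader(scale):
    # Zooming in by `scale` == resizing to scale*256 then center-cropping 224.
    tf = transforms.Compose([
        transforms.Resize(int(256 * scale)),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    ds = datasets.ImageFolder("imagenet_subset/val", transform=tf)  # placeholder path
    return DataLoader(ds, batch_size=64, num_workers=4)

@torch.no_grad()
def top1(loader):
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

for scale in (1.0, 1.5, 2.0, 3.0, 4.0):
    print(f"scale x{scale}: top-1 = {top1(scaled_loader(scale)):.3f}")
```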
Although deep neural networks (DNNs) are known to be fragile, the effect of zooming images in and out in the physical world on DNN performance has not been studied. In this paper, we demonstrate a novel physical adversarial attack technique, called Adversarial Zoom Lens (AdvZL), which uses a zoom lens to zoom in on physical-world scenes and fool DNNs without changing the characteristics of the target object. To date, the proposed method is the only adversarial attack technique that attacks DNNs without adding any physical adversarial perturbation. In the digital environment, we construct an AdvZL-based dataset to verify the antagonism of equal-ratio enlarged images to DNNs. In the physical environment, we manipulate a zoom lens to zoom in on target objects and generate adversarial samples. The experimental results demonstrate the effectiveness of AdvZL in both digital and physical environments. We further analyze the antagonism of the proposed dataset to improved DNNs. On the other hand, we provide defense guidance against AdvZL through adversarial training. Finally, we discuss the threat the proposed approach may pose to future autonomous driving, as well as ideas for variant attacks similar to the proposed one.
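A toy digital analogue of the zoom-lens idea is sketched below: progressively crop and resize the center of a single image and report the first zoom factor at which a pretrained classifier changes its prediction, without adding any pixel-level perturbation. The image path and zoom schedule are assumptions.

```python
# Sketch: prediction changes induced purely by center zooming.
import torch
from torchvision import models, transforms
from torchvision.transforms import functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval().to(device)
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

img = Image.open("example.jpg").convert("RGB")  # placeholder image
base = TF.center_crop(TF.resize(img, 256), 224)

@torch.no_grad()
def predict(pil_img):
    x = normalize(TF.to_tensor(pil_img)).unsqueeze(0).to(device)
    return model(x).argmax(1).item()

clean = predict(base)
for zoom in (1.2, 1.5, 2.0, 2.5, 3.0, 4.0):
    crop = TF.center_crop(base, int(224 / zoom))   # keep only the center 1/zoom region
    zoomed = TF.resize(crop, 224)
    pred = predict(zoomed)
    if pred != clean:
        print(f"prediction changes at zoom x{zoom}: {clean} -> {pred}")
        break
```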
This work addresses a central machine learning problem: performance degradation on out-of-distribution (OOD) test sets. The problem is particularly prominent in medical-imaging-based diagnosis systems, which can appear accurate yet fail when tested in new hospitals or on new datasets. Recent studies indicate that such systems may learn shortcut and non-relevant features rather than generalizable features, the so-called "good features". We hypothesize that adversarial training can eliminate shortcut features, while saliency-guided training can filter out non-relevant features; both are nuisance features that cause performance degradation on OOD test sets. We therefore formulate a novel training scheme for deep neural networks to learn good features for classification and/or detection tasks and thus ensure generalization performance on OOD test sets. The experimental results qualitatively and quantitatively demonstrate the superior performance of our method on classification tasks using benchmark CXR image datasets.
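The adversarial-training ingredient mentioned above can be sketched as a standard FGSM-style training loop; the saliency-guided part is omitted, and the model, data loader, and epsilon are placeholders rather than the paper's settings (inputs are assumed to lie in [0, 1]).

```python
# Sketch of one epoch of FGSM-based adversarial training (illustrative only).
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=2/255, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)

        # FGSM: one signed-gradient step on the input to craft adversarial examples.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_adv = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss_adv, x_adv)[0]
        x_adv = (x_adv + epsilon * grad.sign()).detach().clamp(0, 1)  # assumes inputs in [0, 1]

        # Train on a mix of clean and adversarial batches.
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```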
Image retrieval has become an increasingly attractive technique with broad application prospects in multimedia, and deep hashing is its main branch toward low storage and efficient retrieval. In this paper, we conduct an in-depth study of metric learning for deep hashing to build a powerful metric space in multi-label scenarios, where pairwise losses suffer from high computational overhead and convergence difficulty, while proxy losses are theoretically incapable of expressing deep label dependencies and exhibit conflicts in the constructed hypersphere space. To address these problems, we propose a novel metric learning framework with a hybrid proxy loss (HyP$^2$ loss), which constructs an expressive metric space with efficient training complexity w.r.t. the entire dataset. The proposed HyP$^2$ loss focuses on optimizing the hypersphere space through learnable proxies and mining data-to-data correlations among irrelevant pairs, which integrates the sufficient data correspondence of pair-based methods with the high efficiency of proxy-based methods. Extensive experiments on four standard multi-label benchmarks demonstrate that the proposed method outperforms the state-of-the-art, is robust across different hash bits, and achieves significant performance gains with faster and more stable convergence. Our code is available at https://github.com/jerryxu0129/hyp2-loss.
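The following is a simplified, self-contained sketch of a hybrid "proxy + irrelevant-pair" loss in the spirit of HyP$^2$ loss: learnable class proxies attract the samples of their classes, while an extra pairwise term pushes apart samples that share no label. It illustrates the idea only; the paper's exact formulation differs, and the margin value is an assumption.

```python
# Simplified hybrid proxy loss for multi-label hashing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridProxyLoss(nn.Module):
    def __init__(self, num_classes, hash_bits, margin=0.2):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, hash_bits))
        self.margin = margin

    def forward(self, codes, labels):
        # codes:  (B, hash_bits) real-valued hash outputs (e.g., tanh activations)
        # labels: (B, num_classes) multi-hot label matrix
        labels = labels.float()
        codes = F.normalize(codes, dim=1)
        proxies = F.normalize(self.proxies, dim=1)

        # Proxy term: pull each sample toward the proxies of its positive labels.
        sim_to_proxies = codes @ proxies.t()                      # (B, C)
        proxy_loss = ((1 - sim_to_proxies) * labels).sum() / labels.sum().clamp(min=1)

        # Pair term: push apart sample pairs that share no label (irrelevant pairs).
        share = (labels @ labels.t()) > 0                         # (B, B) True if any common label
        sim = codes @ codes.t()
        irrelevant = (~share).float().triu(diagonal=1)
        pair_loss = (F.relu(sim - self.margin) * irrelevant).sum() / irrelevant.sum().clamp(min=1)

        return proxy_loss + pair_loss
```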
Deep hashing has shown promising performance in large-scale image retrieval. However, the latent codes extracted by a \textbf{D}eep \textbf{N}eural \textbf{N}etwork (DNN) inevitably lose semantic information during binarization, which damages retrieval efficiency and makes retrieval challenging. Although many existing methods apply regularization to alleviate quantization errors, we identify an incompatible conflict between the metric loss and the quantization loss. The metric loss penalizes inter-class distances to push different classes unconstrainedly far apart; worse still, it tends to map latent codes away from the ideal binarization points and causes severe ambiguity in the binarization process. Based on the minimum distance of binary linear codes, a \textbf{H}ashing-guided \textbf{H}inge \textbf{F}unction (HHF) is proposed to avoid this conflict. In detail, we carefully design a specific inflection point, which depends on the hash length and the number of categories, to balance metric learning and quantization learning. Such a modification prevents the network from falling into local metric optima in deep hashing. Extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and MS-COCO show that HHF consistently outperforms existing techniques and is robust and flexible to transplant into other methods.
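A schematic version of such a hinge-style metric term is given below: inter-class pairs are penalized only while their similarity stays above an inflection point, instead of being pushed arbitrarily far apart. The threshold function is a stand-in heuristic; the paper derives its inflection point from the minimum distance of binary linear codes given the hash length and category number.

```python
# Schematic hinge-style metric term with an inflection point (illustrative only).
import math
import torch
import torch.nn.functional as F

def inflection_point(hash_bits, num_classes):
    # Placeholder heuristic, NOT the paper's formula: tolerate a higher
    # similarity threshold when many classes must share few hash bits.
    return max(0.0, 1.0 - 2.0 * math.log2(num_classes) / hash_bits)

def hinge_metric_loss(codes, labels, hash_bits, num_classes):
    # codes: (B, hash_bits) real-valued outputs; labels: (B,) class indices.
    codes = F.normalize(codes, dim=1)
    sim = codes @ codes.t()
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=codes.device)
    t = inflection_point(hash_bits, num_classes)

    intra_mask = (same & ~eye).float()
    inter_mask = (~same).float()
    intra = ((1 - sim) * intra_mask).sum() / intra_mask.sum().clamp(min=1)   # pull same-class codes together
    inter = (F.relu(sim - t) * inter_mask).sum() / inter_mask.sum().clamp(min=1)  # stop pushing once sim <= t
    return intra + inter
```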
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over the scratch MIM pre-training on ImageNet-1K classification, using all the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way for developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
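A minimal sketch of the "distilling token relations" idea is shown below: instead of matching features directly, the token-to-token similarity maps of a teacher layer and the student are matched with a softened KL term. The temperature and layer choice are illustrative assumptions, not TinyMIM's exact recipe. Because relations are computed within each network, the teacher and student may have different widths without any projection.

```python
# Sketch of token-relation distillation between teacher and student ViTs.
import torch
import torch.nn.functional as F

def token_relation(tokens, tau=0.1):
    # tokens: (B, N, D) patch tokens; returns row-wise log-probabilities of token-token relations.
    t = F.normalize(tokens, dim=-1)
    return F.log_softmax(t @ t.transpose(1, 2) / tau, dim=-1)   # (B, N, N)

def relation_distill_loss(student_tokens, teacher_tokens, tau=0.1):
    with torch.no_grad():
        target = token_relation(teacher_tokens, tau).exp()       # teacher relation probabilities
    return F.kl_div(token_relation(student_tokens, tau), target, reduction="batchmean")

# Example: a ViT-Tiny-width student (192) distilled from a ViT-Base-width teacher (768).
loss = relation_distill_loss(torch.randn(2, 196, 192), torch.randn(2, 196, 768))
print(loss.item())
```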
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
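As a rough sketch of how a style code could adjust feed-forward weights, the layer below predicts a per-channel scale from the style code and applies it to the hidden activations of the feed-forward block (FiLM-style modulation). The dimensions and the modulation form are illustrative assumptions and may differ from StyleTalk's actual style-aware adaptive transformer.

```python
# Sketch of a style-modulated feed-forward layer (illustrative only).
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, d_style=128):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        # Map the style code to a per-channel scale for the hidden activations.
        self.to_scale = nn.Linear(d_style, d_ff)
        self.act = nn.GELU()

    def forward(self, x, style_code):
        # x: (B, T, d_model) content features; style_code: (B, d_style)
        scale = 1.0 + torch.tanh(self.to_scale(style_code)).unsqueeze(1)  # (B, 1, d_ff)
        h = self.act(self.fc1(x)) * scale          # style code adjusts the FFN's effective weights
        return self.fc2(h)

# Example usage with random tensors:
ffn = StyleAwareFeedForward()
out = ffn(torch.randn(2, 50, 256), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 50, 256])
```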