Adversarial machine learning has been both a major concern and a hot topic recently, especially with the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are often likened to a cat-and-mouse game in which defenders and attackers evolve over time. On one hand, the goal is to develop strong and robust deep networks that are resistant to malicious actors. On the other hand, achieving this requires devising ever-stronger adversarial attacks to challenge these defense models. Most existing attacks employ a single $\ell_p$ distance (commonly, $p\in\{1,2,\infty\}$) to define the concept of closeness and perform steepest gradient ascent w.r.t. this $p$-norm to update all pixels of an adversarial example in the same way. Each of these $\ell_p$ attacks has its own pros and cons, and no single attack can break through defense models that are robust against multiple $\ell_p$ norms simultaneously. Motivated by these observations, we propose a natural approach: combining various $\ell_p$ gradient projections at the pixel level to achieve a joint adversarial perturbation. Specifically, we learn how to perturb each pixel to maximize attack performance while maintaining the overall visual imperceptibility of the adversarial examples. Finally, through various experiments on standardized benchmarks, we show that our method outperforms most current strong attacks across state-of-the-art defense mechanisms, while remaining visually clean.
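The per-pixel combination of $\ell_p$ steepest-ascent directions described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the per-pixel mixing weights `alpha` are hypothetical stand-ins for the learned quantities, and a linear surrogate loss stands in for a real network.

```python
import numpy as np

def linf_step(grad):
    # Steepest ascent direction under the l_inf norm: the sign of the gradient.
    return np.sign(grad)

def l2_step(grad, eps=1e-12):
    # Steepest ascent direction under the l_2 norm: the normalized gradient.
    return grad / (np.linalg.norm(grad) + eps)

def combined_step(grad, alpha):
    # Per-pixel convex combination of the two projected directions;
    # alpha has the same shape as grad, with entries in [0, 1].
    return alpha * linf_step(grad) + (1.0 - alpha) * l2_step(grad)

# Toy example: loss(x) = w @ x, so the gradient w.r.t. x is simply w.
rng = np.random.default_rng(0)
x = rng.random(16)            # "image" of 16 pixels in [0, 1]
w = rng.normal(size=16)       # gradient of a linear surrogate loss
alpha = rng.random(16)        # hypothetical learned per-pixel mixing weights
delta = 0.03 * combined_step(w, alpha)    # one attack step of size 0.03
x_adv = np.clip(x + delta, 0.0, 1.0)      # keep pixels in the valid range
```

Because each pixel mixes the two directions independently, regions where `alpha` is near 1 move like an $\ell_\infty$ attack while the rest follow the $\ell_2$ direction.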
Pareto Front Learning (PFL) was recently introduced as an effective approach to obtain a mapping function from a given trade-off vector to a solution on the Pareto front, thereby solving the multi-objective optimization (MOO) problem. Due to the inherent trade-offs between conflicting objectives, PFL offers a flexible approach in many scenarios where the decision makers cannot specify a preference for one Pareto solution over another and must switch between them depending on the situation. However, existing PFL methods ignore the relationships between solutions during the optimization process, which hinders the quality of the obtained front. To overcome this issue, we propose a novel PFL framework, named \ourmodel, which employs a hypernetwork to generate multiple solutions from a set of diverse trade-off preferences and enhances the quality of the Pareto front by maximizing the hypervolume indicator defined by these solutions. Experimental results on several MOO machine learning tasks show that the proposed framework significantly outperforms the baselines in producing the trade-off Pareto front.
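For readers unfamiliar with the hypervolume indicator that the framework maximizes, the two-objective (minimization) case can be computed with a simple sweep. This is a standalone sketch of the indicator itself, not of the paper's hypernetwork training loop.

```python
import numpy as np

def hypervolume_2d(points, ref):
    # Area dominated by a 2-objective minimization front, measured against a
    # reference point that every solution dominates.
    pts = sorted(points)                      # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Three solutions generated from three trade-off preferences; a better-spread
# front dominates a larger area, so maximizing hv pushes solutions apart and
# toward the true Pareto front.
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(4.0, 4.0))    # → 6.0
```

In the framework above, this scalar (computed in a differentiable form) serves as the training signal tying the hypernetwork's multiple outputs together.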
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradations and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides, for free, an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications, such as image degradation synthesis, transfer, and interpolation.
The Model Reconciliation Problem (MRP) was introduced to address issues in explainable AI planning. A solution to an MRP is an explanation of the differences between the models of the human and the planning agent (robot). Most approaches to solving MRPs assume that the robot, which needs to provide the explanation, knows the human model. This assumption is not always realistic in several situations (e.g., the human may decide to update her model without the robot being aware of the update). In this paper, we propose a dialog-based approach for computing explanations of MRPs under the following settings: (i) the robot does not know the human model; (ii) the human and the robot share the predicates of the planning domain, and their exchanges are about action descriptions and fluent values; (iii) communication between the parties is perfect; and (iv) the parties are truthful. A solution to the MRP is computed through a dialog, defined as a sequence of rounds of exchanges between the robot and the human. In each round, the robot sends a potential explanation, called a proposal, to the human, who replies with her evaluation of the proposal, called a response. We develop algorithms for computing the robot's proposals and the human's responses, and implement these algorithms in a system that combines imperative means with answer set programming using the multi-shot feature of clingo.
Current 3D segmentation methods rely heavily on large-scale point-wise annotated datasets, which are notoriously laborious to label. Few attempts have been made to circumvent the need for per-point annotations. In this work, we study weakly supervised 3D semantic instance segmentation. The key idea is to leverage 3D bounding box labels, which are easier and faster to annotate. Indeed, we show that it is possible to train dense segmentation models using only bounding box labels. At the core of our method, Box2Mask, lies a deep model, inspired by classical Hough voting, that directly votes for bounding box parameters, together with a clustering method specifically tailored to bounding box votes. This goes beyond the commonly used center votes, which do not fully exploit the bounding box annotations. On the ScanNet test benchmark, our weakly supervised model attains leading performance among weakly supervised approaches (+18 mAP@50). Remarkably, it also achieves 97% of the mAP@50 score of current fully supervised models. To further illustrate the practicality of our work, we train Box2Mask on the recently released ARKitScenes dataset, which is annotated with 3D bounding boxes only, and show, for the first time, compelling 3D instance segmentation masks.
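The voting-and-clustering idea above can be sketched in a few lines. This is a hypothetical toy, assuming each 3D point emits a 6-dimensional bounding-box vote (center xyz + size xyz) and using a simple greedy merge by predicted center; the paper's actual deep voting model and tailored clustering are more involved.

```python
import numpy as np

def cluster_box_votes(votes, merge_dist=0.5):
    # Greedy clustering of per-point bounding-box votes: votes whose predicted
    # centers lie within merge_dist of a running cluster center are merged into
    # that instance; each cluster keeps a running (sum_of_votes, count) pair.
    instances = []
    for v in votes:
        for inst in instances:
            center = inst[0][:3] / inst[1]
            if np.linalg.norm(v[:3] - center) < merge_dist:
                inst[0] += v
                inst[1] += 1
                break
        else:
            instances.append([v.copy(), 1])
    # One averaged box (center + size) per discovered instance.
    return [inst[0] / inst[1] for inst in instances]

# Four points voting for two objects: two votes near the origin, two near (5,5,5).
votes = np.array([[0.0, 0, 0, 1, 1, 1],
                  [0.1, 0, 0, 1, 1, 1],
                  [5.0, 5, 5, 2, 2, 2],
                  [5.1, 5, 5, 2, 2, 2]])
boxes = cluster_box_votes(votes)    # two instances recovered
```

Clustering full box parameters rather than centers alone is what lets box size disambiguate votes from nearby objects.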
Thanks to the exploration and exploitation of the GAN latent space, real-world image manipulation has achieved remarkable progress in recent years. GAN inversion is the first step in this pipeline, aiming to map a real image faithfully to a latent code. Unfortunately, most existing GAN inversion methods fail to meet at least one of three requirements: high reconstruction quality, editability, and fast inference. In this study, we present a novel two-phase strategy that fits all requirements simultaneously. In the first phase, we train an encoder to map the input image into the StyleGAN2 $\mathcal{W}$-space, which was proven to have excellent editability but lower reconstruction quality. In the second phase, we supplement the reconstruction ability of the initial phase by leveraging a series of hypernetworks to recover the information missing during inversion. These two steps complement each other: the hypernetwork branch yields high reconstruction quality, while the inversion in the $\mathcal{W}$-space provides excellent editability. Our method is entirely encoder-based, resulting in extremely fast inference. Extensive experiments on two challenging datasets demonstrate the superiority of our method.
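The two-phase idea can be illustrated with a deliberately tiny linear stand-in for the generator. Everything here is an assumption for illustration: the "encoder" is a pseudo-inverse, and the "hypernetwork" is a closed-form rank-1 weight correction, whereas the actual method uses learned networks and StyleGAN2.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 4                        # toy image and latent dimensions
G = rng.normal(size=(D, K))        # stand-in "generator": image = G @ w
x = rng.normal(size=D)             # real image to invert

# Phase 1: map x to a latent code in the editable space (here a least-squares
# pseudo-inverse stands in for the trained encoder).
w_hat = np.linalg.pinv(G) @ x
x_rec = G @ w_hat                  # editable but imperfect reconstruction

# Phase 2: predict per-weight residuals dG that restore the information lost
# in phase 1 (here a rank-1 correction fitted in closed form stands in for
# the hypernetwork's prediction).
resid = x - x_rec
dG = np.outer(resid, w_hat) / (w_hat @ w_hat)
x_final = (G + dG) @ w_hat         # near-exact reconstruction, same latent code
```

The key property the sketch preserves: the latent code `w_hat` (and hence editability) is untouched, and only the generator weights are adjusted per-image to close the reconstruction gap.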
Face recognition is a very important field in pattern recognition. It has multiple applications in the military and in finance, to name a few. In this paper, a combination of the sparse PCA method with the nearest-neighbor method (and with the kernel ridge regression method) is proposed and applied to solve the face recognition problem. Experimental results show that the accuracy of the combination of the sparse PCA method (using the proximal gradient method or the FISTA method) with one specific classification system may be lower than the accuracy of the combination of the PCA method with that classification system, but sometimes the combination of the sparse PCA method (using the proximal gradient method or the FISTA method) with one specific classification system leads to better accuracy. Moreover, we observe that computing the sparse PCA algorithm with the FISTA method is always faster than computing it with the proximal gradient method.
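The speed difference mentioned above comes from FISTA adding Nesterov momentum to the proximal gradient iteration, improving the convergence rate from $O(1/k)$ to $O(1/k^2)$. A minimal sketch on an $\ell_1$-regularized least-squares problem (a stand-in for the sparse PCA subproblem, which uses the same soft-thresholding proximal operator):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 penalty: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, steps=200):
    # FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a proximal gradient
    # step taken from an extrapolated point y (Nesterov momentum).
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# Recover a sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]
x_hat = fista(A, A @ x_true, lam=0.1)
```

Dropping the momentum (setting `y = x_new` each iteration) recovers the plain proximal gradient method, which reaches the same solution in more iterations.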
This paper presents a novel approach combining the BERT embedding method and a graph convolutional neural network. This combination is employed to solve the text classification problem. Initially, we apply the BERT embedding method to the texts (in the BBC News dataset and the IMDB movie review dataset) in order to transform all texts into numerical vectors. Then, a graph convolutional neural network is applied to these numerical vectors to classify the texts into their appropriate classes/labels. Experiments show that the performance of the graph convolutional neural network model is superior to the performance of combinations of the BERT embedding method with classical machine learning models.
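The second stage of the pipeline can be sketched as a standard GCN forward pass over document features. The embeddings and the document graph below are random stand-ins (the paper's embeddings come from BERT, and the graph construction is not specified here); the sketch only shows how normalized neighbor aggregation turns per-document vectors into class scores.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution layer: add self-loops, symmetrically normalize the
    # adjacency, aggregate neighbor features, then apply a linear map + ReLU.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 5))     # 4 documents with 5-dim "BERT" embeddings
A = np.array([[0, 1, 1, 0],     # document graph (e.g. built from similarity)
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W1 = rng.normal(size=(5, 8))    # untrained layer weights, for shape only
W2 = rng.normal(size=(8, 2))
scores = gcn_layer(A, gcn_layer(A, H, W1), W2)   # (4, 2): one row per document
```

In training, `scores` would feed a softmax cross-entropy loss over the two classes, and `W1`, `W2` would be learned.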
With many practical applications in human life, including surveillance cameras in manufacturing and the analysis and processing of customer behavior, many researchers have paid attention to face detection and head pose estimation on digital images. A large number of proposed deep learning models achieve state-of-the-art accuracy, such as YOLO, SSD, and MTCNN for the face detection problem, and HopeNet and FSA-Net for the head pose estimation problem. In many state-of-the-art methods, the pipeline for this task consists of two parts, from face detection to head pose estimation. These two steps are completely independent and do not share information, which keeps the models simple to set up but fails to exploit most of the feature resources extracted by each model. In this paper, we propose the Multitask-Net model, motivated by leveraging the features extracted by the face detection model and sharing them with the head pose estimation branch to improve accuracy. In addition, since the Euler-angle domain representing faces in diverse data is large, our model can predict results over the full 360-degree Euler-angle domain. Applying the multitask learning method, the Multitask-Net model can simultaneously predict the position and orientation of the human head. To improve the model's ability to predict head orientation, we convert the face representation from Euler angles to vectors of a rotation matrix.
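The Euler-angle-to-rotation-matrix conversion mentioned at the end is a standard construction; a minimal sketch follows. The composition order `Rz @ Ry @ Rx` is an assumption for illustration, since axis conventions differ between head-pose datasets.

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    # Rotation matrix from (yaw, pitch, roll) in radians, composed here as
    # R = Rz(roll) @ Ry(yaw) @ Rx(pitch); the nine entries of R are the
    # regression target instead of the three raw angles.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R = euler_to_rotation(np.pi / 6, 0.1, -0.2)
```

A rotation matrix is orthogonal with determinant 1, and unlike raw Euler angles it has no wrap-around discontinuity at ±180°, which is what makes it a better-behaved target for full 360-degree prediction.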
Head pose estimation is a challenging task that aims to predict three-dimensional orientation vectors, serving many applications in human-robot interaction and customer behavior analysis. Previous studies have proposed precise methods for collecting head pose data, but these methods require expensive devices, such as depth cameras, or complex laboratory setups. In this study, we introduce a new, cost-effective, and easy-to-set-up approach to collecting head pose images, namely the UET-Headpose dataset, which contains top-view head pose data. The method uses an absolute orientation sensor instead of a depth camera, enabling a quick setup while still ensuring good results. Through experiments, our dataset has been shown to differ in distribution from available datasets such as the CMU Panoptic dataset \cite{CMU}. Besides using the UET-Headpose dataset and other head pose datasets, we also introduce a full-range model based on FSANet, which significantly improves head pose estimation results on the UET-Headpose dataset, especially on top-view images. Furthermore, this model is very lightweight and takes small-size images as input.