Adversarial machine learning has been both a major concern and a hot topic recently, especially with the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over time. On one hand, the goal is to develop strong and robust deep networks that are resistant to malicious actors. On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models. Most existing attacks employ a single $\ell_p$ distance (commonly, $p\in\{1,2,\infty\}$) to define the concept of closeness and perform steepest gradient ascent w.r.t. this $p$-norm to update all pixels in an adversarial example in the same way. Each of these $\ell_p$ attacks has its own pros and cons, and there is no single attack that can successfully break through defense models that are robust against multiple $\ell_p$ norms simultaneously. Motivated by these observations, we propose a natural approach: combining various $\ell_p$ gradient projections at the pixel level to achieve a joint adversarial perturbation. Specifically, we learn how to perturb each pixel to maximize attack performance, while maintaining the overall visual imperceptibility of the adversarial examples. Finally, through various experiments on standardized benchmarks, we show that our method outperforms most current strong attacks across state-of-the-art defense mechanisms, while its adversarial examples remain visually clean.
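A minimal sketch of the core idea, assuming a simple per-pixel mixing weight between the $\ell_\infty$ and $\ell_2$ update directions; the paper's exact per-pixel scheme and loss are not reproduced here.

```python
import torch
import torch.nn.functional as F

def combined_lp_attack_step(model, x_adv, y, alpha=2 / 255, weight=None):
    """One attack step that mixes l_inf and l_2 gradient projections per pixel.

    weight: per-pixel mixing coefficients in [0, 1], shaped like x_adv; here an
    illustrative assumption standing in for the learned per-pixel scheme.
    """
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]

    # l_inf-style update: uniform sign step for every pixel.
    step_inf = grad.sign()
    # l_2-style update: gradient normalized per image, rescaled so its overall
    # magnitude is comparable to the sign step.
    g_norm = grad.flatten(1).norm(p=2, dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    step_l2 = grad / g_norm * grad[0].numel() ** 0.5

    if weight is None:
        weight = torch.full_like(x_adv, 0.5)
    step = weight * step_inf + (1.0 - weight) * step_l2

    return (x_adv + alpha * step).clamp(0.0, 1.0).detach()
```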
Pareto Front Learning (PFL) was recently introduced as an effective approach to obtain a mapping function from a given trade-off vector to a solution on the Pareto front, which solves the multi-objective optimization (MOO) problem. Due to the inherent trade-off between conflicting objectives, PFL offers a flexible approach in many scenarios in which the decision makers cannot specify the preference of one Pareto solution over another and must switch between them depending on the situation. However, existing PFL methods ignore the relationship between the solutions during the optimization process, which hinders the quality of the obtained front. To overcome this issue, we propose a novel PFL framework, named \ourmodel, which employs a hypernetwork to generate multiple solutions from a set of diverse trade-off preferences and enhances the quality of the Pareto front by maximizing the Hypervolume indicator defined by these solutions. Experimental results on several MOO machine learning tasks show that the proposed framework significantly outperforms the baselines in producing the trade-off Pareto front.
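A minimal sketch of the setup, assuming a toy two-objective problem, a tiny hypernetwork, and a hypervolume computed w.r.t. a fixed reference point; the names (HyperNet, hypervolume_2d) and the toy objectives are illustrative, not the paper's architecture or training scheme.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a trade-off (preference) vector to the parameters of a tiny target model."""
    def __init__(self, n_obj=2, target_dim=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_obj, 64), nn.ReLU(), nn.Linear(64, target_dim))

    def forward(self, pref):
        return self.body(pref)  # one solution (here: a parameter vector) per preference

def hypervolume_2d(losses, ref=(2.0, 2.0)):
    """Hypervolume dominated by a set of 2-D loss vectors w.r.t. a reference point.

    losses: (K, 2) tensor, smaller is better. Simple sweep over points sorted by
    the first objective -- illustrative, not an efficient HV algorithm.
    """
    order = torch.argsort(losses[:, 0])
    pts = losses[order]
    hv, prev_y = losses.new_tensor(0.0), ref[1]
    for x, y in pts:
        if y < prev_y:  # this point extends the dominated region
            hv = hv + (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# One training step: sample diverse preferences, generate solutions, maximize HV.
hnet = HyperNet()
opt = torch.optim.Adam(hnet.parameters(), lr=1e-3)
prefs = torch.distributions.Dirichlet(torch.ones(2)).sample((8,))  # 8 trade-off vectors
params = hnet(prefs)
l1 = (params - 1.0).pow(2).mean(dim=1)  # toy conflicting objectives (placeholders)
l2 = (params + 1.0).pow(2).mean(dim=1)
loss = -hypervolume_2d(torch.stack([l1, l2], dim=1), ref=(4.0, 4.0))
opt.zero_grad(); loss.backward(); opt.step()
```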
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradations and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides, for free, an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
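A toy sketch of the quality-control idea only: a synthesis head whose output is steered by a quality code, with a zero code yielding the sharp image. The layer names and shapes are hypothetical assumptions and do not reflect the actual QC-StyleGAN architecture.

```python
import torch
import torch.nn as nn

class QualityControlledSynthesis(nn.Module):
    """Hypothetical synthesis head conditioned on a quality-control code q."""
    def __init__(self, feat_dim=64, q_dim=8):
        super().__init__()
        self.to_rgb = nn.Conv2d(feat_dim, 3, 1)
        self.degrade = nn.Sequential(            # degradation branch driven by q
            nn.Conv2d(feat_dim + q_dim, feat_dim, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(feat_dim, 3, 1),
        )

    def forward(self, feats, q):
        clean = self.to_rgb(feats)
        q_map = q[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        gate = q.abs().mean(1)[:, None, None, None]
        # q == 0 gives the sharp image; nonzero codes add a learned degradation residual
        return clean + self.degrade(torch.cat([feats, q_map], dim=1)) * gate

syn = QualityControlledSynthesis()
feats = torch.randn(2, 64, 32, 32)
sharp = syn(feats, torch.zeros(2, 8))     # quality code = 0 -> no degradation added
degraded = syn(feats, torch.randn(2, 8))  # random code -> degraded output
```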
Diffusion models are rising as a powerful solution for high-fidelity image generation, exceeding GANs in quality in many circumstances. However, their slow training and inference speed is a huge bottleneck, blocking them from being used in real-time applications. A recent DiffusionGAN method significantly decreases the models' running time by reducing the number of sampling steps from thousands to several, but its speed still lags far behind that of GAN counterparts. This paper aims to reduce the speed gap by proposing a novel wavelet-based diffusion structure. We extract low- and high-frequency components from both image and feature levels via wavelet decomposition and adaptively handle these components for faster processing while maintaining good generation quality. Furthermore, we propose to use a reconstruction term, which effectively boosts model training convergence. Experimental results on the CelebA-HQ, CIFAR-10, LSUN-Church, and STL-10 datasets show that our solution is a stepping stone toward real-time and high-fidelity diffusion models. Our code and pre-trained checkpoints will be available at \url{https://github.com/VinAIResearch/WaveDiff.git}.
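A minimal sketch of the wavelet step the abstract describes, using a single-level 2D Haar transform to split an image into one low-frequency and three high-frequency sub-bands; the wavelet choice and the way these sub-bands are fed to the diffusion network are assumptions, not the paper's exact pipeline.

```python
import torch

def haar_dwt2d(x):
    """Single-level 2D Haar wavelet transform.

    x: (B, C, H, W) with even H and W. Returns the low-frequency LL sub-band and
    the high-frequency (LH, HL, HH) sub-bands, each of shape (B, C, H/2, W/2).
    """
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # detail: difference between rows
    hl = (a - b + c - d) / 2  # detail: difference between columns
    hh = (a - b - c + d) / 2  # detail: diagonal differences
    return ll, (lh, hl, hh)

def haar_idwt2d(ll, highs):
    """Inverse of haar_dwt2d, reconstructing the original image."""
    lh, hl, hh = highs
    B, C, H, W = ll.shape
    out = torch.zeros(B, C, 2 * H, 2 * W, dtype=ll.dtype, device=ll.device)
    out[:, :, 0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[:, :, 0::2, 1::2] = (ll + lh - hl - hh) / 2
    out[:, :, 1::2, 0::2] = (ll - lh + hl - hh) / 2
    out[:, :, 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

img = torch.randn(1, 3, 64, 64)
ll, highs = haar_dwt2d(img)
assert torch.allclose(haar_idwt2d(ll, highs), img, atol=1e-5)
```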
Despite significant progress over the past few years, ambiguity remains a key challenge in facial expression recognition (FER). It can lead to noisy and inconsistent annotations, which hinder the real-world performance of deep learning models. In this paper, we propose a new uncertainty-aware label distribution learning method to improve the robustness of deep models against uncertainty and ambiguity. We leverage neighborhood information in the valence-arousal space to adaptively construct emotion distributions for training samples. We also take into account the uncertainty of the provided labels when incorporating them into the label distributions. Our method can be easily integrated into deep networks to obtain more training supervision and improve recognition accuracy. Intensive experiments on several datasets under various noisy and ambiguous settings show that our method achieves competitive results and outperforms recent state-of-the-art methods. Our code and models are available at https://github.com/minhnhatvt/label-distribution-learning-fer-tf.
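A minimal sketch of the label-distribution idea described above: build a soft emotion distribution for each sample from its neighbors in an embedding space and blend it with the provided one-hot label according to how much that label is trusted. The distance space, weighting scheme, and blending rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_label_distribution(embeddings, labels, confidence, n_classes, k=5, tau=1.0):
    """Construct soft label distributions from neighborhood information.

    embeddings: (N, D) per-sample features (e.g., valence/arousal coordinates)
    labels    : (N,) provided class indices
    confidence: (N,) in [0, 1]; 1 means the provided label is fully trusted
    """
    N = embeddings.shape[0]
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    soft = np.zeros((N, n_classes))
    for i in range(N):
        nbrs = np.argsort(dists[i])[:k]
        w = np.exp(-dists[i, nbrs] / tau)            # closer neighbors contribute more
        for j, wj in zip(nbrs, w):
            soft[i, labels[j]] += wj
        soft[i] /= soft[i].sum()
        onehot = np.eye(n_classes)[labels[i]]
        # blend the neighborhood distribution with the provided label by its confidence
        soft[i] = confidence[i] * onehot + (1 - confidence[i]) * soft[i]
    return soft

emb = np.random.randn(100, 2)               # e.g., 2-D valence/arousal coordinates
lbl = np.random.randint(0, 7, size=100)     # 7 basic expressions
conf = np.random.uniform(0.3, 1.0, size=100)
dist = build_label_distribution(emb, lbl, conf, n_classes=7)
```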
Physics-constrained machine learning is becoming an important topic in the field of machine learning for physics. One of the most important advantages of incorporating physics constraints into machine learning methods is that the resulting models require less training data. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. Gaussian processes (GPs) are perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three different material datasets, where one experimental and two computational datasets are used. The monotonic GP is compared against the regular GP, and a significant reduction in posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonicity effect starts to disappear as one moves farther beyond the training dataset. Imposing monotonicity on the GP comes at only a small cost in accuracy compared to the regular GP. The monotonic GP may be most useful in applications where data is scarce and noisy, and where monotonicity is supported by strong physical evidence.
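For reference, one standard way to encode monotonicity in a GP (via virtual derivative observations with a probit link; not necessarily the exact construction used in this work) encourages a non-decreasing trend at virtual locations $x_j$ through
$$ p\!\left(m_j = 1 \,\middle|\, \frac{\partial f}{\partial x}(x_j)\right) = \Phi\!\left(\frac{1}{\nu}\,\frac{\partial f}{\partial x}(x_j)\right), \qquad \Phi(z) = \int_{-\infty}^{z} \mathcal{N}(t \mid 0, 1)\,dt, $$
where $m_j = 1$ encodes the monotonicity constraint at $x_j$, $\nu$ controls how strictly it is enforced, and the derivative $\partial f/\partial x$ has a joint Gaussian prior with $f$ because differentiation is a linear operator on the GP.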
Cross-silo federated learning leverages a few hundred reliable data silos with high-speed access links to jointly train a model. Although this approach has become a popular setting in federated learning, designing a robust topology to reduce training time remains an open problem. In this paper, we propose a new multigraph topology for cross-silo federated learning. We first construct the multigraph using an overlay graph. We then parse this multigraph into different simple graphs with isolated nodes. The existence of isolated nodes allows us to perform model aggregation without waiting for other nodes, thereby reducing training time. We further propose a new distributed learning algorithm to use together with our multigraph topology. Intensive experiments on public datasets show that our proposed method significantly reduces training time compared with recent state-of-the-art topologies, while ensuring convergence and maintaining model accuracy.
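A minimal sketch of the topology idea, assuming an overlay graph whose edges carry multiplicities: the multigraph is split into per-round simple graphs, and nodes that end up isolated in a round can aggregate immediately without waiting for peers. The multiplicity assignment and splitting rule here are illustrative assumptions, not the paper's algorithm.

```python
import random

def build_multigraph(overlay_edges, max_multiplicity=3, seed=0):
    """Assign each overlay edge a multiplicity, forming a multigraph.

    overlay_edges: list of (u, v) pairs between silos. Edges with higher
    multiplicity appear in more of the per-round simple graphs.
    """
    rng = random.Random(seed)
    return {e: rng.randint(1, max_multiplicity) for e in overlay_edges}

def split_into_simple_graphs(multigraph, n_rounds):
    """Split the multigraph into one simple graph per round.

    An edge with multiplicity m is active in m of the rounds (here: the first m,
    for simplicity). Nodes with no active edge in a round are isolated and can
    aggregate their local model without waiting for anyone.
    """
    nodes = {u for e in multigraph for u in e}
    rounds = []
    for r in range(n_rounds):
        active = [e for e, m in multigraph.items() if r < m]
        connected = {u for e in active for u in e}
        rounds.append({"edges": active, "isolated": nodes - connected})
    return rounds

overlay = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
mg = build_multigraph(overlay)
for r, g in enumerate(split_into_simple_graphs(mg, n_rounds=3)):
    print(f"round {r}: edges={g['edges']}, aggregate immediately: {g['isolated']}")
```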
Current 3D segmentation methods rely heavily on large-scale point-level datasets, which are notoriously laborious to annotate. Few attempts have been made to circumvent the need for per-point annotations. In this work, we study weakly supervised 3D semantic instance segmentation. The key idea is to leverage 3D bounding box labels, which are easier and faster to annotate. Indeed, we show that it is possible to train dense segmentation models using only bounding box labels. At the core of our method, \name{}, is a deep model inspired by classical Hough voting that directly votes for bounding box parameters, together with a clustering method specifically tailored to bounding box votes. This goes beyond the commonly used center votes, which do not fully exploit bounding box annotations. On the ScanNet test set, our weakly supervised model attains leading performance among weakly supervised methods (+18 mAP@50). Remarkably, it also reaches 97% of the mAP@50 score of current fully supervised models. To further illustrate the practicality of our work, we train Box2Mask on the recently released ARKitScenes dataset, which is annotated with 3D bounding boxes only, and show, for the first time, compelling 3D instance segmentation masks.
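A toy sketch of the voting idea: each point predicts full bounding-box parameters (center offset plus size) rather than only a center, and the votes are grouped into instances. The network and the greedy clustering below are illustrative stand-ins, not the specialised clustering the abstract refers to.

```python
import torch
import torch.nn as nn

class BoxVoteHead(nn.Module):
    """Toy per-point head that votes for bounding-box parameters (cx, cy, cz, w, h, d)."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 6))

    def forward(self, xyz, feats):
        offset_and_size = self.mlp(feats)            # (N, 6)
        centres = xyz + offset_and_size[:, :3]       # vote: point + predicted offset
        sizes = offset_and_size[:, 3:].exp()         # positive box extents
        return torch.cat([centres, sizes], dim=1)    # (N, 6) box votes

def cluster_box_votes(votes, radius=0.3):
    """Greedy clustering of box votes by centre distance (naive placeholder)."""
    remaining = list(range(votes.shape[0]))
    instances = []
    while remaining:
        seed = remaining[0]
        d = (votes[remaining, :3] - votes[seed, :3]).norm(dim=1)
        members = [remaining[i] for i in range(len(remaining)) if d[i] < radius]
        instances.append(votes[members].mean(dim=0))  # one box per cluster
        remaining = [i for i in remaining if i not in set(members)]
    return torch.stack(instances)

xyz, feats = torch.rand(1000, 3) * 5, torch.randn(1000, 32)
votes = BoxVoteHead()(xyz, feats)
boxes = cluster_box_votes(votes.detach())
```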
It is generally acknowledged that the availability of huge amounts of (training) data is one of the most important factors behind the recent advances in artificial intelligence (AI). However, datasets are often designed for specific tasks in narrow AI sub-areas, and there is no unified way to manage and access them. This not only creates unnecessary overhead when training or deploying machine learning models, but also limits the understanding of the data, which is very important for data-centric AI. In this paper, we present our vision of a unified framework for different datasets so that they can be easily integrated and queried, e.g., using standard query languages. We demonstrate this in our ongoing work on creating a framework for datasets in computer vision and show its advantages in different scenarios. Our demo is available at https://vision.semkg.org.
Over the past decades, scene text recognition has gained worldwide attention from both academia and practical users due to its wide range of applications. Despite achievements in optical character recognition, scene text recognition remains challenging because of inherent problems such as distortions and irregular layouts. Most existing approaches mainly leverage recurrence- or convolution-based neural networks. However, while recurrent neural networks (RNNs) usually suffer from slow training speed due to sequential computation and encounter vanishing gradients or bottlenecks, CNNs trade off between complexity and performance. In this paper, we introduce SAFL, a self-attention-based neural network model with focal loss for scene text recognition, to overcome the limitations of existing approaches. Using the focal loss instead of the negative log-likelihood helps the model focus more on low-frequency samples during training. Moreover, to deal with distorted and irregular text, we exploit a Spatial Transformer Network (STN) to rectify the text before passing it to the recognition network. We conduct experiments to compare the performance of the proposed model against seven benchmarks. The numerical results show that our model achieves the best performance.
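A minimal sketch of the focal loss the abstract contrasts with the negative log-likelihood, applied per decoded character; the focusing parameter, reduction, and charset size are illustrative defaults rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, ignore_index=-100):
    """Focal loss over character predictions.

    logits : (T, V) scores for T decoding steps over a vocabulary of size V
    targets: (T,) ground-truth character indices
    Down-weights well-classified (high-probability) characters so training
    focuses on hard / low-frequency samples.
    """
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets, reduction="none", ignore_index=ignore_index)
    p_t = nll.neg().exp()                       # probability of the true character
    return ((1.0 - p_t) ** gamma * nll).mean()

logits = torch.randn(12, 95, requires_grad=True)   # 12 characters, 95-symbol charset
targets = torch.randint(0, 95, (12,))
loss = focal_loss(logits, targets)
loss.backward()
```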