This study introduces new design criteria for next-generation hyperparameter optimization software. The criteria we propose are (1) a define-by-run API that allows users to construct the parameter search space dynamically, (2) efficient implementations of both searching and pruning strategies, and (3) an easy-to-set-up, versatile architecture that can be deployed for various purposes, ranging from scalable distributed computing to lightweight experiments conducted via an interactive interface. To prove our point, we introduce Optuna, an optimization framework that is the culmination of our effort to develop next-generation optimization software. As optimization software designed around the define-by-run principle, Optuna is the first of its kind. We present the design techniques that became necessary to build software meeting the above criteria, and we demonstrate the power of the new design through experimental results and real-world applications. Our software is available under the MIT license (https://github.com/pfnet/optuna/).
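To make the define-by-run idea concrete, here is a minimal sketch that uses the public Optuna API (`optuna.create_study`, `trial.suggest_*`); the toy objective and the conditional parameter are illustrative and not taken from the paper.

```python
import optuna

def objective(trial):
    # Define-by-run: the search space is constructed while the objective runs,
    # so later parameters can depend on values sampled earlier.
    x = trial.suggest_float("x", -10.0, 10.0)
    use_offset = trial.suggest_categorical("use_offset", [True, False])
    if use_offset:
        # This parameter exists only in trials that take this branch.
        offset = trial.suggest_float("offset", 0.0, 5.0)
        return (x - 2.0) ** 2 + offset
    return (x - 2.0) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)
```

Because `offset` is only sampled inside one branch, the search space is literally built at runtime, which is the defining property of a define-by-run API.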
Supervised learning methods suffer from the need for large-scale labeled datasets, which are difficult to obtain. This issue is even more significant for fashion compatibility prediction, because compatibility aims to capture people's perception of aesthetics, which is sparse and changing; a labeled dataset may therefore become outdated quickly due to fast fashion. Moreover, labeling such a dataset always requires some expert knowledge; at the very least, annotators should have a good sense of aesthetics. However, self- and semi-supervised learning techniques remain limited in this field. In this paper, we propose a general color distortion prediction task that forces the baseline to recognize low-level image information and thereby learn more discriminative representations for fashion compatibility prediction. Specifically, we first distort the image by adjusting its color balance, contrast, sharpness, and brightness. Then, we add Gaussian noise to the distorted image before passing it to the convolutional neural network (CNN) backbone, which learns a probability distribution over all possible distortions. The proposed pretext task is adopted in state-of-the-art fashion compatibility methods and proves effective in improving their ability to extract better feature representations. Applying the proposed pretext task to a baseline consistently improves upon the original baseline.
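As a hedged illustration of such a pretext task, the sketch below distorts an image with Pillow's `ImageEnhance` module and adds Gaussian noise; the enhancement factors, noise level, and label encoding are assumptions for illustration, not the paper's exact settings.

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

# Illustrative distortion types and levels (hypothetical, not the paper's values).
ENHANCERS = [ImageEnhance.Color, ImageEnhance.Contrast,
             ImageEnhance.Sharpness, ImageEnhance.Brightness]
LEVELS = [0.5, 0.75, 1.25, 1.5]

def distort(img: Image.Image, noise_std: float = 10.0):
    """Apply a random color/contrast/sharpness/brightness distortion,
    add Gaussian noise, and return the image with its distortion label."""
    e_idx = random.randrange(len(ENHANCERS))
    l_idx = random.randrange(len(LEVELS))
    distorted = ENHANCERS[e_idx](img).enhance(LEVELS[l_idx])
    arr = np.asarray(distorted, dtype=np.float32)
    arr = arr + np.random.normal(0.0, noise_std, arr.shape)  # Gaussian noise
    arr = np.clip(arr, 0, 255).astype(np.uint8)
    label = e_idx * len(LEVELS) + l_idx  # one class per (distortion, level) pair
    return Image.fromarray(arr), label
```

A CNN backbone with a small classification head can then be trained with cross-entropy on these labels as the pretext task before fine-tuning on compatibility prediction.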
Although convolutional neural networks (CNNs) achieve high accuracy in image recognition, they are vulnerable to adversarial examples and out-of-distribution data, and differences from human recognition have been pointed out. To improve robustness against out-of-distribution data, we propose a frequency-based data augmentation technique that replaces the frequency components of an image with those of another image of the same class. When the training data are CIFAR10 and the out-of-distribution data are SVHN, the area under the receiver operating characteristic curve (AUROC) of a model trained with the proposed method increases from 89.22% to 98.15%, and further to 98.59% when combined with another data augmentation method. Furthermore, we experimentally demonstrate that models robust to out-of-distribution data make use of many of the high-frequency components of an image.
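The sketch below shows one plausible form of such frequency-based augmentation using NumPy's FFT; which frequency band is swapped and the cutoff size are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def swap_frequency_components(x: np.ndarray, y: np.ndarray, cutoff: int = 8) -> np.ndarray:
    """Replace the low-frequency components of grayscale image x (H, W) with
    those of image y from the same class. The choice of band and the cutoff
    are illustrative assumptions; apply per channel for RGB images."""
    fx = np.fft.fftshift(np.fft.fft2(x))
    fy = np.fft.fftshift(np.fft.fft2(y))
    h, w = x.shape
    ch, cw = h // 2, w // 2
    mask = np.zeros_like(fx, dtype=bool)
    mask[ch - cutoff:ch + cutoff, cw - cutoff:cw + cutoff] = True  # low-frequency band
    fx[mask] = fy[mask]
    out = np.fft.ifft2(np.fft.ifftshift(fx)).real
    return np.clip(out, 0.0, 255.0)
```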
Ground motion prediction equations are commonly used to predict seismic intensity distributions. However, it is not easy to apply this method to seismic distributions affected by underground plate structures, which are often called abnormal seismic distributions. This study proposes a hybrid of regression and classification approaches using neural networks. The proposed model treats the distribution as two-dimensional data, like an image. Our method can accurately predict seismic intensity distributions, even abnormal ones.
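As a hedged illustration of a regression/classification hybrid over a 2-D intensity grid, the PyTorch sketch below attaches both a regression head and a per-cell classification head to a shared encoder; the input features, layer sizes, grid size, and number of intensity classes are assumptions, since the abstract does not specify them.

```python
import torch.nn as nn

class HybridIntensityNet(nn.Module):
    """Shared encoder with a regression head (continuous intensity map) and a
    classification head (discretized intensity class per grid cell)."""
    def __init__(self, in_features: int = 10, grid: int = 64, n_classes: int = 10):
        super().__init__()
        self.grid, self.n_classes = grid, n_classes
        self.encoder = nn.Sequential(
            nn.Linear(in_features, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.reg_head = nn.Linear(256, grid * grid)              # intensity values
        self.cls_head = nn.Linear(256, grid * grid * n_classes)  # intensity classes

    def forward(self, x):
        h = self.encoder(x)
        reg = self.reg_head(h).view(-1, self.grid, self.grid)
        cls = self.cls_head(h).view(-1, self.n_classes, self.grid, self.grid)
        return reg, cls
```

Training such a model would combine a regression loss (e.g., MSE) on the first head with a cross-entropy loss on the second.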
This paper presents a self-adaptive training (SAT) model for fashion compatibility prediction. It focuses on learning hard outfits, e.g., outfits whose items have similar color, texture, and pattern features but are considered incompatible due to aesthetics or temporal shifts. Specifically, we first design a way to define hard outfits, and we define and assign each outfit a difficulty score (DS) according to how difficult it is to recommend items for it. Then, we propose a self-adaptive triplet loss (SATL) in which the DS of the outfit is taken into account. Finally, we propose a very simple conditional similarity network incorporating the proposed SATL to learn hard outfits in fashion compatibility prediction. Experiments on the publicly available Polyvore Outfits and Polyvore Outfits-D datasets demonstrate the effectiveness of our SAT in fashion compatibility prediction. Moreover, our SATL can easily be extended to other conditional similarity networks to improve their performance.
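The sketch below shows one simple way a triplet loss can be modulated by a per-outfit difficulty score; the multiplicative weighting is an assumption for illustration and not necessarily the paper's SATL formulation.

```python
import torch.nn.functional as F

def self_adaptive_triplet_loss(anchor, positive, negative, ds, margin: float = 0.3):
    """Triplet loss weighted by a per-sample difficulty score `ds` (shape [B]),
    so harder outfits contribute more to the gradient. Embeddings have shape
    [B, D]; the weighting scheme is illustrative."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    base = F.relu(d_pos - d_neg + margin)  # standard triplet loss
    return (ds * base).mean()
```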
Adversarial attacks have focused only on changing a classifier's predictions, but their danger greatly depends on how the class is mistaken. For example, it is hardly a problem if an autonomous driving system mistakes a Persian cat for a Siamese cat. However, serious problems can arise if it mistakes a cat for a 120 km/h minimum speed limit sign. As a stepping stone toward more threatening adversarial attacks, we consider superclass adversarial attacks, which cause misclassification not only of the fine class but also of the superclass. We conduct the first comprehensive analysis of superclass adversarial attacks (existing methods and 19 new methods) in terms of accuracy, speed, and stability, and we identify several strategies that achieve better performance. Although this study targets superclass misclassification, the findings can be applied to other problem settings that involve multiple classes, such as top-k and multi-label classification attacks.
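As a hedged illustration, the PGD-style sketch below perturbs an input so that the probability mass aggregated over the true superclass is driven down; the aggregation rule, step size, and budget are assumptions, not one of the paper's 19 methods.

```python
import torch
import torch.nn.functional as F

def superclass_pgd(model, x, superclass_label, class_to_super,
                   eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style superclass attack sketch. `class_to_super` is a LongTensor
    mapping each fine class to its superclass index."""
    n_super = int(class_to_super.max().item()) + 1
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        probs = F.softmax(model(x_adv), dim=1)
        # Sum fine-class probabilities into superclass probabilities.
        idx = class_to_super.expand(probs.size(0), -1).contiguous()
        super_probs = torch.zeros(probs.size(0), n_super, device=probs.device)
        super_probs.scatter_add_(1, idx, probs)
        loss = F.nll_loss(torch.log(super_probs + 1e-12), superclass_label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the superclass loss
        x_adv = torch.clamp(x.detach() + torch.clamp(x_adv - x.detach(), -eps, eps), 0, 1)
    return x_adv
```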
Graph neural networks (GNNs) are deep learning models that take graph data as input, and they are applied to various tasks such as traffic prediction and molecular property prediction. However, owing to the complexity of GNNs, it has been difficult to analyze which parts of the input affect the GNN model's output. In this study, we extend explanation methods for convolutional neural networks (CNNs), such as local interpretable model-agnostic explanations (LIME), gradient-based saliency maps, and gradient-weighted class activation mapping (Grad-CAM), to GNNs, and we predict which edges in the input graph are important for GNN decisions. The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in realistic situations, outperforming even the state-of-the-art method in GNN explainability.
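As a hedged illustration of edge-level attribution, the sketch below scores edges by the gradient of the output with respect to per-edge weights; it assumes the GNN accepts an `edge_weight` argument and corresponds to a gradient-based saliency variant rather than the LIME-based approach the study found most effective.

```python
import torch

def edge_saliency(model, node_feats, edge_index, target_class):
    """Attach a weight of 1.0 to every edge and measure |d output / d weight|
    as an importance score. Assumes `model(node_feats, edge_index, edge_weight)`
    returns class logits; this signature is an assumption for illustration."""
    edge_weight = torch.ones(edge_index.size(1), requires_grad=True)
    out = model(node_feats, edge_index, edge_weight)
    score = out[0, target_class] if out.dim() == 2 else out[target_class]
    score.backward()
    return edge_weight.grad.abs()  # higher value = more important edge
```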
Deep neural networks (DNNs) trained on large-scale datasets have exhibited significant performance in image classification. Many large-scale datasets are collected from websites; however, they tend to contain inaccurate labels, which are termed noisy labels. Training on such noisy labeled datasets causes performance degradation because DNNs easily overfit to noisy labels. To overcome this problem, we propose a joint optimization framework that learns DNN parameters and estimates true labels. Our framework can correct labels during training by alternating updates of the network parameters and the labels. We conduct experiments on the noisy CIFAR-10 datasets and the Clothing1M dataset. The results indicate that our approach significantly outperforms other state-of-the-art methods.
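A minimal sketch of such alternating optimization is given below; the exponential-moving-average label update and the `alpha` value are assumptions for illustration, not the paper's exact update rule.

```python
import torch
import torch.nn.functional as F

def joint_optimization_epoch(model, optimizer, loader, soft_labels, alpha: float = 0.8):
    """One epoch of alternating updates: (1) update the network against the
    current (possibly corrected) soft labels, then (2) move the stored labels
    toward the network's predictions. `soft_labels` is an [N, C] tensor and the
    loader yields (images, _, idx) with idx indexing into it."""
    model.train()
    for images, _, idx in loader:
        targets = soft_labels[idx]
        logits = model(images)
        loss = F.kl_div(F.log_softmax(logits, dim=1), targets, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():  # label update step
            probs = F.softmax(model(images), dim=1)
            soft_labels[idx] = alpha * targets + (1 - alpha) * probs
```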