Evaluating neural network performance is critical to deep neural network design but a costly procedure. Neural predictors provide an efficient solution by treating architectures as samples and learning to estimate their performance on a given task. However, existing predictors are task-dependent, predominantly estimating neural network performance on image classification benchmarks. They are also search-space dependent; each predictor is designed to make predictions for a specific architecture search space with predefined topologies and a set of operations. In this paper, we propose a novel All-in-One Predictor (AIO-P), which aims to pretrain neural predictors on architecture examples from multiple, separate computer vision (CV) task domains and multiple architecture spaces, and then transfer to unseen downstream CV tasks or neural architectures. We describe our techniques for general graph representation, efficient predictor pretraining, and knowledge infusion, as well as methods for transferring to downstream tasks/spaces. Extensive experimental results show that AIO-P can achieve Mean Absolute Error (MAE) below 1% and Spearman's Rank Correlation (SRCC) above 0.5 on a breadth of target downstream CV tasks with or without fine-tuning, outperforming a number of baselines. Moreover, AIO-P can directly transfer to new architectures not seen during training, accurately rank them, and serve as an effective performance estimator when paired with an algorithm designed to preserve performance while reducing FLOPs.
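To make the predictor idea concrete, here is a minimal sketch, not the AIO-P implementation, of a task-conditioned neural predictor: a tiny GNN encodes an architecture's operation graph, a learned task embedding conditions the prediction, and an MLP head regresses performance. All names, dimensions, and the toy graph are illustrative.

```python
# Hedged sketch of a task-conditioned neural predictor (not AIO-P's code).
import torch
import torch.nn as nn

class TinyPredictor(nn.Module):
    def __init__(self, num_ops, num_tasks, dim=32):
        super().__init__()
        self.op_emb = nn.Embedding(num_ops, dim)      # node features: operation type
        self.task_emb = nn.Embedding(num_tasks, dim)  # one embedding per CV task domain
        self.msg = nn.Linear(dim, dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, adj, ops, task_id):
        h = self.op_emb(ops)                          # [n_nodes, dim]
        for _ in range(2):                            # 2 rounds of message passing
            h = torch.relu(self.msg(adj @ h) + h)
        g = h.mean(dim=0)                             # graph readout
        t = self.task_emb(task_id).squeeze(0)         # task conditioning
        return self.head(torch.cat([g, t]))           # predicted performance

adj = torch.eye(4)                                    # toy 4-node architecture graph
ops = torch.tensor([0, 1, 2, 1])
print(TinyPredictor(num_ops=8, num_tasks=3)(adj, ops, torch.tensor([0])))
```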
Predicting neural architecture performance is a challenging task that is crucial to neural architecture design and search. Existing approaches either rely on neural performance predictors, which are limited to modeling architectures in a predefined design space involving specific sets of operators and connection rules and cannot generalize to unseen architectures, or resort to zero-cost proxies, which are not always accurate. In this paper, we propose GENNAPE, a Generalized Neural Architecture Performance Estimator, which is pretrained on open neural architecture benchmarks and aims to generalize to completely unseen architectures through combined innovations in network representation, contrastive pretraining, and fuzzy clustering-based predictor ensembling. Specifically, GENNAPE represents a given neural network as a Computation Graph (CG) of atomic operations, which can model an arbitrary architecture. It first learns a graph encoder via Contrastive Learning to encourage network separation by topological features, and then trains multiple predictor heads, which are soft-aggregated according to the fuzzy membership of a neural network. Experiments show that GENNAPE pretrained on NAS-Bench-101 achieves superior transferability to 5 different public neural network benchmarks, including NAS-Bench-201, NAS-Bench-301, and the MobileNet and ResNet families, with minimal or no fine-tuning. We further introduce 3 challenging newly labelled neural network benchmarks: HiAML, Inception and Two-Path, whose accuracies concentrate in narrow ranges. Extensive experiments show that GENNAPE can correctly discern high-performance architectures in these families. Finally, when paired with a search algorithm, GENNAPE can find architectures that improve accuracy while reducing FLOPs on all three families.
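The fuzzy soft-aggregation step can be sketched as follows; this is an illustrative reading of the ensemble described above, not GENNAPE's code. Membership uses the standard fuzzy c-means formula, and the centroids and heads below are toy stand-ins.

```python
# Hedged sketch of fuzzy-membership soft aggregation of predictor heads.
import numpy as np

def fuzzy_memberships(z, centroids, m=2.0):
    """Standard fuzzy c-means membership of embedding z in each cluster."""
    d = np.linalg.norm(centroids - z, axis=1) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def ensemble_predict(z, centroids, heads):
    """Soft-aggregate the K heads' outputs with fuzzy membership weights."""
    w = fuzzy_memberships(z, centroids)
    preds = np.array([h(z) for h in heads])
    return float(w @ preds)

rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 8))                  # 3 architecture "families"
heads = [lambda z, W=rng.normal(size=8): W @ z for _ in range(3)]  # toy linear heads
z = rng.normal(size=8)                               # graph-encoder embedding
print(ensemble_predict(z, centroids, heads))
```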
Learning to optimize is a rapidly growing field that aims to use machine learning (ML) to solve optimization problems or to improve existing optimization algorithms. In particular, graph neural networks (GNNs) are considered suitable ML models for optimization problems whose variables and constraints are permutation-invariant, such as linear programs (LPs). While the literature reports encouraging numerical results, this paper establishes the theoretical foundation for applying GNNs to solving LPs. Given any size limit on LPs, we construct a GNN that maps different LPs to different outputs. We show that properly built GNNs can reliably predict feasibility, boundedness, and an optimal solution for every LP in a broad class. Our proofs are based on the recently discovered connection between the Weisfeiler-Lehman isomorphism test and GNNs. To validate our results, we train a simple GNN and report its accuracy in mapping LPs to their feasibility and solutions.
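A small sketch of the permutation-invariant encoding implied above, under our own assumptions rather than the paper's exact construction: the LP min c^T x s.t. Ax <= b, x >= 0 becomes a bipartite variable-constraint graph, which a GNN can process with weight-respecting message passing.

```python
# Hedged sketch: an LP as the bipartite graph a GNN would consume.
import numpy as np

def lp_to_bipartite(c, A, b):
    n_vars, n_cons = len(c), len(b)
    var_feats = np.asarray(c, dtype=float).reshape(-1, 1)   # variable node features: c_j
    con_feats = np.asarray(b, dtype=float).reshape(-1, 1)   # constraint node features: b_i
    edges = [(i, j, A[i, j]) for i in range(n_cons)
             for j in range(n_vars) if A[i, j] != 0]        # edges carry nonzeros A_ij
    return var_feats, con_feats, edges

def message_pass(var_feats, con_feats, edges):
    """One half-layer: constraints aggregate weighted variable features."""
    out = con_feats.copy()
    for i, j, w in edges:
        out[i] += w * var_feats[j]
    return np.tanh(out)

# Toy LP: min x0 + 2*x1  s.t.  x0 + x1 <= 1
c, A, b = [1.0, 2.0], np.array([[1.0, 1.0]]), [1.0]
vf, cf, e = lp_to_bipartite(c, A, b)
print(message_pass(vf, cf, e))
```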
Subgroup discovery is a descriptive and exploratory data mining technique that identifies subgroups of a population exhibiting interesting behavior with respect to a variable of interest. Subgroup discovery has numerous applications in knowledge discovery and hypothesis generation, yet it remains inapplicable to unstructured, high-dimensional data such as images. This is because subgroup discovery algorithms rely on defining descriptive rules based on (attribute, value) pairs; in unstructured data, however, an attribute is not well defined. Even when a notion of an attribute exists in the data (e.g., a pixel in an image), such attributes are not informative enough to be used in rules, owing to the high dimensionality of the data. In this paper, we introduce the subgroup-aware variational autoencoder, a novel variational autoencoder that learns a representation of unstructured data under which higher-quality subgroups emerge. Our experimental results demonstrate the method's effectiveness in learning high-quality subgroups while supporting the interpretability of the concepts.
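A hedged sketch of how such an objective could be assembled: a standard VAE loss plus an illustrative WRAcc-style subgroup-quality term over the latent space. The quality term and the weighting are our own simplifications, not the paper's formulation.

```python
# Hedged sketch: VAE loss plus an illustrative subgroup-quality term.
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

def subgroup_quality(z, y, dim=0, thresh=0.0):
    """Illustrative WRAcc-style score: does thresholding latent dim `dim`
    pick out a subgroup whose mean target differs from the population's?"""
    mask = z[:, dim] > thresh
    if mask.sum() == 0:
        return torch.tensor(0.0)
    coverage = mask.float().mean()
    return coverage * (y[mask].mean() - y.mean()).abs()

z = torch.randn(128, 16)                       # encoder means for a batch
y = torch.rand(128)                            # target variable of interest
x, x_hat = torch.randn(128, 784), torch.randn(128, 784)
mu, logvar = torch.randn(128, 16), torch.randn(128, 16)
# Quality is maximized, so it is subtracted from the minimized total loss.
print(vae_loss(x, x_hat, mu, logvar) - 10.0 * subgroup_quality(z, y))
```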
Collecting sufficient labeled data to build human activity recognition (HAR) models is costly and time-consuming. Training on existing data often biases the model toward the distribution of the training data, so the model may perform poorly on test data with a different distribution. Although existing efforts in transfer learning and domain adaptation attempt to address this problem, they still require access to unlabeled data from the target domain, which may be infeasible in real-world scenarios. Few works pay attention to training a model that generalizes well to unseen target domains for HAR. In this paper, we propose a novel method called Semantic-Discriminative Mixup (SDMix) for generalizable cross-domain HAR. First, we introduce semantic-aware mixup, which takes the semantic range of activities into account to overcome the semantic inconsistency caused by domain differences. Second, we introduce a large-margin loss to enhance the discrimination of the mixup and prevent misclassification caused by virtual labels. Comprehensive generalization experiments on five public datasets show that SDMix substantially outperforms state-of-the-art methods, with an average accuracy improvement of 6% on cross-person, cross-dataset, and cross-position HAR.
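A minimal sketch of the two ingredients, under our assumptions rather than the authors' code: vanilla mixup on sensor windows and a simple margin penalty on the mixed logits. SDMix's semantic-range constraint on the mixing is omitted here for brevity.

```python
# Hedged sketch: mixup on sensor windows + a margin penalty (not SDMix itself).
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=0.4):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def large_margin_loss(logits, y_soft, margin=1.0):
    ce = F.cross_entropy(logits, y_soft)               # soft-label cross entropy
    top2 = logits.topk(2, dim=1).values                # enforce a gap between
    return ce + F.relu(margin - (top2[:, 0] - top2[:, 1])).mean()  # top-2 logits

x = torch.randn(8, 3, 128)                             # 8 windows, 3 IMU channels
y = F.one_hot(torch.randint(0, 5, (8,)), 5).float()    # 5 activity classes
xm, ym = mixup(x, y)
logits = torch.randn(8, 5, requires_grad=True)         # stand-in model output
print(large_margin_loss(logits, ym))
```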
Despite recent advances in artificial intelligence and machine learning, many state-of-the-art methods lack interpretability and transparency. The ability to explain the predictions of machine learning models, and to evaluate these models accurately, is crucial. In this paper, we present an interactive visualization tool that elucidates the training process of active learning. The tool lets a user select a sample of interesting data points and view how their predicted values change across querying stages, giving a better understanding of when and how well active learning works. Moreover, users can use the tool to compare different active learning strategies side by side and examine why some strategies outperform others in certain contexts. Through preliminary experiments, we demonstrate that our visualization panel has great potential for use across a variety of active learning experiments and helps users evaluate their models appropriately.
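The core view the tool provides can be mocked up in a few lines; the data below is synthetic and the convergence curves are hypothetical, purely to illustrate tracking a point's predicted value per strategy across query rounds.

```python
# Synthetic mock-up of the prediction-trajectory view (not the actual tool).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
rounds = np.arange(10)
strategies = {"uncertainty": 0.9, "random": 0.6}       # hypothetical convergence rates
for name, rate in strategies.items():
    # one tracked point's predicted value converging toward its true label (1.0)
    traj = 1.0 - 0.8 * np.exp(-rate * rounds) + rng.normal(0, 0.02, 10)
    plt.plot(rounds, traj, marker="o", label=name)
plt.axhline(1.0, ls="--", c="gray", label="true label")
plt.xlabel("query round"); plt.ylabel("predicted value"); plt.legend()
plt.savefig("al_trajectories.png")
```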
Knowledge-based visual question answering involves questions that require external knowledge beyond the image content. Such knowledge typically comes in various forms, including visual, textual, and commonsense knowledge. Using more knowledge sources increases the chance of retrieving more irrelevant or noisy facts, making it challenging to comprehend the facts and find the answer. To address this challenge, we propose Multi-modal Answer Validation using External knowledge (MAVEx), where the idea is to validate a set of promising answer candidates based on answer-specific knowledge retrieval. Instead of searching for the answer among a vast collection of often irrelevant facts, as most existing methods do, MAVEx aims to learn how to extract relevant knowledge from noisy sources, how much to trust each source for each answer candidate, and how to validate the candidate using that source. Beyond textual knowledge in the form of Wikipedia sentences and ConceptNet concepts, our multi-modal setting is the first to leverage external visual knowledge (images retrieved via Google search). Our experiments on OK-VQA, a challenging knowledge-based VQA dataset, demonstrate that MAVEx achieves new state-of-the-art results. Our code is available at https://github.com/jialinwu17/mavex
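A toy sketch of the validation idea, not the MAVEx model: each answer candidate is scored by its agreement with knowledge retrieved specifically for it, weighted by per-source trust that MAVEx learns but which is fixed here. The knowledge base, retriever, and overlap scorer below are all illustrative.

```python
# Hedged sketch of answer-specific validation over multiple knowledge sources.
import numpy as np

def agreement(ans, facts):
    # toy textual overlap; a real system would use a trained scorer
    return float(np.mean([ans.lower() in f.lower() for f in facts])) if facts else 0.0

def validate(candidates, retrieve, sources, trust):
    """Pick the candidate whose retrieved evidence supports it most."""
    scores = {ans: sum(trust[s] * agreement(ans, retrieve(ans, s)) for s in sources)
              for ans in candidates}
    return max(scores, key=scores.get), scores

kb = {"wikipedia": ["A frisbee is a flying disc."],
      "conceptnet": ["frisbee is used for throwing"],
      "images": ["photo tagged: frisbee"]}
retrieve = lambda ans, s: [f for f in kb[s] if ans.lower() in f.lower()]
best, scores = validate(["frisbee", "kite"], retrieve, list(kb),
                        {"wikipedia": 0.5, "conceptnet": 0.3, "images": 0.2})
print(best, scores)
```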
How to learn an effective reinforcement learning-based model for control tasks from high-level visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. To boost the learning of state encodings, recent work focuses on capturing behavioral similarities between state representations or applying data augmentation to visual observations. In this paper, we propose a novel meta-learner-based framework for representation learning based on behavioral similarities for reinforcement learning. Specifically, our framework encodes the high-dimensional observations into two decomposed embeddings corresponding to reward and dynamics in a Markov Decision Process (MDP). A pair of meta-learners is developed, one of which quantifies reward similarity and the other dynamics similarity over the correspondingly decomposed embeddings. The meta-learners are self-learned to update the state embeddings by approximating two disjoint terms in an on-policy bisimulation metric. To incorporate the reward and dynamics terms, we further develop a strategy to adaptively balance their impacts across different tasks or environments. We empirically demonstrate that our proposed framework outperforms state-of-the-art baselines on several benchmarks, including the conventional DM Control Suite, the Distracting DM Control Suite, and the self-driving task CARLA.
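The bisimulation target the two meta-learners approximate can be sketched in a simplified single-encoder form; the paper uses two decomposed embeddings and learned meta-learners, which this illustrative loss collapses into one term.

```python
# Hedged sketch: fitting embedding distances to a bisimulation-style target,
# i.e. |r_i - r_j| plus a discounted distance between next-state embeddings.
import torch
import torch.nn.functional as F

def bisim_loss(z, z_next, r, gamma=0.99):
    i = torch.randperm(z.size(0))                        # random pairing in batch
    d = (z - z[i]).norm(dim=1)                           # current embedding distance
    r_term = (r - r[i]).abs()                            # reward-similarity term
    dyn_term = (z_next - z_next[i]).norm(dim=1).detach() # dynamics-similarity term
    return F.mse_loss(d, r_term + gamma * dyn_term)

z = torch.randn(32, 16, requires_grad=True)              # encoded observations
z_next = torch.randn(32, 16)                             # encoded next observations
r = torch.randn(32)                                      # rewards
print(bisim_loss(z, z_next, r))
```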
Detecting personal health mentions on social media is essential to complement existing health surveillance systems. However, annotating data for detecting health mentions at a large scale is a challenging task. This research employs a multitask learning framework to leverage available annotated data from a related task to improve performance on the main task of detecting personal health experiences mentioned in social media texts. Specifically, we focus on incorporating emotional information into our target task by using emotion detection as an auxiliary task. Our approach significantly improves performance on a wide range of personal health mention detection tasks compared to a strong state-of-the-art baseline.
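A hedged sketch of the multitask setup: a shared text encoder with a main health-mention head and an auxiliary emotion head, trained on a weighted sum of the two losses. The bag-of-embeddings encoder is a toy stand-in for the transformer presumably used in practice, and the loss weight is illustrative.

```python
# Hedged sketch of multitask learning with an emotion auxiliary task.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab=10000, dim=64, n_emotions=6):
        super().__init__()
        self.enc = nn.EmbeddingBag(vocab, dim)           # shared encoder
        self.health_head = nn.Linear(dim, 2)             # main task: mention yes/no
        self.emotion_head = nn.Linear(dim, n_emotions)   # auxiliary task

    def forward(self, tokens, offsets):
        h = self.enc(tokens, offsets)
        return self.health_head(h), self.emotion_head(h)

model = MultiTaskModel()
tokens = torch.randint(0, 10000, (12,))                  # two concatenated texts
offsets = torch.tensor([0, 5])                           # text boundaries
health_logits, emo_logits = model(tokens, offsets)
loss = nn.functional.cross_entropy(health_logits, torch.tensor([1, 0])) \
     + 0.5 * nn.functional.cross_entropy(emo_logits, torch.tensor([2, 4]))
print(loss)
```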
Health mention classification (HMC) is the task of identifying and classifying mentions of health-related concepts in text. This can be useful for identifying and tracking the spread of diseases through social media posts. However, it is a non-trivial task. Here we build on recent studies suggesting that using emotional information may improve performance on this task. Our study results in a framework for health mention classification that incorporates affective features. We present two methods, an intermediate-task fine-tuning approach (implicit) and a multi-feature fusion approach (explicit), to incorporate emotions into our target task of HMC. We evaluate our approach on 5 HMC-related datasets from different social media platforms, including three from Twitter, one from Reddit, and another from a combination of social media sources. Extensive experiments demonstrate that our approach yields statistically significant performance gains on HMC tasks. Using the multi-feature fusion approach, we achieve at least a 3% improvement in F1 score over BERT baselines across all datasets. We also show that considering only negative emotions does not significantly affect performance on the HMC task. Additionally, our results indicate that HMC models infused with emotional knowledge are an effective alternative, especially when other HMC datasets are unavailable for domain-specific fine-tuning. The source code for our models is freely available at https://github.com/tahirlanre/Emotion_PHM.
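The explicit (multi-feature fusion) variant can be sketched as concatenating the text representation with an affective feature vector before classification; the dimensions and feature sources below are illustrative, not taken from the paper.

```python
# Hedged sketch of explicit multi-feature fusion for HMC.
import torch
import torch.nn as nn

class FusionHMC(nn.Module):
    def __init__(self, text_dim=768, emo_dim=10, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + emo_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                        # health mention: yes/no

    def forward(self, text_emb, emo_feats):
        return self.fuse(torch.cat([text_emb, emo_feats], dim=1))

text_emb = torch.randn(4, 768)    # e.g. BERT [CLS] embeddings for 4 posts
emo_feats = torch.rand(4, 10)     # e.g. emotion-lexicon scores per post
print(FusionHMC()(text_emb, emo_feats).shape)
```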