Deploying machine learning models requires both high model quality and compliance with application constraints. This motivates hyperparameter optimization (HPO) to tune model configurations under deployment constraints. Such constraints often require additional computation to evaluate, and training ineligible configurations can waste a substantial amount of tuning cost. In this work, we propose an Adaptive Constraint-aware Early stopping (ACE) method that incorporates constraint evaluation into trial pruning during HPO. To minimize the overall optimization cost, ACE estimates a cost-effective constraint-evaluation interval based on a theoretical analysis of the expected evaluation cost. Meanwhile, we propose an early stopping criterion in ACE that considers both the optimization metric and the constraint metric in pruning, and requires no extra regularization hyperparameter. Our experiments demonstrate the superior performance of ACE in hyperparameter tuning for classification tasks under fairness or robustness constraints.
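The interval-based constraint evaluation described above can be illustrated with a minimal sketch. This is not the authors' implementation: every name here (`train_step`, `eval_constraint`, `constraint_interval`, the threshold) is a hypothetical stand-in. The idea it shows is that the expensive constraint metric is checked only every few epochs, and trials that violate the constraint are pruned early rather than trained to completion.

```python
def ace_style_tuning(configs, train_step, eval_constraint,
                     max_epochs=20, constraint_interval=5, threshold=0.1):
    """Sketch of constraint-aware trial pruning (all names hypothetical).

    The optimization metric is checked every epoch, while the more
    expensive constraint metric is evaluated only every
    `constraint_interval` epochs; trials whose constraint violation
    exceeds `threshold` are pruned early.
    """
    best_config, best_score = None, float("-inf")
    for cfg in configs:
        state, pruned, score = None, False, float("-inf")
        for epoch in range(1, max_epochs + 1):
            state, score = train_step(cfg, state)
            if epoch % constraint_interval == 0:
                violation = eval_constraint(cfg, state)
                if violation > threshold:
                    pruned = True  # stop spending budget on an infeasible trial
                    break
        if not pruned and score > best_score:
            best_config, best_score = cfg, score
    return best_config, best_score
```

A longer evaluation interval saves constraint-evaluation cost but lets infeasible trials run longer; choosing that interval well is exactly the trade-off the paper analyzes theoretically.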
Deploying deep neural networks (DNNs) on edge devices provides efficient solutions for real-world tasks. Edge devices have been used to efficiently collect large volumes of data in different domains, and DNNs are effective tools for data processing and analysis. However, designing DNNs for edge devices is challenging due to limited computational resources and memory. To tackle this challenge, we demonstrate BED, an object detection system for edge devices built on the MAX78000 DNN accelerator. It integrates on-device DNN inference with a camera and an LCD display for image acquisition and detection exhibition, respectively. BED is a concise, effective, and detailed solution, covering model training, quantization, synthesis, and deployment. Experimental results indicate that BED can produce accurate detection with a tiny 300 KB DNN model, requiring only 91.9 ms of inference time and 1.845 mJ of energy.
Action recognition is an important task for video understanding with broad applications. However, developing an effective action recognition solution often requires extensive engineering effort to build and test different combinations of modules and their hyperparameters. In this demo, we present AutoVideo, a Python system for automated video action recognition. AutoVideo features 1) a highly modular and extensible infrastructure following a standard pipeline language, 2) a list of primitives for pipeline construction, 3) data-driven tuners that save the effort of pipeline tuning, and 4) an easy-to-use graphical user interface (GUI). AutoVideo is released under the MIT license at https://github.com/datamllab/autovideo
The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real-world applications, including recommender systems, online advertising, and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyperparameters. For example, in most optimal stochastic contextual bandit algorithms, there is an unknown exploration parameter that controls the trade-off between exploration and exploitation. A proper choice of hyperparameters is essential for contextual bandit algorithms to perform well. However, since there is no pre-collected dataset and decisions must be made in real time, offline tuning methods cannot be used to select hyperparameters in the contextual bandit setting. To tackle this problem, we first propose a two-layer bandit structure for automatically tuning the exploration parameter, and further generalize it to the Syndicated Bandits framework, which can dynamically learn multiple hyperparameters in the contextual bandit setting. We derive regret bounds for the proposed Syndicated Bandits framework and show that it avoids a regret dependency that grows exponentially with the number of hyperparameters to be tuned. Moreover, it achieves optimal regret bounds in certain scenarios. The Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, and UCB-GLM. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework.
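The two-layer structure can be sketched in a simplified, non-contextual setting: an outer UCB bandit selects the exploration parameter `alpha` from a candidate set each round, and an inner UCB1-style learner then uses that `alpha` to pick an arm. This is a hedged illustration of the layered idea only, not the paper's algorithm; all names and the multi-armed simplification are assumptions.

```python
import math
import random

def two_layer_tuning(env_pull, n_arms, alphas, horizon, seed=0):
    """Sketch of a two-layer bandit for online hyperparameter tuning.

    Outer layer: UCB over candidate exploration parameters `alphas`.
    Inner layer: UCB1 using the currently selected alpha as its
    exploration strength. Each observed reward updates both layers, so
    the exploration parameter is learned online instead of tuned offline.
    """
    rng = random.Random(seed)
    o_counts, o_sums = [0] * len(alphas), [0.0] * len(alphas)  # outer stats
    i_counts, i_sums = [0] * n_arms, [0.0] * n_arms            # inner stats
    total = 0.0
    for t in range(1, horizon + 1):
        # outer selection: which exploration parameter to trust this round
        j = max(range(len(alphas)),
                key=lambda k: float("inf") if o_counts[k] == 0 else
                o_sums[k] / o_counts[k] + math.sqrt(2 * math.log(t) / o_counts[k]))
        alpha = alphas[j]
        # inner selection: UCB1 with the chosen exploration strength
        a = max(range(n_arms),
                key=lambda k: float("inf") if i_counts[k] == 0 else
                i_sums[k] / i_counts[k] + alpha * math.sqrt(math.log(t) / i_counts[k]))
        r = env_pull(a, rng)
        total += r
        o_counts[j] += 1; o_sums[j] += r
        i_counts[a] += 1; i_sums[a] += r
    return total
```

Treating several hyperparameters this way naively would multiply candidate sets together; the framework described above is designed precisely to avoid that exponential blow-up.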
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods rely heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built based on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain and user tweet features as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) presents a promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose R2RISE, a model-agnostic explanation framework for IL models. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the environment return, the conventional evaluation outcome, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, how effective the importance map is, and how importance maps from different IL models are connected. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
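The mask-and-retrain procedure resembles RISE-style saliency estimation. A minimal sketch of the importance-map accumulation follows, with `evaluate_policy` standing in for the expensive retrain-and-rollout step and every name being a hypothetical illustration rather than the authors' code:

```python
import random

def importance_map(n_frames, evaluate_policy, n_samples=100, p_keep=0.5, seed=0):
    """Sketch of a RISE-style importance map over demonstration frames.

    Each iteration samples a random binary mask over the frames, obtains
    a black-box environment return for a policy trained on the kept
    frames (`evaluate_policy`, an assumed callable), and accumulates the
    mask weighted by that return. Frames whose inclusion correlates with
    higher returns end up with higher importance scores.
    """
    rng = random.Random(seed)
    scores = [0.0] * n_frames
    weights = [0.0] * n_frames
    for _ in range(n_samples):
        mask = [1 if rng.random() < p_keep else 0 for _ in range(n_frames)]
        ret = evaluate_policy(mask)  # return of the policy retrained on kept frames
        for i, m in enumerate(mask):
            scores[i] += m * ret
            weights[i] += m
    # normalize by how often each frame was actually kept
    return [s / w if w > 0 else 0.0 for s, w in zip(scores, weights)]
```

In the real method each `evaluate_policy` call involves retraining the IL model, which is why the number of mask samples directly controls the explanation cost.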
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one seeks only deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
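The weighted mean-percentile objective can be illustrated on an empirical sample of returns. This is only a hedged sketch of the general return-risk idea, not the paper's Wasserstein-based reformulation, and the function and parameter names are illustrative assumptions:

```python
def return_risk_objective(returns, weight=0.5, percentile=0.1):
    """Sketch of a return-risk objective: a weighted average of the mean
    return and a lower-percentile (value-at-risk style) return computed
    from sampled scenarios. `weight` trades off average performance
    against tail risk: weight=1 recovers the mean criterion, weight=0 a
    pure percentile criterion."""
    xs = sorted(returns)
    mean = sum(xs) / len(xs)
    # empirical lower percentile of the return distribution
    idx = min(len(xs) - 1, int(percentile * len(xs)))
    var = xs[idx]
    return weight * mean + (1 - weight) * var
```

Under ambiguity, the paper replaces this empirical percentile with a worst case over a Wasserstein ball of reward distributions, which is what makes the tractable reformulation nontrivial.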
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
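The photometric error driving the second-stage self-supervision can be sketched in its simplest form. This toy version treats images as flat lists of pixel intensities and compares a frame reconstructed from predicted ego-motion/depth against the observed frame; a real implementation would operate on warped RGB tensors and typically add an SSIM term, both omitted here (all names hypothetical):

```python
def photometric_error(predicted, target):
    """Sketch of a photometric reconstruction loss: the mean absolute
    intensity difference between a frame synthesized from the model's
    ego-motion and depth predictions and the actually observed frame.
    A lower value means the predicted geometry explains the observed
    pixels better."""
    assert len(predicted) == len(target), "frames must have equal size"
    return sum(abs(p - t) for p, t in zip(predicted, target)) / len(target)
```

Because this loss needs only consecutive raw frames, it is what allows pre-training on unlabeled, uncalibrated YouTube driving videos.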