Learning-based controllers, such as neural network (NN) controllers, can exhibit high empirical performance but lack formal safety guarantees. To address this issue, control barrier functions (CBFs) have been applied as safety filters that monitor and modify the outputs of learning-based controllers to guarantee the safety of the closed-loop system. However, such modifications can be myopic, with unpredictable long-term effects. In this work, we propose a safe NN controller that employs a differentiable CBF-based safety layer, and we study the performance of safe NN controllers in learning-based control. Specifically, two controller formulations are compared: one based on projection and the other relying on our proposed set-theoretic parameterization. In numerical experiments, both methods demonstrate improved closed-loop performance over using the CBF as a separate safety filter.
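For reference, the baseline being compared against, a CBF safety filter applied after the controller, is commonly posed as a quadratic program that minimally modifies the nominal control input. Below is a minimal sketch of that idea under assumed control-affine dynamics and an illustrative barrier function; the functions f, g, h, and the gain alpha are placeholders, not the paper's exact setup.

```python
# Minimal sketch of a CBF quadratic-program safety filter (illustrative, not the paper's formulation).
# Assumes control-affine dynamics x_dot = f(x) + g(x) u and a known barrier function h(x).
import numpy as np
import cvxpy as cp

def cbf_safety_filter(u_nominal, x, f, g, h, grad_h, alpha=1.0):
    """Return the control closest to u_nominal satisfying the CBF condition
       grad_h(x)^T (f(x) + g(x) u) + alpha * h(x) >= 0."""
    m = u_nominal.shape[0]
    u = cp.Variable(m)
    constraint = [grad_h(x) @ (f(x) + g(x) @ u) + alpha * h(x) >= 0]
    objective = cp.Minimize(cp.sum_squares(u - u_nominal))
    cp.Problem(objective, constraint).solve()
    return u.value

# Toy example: single integrator x_dot = u, keep x >= 0 via h(x) = x.
f = lambda x: np.zeros(1)
g = lambda x: np.eye(1)
h = lambda x: x[0]
grad_h = lambda x: np.array([1.0])
u_safe = cbf_safety_filter(np.array([-2.0]), np.array([0.1]), f, g, h, grad_h)
```

The projection-based controller in the abstract differs in that such a layer is made differentiable and trained end to end rather than applied only at deployment time.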
Analyzing the worst-case performance of deep neural networks under input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative. However, even for reasonably sized neural networks, these relaxations are intractable and must therefore be replaced by weaker relaxations in practice. In this work, we propose a novel operator splitting method that can directly solve the convex relaxation to high accuracy by splitting it into smaller subproblems that often admit analytical solutions. The method is modular, scales to very large problem instances, and comprises operations that are amenable to fast parallelization with GPU acceleration. We demonstrate our method by bounding the worst-case performance of large convolutional networks in image classification and reinforcement learning settings, as well as in reachability analysis of neural network dynamical systems.
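The core idea, splitting one large convex problem into subproblems with closed-form updates, can be illustrated on a toy box-constrained least-squares problem solved with ADMM; this is a generic operator-splitting sketch under assumed notation, not the paper's specific relaxation or solver.

```python
# Toy ADMM sketch: minimize 0.5*||A x - b||^2 subject to l <= x <= u,
# split as f(x) = 0.5*||A x - b||^2 and g(z) = indicator of the box, with x = z.
# Both subproblems have analytic solutions (a linear solve and a projection),
# mirroring the "smaller subproblems with analytical solutions" idea above.
import numpy as np

def admm_box_ls(A, b, l, u, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); lam = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    lhs = AtA + rho * np.eye(n)                         # fixed matrix, could be factored once
    for _ in range(iters):
        x = np.linalg.solve(lhs, Atb + rho * z - lam)   # x-update: linear solve
        z = np.clip(x + lam / rho, l, u)                # z-update: projection onto the box
        lam = lam + rho * (x - z)                       # dual update
    return z

x_star = admm_box_ls(np.random.randn(20, 5), np.random.randn(20), -1.0, 1.0)
```

In the verification setting, the projection step would instead act on the sets describing each layer's relaxation, which is what makes the per-layer subproblems cheap and parallelizable.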
Self-supervised learning (SSL) has achieved promising downstream performance. However, when facing the various resource budgets of real-world applications, pre-training multiple networks of different sizes one by one incurs a huge computational burden. In this paper, we propose a discriminative-SSL-based slimmable pretrained network (DSPNet), which can be trained once and then slimmed into multiple sub-networks of various sizes, each of which faithfully learns good representations and can serve as a good initialization for downstream tasks with various resource budgets. Specifically, we extend the idea of slimmable networks to the discriminative SSL paradigm by elegantly integrating SSL and knowledge distillation. We show comparable or improved performance against individually pretrained networks on ImageNet under the linear evaluation and semi-supervised evaluation protocols, while greatly reducing the training cost. The pretrained models also generalize well to downstream detection and segmentation tasks. The code will be made public.
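A minimal sketch of the once-for-all training idea is given below: at each step the full-width network provides both the SSL loss and teacher embeddings, and a randomly sampled sub-network is distilled toward those embeddings. The width-switching API and the distillation loss are illustrative assumptions, not DSPNet's exact design.

```python
# Minimal sketch of joint SSL + cross-width self-distillation (illustrative only).
# `model.set_width(w)` is a hypothetical API for a slimmable backbone.
import random
import torch
import torch.nn.functional as F

def train_step(model, ssl_loss_fn, views, optimizer, widths=(0.25, 0.5, 0.75, 1.0)):
    optimizer.zero_grad()

    # Full-width network: standard discriminative SSL loss, and teacher embeddings.
    model.set_width(1.0)
    z_teacher = model(views)                      # embeddings of the augmented views
    loss = ssl_loss_fn(z_teacher)

    # A sampled sub-network is distilled toward the (detached) teacher embeddings.
    w = random.choice(widths[:-1])
    model.set_width(w)
    z_student = model(views)
    loss = loss + F.mse_loss(z_student, z_teacher.detach())

    loss.backward()
    optimizer.step()
    return loss.item()
```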
Deep learning methods require massive amounts of annotated data to optimize their parameters. For example, datasets with accurate bounding box annotations are essential for modern object detection tasks. However, labeling with such pixel-level accuracy is laborious and time-consuming, and elaborate labeling procedures involving annotation review and acceptance testing are indispensable to reduce human-induced noise. In this paper, we focus on the impact of noisy location annotations on the performance of object detection methods and aim to mitigate the adverse effects of such noise. First, we experimentally observe noticeable performance degradation for both one-stage and two-stage detectors when noise is introduced into the bounding box annotations. For example, our synthetic noise degrades the performance of the FCOS detector on the COCO test split from 38.9% AP to 33.6% AP, and that of Faster R-CNN from 37.8% AP to 33.7% AP. Second, we propose a self-correction technique based on a Bayesian filter for prediction ensembling to better exploit the noisy location annotations following a teacher-student learning paradigm. Experiments in both synthetic and real-world scenarios consistently demonstrate the effectiveness of our method; for example, it improves the degraded performance of the FCOS detector from 33.6% AP to 35.6% AP on COCO.
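To make the self-correction idea concrete, here is a minimal sketch of fusing a noisy box annotation with an ensemble of teacher predictions via a per-coordinate Gaussian (Kalman-style) update; the Gaussian noise model and variance values are illustrative assumptions, not the paper's exact Bayesian filter.

```python
# Illustrative sketch: correct a noisy box annotation with teacher predictions
# via a per-coordinate Gaussian (Kalman-style) update. Not the paper's exact filter.
import numpy as np

def correct_box(noisy_box, teacher_boxes, sigma_annot=8.0):
    """noisy_box: (4,) array [x1, y1, x2, y2]; teacher_boxes: (K, 4) ensemble predictions."""
    teacher_boxes = np.asarray(teacher_boxes, dtype=float)
    mu_t = teacher_boxes.mean(axis=0)                # ensemble mean (observation)
    var_t = teacher_boxes.var(axis=0) + 1e-6         # ensemble variance (observation noise)
    var_a = np.full(4, sigma_annot ** 2)             # assumed annotation noise variance (prior)
    # Posterior mean of a Gaussian prior (annotation) updated by a Gaussian observation (teacher).
    gain = var_a / (var_a + var_t)
    return noisy_box + gain * (mu_t - noisy_box)

refined = correct_box(np.array([48., 30., 210., 180.]),
                      [[52., 33., 205., 176.], [50., 31., 207., 178.]])
```

The correction pulls the annotation toward the teacher consensus only as far as the assumed noise levels justify, which is the intuition behind using a Bayesian filter rather than simply replacing the labels.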
Recent one-stage object detectors follow a per-pixel prediction approach that predicts both the object category scores and boundary positions from every single grid location. However, the most suitable positions for inferring different targets, i.e., the object category and boundaries, are generally different. Predicting all these targets from the same grid location thus may lead to sub-optimal results. In this paper, we analyze the suitable inference positions for object category and boundaries, and propose a prediction-target-decoupled detector named PDNet to establish a more flexible detection paradigm. Our PDNet with the prediction decoupling mechanism encodes different targets separately in different locations. A learnable prediction collection module is devised with two sets of dynamic points, i.e., dynamic boundary points and semantic points, to collect and aggregate the predictions from the favorable regions for localization and classification. We adopt a two-step strategy to learn these dynamic point positions, where the prior positions are estimated for different targets first, and the network further predicts residual offsets to the positions with better perceptions of the object properties. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness and efficiency of our method. With a single ResNeXt-64x4d-101-DCN as the backbone, our detector achieves 50.1 AP with single-scale testing, which outperforms the state-of-the-art methods by an appreciable margin under the same experimental settings. Moreover, our detector is highly efficient as a one-stage framework. Our code is public at https://github.com/yangli18/PDNet.
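A minimal sketch of the "collect predictions from dynamic points" step is shown below: features are bilinearly sampled at learned point positions expressed as prior locations plus residual offsets. The normalization details and the omitted offset head are assumptions for illustration, not PDNet's exact modules.

```python
# Minimal sketch of gathering features at learned dynamic point positions (illustrative).
import torch
import torch.nn.functional as F

def collect_at_points(feat, points):
    """feat: (N, C, H, W) feature map; points: (N, P, 2) xy positions in pixel coordinates.
       Returns (N, C, P) features bilinearly sampled at the dynamic points."""
    N, C, H, W = feat.shape
    # Normalize pixel coordinates to [-1, 1] as expected by grid_sample.
    grid = points.clone()
    grid[..., 0] = 2.0 * points[..., 0] / max(W - 1, 1) - 1.0
    grid[..., 1] = 2.0 * points[..., 1] / max(H - 1, 1) - 1.0
    sampled = F.grid_sample(feat, grid.unsqueeze(2), mode="bilinear", align_corners=True)
    return sampled.squeeze(-1)                               # (N, C, P)

# Example: points given as prior positions plus learned residual offsets.
feat = torch.randn(1, 256, 32, 32)
base = torch.tensor([[[10.0, 12.0], [20.0, 8.0]]])           # prior positions
offsets = torch.zeros_like(base)                             # residuals from an offset head (omitted)
pts_feat = collect_at_points(feat, base + offsets)           # (1, 256, 2)
```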
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
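A minimal sketch of a generative-replay adaptation loop in this spirit is given below: adaptation to a new unlabeled domain is interleaved with consolidation on images replayed from a generator, so no previously seen data needs to be stored. The generator interface and loss terms are placeholders, not GarDA's exact formulation.

```python
# Illustrative sketch of continual domain adaptation with generative replay.
# `generator.sample(n)` replays images resembling previously seen domains; all
# interfaces (uda_loss, kd_loss, loaders) are placeholders.
import copy
import torch

def adapt_to_new_domain(segmenter, generator, new_domain_loader, optimizer, uda_loss, kd_loss):
    old_segmenter = copy.deepcopy(segmenter).eval()   # frozen model from before this domain
    segmenter.train()
    for target_images in new_domain_loader:           # unlabeled images from the new domain
        replay_images = generator.sample(target_images.size(0))

        # Unsupervised adaptation on the new domain (e.g., entropy / pseudo-label based).
        loss = uda_loss(segmenter(target_images))
        # Consolidation: match predictions on replayed old-domain appearances to the frozen model.
        with torch.no_grad():
            old_preds = old_segmenter(replay_images)
        loss = loss + kd_loss(segmenter(replay_images), old_preds)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```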
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
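The "top-20 features by information gain" step can be reproduced in spirit with a standard mutual-information feature selector, as sketched below; the arrays are placeholders and this is not MGTAB's released code.

```python
# Sketch of selecting the top-20 user property features by information gain
# (mutual information), in the spirit of the feature selection described above.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X = np.random.rand(1000, 40)            # placeholder: user property features
y = np.random.randint(0, 2, size=1000)  # placeholder: bot / human (or stance) labels

selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_top20 = selector.fit_transform(X, y)
selected_idx = selector.get_support(indices=True)   # indices of the retained features
```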
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) demonstrates promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomized masked demonstrations and uses the conventional evaluation outcome, i.e., environment returns, as the coefficients to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, how effective the importance map is, and how importance maps from different IL models relate to one another. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
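A minimal RISE-style sketch of this importance estimation is shown below: random binary masks are applied to the demonstration frames, the black-box IL model is retrained and evaluated, and the resulting returns weight the masks. The training/evaluation routine is a placeholder, not R2RISE's implementation.

```python
# Sketch of a RISE-style importance map over demonstration frames (illustrative).
# `train_and_evaluate(masked_demo)` is a placeholder that retrains the black-box IL
# model on the masked demonstration and returns its environment return.
import numpy as np

def frame_importance(demo_frames, train_and_evaluate, num_masks=100, keep_prob=0.5, seed=0):
    rng = np.random.default_rng(seed)
    T = len(demo_frames)
    importance = np.zeros(T)
    total_weight = 0.0
    for _ in range(num_masks):
        mask = rng.random(T) < keep_prob                   # random binary mask over frames
        masked_demo = [f for f, keep in zip(demo_frames, mask) if keep]
        ret = train_and_evaluate(masked_demo)              # environment return as the coefficient
        importance += ret * mask
        total_weight += ret
    return importance / max(total_weight, 1e-8)            # normalized per-frame importance
```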
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSFormer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
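A minimal sketch of saliency-weighted pooling of per-artifact score maps into a single quality score is given below; the score maps, saliency map, and per-type weights are illustrative placeholders, not the exact SSTAM formulation.

```python
# Illustrative sketch: pool per-pixel artifact scores with a saliency map, then
# combine the six PEA types into one quality score (weights are placeholders).
import numpy as np

PEA_TYPES = ["blurring", "blocking", "bleeding", "ringing", "flickering", "floating"]

def saliency_weighted_score(artifact_maps, saliency, weights=None):
    """artifact_maps: dict of (H, W) per-pixel artifact strengths per PEA type;
       saliency: (H, W) visual saliency map with non-negative values."""
    weights = weights or {k: 1.0 / len(PEA_TYPES) for k in PEA_TYPES}
    sal = saliency / (saliency.sum() + 1e-8)                      # normalize to a spatial weighting
    per_type = {k: float((artifact_maps[k] * sal).sum()) for k in PEA_TYPES}
    return sum(weights[k] * per_type[k] for k in PEA_TYPES), per_type

maps = {k: np.random.rand(64, 64) for k in PEA_TYPES}
score, breakdown = saliency_weighted_score(maps, np.random.rand(64, 64))
```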
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
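In assumed notation (not necessarily the paper's), the weighted mean-percentile objective under a Wasserstein ambiguity set around a nominal reward distribution can be written as follows, with lambda trading off the mean and the VaR-type percentile criterion at level alpha:

```latex
% Assumed notation: R^{\pi} is the (random) return of policy \pi, \hat{P} the nominal
% reward distribution, W a Wasserstein distance, and \epsilon the ambiguity radius.
\max_{\pi \in \Pi} \; \inf_{P \in \mathcal{B}_{\epsilon}(\hat{P})}
  \Big\{ \lambda \, \mathbb{E}_{P}\!\big[ R^{\pi} \big]
       + (1-\lambda) \, \mathrm{VaR}^{P}_{\alpha}\!\big( R^{\pi} \big) \Big\},
\qquad
\mathcal{B}_{\epsilon}(\hat{P}) = \{ P : W(P, \hat{P}) \le \epsilon \}.
```

Setting lambda to 1 or 0 recovers, respectively, a distributionally robust MDP and a percentile (chance-constrained-style) criterion under reward ambiguity, which matches the special cases listed in the abstract.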