Deep reinforcement learning (or simply "RL") is gaining popularity in industrial and research applications. However, it still suffers from key limitations that slow down its widespread adoption: its performance is sensitive to initial conditions and to non-determinism. To address these challenges, we propose a procedure for building ensembles of RL agents that efficiently combine local decisions toward better long-term cumulative rewards. For the first time, hundreds of experiments were run to compare different ensemble-construction procedures in two electricity control environments. We found that an ensemble of 4 agents improves cumulative reward by 46%, improves reproducibility by a factor of 3.6, and can naturally and efficiently be trained and used for prediction on GPUs and CPUs.
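As an illustration of how such an ensemble could combine local decisions, here is a minimal sketch that aggregates the per-step action choices of several independently trained agents by majority vote, assuming a Gymnasium-style environment. The abstract does not specify the actual combination rule, so the voting scheme, the `agents` interface, and the function names below are assumptions.

```python
import numpy as np
from collections import Counter

def ensemble_action(agents, observation):
    """Pick the action chosen by the most agents for this observation.

    `agents` is any iterable of objects exposing an `act(observation)` method
    that returns a discrete action index (assumed interface). Ties are broken
    in favor of the action proposed first.
    """
    votes = [agent.act(observation) for agent in agents]
    action, _count = Counter(votes).most_common(1)[0]
    return action

def evaluate_ensemble(agents, env, episodes=10):
    """Roll out the ensemble policy and return the mean cumulative reward."""
    returns = []
    for _ in range(episodes):
        obs, _info = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _info = env.step(
                ensemble_action(agents, obs)
            )
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))
```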
Camera traps have revolutionized the study of many animal species that were previously nearly impossible to observe due to their habitat or behavior. They are usually cameras fixed to a tree that take a short sequence of images when triggered. Deep learning has the potential to reduce this workload by automatically classifying images according to taxon, or as empty. However, standard deep neural network classifiers fail because animals often occupy only a small portion of the high-definition images. That is why we propose a workflow named Weakly Object Detection Faster-RCNN+FPN, which is suited to this challenge. The model is weakly supervised because it requires only the animal taxon label per image, but no manual bounding-box annotation. First, it automatically performs weakly-supervised bounding-box annotation using motion across multiple frames. Then, it trains a Faster-RCNN+FPN model using this weak supervision. Experimental results are reported on two datasets from biodiversity monitoring campaigns in Papua New Guinea and Missouri, and then on an easily reproducible testbed.
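A minimal sketch of how motion across multiple frames could be turned into a weak bounding box, using simple frame differencing with OpenCV. The paper does not detail its annotation procedure in the abstract, so the thresholds, the morphology step, and the single-box assumption here are illustrative only.

```python
import cv2
import numpy as np

def weak_box_from_motion(frames, diff_threshold=25, min_area=500):
    """Derive a single rough bounding box from pixel motion in a burst of frames.

    `frames` is a list of BGR images from one camera-trap trigger.
    Returns (x, y, w, h) or None if no sufficiently large motion is found.
    Thresholds are illustrative, not taken from the paper.
    """
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    # Accumulate absolute differences between consecutive frames.
    motion = np.zeros_like(grays[0], dtype=np.float32)
    for a, b in zip(grays[:-1], grays[1:]):
        motion += cv2.absdiff(a, b).astype(np.float32)
    mask = (motion > diff_threshold).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    # Use the union of all moving regions as one weak box for the animal.
    x, y, w, h = cv2.boundingRect(np.vstack(contours))
    return x, y, w, h
```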
Ensembles of Deep Neural Networks (DNNs) achieve qualitative predictions, but they are computationally and memory intensive. The demand is therefore growing to make them answer a heavy workload of requests with the available computing resources. Unlike recent initiatives on inference servers and inference frameworks, which schedule the prediction of single DNNs, we propose a new software layer to serve ensembles of DNNs with flexibility and efficiency. Our inference system features several technical innovations. First, we propose a new procedure to find a good allocation matrix between devices (CPUs or GPUs) and DNN instances. It successively runs a worst-fit heuristic to allocate the DNNs into the devices' memory and a greedy algorithm to optimize the allocation and speed up the ensemble. Second, we design the inference system around multiple processes running asynchronously: batching, prediction, and the combination rule, with an efficient internal communication scheme to avoid overhead. Experiments show flexibility and efficiency under extreme scenarios: successfully serving an ensemble of 12 heavy DNNs on 4 GPUs and, at the opposite end, a single DNN multi-threaded across 16 GPUs. It also outperforms a simple baseline consisting of optimizing the batch size of the DNNs, with a speedup of up to 2.7x on an image classification task.
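To make the first step concrete, here is a minimal sketch of a worst-fit placement of DNN instances onto devices by memory: each instance goes to the device with the most free memory that can still hold it. The subsequent greedy refinement is not shown, and all names and numbers below are assumptions for illustration, not the paper's implementation.

```python
def worst_fit_allocation(model_memories, device_capacities):
    """Assign each DNN instance to the device with the most remaining memory.

    `model_memories`   : dict model_name -> memory footprint (e.g., GiB)
    `device_capacities`: dict device_name -> total memory (e.g., GiB)
    Returns dict model_name -> device_name, or raises if a model does not fit.
    """
    free = dict(device_capacities)
    allocation = {}
    # Placing the largest models first makes the worst-fit heuristic more robust.
    for model, mem in sorted(model_memories.items(), key=lambda kv: -kv[1]):
        device = max(free, key=free.get)  # device with the most free memory
        if free[device] < mem:
            raise MemoryError(f"{model} ({mem} GiB) does not fit on any device")
        allocation[model] = device
        free[device] -= mem
    return allocation

# Example: 12 heavy DNNs of 3 GiB each across 4 GPUs with 12 GiB of memory.
models = {f"dnn_{i}": 3.0 for i in range(12)}
gpus = {f"gpu_{i}": 12.0 for i in range(4)}
print(worst_fit_allocation(models, gpus))  # ends up with 3 models per GPU
```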
Automated machine learning with ensembling (AutoML with ensembling) seeks to automatically build ensembles of Deep Neural Networks (DNNs) to achieve qualitative predictions. Ensembles of DNNs are well known to avoid over-fitting, but they are memory- and time-consuming. An ideal AutoML would therefore produce, in one single run, ensembles with different trade-offs between accuracy and inference speed. While previous work on AutoML focuses on searching for the best single model to maximize its generalization ability, we instead propose a new AutoML that builds a large library of accurate and diverse individual models from which ensembles are constructed. First, our extensive benchmarks show that asynchronous Hyperband is an efficient and robust way to build a large number of diverse models to combine. Then, a new ensemble selection method based on a multi-objective greedy algorithm is proposed to generate accurate ensembles while controlling their computing cost. Finally, we propose a novel algorithm to optimize the inference of the DNN ensemble on a GPU cluster based on allocation optimization. The resulting AutoML with ensembling shows strong results on two datasets, using the GPU cluster efficiently during both the training phase and the inference phase.
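A minimal sketch of a multi-objective greedy ensemble selection along these lines: models are added one at a time, each step picking the candidate that most improves validation accuracy per unit of added inference cost, stopping at a cost budget. The scoring rule, the data layout, and the names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def greedy_ensemble_selection(val_probs, val_labels, costs, cost_budget):
    """Greedily select model indices trading off accuracy gain against cost.

    `val_probs` : array (n_models, n_samples, n_classes) of validation probabilities
    `val_labels`: array (n_samples,) of integer labels
    `costs`     : array (n_models,) of per-model inference costs (e.g., ms/batch)
    Returns the list of selected model indices (soft-voting ensemble).
    """
    def accuracy(member_indices):
        mean_probs = val_probs[member_indices].mean(axis=0)
        return float((mean_probs.argmax(axis=1) == val_labels).mean())

    selected, total_cost, best_acc = [], 0.0, 0.0
    while True:
        best_gain_per_cost, best_idx = 0.0, None
        for i in range(len(costs)):
            if i in selected or total_cost + costs[i] > cost_budget:
                continue
            gain = accuracy(selected + [i]) - best_acc
            if gain / costs[i] > best_gain_per_cost:
                best_gain_per_cost, best_idx = gain / costs[i], i
        if best_idx is None:
            return selected  # no candidate improves accuracy within budget
        selected.append(best_idx)
        total_cost += costs[best_idx]
        best_acc = accuracy(selected)
```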
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operator to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Attention mechanisms form a core component of several successful deep learning architectures, and are based on one key idea: ''The output depends only on a small (but unknown) segment of the input.'' In several practical applications like image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output are often used as a way to peek into the `reasoning` of the network. We make such a notion more precise for a variant of the classification problem that we term selective dependence classification (SDC) when used with attention model architectures. Under such a setting, we demonstrate various error modes where an attention model can be accurate but fail to be interpretable, and show that such models do occur as a result of training. We illustrate various situations that can accentuate and mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity and demonstrate that these algorithms help improve interpretability.
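As a rough illustration of the kind of diagnostic this implies, the sketch below computes standard scaled dot-product attention weights over input segments and checks whether the most-attended segment is the one the label actually depends on. The abstract does not give the formal SDC definition, so this check and the function names are assumptions, not the paper's criterion.

```python
import numpy as np

def soft_attention(query, keys, values):
    """Scaled dot-product attention over input segments.

    `query`: (d,), `keys`/`values`: (n_segments, d).
    Returns the attended output and the attention weights over segments.
    """
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values, weights

def attended_segment_matches(weights, responsible_segment):
    """Interpretability check in the spirit of SDC: does the model attend most
    strongly to the segment the label depends on? (Assumed diagnostic.)"""
    return int(np.argmax(weights)) == responsible_segment
```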
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count as compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
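For context, the parabolic pricing PDE associated with the Heston stochastic-volatility model, in its standard form (stated here for reference; the paper's exact formulation and boundary conditions are not reproduced in the abstract), is:

```latex
\frac{\partial U}{\partial t}
+ \tfrac{1}{2} v S^2 \frac{\partial^2 U}{\partial S^2}
+ \rho \xi v S \frac{\partial^2 U}{\partial S \, \partial v}
+ \tfrac{1}{2} \xi^2 v \frac{\partial^2 U}{\partial v^2}
+ r S \frac{\partial U}{\partial S}
+ \kappa (\theta - v) \frac{\partial U}{\partial v}
- r U = 0,
```

where U(S, v, t) is the option price, S the asset price, v the instantaneous variance, r the risk-free rate, kappa the mean-reversion speed, theta the long-run variance, xi the volatility of variance, and rho the correlation between the two driving Brownian motions.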
Artificial neural networks can learn complex, salient data features to achieve a given task. On the opposite end of the spectrum, mathematically grounded methods such as topological data analysis allow users to design analysis pipelines fully aware of data constraints and symmetries. We introduce a class of persistence-based neural network layers. Persistence-based layers allow the users to easily inject knowledge about symmetries (equivariance) respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
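As a rough illustration of what a learnable, permutation-invariant layer on persistence diagrams can look like (a PersLay-style construction, assumed here since the abstract does not specify the layer family), one can transform each (birth, death) point with a small learnable map and aggregate with a symmetric operation:

```python
import numpy as np

class PersistencePointLayer:
    """Permutation-invariant layer on persistence diagrams (illustrative sketch).

    Each diagram point (birth, death) is passed through a learnable affine map
    and a ReLU, and the resulting point embeddings are summed, so the output
    does not depend on the ordering of points in the diagram.
    """

    def __init__(self, out_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(2, out_dim))  # learnable weights
        self.b = np.zeros(out_dim)                          # learnable bias

    def __call__(self, diagram):
        # diagram: array of shape (n_points, 2) holding (birth, death) pairs
        if len(diagram) == 0:
            return np.zeros_like(self.b)
        point_embeddings = np.maximum(diagram @ self.W + self.b, 0.0)
        return point_embeddings.sum(axis=0)

# Toy usage: a diagram with three topological features.
layer = PersistencePointLayer(out_dim=4)
print(layer(np.array([[0.0, 0.9], [0.1, 0.3], [0.2, 0.25]])))
```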
KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks.
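For reference, the KL-regularized objective typically used in this setting (a standard formulation; the paper's exact notation may differ) augments the expected return with a penalty that keeps the learned policy close to a behavioral reference policy obtained from demonstrations:

```latex
J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}
  \Big( r(s_t, a_t)
  - \alpha \, D_{\mathrm{KL}}\big( \pi(\cdot \mid s_t) \,\|\, \pi_0(\cdot \mid s_t) \big) \Big) \right],
```

where pi_0 is the behavioral reference policy, gamma the discount factor, and alpha the regularization strength; the pathology described in the abstract concerns parametric choices of pi_0, which the authors show can be remedied by non-parametric reference policies.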
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.