Simulation-to-reality transfer has emerged as a popular and highly successful method for training robotic control policies for a wide variety of tasks. However, it is often challenging to determine when policies trained in simulation are ready to be transferred to the physical world. Deploying policies that have been trained with very little simulation data can result in unreliable and dangerous behavior on physical hardware. On the other hand, excessive training in simulation can cause policies to overfit to the visual appearance and dynamics of the simulator. In this work, we study strategies for automatically determining when policies trained in simulation can be reliably transferred to a physical robot. We specifically study these ideas in the context of robotic fabric manipulation, where successful sim2real transfer is particularly challenging due to the difficulty of modeling the dynamics and visual appearance of fabric. Results on a fabric smoothing task suggest that our switching criteria correlate well with performance in the real world. In particular, our confidence-based switching criteria achieve an average final fabric coverage of 87.2-93.7% within 55-60% of the total training budget. See https://tinyurl.com/lsc-case for code and supplementary material.
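The abstract does not specify the exact form of the confidence-based switching criterion; a minimal sketch of one plausible instantiation — transfer once the policy's mean confidence on held-out simulated states stays above a threshold for several consecutive evaluation rounds — might look as follows. The function name, threshold, and patience value are illustrative assumptions, not the authors' implementation.

```python
def confidence_switching_criterion(conf_history, threshold=0.9, patience=3):
    """Hypothetical switching rule: transfer to the real robot once mean
    policy confidence has exceeded `threshold` for `patience` consecutive
    evaluation rounds. Purely illustrative."""
    if len(conf_history) < patience:
        return False
    return all(c >= threshold for c in conf_history[-patience:])

# Toy usage: mean confidence of the policy on held-out simulated states.
history = []
for step, mean_conf in enumerate([0.62, 0.74, 0.85, 0.91, 0.93, 0.94]):
    history.append(mean_conf)
    if confidence_switching_criterion(history):
        print(f"Switch to real robot at evaluation round {step}")
        break
```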
Safe exploration is critical for using reinforcement learning (RL) in risk-sensitive environments. Recent work learns risk measures, which quantify the probability of violating constraints and can then be used to enable safety. However, learning such risk measures requires significant interaction with the environment, resulting in excessive constraint violations during learning. Furthermore, these measures are not easily transferable to new environments. We cast safe exploration as an offline meta-RL problem, where the objective is to leverage examples of safe and unsafe behavior across a range of environments to quickly adapt learned risk measures to a new environment with previously unseen dynamics. We then present MEta-learning for Safe Adaptation (MESA), an approach for meta-learning a risk measure for safe RL. Simulation experiments across 5 continuous control domains suggest that MESA can leverage offline data from a range of different environments to reduce constraint violations in unseen environments while maintaining task performance. See https://tinyurl.com/safe-meta-rl for code and supplementary material.
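As a hedged sketch of the general recipe described above — pretrain a risk measure on pooled offline safe/unsafe examples from several environments, then quickly adapt it with a small batch from the unseen target environment — consider the following. The network architecture, state-action dimensionality, and training steps are toy assumptions, not MESA's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical risk measure: a classifier mapping (state, action) pairs to
# the probability of a future constraint violation.
risk_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(risk_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_on(batches, steps):
    for _ in range(steps):
        for sa, violated in batches:   # offline (state-action, label) data
            opt.zero_grad()
            loss = bce(risk_net(sa).squeeze(-1), violated)
            loss.backward()
            opt.step()

# Meta-phase: pool offline safe/unsafe examples from several training envs.
train_envs = [(torch.randn(32, 8), torch.randint(0, 2, (32,)).float())
              for _ in range(5)]
train_on(train_envs, steps=100)

# Adaptation phase: a small batch from the unseen target environment is used
# to fine-tune the pretrained risk measure.
target_batch = [(torch.randn(16, 8), torch.randint(0, 2, (16,)).float())]
train_on(target_batch, steps=10)
```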
Prior work defined Exploratory Grasping, in which a robot iteratively grasps and drops an unknown complex polyhedral object to discover a set of robust grasps for each recognized distinct stable pose of the object. Recent work used a multi-armed bandit model with a small set of candidate grasps per pose; however, for objects with few successful grasps, this set may not include the most robust grasp. We present Learning Efficient Grasp Sets (LEGS), an algorithm that can efficiently explore thousands of possible grasps by constructing small active sets of promising grasps, and that uses learned confidence bounds to determine when it can stop exploring the object with high confidence. Experiments suggest that LEGS can identify high-quality grasps more efficiently than prior algorithms that do not learn active sets. In simulation experiments, we measure the optimality gap between the best grasp identified by LEGS and by baselines against the true most-robust grasp. After 3000 steps of exploration, LEGS outperforms the baseline algorithms on 10 of the 14 Dex-Net adversarial objects and 25 of the 39 EGAD! objects. We then develop a self-supervised grasping system in which the robot explores grasps with minimal human intervention. Physical experiments on 3 objects suggest that LEGS converges to high-performing grasps significantly faster than baselines. See \url{https://sites.google.com/view/legs-exp-grasping} for supplementary material and videos.
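For intuition, here is a toy sketch of the bandit-with-active-set idea described above: keep a small active set of grasps, pick among them with UCB-style confidence bounds, and occasionally swap in fresh candidates. All constants, the swap rule, and the success model are illustrative assumptions, not LEGS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 candidate grasps with unknown success probabilities.
true_p = rng.beta(1, 6, size=1000)
succ = np.zeros(1000)
fail = np.zeros(1000)
active = list(rng.choice(1000, size=20, replace=False))

for t in range(1, 3001):
    # UCB selection over the active set.
    n = succ[active] + fail[active]
    mean = succ[active] / np.maximum(n, 1)
    ucb = mean + np.sqrt(2 * np.log(t) / np.maximum(n, 1))
    g = active[int(np.argmax(ucb))]
    if rng.random() < true_p[g]:
        succ[g] += 1
    else:
        fail[g] += 1
    # Occasionally swap the weakest active grasp for a fresh candidate.
    if t % 100 == 0:
        worst = active[int(np.argmin(mean))]
        active.remove(worst)
        active.append(int(rng.integers(1000)))

best = max(active, key=lambda g: succ[g] / max(succ[g] + fail[g], 1))
print("estimated best grasp:", best, "true p:", round(true_p[best], 3))
```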
In this paper, we propose and showcase, for the first time, monocular multi-view layout estimation for warehouse racks and shelves. Unlike typical layout estimation methods, MVRackLay estimates multi-layered layouts, wherein each layer corresponds to the layout of a shelf within a rack. Given a sequence of images of a warehouse scene, a dual-headed Convolutional-LSTM architecture outputs segmented racks, the front and the top view layout of each shelf within a rack. With minimal effort, such an output is transformed into a 3D rendering of all racks, shelves and objects on the shelves, giving an accurate 3D depiction of the entire warehouse scene in terms of racks, shelves and the number of objects on each shelf. MVRackLay generalizes to a diverse set of warehouse scenes with varying number of objects on each shelf, number of shelves and in the presence of other such racks in the background. Further, MVRackLay shows superior performance vis-a-vis its single view counterpart, RackLay, in layout accuracy, quantized in terms of the mean IoU and mAP metrics. We also showcase a multi-view stitching of the 3D layouts resulting in a representation of the warehouse scene with respect to a global reference frame akin to a rendering of the scene from a SLAM pipeline. To the best of our knowledge, this is the first such work to portray a 3D rendering of a warehouse scene in terms of its semantic components - Racks, Shelves and Objects - all from a single monocular camera.
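The paper's dual-headed Convolutional-LSTM is not reproduced here; a minimal sketch of its general shape — a shared convolutional encoder over frames, an LSTM over per-frame features, and two decoder heads predicting multi-layer top-view and front-view shelf layouts — might look as follows. All layer sizes, names, and output shapes are assumptions.

```python
import torch
import torch.nn as nn

class DualHeadedRackLayoutNet(nn.Module):
    """Illustrative stand-in for a dual-headed Conv-LSTM layout estimator."""
    def __init__(self, hidden=256, shelves=4, grid=32):
        super().__init__()
        self.encoder = nn.Sequential(              # shared CNN over frames
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.temporal = nn.LSTM(64 * 16, hidden, batch_first=True)
        # One head per view; each predicts an occupancy grid per shelf layer.
        self.top_head = nn.Linear(hidden, shelves * grid * grid)
        self.front_head = nn.Linear(hidden, shelves * grid * grid)
        self.shape = (shelves, grid, grid)

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)
        last = out[:, -1]
        return (self.top_head(last).view(b, *self.shape),
                self.front_head(last).view(b, *self.shape))

model = DualHeadedRackLayoutNet()
top, front = model(torch.randn(2, 5, 3, 128, 128))
print(top.shape, front.shape)   # torch.Size([2, 4, 32, 32]) twice
```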
Fine-tuning pre-trained language models (PLMs) achieves impressive performance on a range of downstream tasks, and their sizes have consequently been getting bigger. Since a different copy of the model is required for each task, this paradigm is infeasible for storage-constrained edge devices like mobile phones. In this paper, we propose SPARTAN, a parameter efficient (PE) and computationally fast architecture for edge devices that adds hierarchically organized sparse memory after each Transformer layer. SPARTAN freezes the PLM parameters and fine-tunes only its memory, thus significantly reducing storage costs by re-using the PLM backbone for different tasks. SPARTAN contains two levels of memory, with only a sparse subset of parents being chosen in the first level for each input, and children cells corresponding to those parents being used to compute an output representation. This sparsity combined with other architecture optimizations improves SPARTAN's throughput by over 90% during inference on a Raspberry Pi 4 when compared to PE baselines (adapters) while also outperforming the latter by 0.1 points on the GLUE benchmark. Further, it can be trained 34% faster in a few-shot setting, while performing within 0.9 points of adapters. Qualitative analysis shows that different parent cells in SPARTAN specialize in different topics, thus dividing responsibility efficiently.
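As a rough illustration of the two-level memory described above — a sparse subset of parent cells selected per input, with the chosen parents' children combined into the output representation — here is a hedged sketch. The dimensions, top-k routing, and attention-style readout are assumptions rather than SPARTAN's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelSparseMemory(nn.Module):
    """Illustrative two-level memory: pick top-k parent cells per token,
    then read out from the children attached to those parents."""
    def __init__(self, d=256, parents=64, children=8, k=2):
        super().__init__()
        self.parent_keys = nn.Parameter(torch.randn(parents, d) / d**0.5)
        self.child_keys = nn.Parameter(torch.randn(parents, children, d) / d**0.5)
        self.child_values = nn.Parameter(torch.randn(parents, children, d) / d**0.5)
        self.k = k

    def forward(self, x):                     # x: (tokens, d)
        scores = x @ self.parent_keys.t()     # (tokens, parents)
        topv, topi = scores.topk(self.k, dim=-1)
        gate = F.softmax(topv, dim=-1)        # weights over chosen parents
        ck = self.child_keys[topi]            # (tokens, k, children, d)
        cv = self.child_values[topi]
        att = F.softmax(torch.einsum('td,tkcd->tkc', x, ck), dim=-1)
        read = torch.einsum('tkc,tkcd->tkd', att, cv)
        return (gate.unsqueeze(-1) * read).sum(dim=1)   # (tokens, d)

mem = TwoLevelSparseMemory()
out = mem(torch.randn(10, 256))
print(out.shape)   # torch.Size([10, 256])
```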
We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts; and a low-level neural layer for extracting symbols required to generate the symbolic explanation. Real data is often imperfect meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols, each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive; and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature-values from raw data, using `abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. We show also that prior information about the features in the form of even imperfect pre-training can help correct this situation. These findings are replicated on the original problem considered by NEUROLOG, without the use of feature-delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by `feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback.
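For intuition, a hedged sketch of an abduction-driven semantic loss: abduce the set of feature assignments consistent with the known label under a symbolic theory, then penalize the neural extractor by the negative log of the total probability mass it assigns to that set. This follows the general semantic-loss recipe; the toy theory and enumeration here are assumptions, not NEUROLOG's implementation.

```python
import torch

# Toy theory: the label is "even" iff the two extracted digit features sum
# to an even number. Abduction enumerates all consistent feature pairs.
def abduce(label_even, n_values=10):
    return [(a, b) for a in range(n_values) for b in range(n_values)
            if ((a + b) % 2 == 0) == label_even]

def semantic_loss(p_a, p_b, label_even):
    """p_a, p_b: predicted distributions over each feature's values.
    Loss = -log of the joint probability mass on abduced assignments."""
    mass = sum(p_a[a] * p_b[b] for a, b in abduce(label_even))
    return -torch.log(mass)

logits_a = torch.randn(10, requires_grad=True)
logits_b = torch.randn(10, requires_grad=True)
loss = semantic_loss(logits_a.softmax(0), logits_b.softmax(0), label_even=True)
loss.backward()   # gradients flow back to the feature extractors
print(float(loss))
```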
When answering a question, humans utilize information available across different modalities to synthesize a consistent and complete chain of thought (CoT). In the case of deep learning models such as large-scale language models, this process is usually a black box. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of AI systems. However, existing datasets fail to provide annotations for the answers, or are restricted to a text-only modality, small scale, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA), a new benchmark consisting of ~21k multimodal multiple-choice questions spanning a diverse set of science topics, with answers annotated with corresponding lectures and explanations. We further design language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions. ScienceQA demonstrates the utility of CoT in language models, as CoT improves question-answering performance by 1.20% for GPT-3 and 3.99% for UnifiedQA. We also explore the upper bound for models to leverage explanations by feeding them into the input; we observe that this improves the few-shot performance of GPT-3 by 18.96%. Our analysis further shows that, similarly to humans, language models benefit from explanations to learn from less data, achieving the same performance with only 40% of the data.
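A hedged sketch of a lecture-plus-explanation chain-of-thought prompt in the spirit described above; the field layout and wording are assumptions, not the paper's exact template.

```python
def build_cot_prompt(question, choices, lecture, explanation, answer):
    """Assemble a question with its lecture and explanation as the CoT."""
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    return (f"Question: {question}\nOptions:\n{options}\n"
            f"Lecture: {lecture}\nExplanation: {explanation}\n"
            f"Answer: The answer is ({answer}).")

demo = build_cot_prompt(
    question="Which property do these objects share?",
    choices=["hard", "stretchy"],
    lecture="A property is something you can observe about an object.",
    explanation="A rubber band and a balloon both stretch when pulled.",
    answer="B",
)
print(demo)
```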
Analogical reasoning problems challenge both connectionist and symbolic AI systems, as these problems require a combination of background knowledge, reasoning, and pattern recognition. Symbolic systems ingest explicit domain knowledge and perform deductive reasoning, but they are sensitive to noise and require inputs to be mapped to preset symbolic features. Connectionist systems, on the other hand, can directly ingest rich input spaces such as images, text, or speech, and can recognize patterns even with noisy inputs. However, connectionist models struggle to use explicit domain knowledge for deductive reasoning. In this paper, we propose a framework that combines the pattern recognition capabilities of neural networks with symbolic reasoning and background knowledge to solve a class of analogical reasoning problems in which the set of attributes and possible relations is known. We draw inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge by (i) learning a distributed representation based on a symbolic model of the problem, (ii) training neural-network transformations of the distributed representations that reflect the relations involved in the problem, and finally (iii) training a neural-network encoder from images to the distributed representations in (i). These three elements enable us to perform search-based reasoning using neural networks as elementary functions that manipulate distributed representations. We test this on visual analogy problems in Raven's Progressive Matrices and achieve accuracy competitive with human performance, in some cases outperforming methods based on initial end-to-end neural-network approaches. While recent neural models trained at scale yield state-of-the-art results, our novel neuro-symbolic reasoning approach is a promising direction for this problem, and is arguably more general, especially for problems where domain knowledge is available.
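To make steps (ii)-(iii) concrete, here is a toy sketch of search-based reasoning over distributed representations: apply a learned relation transformation to panel embeddings and select the candidate answer whose embedding is nearest to the prediction. The linear relation and random embeddings are stand-in assumptions; in the paper these are trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Stand-in for a learned relation transform on distributed representations.
W = np.eye(d) + 0.1 * rng.standard_normal((d, d))

def apply_relation(z):
    return W @ z

# Row of a toy analogy matrix: z2 = relation(z1), so the missing third
# panel should be relation(z2).
z1 = rng.standard_normal(d)
z2 = apply_relation(z1)
prediction = apply_relation(z2)

# Candidate answers: distractors plus a noisy version of the true panel.
candidates = [rng.standard_normal(d) for _ in range(7)]
candidates.append(prediction + 0.01 * rng.standard_normal(d))

# Search: pick the candidate closest to the predicted representation.
dists = [np.linalg.norm(c - prediction) for c in candidates]
print("chosen candidate:", int(np.argmin(dists)))   # expect index 7
```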
Policy Decomposition (PoDec) is a framework for lessening the curse of dimensionality when deriving policies for optimal control problems. For a given system representation, i.e., the state variables and control inputs that describe a system, PoDec generates strategies for decomposing the joint optimization of policies for all control inputs. Policies for different inputs are thereby derived in a decoupled or cascaded fashion, as functions of certain subsets of the state variables, leading to reduced computation. However, the choice of system representation is crucial, as it dictates the suboptimality of the resulting policies. We present a heuristic method to find representations better suited to decomposition. Our approach is based on the observation that every decomposition enforces a sparsity pattern in the resulting policies at the cost of optimality, and that representations that already lead to sparse optimal policies are likely to yield decompositions with lower suboptimality. Since the optimal policy is not known in advance, we construct a system representation from its LQR approximation. For a simplified biped, a 4 degree-of-freedom manipulator, and a quadcopter, we find decompositions that reduce trajectory costs by 10% relative to those identified by vanilla PoDec. Moreover, the decomposition policies produce trajectories with substantially lower costs than policies obtained with state-of-the-art reinforcement learning algorithms.
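A hedged sketch of the LQR-based heuristic described above: compute the LQR gain for a linearization of the system and inspect which state variables each input's feedback (nearly) ignores; zeros in the gain matrix hint at a decomposition. The system matrices here are toy assumptions (two decoupled double integrators), chosen so the block-sparse pattern is visible.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linearized dynamics: two decoupled double integrators.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
Q = np.eye(4)
R = np.eye(2)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal LQR gain, u = -K x

# Sparsity pattern: near-zero entries of K suggest an input's policy can
# ignore those state variables, hinting at a good decomposition.
pattern = np.abs(K) > 1e-6
print(np.round(K, 3))
print(pattern)   # block-diagonal: each input depends on its own two states
```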
Recent wave energy converters (WECs) are equipped with multiple legs and generators to maximize energy generation. Traditional controllers show limitations in capturing complex wave patterns, and controllers must efficiently maximize energy capture. This paper introduces a Multi-Agent Reinforcement Learning (MARL) controller that outperforms the traditionally used spring-damper controller. Our initial study showed that the complex nature of the problem makes training hard to converge. We therefore propose a novel skip-training approach that enables MARL training to overcome performance saturation and, compared with default MARL training, converge to an optimal controller, thereby boosting energy generation. We also propose another novel hybrid training initialization (STHTI) approach, in which the individual agents of the MARL controller are first trained individually against the baseline spring-damper (SD) controller, and then trained one agent at a time, or all together, in subsequent iterations to accelerate convergence. Using the Asynchronous Advantage Actor-Critic (A3C) algorithm, we achieved double-digit improvements in energy efficiency over the baseline spring-damper controller.
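A hedged sketch of what the STHTI schedule described above could look like as a training loop: pretrain each agent alone against the baseline controller, then continue agent-by-agent (or jointly). The function names and phase structure are assumptions based only on the description; the A3C update internals are elided.

```python
def train_agent(agent, others_policy, steps):
    """Placeholder: run RL updates (e.g., A3C) for one agent while the
    remaining legs follow `others_policy` (baseline SD or frozen co-agents)."""
    for _ in range(steps):
        pass  # environment rollouts + gradient updates would go here

def sthti(agents, baseline_policy, pre_steps=1000, joint_rounds=5):
    # Phase 1: each agent trains alone against the baseline SD controller.
    for agent in agents:
        train_agent(agent, baseline_policy, pre_steps)
    # Phase 2: continue one agent at a time, the others held frozen.
    for _ in range(joint_rounds):
        for agent in agents:
            frozen_others = [a for a in agents if a is not agent]
            train_agent(agent, frozen_others, pre_steps // 10)

sthti(agents=["leg1", "leg2", "leg3"], baseline_policy="spring-damper")
```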