Optimal control (OC) using inverse dynamics provides numerical benefits such as coarse optimization, cheaper computation of derivatives, and a high convergence rate. However, to exploit these benefits in model predictive control (MPC) for legged robots, it is critical to handle the large number of equality constraints efficiently. To this end, we first (i) propose a novel approach to handle equality constraints based on a nullspace parametrization. Our approach appropriately balances optimality against both dynamics and equality-constraint feasibility, which increases the basin of attraction to good local minima. To do so, we (ii) adapt our feasibility-driven search by incorporating a merit function. Furthermore, we introduce (iii) a condensed formulation of inverse dynamics that considers arbitrary actuator models. We also propose (iv) a novel inverse-dynamics-based MPC within a perceptive locomotion framework. Finally, we present (v) a theoretical comparison of optimal control with forward and inverse dynamics, and evaluate both numerically. Our approach enables the first application of inverse-dynamics MPC on hardware, resulting in state-of-the-art dynamic climbing on the ANYmal robot. We benchmark it over a wide range of robotics problems and generate agile and complex maneuvers. We show the computational reduction achieved by our nullspace resolution and condensed formulation (up to 47.3%). We provide evidence of the benefits of our approach by solving coarse optimization problems with a high convergence rate (up to 10 Hz discretization). Our algorithm is publicly available inside Crocoddyl.
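The nullspace handling of equality constraints can be illustrated on a toy equality-constrained quadratic program: parametrize all feasible points as a particular solution plus a nullspace term, then optimize the reduced unconstrained problem. This is a minimal sketch of the general idea, not the paper's solver; the problem data and function names are invented for illustration.

```python
import numpy as np

def nullspace(A, tol=1e-12):
    """Orthonormal basis of the nullspace of A, computed via SVD."""
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T

def solve_eq_constrained_qp(Q, c, A, b):
    """Minimize 0.5 x'Qx + c'x subject to Ax = b via nullspace parametrization."""
    # Particular solution of the equality constraint (minimum-norm solution)
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]
    # Any x = x_p + N @ z satisfies Ax = b, so the constraint is eliminated
    N = nullspace(A)
    # Reduced unconstrained problem in z: 0.5 z'(N'QN)z + (N'(Q x_p + c))'z
    H = N.T @ Q @ N
    g = N.T @ (Q @ x_p + c)
    z = np.linalg.solve(H, -g)
    return x_p + N @ z
```

For example, minimizing 0.5‖x‖² subject to x₁ + x₂ + x₃ = 3 yields x = (1, 1, 1), and the reduced problem has only two free variables instead of three variables plus a multiplier.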
Robot design optimization, imitation learning, and system identification share a common problem: they require optimizing over robot or task parameters while simultaneously optimizing the robot's motion. To solve these problems, we can use differentiable optimal control, which provides the gradient of the robot's motion with respect to the parameters. We propose a method to compute these gradients analytically via sensitivity analysis (SA) of the differential dynamic programming (DDP) algorithm. We show that the second-order dynamics terms must be included when computing the gradients, although they are not needed when computing the motion itself. We validate our approach on pendulum and double-pendulum systems. Furthermore, we compare against derivatives obtained with the iterative linear quadratic regulator (iLQR), which ignores these second-order terms, on a co-design task for a Kinova arm in which we optimize the link lengths of the robot for a target reaching task. We show that optimizing with iLQR gradients, which neglect the second-order dynamics, degrades the computed derivatives. In contrast, optimizing with DDP gradients succeeds for a range of initial designs, allowing our formulation to scale to complex systems.
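The core mechanism, differentiating an inner optimum with respect to an outer parameter, can be sketched on a scalar toy problem via the implicit function theorem. Note that the cross second derivative below is exactly the kind of second-order term that is needed for the gradient even though it plays no role in locating the minimizer. This is a hand-rolled illustration under invented dynamics, not the paper's DDP-based formulation.

```python
def inner_solution(theta):
    """Minimizer of the toy inner problem f(x, theta) = 0.5 * (x - theta**2)**2.
    Here it is closed-form; a DDP/iLQR solver would return it numerically."""
    return theta**2

def sensitivity(theta):
    """dx*/dtheta via the implicit function theorem on the stationarity
    condition f_x(x*, theta) = 0:  dx*/dtheta = -f_xx^{-1} f_xtheta.
    The second-order term f_xtheta must be kept to get the gradient right,
    even though finding x* itself never uses it."""
    f_xx = 1.0               # d^2 f / dx^2 at the optimum
    f_xtheta = -2.0 * theta  # d^2 f / (dx dtheta), the cross term
    return -f_xtheta / f_xx
```

A finite-difference check on `inner_solution` confirms the analytic sensitivity, e.g. dx*/dθ = 2θ for this toy problem.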
We present a versatile framework for the computational co-design of legged robots and dynamic maneuvers. Current state-of-the-art methods are typically based on random sampling or concurrent optimization. We propose a novel bilevel optimization approach that exploits the derivatives of a motion planning subproblem (i.e., the lower level). These motion planning derivatives allow us to incorporate arbitrary design constraints and costs into a general nonlinear program (i.e., the upper level). Our approach allows the use of any differentiable motion planner at the lower level, while the upper level captures arbitrary design constraints and costs. It efficiently optimizes the robot's morphology, payload distribution, and actuator parameters while considering its full dynamics, joint limits, and physical constraints such as friction cones. We demonstrate these capabilities by designing quadruped robots that jump and trot. We show that our method is able to design a more energy-efficient Solo robot for these tasks.
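The bilevel structure can be sketched with a deliberately tiny stand-in: the lower level "plans" a control for a point mass of a given design mass, and the upper level searches over designs for the best total cost. All numbers, cost weightings, and the fragility penalty are invented for illustration; a real co-design framework would run a full trajectory optimizer at the lower level and a nonlinear program at the upper level.

```python
import numpy as np

def motion_cost(mass):
    """Lower level: choose a control effort u so a point mass of the given
    mass tracks a unit target acceleration, trading effort against error.
    Inner problem: minimize 0.5*u**2 + 50*(u/mass - 1)**2 over u,
    which has the closed-form optimum u* = 100*mass / (mass**2 + 100)."""
    u_opt = 100.0 * mass / (mass**2 + 100.0)
    return 0.5 * u_opt**2 + 50.0 * (u_opt / mass - 1.0)**2

def codesign(designs):
    """Upper level: pick the design (mass) minimizing motion cost plus a
    hypothetical fragility penalty that favors sturdier links."""
    total = lambda m: motion_cost(m) + 2.0 / m
    return min(designs, key=total)
```

Heavy designs plan cheaply but pay a large motion-effort cost, light designs pay the fragility penalty, and the upper level settles on an interior trade-off.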
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operators to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We consider the contextual bandit problem on general action and context spaces, where the learner's rewards depend on their selected actions and an observable context. This generalizes the standard multi-armed bandit to the case where side information is available, e.g., patients' records or customers' history, which allows for personalized treatment. We focus on consistency -- vanishing regret compared to the optimal policy -- and show that for large classes of non-i.i.d. contexts, consistency can be achieved regardless of the time-invariant reward mechanism, a property known as universal consistency. Precisely, we first give necessary and sufficient conditions on the context-generating process for universal consistency to be possible. Second, we show that there always exists an algorithm that guarantees universal consistency whenever this is achievable, called an optimistically universal learning rule. Interestingly, for finite action spaces, learnable processes for universal learning are exactly the same as in the full-feedback setting of supervised learning, previously studied in the literature. In other words, learning can be performed with partial feedback without any generalization cost. The algorithms balance a trade-off between generalization (similar to structural risk minimization) and personalization (tailoring actions to specific contexts). Lastly, we consider the case of added continuity assumptions on rewards and show that these lead to universal consistency for significantly larger classes of data-generating processes.
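The setting described above can be made concrete with a minimal epsilon-greedy learner over finite contexts and actions: it estimates a mean reward per (context, action) pair, explores with probability epsilon, and otherwise exploits the best estimate for the observed context. This is a textbook baseline sketch of the contextual bandit interaction loop, not the paper's optimistically universal learning rule, which handles general spaces and non-i.i.d. contexts.

```python
import random
from collections import defaultdict

class EpsilonGreedyContextualBandit:
    """Minimal epsilon-greedy contextual bandit for finite contexts/actions."""

    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = defaultdict(int)    # (context, action) -> number of pulls
        self.means = defaultdict(float)   # (context, action) -> mean reward

    def select(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)   # explore uniformly
        # exploit: action with the highest estimated mean in this context
        return max(self.actions, key=lambda a: self.means[(context, a)])

    def update(self, context, action, reward):
        key = (context, action)
        self.counts[key] += 1
        # incremental running-mean update
        self.means[key] += (reward - self.means[key]) / self.counts[key]
```

With a time-invariant reward mechanism and enough exploration, the greedy policy converges to the best action per context; the paper's contribution is characterizing exactly which context processes allow such vanishing regret universally.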
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
The renewed interest from the scientific community in machine learning (ML) is opening many new areas of research. Here we focus on how novel trends in ML are providing opportunities to improve the field of computational fluid dynamics (CFD). In particular, we discuss synergies between ML and CFD that have already shown benefits, and we also assess areas that are under development and may produce important benefits in the coming years. We believe that it is also important to emphasize a balanced perspective of cautious optimism for these emerging approaches.
The theory of magnitude provides a mathematical framework for quantifying and maximizing diversity. We apply this framework to formulate quality-diversity algorithms in generic dissimilarity spaces. In particular, we instantiate and demonstrate a very general version of Go-Explore with promising performance.
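The magnitude of a finite metric space, the diversity measure underlying this framework, has a short standard computation: build the similarity matrix Z with entries exp(-d(i, j)), solve Z w = 1 for the weighting w, and sum the weights. The sketch below implements only this basic quantity, not the quality-diversity algorithms or the Go-Explore variant.

```python
import numpy as np

def magnitude(D):
    """Magnitude of a finite metric space given its distance matrix D:
    solve Z w = 1 with Z_ij = exp(-D_ij); the magnitude is sum(w)."""
    Z = np.exp(-np.asarray(D, dtype=float))
    w = np.linalg.solve(Z, np.ones(len(Z)))
    return w.sum()
```

Magnitude behaves like an "effective number of points": a single point has magnitude 1, two points at distance d have magnitude 2/(1 + e^(-d)), and well-separated points contribute almost one each, which is why maximizing magnitude promotes diversity.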
Physically based rendering of complex scenes can be prohibitively costly with a potentially unbounded and uneven distribution of complexity across the rendered image. The goal of an ideal level of detail (LoD) method is to make rendering costs independent of the 3D scene complexity, while preserving the appearance of the scene. However, current prefiltering LoD methods are limited in the appearances they can support due to their reliance on approximate models and other heuristics. We propose the first comprehensive multi-scale LoD framework for prefiltering 3D environments with complex geometry and materials (e.g., the Disney BRDF), while maintaining the appearance with respect to the ray-traced reference. Using a multi-scale hierarchy of the scene, we perform a data-driven prefiltering step to obtain an appearance phase function and directional coverage mask at each scale. At the heart of our approach is a novel neural representation that encodes this information into a compact latent form that is easy to decode inside a physically based renderer. Once a scene is baked out, our method requires no original geometry, materials, or textures at render time. We demonstrate that our approach compares favorably to state-of-the-art prefiltering methods and achieves considerable savings in memory for complex scenes.
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for an automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and a power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.