Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of $\mathcal{O}(C)$, where $C$ is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
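The particle filter belief transition used as the MDP generative model can be sketched in a few lines: propagate every particle through the transition model, sample a single observation, reweight each particle by the observation density, and resample. The sketch below is illustrative and assumes a generic POMDP interface; `transition_sample`, `obs_sample`, `obs_likelihood`, and `reward_fn` are hypothetical placeholders, not any particular solver's API.

```python
import math
import random

def pf_belief_transition(particles, action, transition_sample, obs_sample,
                         obs_likelihood, reward_fn, rng=random):
    """One generative step of the particle belief MDP (PB-MDP).

    The four model callables are placeholders for the POMDP's generative
    models and observation density, which the abstract notes must be
    available to the solver.
    """
    # Propagate every particle through the transition model.
    propagated = [transition_sample(s, action, rng) for s in particles]
    # Sample one observation from a randomly chosen propagated particle.
    obs = obs_sample(rng.choice(propagated), action, rng)
    # Weight each particle by the observation density (likelihood weighting).
    weights = [obs_likelihood(obs, s, action) for s in propagated]
    total = sum(weights)
    norm = [w / total for w in weights]
    # Belief reward: likelihood-weighted average of per-state rewards.
    r = sum(w * reward_fn(s, action) for w, s in zip(norm, propagated))
    # Resample C particles so the next belief is again unweighted.
    next_particles = rng.choices(propagated, weights=norm, k=len(particles))
    return next_particles, r
```

Because each belief transition touches all $C$ particles, one call costs $\mathcal{O}(C)$ transition samples, matching the overhead stated above.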
This paper presents an algorithmic framework for the synthesis of controllers for continuous dynamical systems subject to signal temporal logic (STL) specifications. We propose a novel algorithm for obtaining a timed finite automaton from the STL specification and introduce a multi-layered framework that leverages this automaton to guide a sampling-based search tree both spatially and temporally. Our approach is able to synthesize controllers for nonlinear dynamics and polynomial predicate functions. We prove the correctness and probabilistic completeness of our algorithms and illustrate the efficiency and efficacy of our framework on several case studies. Our results show a considerable speedup over the state of the art.
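As background for how a search can be guided by an STL specification, STL's quantitative (robustness) semantics assign each trace a signed satisfaction margin, positive iff the formula holds. The toy functions below compute that margin for the simplest temporal operators on a discretely sampled trace; this is standard STL machinery for illustration, not the paper's automaton construction.

```python
def robustness_eventually(signal, threshold, t0=0, t1=None):
    """Robustness of "eventually within [t0, t1], x > c" on a sampled
    trace: the best (largest) margin x - c attained in the window."""
    window = signal[t0:(t1 + 1) if t1 is not None else None]
    return max(x - threshold for x in window)

def robustness_always(signal, threshold, t0=0, t1=None):
    """Robustness of "always within [t0, t1], x > c": the worst margin."""
    window = signal[t0:(t1 + 1) if t1 is not None else None]
    return min(x - threshold for x in window)
```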
This paper presents a hybrid online partially observable Markov decision process (POMDP) planning system that addresses the problem of autonomous navigation in the presence of multi-modal uncertainty introduced by other agents in the environment. As a motivating example, we consider the problem of autonomous navigation among dense crowds of pedestrians and obstacles. Popular approaches to this problem first generate a path with a complete planner (e.g., Hybrid A*) under ad hoc assumptions about the uncertainty, then use an online tree-based POMDP solver to reason about the uncertainty while controlling only a limited aspect of the problem (i.e., the speed along the path). We present a more capable and responsive real-time approach that enables the POMDP planner to control more degrees of freedom (e.g., both speed and heading) to achieve more flexible and efficient solutions. This modification greatly extends the region of the state space that the POMDP planner must reason over, significantly increasing the importance of finding effective rollout policies within the limited computational budget that real-time control affords. Our key insight is to use multi-query motion planning techniques (e.g., probabilistic roadmaps or the fast marching method) as priors to rapidly generate efficient rollout policies for every state that the POMDP planning tree might reach during its limited-horizon search. Our approach yields trajectories that are safer and more efficient than those of previous approaches, even in densely crowded, dynamic environments with long planning horizons.
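The multi-query prior idea can be illustrated with a roadmap reduced to a grid: precompute cost-to-go to the goal once (here with Dijkstra, standing in for a PRM or fast-marching solution), after which every rollout simply descends that field. The grid abstraction and function names are illustrative assumptions, not the paper's implementation.

```python
import heapq

def cost_to_go(grid, goal):
    """Dijkstra from the goal over a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns {cell: distance}."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def rollout_step(cell, dist):
    """Greedy rollout policy: move to the neighbor with lowest cost-to-go."""
    r, c = cell
    neighbors = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return min((n for n in neighbors if n in dist),
               key=lambda n: dist[n], default=cell)
```

The one-time Dijkstra pass plays the role of the multi-query prior: it is computed once offline, then queried cheaply by every rollout during the online tree search.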
In this paper, we address the problem of sampling-based motion planning under measurement uncertainty with probabilistic guarantees. We generalize traditional sampling-based tree-based motion planning algorithms for deterministic systems and propose Belief-$\mathcal{A}$, a framework that extends any kinodynamic tree-based planner to the belief space for linear (or linearizable) systems. We introduce appropriate sampling techniques and distance metrics for the belief space that preserve the probabilistic completeness and asymptotic optimality of the underlying planner. We demonstrate the efficacy of our approach in simulation, efficiently and asymptotically optimally finding safe, low-cost paths for holonomic and nonholonomic systems.
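For linear-Gaussian systems, the belief propagated along a tree edge is just a mean and covariance updated by a Kalman step, and a belief-space planner additionally needs a distance between such beliefs. Below is a scalar (1-D) sketch of both; the maximum-likelihood-observation assumption and the particular metric are illustrative choices, not necessarily those of the paper.

```python
def propagate_belief(mean, var, a, b, u, q, c, r):
    """One 1-D Kalman predict/update step along a tree edge.
    a, b, q describe the dynamics and process noise; c, r the
    observation model. Assuming the maximum-likelihood observation
    (zero innovation) keeps the mean on the nominal trajectory, a
    common simplification in belief-space planning."""
    mean_p = a * mean + b * u          # predict mean
    var_p = a * a * var + q            # predict variance
    s = c * c * var_p + r              # innovation variance
    k = var_p * c / s                  # Kalman gain
    var_u = (1 - k * c) * var_p        # updated variance
    return mean_p, var_u

def belief_distance(b1, b2, w=1.0):
    """One plausible belief-space metric: distance between means plus a
    weighted difference of variances (an illustrative choice)."""
    (m1, v1), (m2, v2) = b1, b2
    return abs(m1 - m2) + w * abs(v1 - v2)
```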
Partially observable Markov decision processes (POMDPs) are a powerful framework for capturing decision-making problems that involve state and transition uncertainty. However, most current POMDP planners cannot effectively handle the very high-dimensional observations they often encounter in the real world (e.g., image observations in robotic domains). In this work, we propose visual tree search (VTS), a learning and planning procedure that combines generative models learned offline with online model-based POMDP planning. VTS bridges offline model training and online planning by utilizing a set of deep generative observation models to predict and evaluate the likelihood of image observations in a Monte Carlo tree search planner. We show that VTS is robust to different types of observation noise and, because it utilizes online, model-based planning, can adapt to different reward structures without retraining. This new approach outperforms a baseline state-of-the-art on-policy planning algorithm while using significantly less offline training time.
This work evaluates the hypothesis that partially observable Markov decision process (POMDP) planning with human driver internal states can significantly improve both the safety and efficiency of autonomous freeway driving. We evaluate this hypothesis in a simulated scenario where an autonomous car must safely perform three lane changes in rapid succession. Approximate POMDP solutions are obtained through the partially observable Monte Carlo planning with observation widening (POMCPOW) algorithm. This approach outperforms overconfident and conservative MDP baselines and matches or outperforms QMDP. Relative to the MDP baselines, POMCPOW typically cuts the rate of unsafe situations in half or increases the success rate by 50%.
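The "observation widening" in POMCPOW refers to progressive widening applied at observation nodes: a node admits a new child only while its child count stays below $kN^{\alpha}$, where $N$ is the node's visit count. A minimal sketch of that test, in its standard progressive-widening form with assumed parameter names:

```python
def should_widen(num_children, num_visits, k, alpha):
    """Progressive widening test as used in POMCPOW-style solvers:
    add a new child (action or observation) only while the number of
    children is at most k * N**alpha for visit count N. Otherwise the
    search revisits an existing child, keeping the branching factor
    sublinear in the number of visits."""
    return num_children <= k * num_visits ** alpha
```

This is what lets the tree search handle continuous observation spaces: without the cap, every sampled observation would be unique and each belief node would be visited only once.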
Course load analytics (CLA) inferred from LMS and enrollment features can offer a more accurate representation of course workload to students than credit hours and potentially aid in their course selection decisions. In this study, we produce and evaluate the first machine-learned predictions of student course load ratings and generalize our model to the full 10,000 course catalog of a large public university. We then retrospectively analyze longitudinal differences in the semester load of student course selections throughout their degree. CLA by semester shows that a student's first semester at the university is among their highest load semesters, as opposed to a credit hour-based analysis, which would indicate it is among their lowest. Investigating what role predicted course load may play in program retention, we find that students who maintain a semester load that is low as measured by credit hours but high as measured by CLA are more likely to leave their program of study. This discrepancy in course load is particularly pertinent in STEM and associated with high prerequisite courses. Our findings have implications for academic advising, institutional handling of the freshman experience, and student-facing analytics to help students better plan, anticipate, and prepare for their selected courses.
We present a differentiable formulation of rigid-body contact dynamics for objects and robots represented as compositions of convex primitives. Existing optimization-based approaches simulating contact between convex primitives rely on a bilevel formulation that separates collision detection and contact simulation. These approaches are unreliable in realistic contact simulation scenarios because isolating the collision detection problem introduces contact location non-uniqueness. Our approach combines contact simulation and collision detection into a unified single-level optimization problem. This disambiguates the collision detection problem in a physics-informed manner. Compared to previous differentiable simulation approaches, our formulation features improved simulation robustness and a reduction in computational complexity by more than an order of magnitude. We illustrate the contact and collision differentiability on a robotic manipulation task requiring optimization-through-contact. We provide a numerically efficient implementation of our formulation in the Julia language called Silico.jl.
As Artificial and Robotic Systems are increasingly deployed and relied upon for real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically-changing environments, becoming Lifelong Learning Machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model's capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state-of-the-art forward in L2RL and making L2RL useful for practical applications requires more than developing individual L2RL algorithms; it requires making progress at the systems-level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study that demonstrates how multiple independently-developed LL components can be integrated into a single realized system. We also introduce an evaluation environment in order to measure the effect of combining various system components. Our evaluation environment employs different LL scenarios (sequences of tasks) consisting of Starcraft-2 minigames and allows for the fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common evaluation environment.
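A component framework of this kind typically reduces to a small event-driven interface plus a dispatcher that fans events out to registered components. The sketch below is a hypothetical illustration in that spirit; the class and method names are assumptions, not the actual L2RLCF API.

```python
import abc

class LifelongComponent(abc.ABC):
    """Hypothetical plug-in interface: each continual learning component
    reacts to task boundaries and environment transitions."""

    @abc.abstractmethod
    def on_task_start(self, task_id): ...

    @abc.abstractmethod
    def on_step(self, transition): ...

    @abc.abstractmethod
    def on_task_end(self, task_id): ...

class ReplayBuffer(LifelongComponent):
    """Example component: retain transitions to mitigate catastrophic
    forgetting (a bounded FIFO buffer for illustration)."""
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.storage = []
    def on_task_start(self, task_id):
        pass
    def on_step(self, transition):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # evict the oldest transition
        self.storage.append(transition)
    def on_task_end(self, task_id):
        pass

class System:
    """Minimal dispatcher: independently developed components are
    registered once and receive every event."""
    def __init__(self):
        self.components = []
    def register(self, component):
        self.components.append(component)
    def step(self, transition):
        for comp in self.components:
            comp.on_step(transition)
```

Standardizing on an interface like this is what allows independently developed components (replay, regularization, task detection, ...) to coexist in one system without knowing about each other.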
Rearrangement puzzles are variations of rearrangement problems in which the elements of a problem are potentially logically linked together. To efficiently solve such puzzles, we develop a motion planning approach based on a new state space that is logically factored, integrating the capabilities of the robot through factors of simultaneously manipulatable joints of an object. Based on this factored state space, we propose less-actions RRT (LA-RRT), a planner which optimizes for a low number of actions to solve a puzzle. At the core of our approach lies a new path defragmentation method, which rearranges and optimizes consecutive edges to minimize action cost. We solve six rearrangement scenarios with a Fetch robot, involving planar table puzzles and an escape room scenario. LA-RRT significantly outperforms the next best asymptotically-optimal planner, improving final action cost by a factor of 4.01 to 6.58.
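The path-defragmentation idea can be illustrated in miniature: when consecutive edges move the same joint (factor), they can be merged into a single action, lowering the action count. The real method also rearranges non-adjacent edges and must re-check feasibility; this toy merge covers only the simplest case, with an assumed `(joint, displacement)` action encoding.

```python
def defragment(actions):
    """Toy path defragmentation: merge consecutive moves of the same
    joint into one action. Each action is a (joint, displacement) pair;
    a full planner would also verify that merged motions remain
    collision-free."""
    merged = []
    for joint, delta in actions:
        if merged and merged[-1][0] == joint:
            # Same joint as the previous action: fuse the displacements.
            merged[-1] = (joint, merged[-1][1] + delta)
        else:
            merged.append((joint, delta))
    return merged
```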