Lifelong learning aims to create AI systems that continuously and incrementally learn during a lifetime, similar to biological learning. Attempts so far have encountered problems, including catastrophic forgetting, interference among tasks, and the inability to exploit previous knowledge. While considerable research has focused on learning multiple input distributions, typically in classification, lifelong reinforcement learning (LRL) must also deal with variations in the state and transition distributions, and in the reward functions. Modulating masks, recently developed for classification, are particularly suitable for dealing with such a large spectrum of task variations. In this paper, we adapted modulating masks to work with deep LRL, specifically PPO and IMPALA agents. The comparison with LRL baselines in both discrete and continuous RL tasks shows competitive performance. We further investigated the use of a linear combination of previously learned masks to exploit previous knowledge when learning new tasks: not only is learning faster, but the algorithm also solves tasks that we could not otherwise solve from scratch due to extremely sparse rewards. The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge for learning increasingly complex tasks, and to knowledge reuse for efficient and faster learning.
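To make the mask idea concrete, below is a minimal sketch of a masked layer with frozen backbone weights: each task learns real-valued scores that are thresholded into a binary mask, and a new task can instead learn a softmax-weighted combination of stored masks. The class name `MaskedLinear`, the straight-through binarization, and the choice to include the new task's own mask in the combination are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer with fixed random weights; each task learns a binary mask
    over those weights (a sketch of the modulating-mask idea, not the paper's code)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.05,
                                   requires_grad=False)          # frozen backbone weights
        self.scores = nn.ParameterList()                         # one score tensor per task
        self.combine_coeffs = None                               # combination weights for a new task

    def add_task(self, reuse_previous=False):
        self.scores.append(nn.Parameter(torch.randn_like(self.weight) * 0.01))
        if reuse_previous and len(self.scores) > 1:
            # learnable coefficients over all stored masks (including the fresh one)
            self.combine_coeffs = nn.Parameter(torch.zeros(len(self.scores)))

    def _binary_mask(self, scores):
        hard = (scores > 0).float()
        return hard + scores - scores.detach()                   # straight-through estimator

    def forward(self, x, task_id):
        if self.combine_coeffs is not None and task_id == len(self.scores) - 1:
            coeffs = torch.softmax(self.combine_coeffs, dim=0)
            mask = sum(c * self._binary_mask(s) for c, s in zip(coeffs, self.scores))
        else:
            mask = self._binary_mask(self.scores[task_id])
        return F.linear(x, self.weight * mask)

# usage sketch: learn task 0 from scratch, then start task 1 by re-combining earlier masks
layer = MaskedLinear(8, 4)
layer.add_task()                      # task 0: fresh mask
layer.add_task(reuse_previous=True)   # task 1: softmax-weighted combination of masks
out = layer(torch.randn(2, 8), task_id=1)
```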
The robustness of any machine learning solution is fundamentally bounded by the data it was trained on. One way to go beyond the original training data is to artificially augment the original dataset; however, it is impossible to specify every failure case that may occur during deployment. To address this limitation, we combine model-based reinforcement learning with model-decoupling methods and propose a solution in which self-generated simulated scenarios are constrained by the concepts and dynamics of the environment and learned in an idle fashion. In particular, the agent's internal model of the environment is based on a low-dimensional concept representation of the input space that is sensitive to the agent's actions. We demonstrate this approach on a simple point-to-point navigation task in a standard photorealistic driving simulator, where we show one-shot generalization to different instances of a specified failure case and zero-shot generalization to similar variations, with large improvements over model-based and model-free approaches.
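As an illustration of the kind of internal model described above, here is a minimal sketch: an encoder maps high-dimensional observations to a low-dimensional "concept" vector, and a dynamics head predicts how that vector changes under an action, so the concept space is shaped by the agent's actions. The class and function names and the specific losses are assumptions for illustration only, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConceptWorldModel(nn.Module):
    """Illustrative sketch: encode observations into a low-dimensional 'concept'
    vector and predict how that vector changes under an action."""

    def __init__(self, obs_dim, action_dim, concept_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, concept_dim))
        self.dynamics = nn.Sequential(nn.Linear(concept_dim + action_dim, 64), nn.ReLU(),
                                      nn.Linear(64, concept_dim))

    def forward(self, obs, action):
        z = self.encoder(obs)
        z_next_pred = self.dynamics(torch.cat([z, action], dim=-1))
        return z, z_next_pred

def prediction_loss(model, obs, action, next_obs):
    # the concept space is shaped by requiring it to be predictable under the agent's actions
    z, z_next_pred = model(obs, action)
    with torch.no_grad():
        z_next = model.encoder(next_obs)
    return ((z_next_pred - z_next) ** 2).mean()

model = ConceptWorldModel(obs_dim=32, action_dim=2)
loss = prediction_loss(model, torch.randn(16, 32), torch.randn(16, 2), torch.randn(16, 32))
loss.backward()
```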
Reinforcement learning agents perform well when presented with inputs within the distribution they encountered during training. However, they cannot respond effectively to novel, out-of-distribution events until they have undergone additional training. This paper introduces an online, data-driven, emergency-response method that aims to give autonomous agents the ability to react to unexpected situations that differ from those they were trained or designed for. In such situations, the agent cannot be expected to perform properly, since observations obtained in these novel situations fall outside the distribution of inputs the trained agent has been optimized to handle. The proposed method sequentially tailors a response to the unforeseen situation by selecting actions that minimize the rate of increase of the reconstruction error from a variational autoencoder. This optimization is carried out online and in a data-efficient manner (roughly 30 data points) using a modified Bayesian optimization procedure. We demonstrate the potential of this approach in a simulated 3D car-driving scenario, in which the agent devises a response within 2 seconds to avoid collisions with objects it did not see during training.
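A minimal sketch of the action-selection loop follows, assuming a function `recon_error_rate` is available that returns the measured rate of increase of VAE reconstruction error after executing an action (here replaced by a synthetic stand-in). The Gaussian-process surrogate, lower-confidence-bound acquisition, and candidate sampling are generic Bayesian-optimization choices; the paper's "modified" procedure is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def recon_error_rate(action):
    """Stub for the measured rate of increase of VAE reconstruction error after
    executing `action` in the environment (synthetic stand-in for illustration)."""
    return float(np.sum((action - 0.3) ** 2) + 0.05 * np.random.randn())

rng = np.random.default_rng(0)
dim, budget = 2, 30                                   # ~30 evaluations, as in the abstract
X = rng.uniform(-1, 1, size=(5, dim))                 # initial random actions
y = np.array([recon_error_rate(a) for a in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(budget - len(X)):
    gp.fit(X, y)
    candidates = rng.uniform(-1, 1, size=(256, dim))
    mean, std = gp.predict(candidates, return_std=True)
    a = candidates[np.argmin(mean - 1.0 * std)]       # lower-confidence-bound acquisition
    X = np.vstack([X, a])
    y = np.append(y, recon_error_rate(a))

best = X[np.argmin(y)]
print("selected emergency-response action:", best)
```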
Meta reinforcement learning (meta-RL) algorithms enable fast adaptation to tasks from a small number of samples in dynamically changing environments. Such a feat is achieved through dynamic representations in an agent's policy network, obtained by reasoning about the task context, by model parameter updates, or both. However, obtaining rich dynamic representations for fast adaptation beyond simple benchmark problems is challenging because of the burden placed on the policy network to accommodate different policies. This paper addresses the challenge by introducing neuromodulation as a modular component that augments a standard policy network by regulating neuronal activities, so as to produce efficient dynamic representations for task adaptation. The proposed extension of the policy network is evaluated in multiple discrete and continuous control environments of increasing complexity. To demonstrate the generality and benefits of the extension in meta-RL, the neuromodulated network is applied to two state-of-the-art meta-RL algorithms (CAVIA and PEARL). The results show that meta-RL augmented with neuromodulation produces significantly better results and richer dynamic representations than the baselines.
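The sketch below illustrates one common way to realize such a modular neuromodulatory component: a small sub-network produces a per-neuron multiplicative gate over the activations of the standard path. The sigmoid gating form, layer sizes, and the fact that the gate is driven by the same input (rather than a separate task-context variable as in CAVIA/PEARL) are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NeuromodulatedLayer(nn.Module):
    """Sketch of a neuromodulated layer: a small modulatory sub-network gates the
    activations of the standard path, giving a task-dependent dynamic representation."""

    def __init__(self, in_dim, out_dim, mod_dim=16):
        super().__init__()
        self.standard = nn.Linear(in_dim, out_dim)
        self.modulator = nn.Sequential(nn.Linear(in_dim, mod_dim), nn.ReLU(),
                                       nn.Linear(mod_dim, out_dim))

    def forward(self, x):
        h = torch.tanh(self.standard(x))
        gate = torch.sigmoid(self.modulator(x))   # per-neuron modulation in [0, 1]
        return h * gate

class NeuromodulatedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(NeuromodulatedLayer(obs_dim, hidden),
                                  NeuromodulatedLayer(hidden, hidden))
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        return self.head(self.body(obs))

policy = NeuromodulatedPolicy(obs_dim=10, act_dim=4)
logits = policy(torch.randn(5, 10))
```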
In this paper, a complete framework for autonomous self-driving is implemented. LIDAR, camera, and IMU sensors are used together. The entire data communication is managed using the Robot Operating System (ROS), which provides a robust platform for implementing robotics projects. A Jetson Nano provides powerful on-board processing capabilities. Sensor fusion is performed on the data received from the different sensors to improve the accuracy of the decisions and inferences derived from the data. These data are then used to create a localized map of the environment. In this step, the position of the vehicle is obtained with respect to the map built from the sensor data. The SLAM techniques used for this purpose are Hector Mapping and GMapping, both widely used mapping techniques in ROS. Apart from SLAM, which primarily uses LIDAR data, visual odometry is implemented using a monocular camera. The fused sensor data are then used by Adaptive Monte Carlo Localization (AMCL) for car localization. Using the resulting localized map, path-planning techniques such as the TEB planner and the Dynamic Window Approach are implemented for autonomous navigation of the vehicle. The last step in the project is the implementation of control, the final decision-making block in the pipeline, which produces speed and steering commands compatible with Ackermann kinematics. Implementing such a control block in a ROS framework using the three sensors (LIDAR, camera, and IMU) is the novel approach undertaken in this project.
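One small but essential piece of such a pipeline is converting the Twist velocity commands produced by planners like TEB or DWA into Ackermann steering commands. The sketch below shows a minimal ROS node for that conversion using the kinematic bicycle model; the topic names and the wheelbase value are assumptions for illustration, not taken from the project.

```python
#!/usr/bin/env python
# Sketch: bridge the planners' Twist commands to Ackermann steering commands.
import math
import rospy
from geometry_msgs.msg import Twist
from ackermann_msgs.msg import AckermannDriveStamped

WHEELBASE = 0.335          # metres; assumed value for a small-scale car
CMD_TOPIC = "/cmd_vel"     # assumed topic published by the TEB / DWA planner
ACK_TOPIC = "/ackermann_cmd"

def twist_to_ackermann(msg, pub):
    ack = AckermannDriveStamped()
    ack.header.stamp = rospy.Time.now()
    ack.drive.speed = msg.linear.x
    if abs(msg.linear.x) > 1e-3:
        # kinematic bicycle model: steering angle = atan(L * omega / v)
        ack.drive.steering_angle = math.atan(WHEELBASE * msg.angular.z / msg.linear.x)
    else:
        ack.drive.steering_angle = 0.0
    pub.publish(ack)

if __name__ == "__main__":
    rospy.init_node("twist_to_ackermann")
    pub = rospy.Publisher(ACK_TOPIC, AckermannDriveStamped, queue_size=1)
    rospy.Subscriber(CMD_TOPIC, Twist, twist_to_ackermann, callback_args=pub)
    rospy.spin()
```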
As the demand for autonomous driving increases, it is paramount to ensure safety. Early accident prediction using deep learning methods for driving safety has recently gained much attention. In this task, an early accident prediction and a point prediction of where the driver should look are determined, with the dashcam video as input. We propose to exploit the double actors and regularized critics (DARC) method, for the first time, on this accident-forecasting platform. We derive inspiration from DARC since it is currently a state-of-the-art reinforcement learning (RL) model for continuous action spaces, which makes it suitable for accident anticipation. Results show that by utilizing DARC, we can make predictions 5% earlier on average while improving on multiple precision metrics compared to existing methods. The results imply that using our RL-based problem formulation could significantly increase the safety of autonomous driving.
Multi-Task Learning (MTL) has shown its importance in user-facing products for fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training a single network for multiple tasks simultaneously. However, MTL does not provide a solution if each task needs to be trained on a different dataset. To solve this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN helps train the model on multiple datasets simultaneously, where each branch of the tree may need a different training dataset. Our results show that TreeDNN provides competitive performance with the advantages of a reduced ROM requirement for parameter storage and increased responsiveness of the system, since only the specific branch is loaded at inference time.
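A minimal sketch of the tree-structured idea follows: a shared trunk feeds one branch per task, each branch can be trained on its own dataset, and trunk and branches can be saved separately so that only the needed branch is loaded at inference time. The class name `TreeDNN` here, the layer sizes, and the save-per-branch scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TreeDNN(nn.Module):
    """Sketch of a tree-structured multi-task network: a shared trunk plus one branch
    per task/dataset; at inference only the requested branch is executed."""

    def __init__(self, in_dim, trunk_dim, branch_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, trunk_dim), nn.ReLU())
        self.branches = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(trunk_dim, trunk_dim), nn.ReLU(),
                                nn.Linear(trunk_dim, out_dim))
            for name, out_dim in branch_out_dims.items()
        })

    def forward(self, x, branch):
        return self.branches[branch](self.trunk(x))

model = TreeDNN(in_dim=128, trunk_dim=64,
                branch_out_dims={"taskA": 10, "taskB": 5})

# training sketch: alternate batches drawn from each task's own dataset
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for branch, (x, y) in [("taskA", (torch.randn(32, 128), torch.randint(0, 10, (32,)))),
                       ("taskB", (torch.randn(32, 128), torch.randint(0, 5, (32,))))]:
    loss = nn.functional.cross_entropy(model(x, branch), y)
    opt.zero_grad(); loss.backward(); opt.step()

# deployment sketch: persist trunk and branches separately so only one branch is loaded
torch.save(model.trunk.state_dict(), "trunk.pt")
torch.save(model.branches["taskA"].state_dict(), "branch_taskA.pt")
```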
Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots in an ongoing dialogue. Existing approaches use hand-crafted templates and additional slot information to fine-tune and prompt large pre-trained language models and elicit slot values from the dialogue context. Significant manual effort and domain knowledge are required to design effective prompts, limiting the generalizability of these approaches to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates. Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings using a much smaller model, thereby providing an important advantage for real-world deployments that often have limited resource availability.
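The core retrieval-then-prompt step can be sketched as below: embed stored training examples and the current dialogue, pick the most similar examples, and prepend them as demonstrations to the model input. The encoder choice (`all-MiniLM-L6-v2`), the toy examples, and the prompt layout are assumptions for illustration; DiSTRICT's actual retriever and fine-tuning procedure are not reproduced here.

```python
import numpy as np
from sentence_transformers import SentenceTransformer   # assumed retriever backbone

encoder = SentenceTransformer("all-MiniLM-L6-v2")

train_examples = [
    {"context": "i need a cheap hotel in the north", "state": "hotel-price=cheap; hotel-area=north"},
    {"context": "book a taxi to the station at 5pm", "state": "taxi-dest=station; taxi-leave=17:00"},
]

def retrieve(dialogue, k=1):
    # rank stored examples by cosine similarity to the current dialogue context
    vecs = encoder.encode([dialogue] + [ex["context"] for ex in train_examples])
    q, keys = vecs[0], vecs[1:]
    sims = keys @ q / (np.linalg.norm(keys, axis=1) * np.linalg.norm(q) + 1e-8)
    return [train_examples[i] for i in np.argsort(-sims)[:k]]

def build_input(dialogue, k=1):
    demos = retrieve(dialogue, k)
    shots = "\n".join(f"dialogue: {d['context']}\nstate: {d['state']}" for d in demos)
    return f"{shots}\ndialogue: {dialogue}\nstate:"

print(build_input("looking for an expensive hotel in the centre"))
```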
Deep learning based text-to-speech (TTS) systems have been evolving rapidly with advances in model architectures, training methodologies, and generalization across speakers and languages. However, these advances have not been thoroughly investigated for Indian language speech synthesis. Such an investigation is computationally expensive given the number and diversity of Indian languages, the relatively lower resource availability, and the diverse set of advances in neural TTS that remain untested. In this paper, we evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for Dravidian and Indo-Aryan languages. Based on this, we identify monolingual models with FastPitch and HiFi-GAN V1, trained jointly on male and female speakers, as performing the best. With this setup, we train and evaluate TTS models for 13 languages and find that our models significantly improve upon existing models in all languages, as measured by mean opinion scores. We open-source all models on the Bhashini platform.
Harnessing the benefits of drones for urban innovation at scale requires reliable aerial autonomy. One major barrier to advancing aerial autonomy has been collecting large-scale aerial datasets for training machine learning models. Because real-world data collection through deploying drones is costly and time-consuming, there has been an increasing shift towards using synthetic data to train models for drone applications. However, to increase the generalizability of policies trained on synthetic data, incorporating domain randomization into the data generation workflow becomes crucial for addressing the sim-to-real problem. Current synthetic data generation tools either lack domain randomization or rely heavily on manual workload or real samples to configure and generate diverse, realistic simulation scenes. These dependencies limit the scalability of the data generation workflow. Accordingly, there is a major challenge in balancing generalizability and scalability in synthetic data generation. To address these gaps, we introduce a modular, scalable data generation workflow tailored to aerial autonomy applications. To generate realistic configurations of simulation scenes while increasing diversity, we present an adaptive layered domain randomization approach that creates a type-agnostic distribution space for assets over the base map of the environment before generating poses for the drone trajectory. We leverage high-level scene structures to automatically place assets in valid configurations and then extend the diversity through obstacle generation and global parameter randomization. We demonstrate the effectiveness of our method in automatically generating diverse configurations and datasets, and we show its potential for downstream performance optimization. Our work contributes to generating enhanced benchmark datasets for training models that can generalize better to real-world situations.
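A layered randomization workflow of this kind can be sketched as below: sample assets over semantically valid regions of a base map, then add obstacles, then randomize global scene parameters, producing one configuration per generated scene. The region names, asset types, and parameter ranges are purely illustrative assumptions, not the tool's actual configuration.

```python
import random

# Illustrative layered randomization: assets over valid regions of a base map,
# then obstacles, then global scene parameters.
VALID_REGIONS = {"park": [(0, 0, 50, 50)], "street": [(60, 0, 100, 20)]}   # assumed map annotations
ASSET_TYPES = {"park": ["tree", "bench"], "street": ["car", "streetlight"]}

def sample_point(box):
    x0, y0, x1, y1 = box
    return (random.uniform(x0, x1), random.uniform(y0, y1))

def generate_scene(num_assets=10, num_obstacles=3):
    scene = {"assets": [], "obstacles": [], "globals": {}}
    # layer 1: type-agnostic placement over semantically valid regions
    for _ in range(num_assets):
        region = random.choice(list(VALID_REGIONS))
        scene["assets"].append({"type": random.choice(ASSET_TYPES[region]),
                                "pos": sample_point(random.choice(VALID_REGIONS[region]))})
    # layer 2: obstacles placed over the flight area
    for _ in range(num_obstacles):
        scene["obstacles"].append({"pos": sample_point((0, 0, 100, 50)),
                                   "height": random.uniform(1.0, 10.0)})
    # layer 3: global parameter randomization
    scene["globals"] = {"sun_elevation": random.uniform(10, 80),
                        "fog_density": random.uniform(0.0, 0.3)}
    return scene

dataset_configs = [generate_scene() for _ in range(100)]
```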