Smart traffic lights in smart cities can optimally reduce traffic congestion. In this study, we employ reinforcement learning to train the control agent of a traffic light on a simulator of urban mobility. Differing from existing works, besides value-based methods such as the Deep Q Network (DQN) and Double DQN (DDQN), a policy-based deep reinforcement learning method, Proximal Policy Optimization (PPO), is employed. First, the best policy obtained from PPO is compared with those from DQN and DDQN, and the policy from PPO is found to be better than the others. Next, instead of traffic light phases of fixed intervals, we adopt light phases with variable time intervals, which leads to a better policy for passing the traffic flow. Then, the effects of environment and action disturbances are studied to demonstrate that the learning-based controller is robust. Finally, we consider imbalanced traffic flows and find that an intelligent traffic light can perform moderately well on imbalanced traffic scenarios, even though it only learns the optimal policy from balanced traffic scenarios.
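As a rough illustration of the variable-interval action design mentioned above, the following sketch exposes both the next phase and its duration as the action of a gymnasium-style environment. It replaces the traffic simulator with toy queue dynamics so the example stays self-contained; the class name, duration grid, and reward definition are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: variable-duration traffic light phases as the action space.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class TrafficLightEnv(gym.Env):
    """One intersection with 2 phases; the agent picks the next phase AND its duration."""

    # candidate green durations in seconds (the "variable time interval" idea)
    DURATIONS = [10, 20, 30, 40]

    def __init__(self):
        # action = (phase index, duration index)
        self.action_space = spaces.MultiDiscrete([2, len(self.DURATIONS)])
        # observation = queue length on each of the 2 approaches
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(2,), dtype=np.float32)
        self.queues = np.zeros(2, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.queues = self.np_random.uniform(0, 10, size=2).astype(np.float32)
        return self.queues.copy(), {}

    def step(self, action):
        phase, dur_idx = int(action[0]), int(action[1])
        duration = self.DURATIONS[dur_idx]
        arrivals = self.np_random.poisson(0.3 * duration, size=2)  # toy demand
        served = np.zeros(2)
        served[phase] = 0.5 * duration                              # toy saturation flow
        self.queues = np.maximum(self.queues + arrivals - served, 0.0).astype(np.float32)
        reward = -float(self.queues.sum())                          # penalize total queue length
        return self.queues.copy(), reward, False, False, {}
```

Under this interface, policy-based agents such as PPO (and, with a flattened discrete action space, DQN/DDQN) can be trained with standard RL libraries.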
Motion planning of autonomous systems with high-level specifications has wide applications. However, research on formal languages involving timed temporal logic is still under investigation. Moreover, many existing results rely on the key assumption that user-specified tasks are feasible in the given environment. Challenges arise when the operating environment is dynamic and unknown, since the environment can turn out to be prohibitive, so that pre-specified timed tasks may not be fully satisfiable, leading to potentially conflicting tasks. These issues become more challenging when timing requirements are considered. To address these challenges, this work proposes a control framework that considers hard constraints to enforce safety requirements and soft constraints to enable task relaxation. Metric Interval Temporal Logic (MITL) specifications are used to handle the time-constrained requirements. By constructing a relaxed timed product automaton, an online motion planning strategy is synthesized with a receding horizon controller to generate policies that achieve multiple objectives in decreasing order of priority: 1) formally guaranteeing the satisfaction of the hard safety constraints; 2) mostly satisfying the soft timed tasks; and 3) collecting time-varying rewards as much as possible. Another novelty of the relaxed structure is to consider violations of both time and tasks for infeasible cases. Simulation results are provided to validate the proposed approach.
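The prioritized objectives can be read as a lexicographic choice at each receding-horizon step. The sketch below illustrates that ordering on hypothetical candidate plans; the Plan fields and scoring are placeholders for quantities that would come from the relaxed timed product automaton, not the paper's actual construction.

```python
# Hypothetical sketch: lexicographic plan selection in decreasing order of priority.
from dataclasses import dataclass
from typing import List


@dataclass
class Plan:
    satisfies_hard: bool      # hard safety constraints met over the whole horizon
    soft_violation: float     # violation measure of the soft timed (MITL) tasks
    reward: float             # time-varying reward collected along the plan


def select_plan(candidates: List[Plan]) -> Plan:
    """Hard safety first, then minimal soft-task violation, then maximal reward."""
    safe = [p for p in candidates if p.satisfies_hard]
    if not safe:
        raise ValueError("no candidate satisfies the hard safety constraints")
    # smaller violation is better; among equal violation, larger reward is better
    return min(safe, key=lambda p: (p.soft_violation, -p.reward))


if __name__ == "__main__":
    plans = [Plan(True, 2.0, 5.0), Plan(True, 0.5, 1.0), Plan(False, 0.0, 9.0)]
    print(select_plan(plans))  # -> Plan(satisfies_hard=True, soft_violation=0.5, reward=1.0)
```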
This paper studies the motion planning of an autonomous dynamical system modeled as a Markov decision process (MDP) with unknown transition probabilities over continuous state and action spaces. Linear temporal logic (LTL) is used to specify high-level tasks over an infinite horizon, which can be converted into a limit-deterministic generalized Büchi automaton (LDGBA) with several accepting sets. The novelty is to design an embedded product MDP (EP-MDP) between the LDGBA and the MDP by incorporating a synchronous tracking-frontier function that records unvisited accepting sets of the automaton and facilitates the satisfaction of the accepting condition. The proposed LDGBA-based reward shaping and discounting schemes for model-free reinforcement learning (RL) depend only on the EP-MDP states and can overcome the issue of sparse rewards. Rigorous analysis shows that any RL method that optimizes the expected discounted return is guaranteed to find an optimal policy whose traces maximize the satisfaction probability. A modular deep deterministic policy gradient (DDPG) method is then developed to generate such policies over continuous state and action spaces. Our framework is evaluated on a series of OpenAI Gym environments.
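A minimal sketch, under assumptions, of the tracking-frontier idea: the frontier records the LDGBA accepting sets not yet visited, and the shaped reward is paid only when the current automaton state hits one of them, with the frontier reset once every set has been visited. Data structures and the reward constant are illustrative, not the paper's construction.

```python
# Hypothetical sketch of a tracking-frontier update with sparse-reward shaping.
from typing import FrozenSet, Set, Tuple

AcceptingSets = Tuple[FrozenSet[str], ...]   # each accepting set holds automaton states


def update_frontier(frontier: Set[int], q: str,
                    accepting_sets: AcceptingSets) -> Tuple[Set[int], float]:
    """Remove every accepting set containing the current automaton state q.

    Returns the new frontier and the shaped reward (positive only on progress).
    When the frontier empties, one accepting round is complete and the frontier
    is reset so the generalized Büchi condition can be met infinitely often.
    """
    hit = {i for i in frontier if q in accepting_sets[i]}
    reward = 1.0 if hit else 0.0
    new_frontier = frontier - hit
    if not new_frontier:
        new_frontier = set(range(len(accepting_sets)))
    return new_frontier, reward


if __name__ == "__main__":
    F = (frozenset({"q1"}), frozenset({"q2"}))   # two accepting sets
    frontier = {0, 1}
    for q in ["q0", "q1", "q1", "q2"]:
        frontier, r = update_frontier(frontier, q, F)
        print(q, r, frontier)
```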
This paper studies optimal motion planning subject to motion and environment uncertainties. By modeling the system as a probabilistic labeled Markov decision process (PL-MDP), the control objective is to synthesize a finite-memory policy under which the agent satisfies high-level complex tasks expressed in linear temporal logic (LTL) with a desired satisfaction probability. In particular, the cost optimization of trajectories that satisfy infinite-horizon tasks is considered, and the trade-off between reducing the expected mean cost and maximizing the probability of task satisfaction is analyzed. Instead of using traditional Rabin automata, the LTL formulas are converted to limit-deterministic Büchi automata (LDBA), which have a more straightforward accepting condition and a more compact graph structure. The novelty of this work lies in considering the cases where the LTL specification may be infeasible, and in the development of a relaxed product MDP between the PL-MDP and the LDBA. The relaxed product MDP allows the agent to revise its motion plan whenever the task is not fully feasible and to quantify the violation measure of the revised plan. A multi-objective optimization problem is then formulated to jointly consider the probability of task satisfaction, the violation with respect to the original task constraints, and the implementation cost of policy execution, and it is solved via coupled linear programs. To the best of our knowledge, this is the first work to bridge the gap between planning revision and optimal control synthesis of both the plan prefix and the plan suffix of the agent trajectory over an infinite horizon. Experimental results are provided to demonstrate the effectiveness of the proposed framework.
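The paper solves its multi-objective problem via coupled linear programs over the relaxed product MDP; as a much-simplified illustration, the sketch below solves a single discounted occupancy-measure LP on a toy 2-state MDP with a weighted sum of implementation cost and violation measure. All numbers, the weights, and the single-LP simplification are assumptions, not the paper's formulation.

```python
# Hypothetical sketch: LP over discounted occupancy measures for a toy MDP.
import numpy as np
from scipy.optimize import linprog

S, A, gamma = 2, 2, 0.95
P = np.zeros((S, A, S))                          # P[s, a, s'] = transition probability
P[0, 0] = [0.9, 0.1]
P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.5, 0.5]
P[1, 1] = [0.1, 0.9]
cost = np.array([[1.0, 2.0], [0.5, 3.0]])        # implementation cost c(s, a)
violation = np.array([[0.0, 1.0], [2.0, 0.0]])   # violation measure v(s, a)
mu0 = np.array([1.0, 0.0])                       # initial distribution
w_cost, w_viol = 1.0, 10.0                       # trade-off weights

# Decision variables: y[s, a] >= 0 (discounted occupancy measures), flattened as s*A + a.
c = (w_cost * cost + w_viol * violation).flatten()

# Flow constraints: sum_a y(s,a) - gamma * sum_{s',a'} P[s',a',s] y(s',a') = (1-gamma) mu0(s)
A_eq = np.zeros((S, S * A))
for s in range(S):
    for sp in range(S):
        for a in range(A):
            A_eq[s, sp * A + a] -= gamma * P[sp, a, s]
    for a in range(A):
        A_eq[s, s * A + a] += 1.0
b_eq = (1 - gamma) * mu0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
y = res.x.reshape(S, A)
policy = y / y.sum(axis=1, keepdims=True)        # randomized policy pi(a|s)
print("occupancy:\n", y, "\npolicy:\n", policy)
```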
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet they ignore the co-occurrence confounder of the object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
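A hedged sketch of what a multi-teacher distillation objective of the kind described above could look like: a classification teacher and a localization teacher both distill into one student through temperature-scaled KL terms. The temperature, weights, and function name are assumptions, not the released KD-CI-CAM code.

```python
# Hypothetical sketch: balancing two teachers (classification and localization) in one loss.
import torch
import torch.nn.functional as F


def multi_teacher_kd_loss(student_logits, cls_teacher_logits, loc_teacher_logits,
                          labels, T=4.0, alpha=0.5, beta=0.5):
    """Cross-entropy on labels plus temperature-scaled KL terms toward the two teachers."""
    ce = F.cross_entropy(student_logits, labels)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    kd_cls = F.kl_div(log_p_s, F.softmax(cls_teacher_logits / T, dim=1),
                      reduction="batchmean") * T * T
    kd_loc = F.kl_div(log_p_s, F.softmax(loc_teacher_logits / T, dim=1),
                      reduction="batchmean") * T * T
    return ce + alpha * kd_cls + beta * kd_loc


if __name__ == "__main__":
    B, C = 8, 200                       # batch size, number of classes
    student = torch.randn(B, C, requires_grad=True)
    t_cls, t_loc = torch.randn(B, C), torch.randn(B, C)
    labels = torch.randint(0, C, (B,))
    loss = multi_teacher_kd_loss(student, t_cls, t_loc, labels)
    loss.backward()
    print(float(loss))
```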
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
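One way to read the CLIP-driven design is that a fixed text embedding of each class name conditions the segmentation head. The sketch below, under assumptions about shapes and module names, maps each class embedding to the parameters of a per-class 1x1x1 convolution applied to shared voxel features; it is an illustration, not the released Universal Model code.

```python
# Hypothetical sketch: class-name text embeddings generate per-class segmentation filters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextDrivenSegHead(nn.Module):
    def __init__(self, text_dim=512, feat_dim=48):
        super().__init__()
        self.feat_dim = feat_dim
        # maps a class-name embedding to the (weight, bias) of a 1x1x1 conv
        self.controller = nn.Linear(text_dim, feat_dim + 1)

    def forward(self, voxel_feats, text_embs):
        """voxel_feats: (B, feat_dim, D, H, W); text_embs: (K, text_dim) for K classes."""
        params = self.controller(text_embs)                    # (K, feat_dim + 1)
        weight = params[:, :self.feat_dim].view(-1, self.feat_dim, 1, 1, 1)
        bias = params[:, self.feat_dim]
        logits = F.conv3d(voxel_feats, weight, bias)           # (B, K, D, H, W)
        return torch.sigmoid(logits)                           # one binary mask per class


if __name__ == "__main__":
    head = TextDrivenSegHead()
    feats = torch.randn(1, 48, 8, 32, 32)        # features from any 3D backbone
    text = torch.randn(31, 512)                  # e.g., 25 organs + 6 tumor types, precomputed
    masks = head(feats, text)
    print(masks.shape)                           # torch.Size([1, 31, 8, 32, 32])
```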
In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric features of the traffic scene around the ego-vehicle as it acts toward its destination. To this end, we introduce the representation of semantic regions, which are areas that ego-vehicles visit while taking an afforded action (e.g., a left turn at a 4-way intersection). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
New-architecture GPUs such as the A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released at https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
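For intuition, a benchmark of this kind ultimately reduces to running throughput probes inside each MIG instance (for example by pointing CUDA_VISIBLE_DEVICES at the instance's UUID) and comparing the numbers across partition sizes. The following stand-in measures ResNet-50 inference throughput with PyTorch; it is illustrative only and not part of MIGPerf.

```python
# Hypothetical stand-in: a minimal inference throughput probe to run inside a MIG instance.
import time
import torch
import torchvision


def inference_throughput(batch_size=32, iters=50, warmup=10):
    device = torch.device("cuda")
    model = torchvision.models.resnet50(weights=None).eval().to(device)
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return batch_size * iters / (time.time() - start)


if __name__ == "__main__":
    print(f"{inference_throughput():.1f} images/s")
```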
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
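A hedged sketch of the linear-probe style readout implied by the self-supervised baselines: freeze an encoder and train only a linear head for brain-region prediction. The encoder, feature size, and number of regions below are placeholders, not the MTNeuro baselines.

```python
# Hypothetical sketch: linear readout on top of a frozen (self-supervised) encoder.
import torch
import torch.nn as nn


def linear_probe(encoder, feat_dim, n_regions, loader, epochs=10, lr=1e-3):
    encoder.eval()                                   # frozen backbone
    head = nn.Linear(feat_dim, n_regions)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, region_labels in loader:
            with torch.no_grad():
                feats = encoder(images)              # (B, feat_dim), gradients stop here
            loss = loss_fn(head(feats), region_labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head


if __name__ == "__main__":
    from torch.utils.data import DataLoader, TensorDataset
    enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))   # stand-in encoder
    xs = torch.randn(256, 1, 64, 64)
    ys = torch.randint(0, 4, (256,))                             # 4 toy regions
    head = linear_probe(enc, feat_dim=128, n_regions=4,
                        loader=DataLoader(TensorDataset(xs, ys), batch_size=32),
                        epochs=2)
```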
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network with several modules like CNN, LSTM and Attention. Recent methods combine the Transformer with these modules for better performance. However, it requires tedious optimization skills to train a network composed of mixed modules, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming at providing off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one is used to process a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT consistently achieves satisfactory performance across different settings.
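The cascade described above can be sketched directly in PyTorch: an inner Transformer encodes the patch tokens of a single observation, and an outer Transformer encodes the resulting history of observation embeddings. Dimensions, the pooling step, and the output head are simplified assumptions rather than the paper's exact TIT configuration.

```python
# Hypothetical sketch of a Transformer-in-Transformer backbone for decision-making.
import torch
import torch.nn as nn


class TinyTIT(nn.Module):
    def __init__(self, patch_dim=48, d_model=64, n_actions=4, history_len=8):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)
        inner_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        outer_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.inner = nn.TransformerEncoder(inner_layer, num_layers=2)   # per observation
        self.outer = nn.TransformerEncoder(outer_layer, num_layers=2)   # over the history
        self.pos = nn.Parameter(torch.zeros(1, history_len, d_model))
        self.head = nn.Linear(d_model, n_actions)                       # e.g., Q-values or logits

    def forward(self, obs_patches):
        """obs_patches: (B, T, P, patch_dim) = batch, history length, patches per observation."""
        B, T, P, D = obs_patches.shape
        tokens = self.embed(obs_patches).reshape(B * T, P, -1)
        per_obs = self.inner(tokens).mean(dim=1).reshape(B, T, -1)      # one vector per observation
        hist = self.outer(per_obs + self.pos[:, :T])
        return self.head(hist[:, -1])                                   # decision from the last step


if __name__ == "__main__":
    net = TinyTIT()
    x = torch.randn(2, 8, 16, 48)        # 2 episodes, 8 past frames, 16 patches each
    print(net(x).shape)                  # torch.Size([2, 4])
```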