In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average reward as the discount factor $\gamma$ goes to $1$, and moreover, when $\gamma$ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward MDP. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. Then, we investigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
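To make the last step concrete, here is a minimal sketch of a robust relative value iteration on a small tabular MDP, assuming an illustrative $\delta$-contamination uncertainty set (worst case = a mixture of the nominal expectation and the smallest attainable next value); the uncertainty model, stopping rule, and reference state are simplifying assumptions for the sketch, not the paper's exact construction.

```python
import numpy as np

def robust_relative_value_iteration(P0, R, delta=0.1, tol=1e-8, max_iter=10_000):
    """Robust relative value iteration against a delta-contamination set (illustrative).

    P0[s, a] is the nominal next-state distribution, R[s, a] the reward.
    The worst-case expected next value is taken to be
    (1 - delta) * E_{P0}[v] + delta * min_s' v(s')  -- an assumed uncertainty model.
    Returns an estimate of the robust average reward, a bias vector, and a greedy policy.
    """
    n_states, n_actions, _ = P0.shape
    v = np.zeros(n_states)
    for _ in range(max_iter):
        worst_next = (1 - delta) * P0 @ v + delta * v.min()   # shape (S, A)
        q = R + worst_next                                    # robust Q-values
        w = q.max(axis=1)                                     # greedy robust backup
        if np.max(w - v) - np.min(w - v) < tol:               # span-seminorm stopping rule
            break
        v = w - w[0]                                          # subtract reference state to keep iterates bounded
    gain = w[0]                                               # robust average-reward estimate
    return gain, v, q.argmax(axis=1)
```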
Electric vehicles (EVs) play a critical role in autonomous mobility-on-demand (AMoD) systems, but their unique charging patterns increase the model uncertainty (e.g., in state transition probabilities) of AMoD systems. Since there is typically a mismatch between the training and test (true) environments, it is essential to incorporate model uncertainty into the system design. However, model uncertainty has not been explicitly considered in the existing literature on EV AMoD system rebalancing, and doing so remains an urgent and challenging task. In this work, we design a robust and constrained multi-agent reinforcement learning (MARL) framework for the EV rebalancing and charging problem. We then propose a robust and constrained MARL algorithm (ROCOMA) that trains a robust EV rebalancing policy to balance the supply-demand ratio and the charging utilization rate across the city under state transition uncertainty. Experiments show that ROCOMA can learn an effective and robust rebalancing policy: it outperforms non-robust MARL methods when model uncertainty is present, increases system fairness by 19.6%, and reduces rebalancing costs by 75.8%.
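As a toy illustration of the two quantities the policy is asked to balance, the snippet below computes per-region supply-demand ratios and charger utilization rates and scores how evenly they are spread across the city; the metric definitions (negative variance of per-region ratios) are illustrative assumptions, not ROCOMA's actual objective or constraints.

```python
import numpy as np

def rebalancing_signals(idle_evs, requests, charging_evs, chargers):
    """Toy per-region signals an EV rebalancing policy might be asked to balance.

    All inputs are arrays with one entry per city region. The balance scores
    (negative variance of per-region ratios) are illustrative stand-ins for the
    supply-demand and charging-utilization objectives, not the paper's metrics.
    """
    supply_demand = idle_evs / np.maximum(requests, 1)     # per-region supply-demand ratio
    charge_util = charging_evs / np.maximum(chargers, 1)   # per-region charger utilization
    sd_balance = -np.var(supply_demand)                    # higher means more evenly balanced
    cu_balance = -np.var(charge_util)
    return sd_balance, cu_balance

# Example: four regions of a city at one decision step.
print(rebalancing_signals(
    idle_evs=np.array([12, 3, 8, 5]),
    requests=np.array([10, 9, 6, 4]),
    charging_evs=np.array([4, 2, 5, 1]),
    chargers=np.array([6, 6, 6, 6]),
))
```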
Constrained reinforcement learning maximizes the expected reward subject to constraints on utilities/costs. However, due to modeling errors, adversarial attacks, and non-stationarity, the training environment may not be the same as the test environment, leading to severe performance degradation and, more importantly, constraint violations. We propose a framework of robust constrained reinforcement learning under model uncertainty, where the MDP is not fixed but lies within some uncertainty set. The goal is to guarantee that the constraints on utilities/costs are satisfied for all MDPs in the uncertainty set, while maximizing the worst-case reward performance over the uncertainty set. We design a robust primal-dual approach, and further theoretically develop guarantees on its convergence, complexity, and robust feasibility. We then investigate the concrete example of the $\delta$-contamination uncertainty set, design an online and model-free algorithm, and theoretically characterize its sample complexity.
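A schematic sketch of a robust primal-dual loop of this kind on a tabular MDP, assuming a $\delta$-contamination worst case, a greedy primal step, and a projected subgradient dual step; the estimator details, step sizes, and initial-state handling are simplified placeholders rather than the paper's algorithm.

```python
import numpy as np

def robust_eval(P0, reward, pi, gamma=0.9, delta=0.1, iters=500):
    """Robust value of policy pi under an assumed delta-contamination set
    (worst case = (1 - delta) * nominal expectation + delta * min over next states)."""
    S, A, _ = P0.shape
    v = np.zeros(S)
    for _ in range(iters):
        worst_next = (1 - delta) * P0 @ v + delta * v.min()   # shape (S, A)
        q = reward + gamma * worst_next
        v = (pi * q).sum(axis=1)
    return v, q

def robust_primal_dual(P0, r, c, budget, gamma=0.9, delta=0.1,
                       lam_lr=0.05, outer_iters=200):
    """Schematic primal-dual loop: greedy primal step on the Lagrangian reward
    r + lam * c, projected subgradient step on the multiplier lam."""
    S, A, _ = P0.shape
    pi = np.full((S, A), 1.0 / A)
    lam = 0.0
    for _ in range(outer_iters):
        _, q_lag = robust_eval(P0, r + lam * c, pi, gamma, delta)
        pi = np.eye(A)[q_lag.argmax(axis=1)]                  # primal step: greedy improvement
        v_c, _ = robust_eval(P0, c, pi, gamma, delta)         # robust utility of the new policy
        # Dual step: increase lam when the (uniform-initial-state) constraint is violated.
        lam = max(0.0, lam - lam_lr * (v_c.mean() - budget))
    return pi, lam
```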
Greedy-GQ with linear function approximation, originally proposed in \cite{maei2010toward}, is a value-based off-policy algorithm for optimal control in reinforcement learning; it has a non-linear two-timescale structure with a non-convex objective function. This paper develops its finite-time error bounds. We show that the Greedy-GQ algorithm converges at a rate of $\mathcal{O}({1}/{\sqrt{T}})$ under the i.i.d.\ setting and $\mathcal{O}({\log T}/{\sqrt{T}})$ under the Markovian setting. We further design a variant of the vanilla Greedy-GQ algorithm using a nested-loop approach, and show that its sample complexity is $\mathcal{O}({\log(1/\epsilon)\epsilon^{-2}})$, which matches that of the vanilla Greedy-GQ. Our finite-time error bounds match those of the stochastic gradient descent algorithm for general smooth non-convex optimization problems. Our finite-sample analysis provides theoretical guidance on choosing step sizes for faster convergence in practice, and suggests a trade-off between the convergence rate and the quality of the obtained policy. Our techniques provide a general approach for the finite-sample analysis of non-convex two-timescale value-based reinforcement learning algorithms.
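For reference, a single Greedy-GQ update with linear features in its usual two-timescale form might look as follows; the step sizes and the handling of the greedy max are simplified, and the feature/reward inputs are placeholders.

```python
import numpy as np

def greedy_gq_step(theta, omega, phi_sa, reward, phi_next_all,
                   gamma=0.99, alpha=0.01, beta=0.05):
    """One Greedy-GQ update with linear features (two-timescale form, simplified).

    phi_sa:        feature vector of the current state-action pair, shape (d,)
    phi_next_all:  features of all actions at the next state, shape (num_actions, d)
    theta, omega:  slow (main) and fast (auxiliary) parameters, shape (d,)
    """
    q_next = phi_next_all @ theta
    a_star = np.argmax(q_next)                       # greedy action at the next state
    phi_next = phi_next_all[a_star]
    td_error = reward + gamma * q_next[a_star] - phi_sa @ theta
    # Slow-timescale update of the main parameter theta.
    theta = theta + alpha * (td_error * phi_sa - gamma * (phi_sa @ omega) * phi_next)
    # Fast-timescale update of the auxiliary parameter omega.
    omega = omega + beta * (td_error - phi_sa @ omega) * phi_sa
    return theta, omega
```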
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
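A minimal sketch of a fairness-regularized distillation objective of the kind such a framework could optimize, written with PyTorch; the demographic-parity penalty and the weighting scheme are generic assumptions, not RELIANT's actual design.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive_group,
                 temperature=2.0, kd_weight=0.5, fair_weight=1.0):
    """Generic fairness-regularized distillation objective (not RELIANT itself).

    sensitive_group is a boolean tensor marking membership in one demographic group;
    the fairness term penalizes the demographic-parity gap of the student's
    positive-class probability (class index 1 assumed) between the two groups.
    """
    # Standard soft-label distillation term.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Supervised task term on the labeled nodes.
    task = F.cross_entropy(student_logits, labels)
    # Demographic-parity-style penalty on the student's predictions.
    p_pos = F.softmax(student_logits, dim=-1)[:, 1]
    gap = (p_pos[sensitive_group].mean() - p_pos[~sensitive_group].mean()).abs()
    return task + kd_weight * kd + fair_weight * gap
```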
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility, maintaining low confidence in such black-box systems, a problem exacerbated by low generalization on out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in the clinic. Our work sheds light on clinically safe applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
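The subjective-logic construction mentioned above can be sketched in a few lines: per-pixel evidence is turned into Dirichlet concentration parameters, from which belief masses and an uncertainty map follow; this is the generic formulation, not EvidenceCap's full training pipeline.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Map per-pixel class logits to belief masses and an uncertainty map using
    the standard subjective-logic / Dirichlet construction (a generic sketch).

    logits: array of shape (num_classes, H, W)
    """
    evidence = np.logaddexp(0.0, logits)         # softplus keeps evidence non-negative
    alpha = evidence + 1.0                       # Dirichlet concentration parameters
    strength = alpha.sum(axis=0, keepdims=True)  # Dirichlet strength S per pixel
    belief = evidence / strength                 # belief mass per class
    uncertainty = logits.shape[0] / strength[0]  # u = K / S, in (0, 1]
    prob = alpha / strength                      # expected class probabilities
    return prob, belief, uncertainty
```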
Vertical Federated Learning (VFL) is widely utilized in real-world applications to enable collaborative learning while protecting data privacy and safety. However, previous works show that parties without labels (passive parties) in VFL can infer the sensitive label information owned by the party with labels (active party), or execute backdoor attacks on VFL. Meanwhile, the active party can also infer sensitive feature information from the passive parties. All of these pose new privacy and security challenges to VFL systems. We propose a new general defense method that limits the mutual information between the private raw data, including both features and labels, and the intermediate outputs, to achieve a better trade-off between model utility and privacy. We term this defense Mutual Information Regularization Defense (MID). We theoretically and empirically demonstrate the effectiveness of our MID method in defending against existing attacks in VFL, including label inference attacks, backdoor attacks, and feature reconstruction attacks.
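One way such a mutual-information cap is commonly realized is with a variational information-bottleneck layer, sketched below in PyTorch; the Gaussian encoder and the KL-to-prior bound are an illustrative instantiation, not necessarily the paper's exact MID architecture.

```python
import torch
import torch.nn as nn

class MIBottleneck(nn.Module):
    """Variational information-bottleneck layer of the kind that could cap the
    mutual information carried by a party's intermediate output in VFL
    (an illustrative sketch, not the paper's exact MID design).
    """
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.log_var = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        # KL(q(z|h) || N(0, I)) upper-bounds the mutual information I(h; z).
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(dim=-1).mean()
        return z, kl

# The party sends z upstream and adds `beta * kl` to its training loss, trading
# utility against how much information about the raw input the output can carry.
```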
Video semantic segmentation (VSS) is beneficial for dealing with dynamic scenes due to the temporally continuous nature of real-world environments. On the one hand, some methods alleviate the problem of inconsistent predictions between consecutive frames. On the other hand, other methods employ the previous frame as prior information to assist in segmenting the current frame. Although previous methods achieve superior performance on independent and identically distributed (i.i.d.) data, they cannot generalize well to other unseen domains. Thus, we explore a new task, video generalizable semantic segmentation (VGSS), which considers both continuous frames and domain generalization. In this paper, we propose a class-wise non-salient region generalized (CNSG) framework for the VGSS task. Concretely, we first define the class-wise non-salient feature, which describes features of the class-wise non-salient region that carry more generalizable information. Then, we propose a class-wise non-salient feature reasoning strategy to adaptively select and enhance the most generalizable channels. Finally, we propose an inter-frame non-salient centroid alignment loss to alleviate the prediction inconsistency problem in the VGSS task. We also extend our video-based framework to the image-based generalizable semantic segmentation (IGSS) task. Experiments demonstrate that our CNSG framework yields significant improvements on the VGSS and IGSS tasks.
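As an illustration of an inter-frame centroid alignment term, the snippet below computes class-wise feature centroids in two consecutive frames and penalizes their disagreement; the cosine-distance form and the masking scheme are assumptions for the sketch, not CNSG's exact loss.

```python
import torch
import torch.nn.functional as F

def centroid_alignment_loss(feat_prev, feat_curr, mask_prev, mask_curr, num_classes):
    """Align class-wise feature centroids of two consecutive frames (a generic
    sketch of an inter-frame centroid alignment term, not CNSG's exact loss).

    feat_*: (C, H, W) feature maps; mask_*: (H, W) integer class maps.
    """
    loss, matched = 0.0, 0
    for k in range(num_classes):
        sel_prev, sel_curr = mask_prev == k, mask_curr == k
        if sel_prev.any() and sel_curr.any():
            c_prev = feat_prev[:, sel_prev].mean(dim=1)   # class-k centroid, previous frame
            c_curr = feat_curr[:, sel_curr].mean(dim=1)   # class-k centroid, current frame
            loss = loss + (1 - F.cosine_similarity(c_prev, c_curr, dim=0))
            matched += 1
    return loss / max(matched, 1)
```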
Stock market prediction has been a classic yet challenging problem studied across diverse research areas and application domains due to its non-linear, highly volatile, and complex nature. Existing surveys on stock market prediction often focus on traditional machine learning methods rather than deep learning methods. In recent years, deep learning has dominated many domains and achieved considerable success and popularity in stock market prediction. This motivates us to provide a structured and comprehensive overview of the research on stock market prediction that focuses on deep learning techniques. We present four elaborated subtasks of stock market prediction and propose a novel taxonomy to summarize state-of-the-art models based on deep neural networks from 2011 to 2022. In addition, we provide detailed statistics on the datasets and evaluation metrics commonly used in stock market prediction. Finally, we highlight some open issues and point out several future directions by sharing new perspectives on stock market prediction.
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
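A toy sketch of the two-query-type decoding idea described above: latent (non-semantic) queries and text-induced semantic queries are decoded jointly against image features, with the former producing pixel-level masks and the latter producing token logits; the dimensions, single attention layer, and projection heads are placeholder assumptions rather than X-Decoder's actual architecture.

```python
import torch
import torch.nn as nn

class GeneralizedDecoderSketch(nn.Module):
    """Toy illustration of decoding two query types into one semantic space
    (inspired by the description above; all hyperparameters are placeholders).
    """
    def __init__(self, dim=256, num_latent_queries=100, vocab_size=1000):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(num_latent_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_mask_embed = nn.Linear(dim, dim)     # latent queries -> mask embeddings
        self.to_token = nn.Linear(dim, vocab_size)   # semantic queries -> token logits

    def forward(self, image_feats, pixel_embeds, text_queries):
        """image_feats: (B, N, dim); pixel_embeds: (B, dim, H, W);
        text_queries: (B, T, dim) semantic queries induced from text."""
        B = image_feats.shape[0]
        num_latent = self.latent_queries.shape[0]
        queries = torch.cat([self.latent_queries.expand(B, -1, -1), text_queries], dim=1)
        decoded, _ = self.cross_attn(queries, image_feats, image_feats)
        latent, semantic = decoded[:, :num_latent], decoded[:, num_latent:]
        # Pixel-level output: dot product between mask embeddings and pixel embeddings.
        masks = torch.einsum("bqd,bdhw->bqhw", self.to_mask_embed(latent), pixel_embeds)
        # Token-level output from the text-induced semantic queries.
        token_logits = self.to_token(semantic)
        return masks, token_logits
```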