The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
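To make the most commonly reported workaround concrete, the following is a minimal sketch of patch-based training: when an image or volume is too large to be processed at once, random fixed-size crops are drawn and used as training samples. The function name, patch size, and the assumption of a single-channel 3D volume are illustrative and not taken from any particular challenge submission.

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size training patches from a volume that is too
    large to be processed at once (illustrative sketch; assumes a 3D volume
    whose every dimension is at least as large as the patch size)."""
    rng = np.random.default_rng() if rng is None else rng
    patches = []
    for _ in range(n_patches):
        # Pick a random corner so the patch fits entirely inside the volume.
        corner = [rng.integers(0, dim - p + 1)
                  for dim, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)
```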
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP), given access to a generative model (simulator). When the ground-truth MDP is weakly communicating, we prove an upper bound of $\widetilde O(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability. This bound improves upon the best-known mixing-time-based approaches of [Jin & Sidford 2021], which assume that the mixing time of every deterministic policy is bounded. The core of our analysis is a proper reduction from AMDP problems to discounted MDP (DMDP) problems, which may be of independent interest since it allows the application of DMDP algorithms to AMDPs in other settings. We complement our upper bound by proving a minimax lower bound of $\Omega(|\mathcal S| |\mathcal A| H \varepsilon^{-2} \ln \frac{1}{\delta})$ total samples, showing that a linear dependence on $H$ is necessary and that our upper bound matches the lower bound in all parameters $(|\mathcal S|, |\mathcal A|, H, \ln \frac{1}{\delta})$ up to logarithmic factors.
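One standard fact that makes reductions of this kind plausible (stated here only as background; the constants and the exact form used in the paper's analysis may differ) is that in a weakly communicating MDP the discounted optimal value tracks the optimal gain once the effective horizon exceeds the bias span:

$$ \Bigl| V^*_\gamma(s) \;-\; \frac{\rho^*}{1-\gamma} \Bigr| \;\le\; \mathrm{sp}(h^*) \;=\; H \qquad \text{for all } s , $$

so choosing the discount factor with $1-\gamma$ on the order of $\varepsilon/H$ makes a sufficiently accurate DMDP solution a proxy for the AMDP; the paper's reduction makes this correspondence precise for the policies it outputs.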
In this paper, we address the stochastic contextual linear bandit problem, where a decision maker is provided a context (a random set of actions drawn from a distribution). The expected reward of each action is specified by the inner product of the action and an unknown parameter. The goal is to design an algorithm that learns to play as close as possible to the unknown optimal policy after a number of action plays. This problem is considered more challenging than the linear bandit problem, which can be viewed as a contextual bandit problem with a \emph{fixed} context. Surprisingly, in this paper, we show that the stochastic contextual problem can be solved as if it were a linear bandit problem. In particular, we establish a novel reduction framework that converts every stochastic contextual linear bandit instance to a linear bandit instance when the context distribution is known. When the context distribution is unknown, we establish an algorithm that reduces the stochastic contextual instance to a sequence of linear bandit instances with small misspecifications and achieves nearly the same worst-case regret bound as the algorithm that solves the misspecified linear bandit instances. As a consequence, our results imply an $O(d\sqrt{T\log T})$ high-probability regret bound for contextual linear bandits, making progress in resolving an open problem in (Li et al., 2019) and (Li et al., 2021). Our reduction framework opens up a new way to approach stochastic contextual linear bandit problems, and enables improved regret bounds in a number of instances including the batch setting, contextual bandits with misspecifications, contextual bandits with sparse unknown parameters, and contextual bandits with adversarial corruption.
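To see why such a reduction is plausible when the context distribution $\mathcal D$ is known (this is only an illustrative observation, not necessarily the construction used in the paper): by linearity, the expected reward of any context-to-action policy $\pi$ is the inner product of a fixed vector with the unknown parameter,

$$ g_\pi := \mathbb{E}_{X \sim \mathcal D}\bigl[\, x_{\pi(X)} \,\bigr], \qquad \mathbb{E}\bigl[\langle x_{\pi(X)}, \theta^* \rangle\bigr] \;=\; \langle g_\pi, \theta^* \rangle , $$

so each policy behaves like a single fixed arm $g_\pi$ in an ordinary linear bandit over the (large) arm set $\{g_\pi\}$.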
In contrast to the advances in characterizing the sample complexity of solving Markov decision processes (MDPs), the optimal statistical complexity of solving constrained MDPs (CMDPs) remains unknown. We resolve this question by providing minimax upper and lower bounds on the sample complexity of learning near-optimal policies in a discounted CMDP with access to a generative model (simulator). In particular, we design a model-based algorithm that addresses two settings: (i) relaxed feasibility, where small constraint violations are allowed, and (ii) strict feasibility, where the output policy is required to satisfy the constraint. For (i), we prove that our algorithm returns an $\epsilon$-optimal policy with probability $1-\delta$ by making $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1-\gamma)^3 \epsilon^2}\right)$ queries to the generative model, thus matching the sample complexity of unconstrained MDPs. For (ii), we show that the algorithm's sample complexity is upper-bounded by $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1-\gamma)^5 \epsilon^2 \zeta^2}\right)$, where $\zeta$ is the problem-dependent Slater constant that characterizes the size of the feasible region. Finally, we prove a matching lower bound for the strict feasibility setting, thus obtaining the first near-minimax-optimal bounds for discounted CMDPs. Our results show that learning CMDPs is as easy as learning MDPs when small constraint violations are allowed, but inherently more difficult when we demand zero constraint violation.
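As a heavily simplified illustration of the model-based approach, the sketch below solves a discounted CMDP exactly on a given (e.g., empirical) model via the standard occupancy-measure linear program. The function name, the single constraint, and the absence of the constraint-threshold tightening needed for strict feasibility are all simplifications of this sketch; it is not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def solve_discounted_cmdp(P, r, u, rho, gamma, tau):
    """Solve a discounted CMDP on a known/empirical model via its
    occupancy-measure LP (illustrative sketch).

    P:   (S, A, S) transitions, P[s, a, s'] = Pr(s' | s, a)
    r:   (S, A) reward,  u: (S, A) constraint utility
    rho: (S,) initial-state distribution, gamma: discount factor
    tau: normalized threshold, i.e. require sum_{s,a} mu(s,a) u(s,a) >= tau
    Returns a stochastic policy pi of shape (S, A).
    """
    S, A = r.shape
    n = S * A  # one variable mu(s, a) per state-action pair

    # Bellman-flow constraints:
    # sum_a mu(s', a) = (1 - gamma) rho(s') + gamma sum_{s,a} P[s, a, s'] mu(s, a)
    A_eq = np.zeros((S, n))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]
    b_eq = (1.0 - gamma) * rho

    # Maximize reward (minimize its negation) subject to the utility constraint
    # sum mu * u >= tau, written as -u . mu <= -tau for linprog.
    res = linprog(
        c=-r.reshape(n),
        A_ub=-u.reshape(n)[None, :], b_ub=np.array([-tau]),
        A_eq=A_eq, b_eq=b_eq,
        bounds=[(0.0, None)] * n,
        method="highs",
    )
    if res.status != 0:
        raise ValueError("CMDP is infeasible for the given threshold tau")
    mu = res.x.reshape(S, A)
    denom = mu.sum(axis=1, keepdims=True)
    # Normalize the occupancy measure into a policy; uniform where mu vanishes.
    return np.where(denom > 1e-12, mu / np.maximum(denom, 1e-12), 1.0 / A)
```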
Contextual linear bandits is a rich and theoretically important model with many practical applications. Recently, this setting has attracted a lot of interest for applications over wireless networks, where communication constraints can be a performance bottleneck, especially when the contexts come from a large $d$-dimensional space. In this paper, we consider a distributed memoryless contextual linear bandit learning problem, where the agents who observe the contexts and take actions are geographically separated from the learner who performs the learning while not observing the contexts. We assume that the contexts are generated from a distribution and propose a method that uses approximately $5d$ bits per context when the context distribution is unknown, and $0$ bits per context when the context distribution is known, while achieving nearly the same regret as if the contexts were directly observable. The former bound improves upon existing bounds by a $\log(T)$ factor, where $T$ is the length of the horizon, while the latter is information-theoretically tight.
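The scheme below is not the paper's method; it is a generic, unbiased per-coordinate stochastic quantizer, included only to illustrate how a $d$-dimensional context can be communicated with a small constant number of bits per coordinate. The coordinate range $[-1, 1]$ and the bit budget are assumptions of this sketch.

```python
import numpy as np

def quantize_context(x, bits=5, rng=None):
    """Unbiased stochastic quantization of a context x in [-1, 1]^d.

    Each coordinate is stochastically rounded to one of 2**bits levels,
    so roughly `bits` bits per coordinate (~ bits * d in total) are sent.
    The reconstruction is unbiased: E[dequantize_context(q)] = x.
    """
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** bits - 1
    scaled = (np.clip(x, -1.0, 1.0) + 1.0) / 2.0 * levels  # map to [0, levels]
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiasedness).
    q = (low + (rng.random(x.shape) < (scaled - low))).astype(np.int64)
    return q  # integers in {0, ..., levels}, encodable with `bits` bits each

def dequantize_context(q, bits=5):
    levels = 2 ** bits - 1
    return q / levels * 2.0 - 1.0
```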
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the communication cost of any algorithm that performs optimally in a regret minimization setup. We then propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server. We prove that the communication cost of DisBE-LUCB matches our lower bound up to logarithmic factors. In particular, for scenarios with known context distribution, the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is ${\tilde{\mathcal{O}}}(\sqrt{dNT})$, which is of the same order as that incurred by an optimal single-agent algorithm for $NT$ rounds. We also provide similar bounds for practical settings where the context distribution can only be estimated. Therefore, our proposed algorithm is nearly minimax optimal in terms of \emph{both regret and communication cost}. Finally, we propose DecBE-LUCB, a fully decentralized version of DisBE-LUCB, which operates without a central server, where agents share information with their \emph{immediate neighbors} through a carefully designed consensus procedure.
The multi-armed bandit (MAB) problem is an active learning framework that aims to select the best among a set of actions by sequentially observing rewards. Recently, it has become popular for a number of applications over wireless networks, where communication constraints can form a bottleneck. Existing works usually fail to address this issue and can become infeasible in certain applications. In this paper, we address the communication problem by optimizing the communication of the rewards collected by distributed agents. By providing nearly matching upper and lower bounds, we tightly characterize the number of bits needed per reward for the learner to learn accurately without suffering additional regret. In particular, we establish a generic reward quantization algorithm that can be applied on top of any (no-regret) MAB algorithm to form a new communication-efficient counterpart, which requires only a few (as low as 3) bits to be sent per iteration while preserving the same regret bound. Our lower bounds are established by constructing hard instances from a sub-Gaussian distribution. Our theory is further corroborated by numerical experiments.
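The reason a quantization layer of this kind can preserve regret can be illustrated with the following standard, and here only heuristic, observation: if the quantizer is unbiased and its output remains light-tailed, then from the algorithm's perspective every arm keeps its mean reward,

$$ \mathbb{E}\bigl[\hat r_t \mid a_t = a\bigr] \;=\; \mathbb{E}\bigl[r_t \mid a_t = a\bigr] \;=\; \mu_a , $$

so a no-regret MAB algorithm fed the quantized rewards $\hat r_t$ faces the same arm gaps and retains its regret guarantee, up to the slightly larger noise scale introduced by quantization.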
Recently there has been a surge of interest in understanding the horizon-dependence of the sample complexity of reinforcement learning (RL). Notably, for an RL environment with horizon length $H$, previous work has shown that there is a probably approximately correct (PAC) algorithm that learns an $O(1)$-optimal policy using $\mathrm{polylog}(H)$ episodes of environment interaction when the number of states and actions is fixed. It remained unknown whether the $\mathrm{polylog}(H)$ dependence is necessary. In this work, we resolve this question by developing an algorithm that achieves the same PAC guarantee while using only $O(1)$ episodes of environment interaction, completely settling the horizon-dependence of the sample complexity of RL. We achieve this by (i) establishing a connection between the value functions of discounted and finite-horizon Markov decision processes (MDPs) and (ii) a novel perturbation analysis in MDPs. We believe our new techniques are of independent interest and can be applied to related problems in RL.
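One elementary instance of the discounted-to-finite-horizon connection alluded to above (the paper's actual argument is finer) is the truncation bound: with rewards in $[0,1]$,

$$ \Bigl| V^\pi_\gamma(s) \;-\; \mathbb{E}^\pi\Bigl[ \textstyle\sum_{t=0}^{H-1} \gamma^t r_t \,\Big|\, s_0 = s \Bigr] \Bigr| \;\le\; \frac{\gamma^{H}}{1-\gamma} , $$

so a finite-horizon problem of length $H \approx \frac{1}{1-\gamma}\log\frac{1}{(1-\gamma)\varepsilon}$ already captures the discounted value function up to an $\varepsilon$ error.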
While model-based reinforcement learning (RL) methods are considered to be more sample-efficient, existing algorithms usually rely on sophisticated planning algorithms that are tightly coupled with the model-learning process. Hence, the learned models may lack the ability to be re-used with more specialized planners. In this paper, we address this issue and provide approaches to learning an RL model efficiently without the guidance of a reward signal. In particular, we take a plug-in solver approach, where we focus on learning a model in the exploration phase and require that \emph{any planning algorithm} on the learned model can give a near-optimal policy. Specifically, we focus on the linear mixture MDP setting, where the probability transition matrix is an (unknown) convex combination of a set of existing models. We show that, by establishing a novel exploration algorithm, the plug-in approach learns a model using $\tilde{O}(d^2 H^3 / \epsilon^2)$ interactions with the environment, such that \emph{any} $\epsilon$-optimal planner on the learned model gives an $O(\epsilon)$-optimal policy on the original model. This sample complexity matches the lower bound for non-plug-in approaches and is \emph{statistically optimal}. We achieve this result by leveraging a careful maximum-total-variance bound using the Bernstein inequality and properties specific to linear mixture MDPs.
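For concreteness, the linear mixture MDP setting referred to above can be written as follows, with $d$ known basis models $P_1, \dots, P_d$ and an unknown mixing weight, as described in the abstract:

$$ P(s' \mid s, a) \;=\; \sum_{i=1}^{d} \theta^*_i \, P_i(s' \mid s, a), \qquad \theta^* \in \Delta_{d} \ \ \text{(unknown)} , $$

so learning the model amounts to estimating the $d$-dimensional weight vector $\theta^*$ rather than the full $|S|^2 |A|$ transition table, which is what the $d$-dependence of the sample complexity reflects.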
In this paper, we consider a new multi-armed bandit (MAB) problem, where the arms are the nodes of an unknown and possibly changing graph, and the agent (i) initiates a random walk by pulling an arm, (ii) observes the trajectory of the random walk, and (iii) receives a reward equal to the length of the walk. We provide a comprehensive understanding of this problem by studying both the stochastic and the adversarial setting. We show that, in an information-theoretic sense, this problem is not easier than the standard MAB problem, even though additional information is available through the random-walk trajectories. The behavior of bandit algorithms on this problem is also studied.
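Purely as an illustration of the interaction protocol: one pull triggers a random walk from the chosen node, the whole trajectory is observed, and its length is the reward. The abstract does not specify how a walk terminates, so the absorbing target node below is an assumption of this sketch, as are all names.

```python
import numpy as np

def pull_arm(adjacency, start, target, rng=None, max_steps=10_000):
    """Simulate one pull in the random-walk bandit sketched above.

    adjacency: dict mapping each node to a list of its neighbors
    start:     the node / arm that is pulled
    target:    an absorbing node at which the walk stops (an assumption;
               the abstract leaves the termination rule unspecified)
    Returns the observed trajectory and the reward (= walk length).
    """
    rng = np.random.default_rng() if rng is None else rng
    trajectory = [start]
    node = start
    for _ in range(max_steps):
        if node == target:
            break
        node = rng.choice(adjacency[node])  # uniform step to a neighbor
        trajectory.append(node)
    reward = len(trajectory) - 1  # number of steps taken
    return trajectory, reward
```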