The recent trend of integrating multi-source chest X-ray datasets to improve automated diagnostics raises the concern that models learn to exploit source-specific correlations, improving performance by recognizing the source domain of an image rather than the medical pathology. We hypothesize that this effect is enforced by, and leverages, label-imbalance across the source domains, i.e., the source-specific prevalence of the diseases. In this work, we therefore thoroughly investigate the effect of label-imbalance in multi-source training for the task of pneumonia detection on the widely used ChestX-ray14 and CheXpert datasets. The results highlight and stress the importance of using more faithful and transparent self-explaining models for automated diagnosis, which enable the inherent detection of spurious learning. They further illustrate that this undesirable effect of learning spurious correlations can be reduced considerably when ensuring label-balanced source domain datasets.
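The label-balancing remedy described above can be sketched as follows; the function and the per-source subsampling strategy are illustrative assumptions rather than the authors' exact protocol:

```python
import numpy as np

def label_balanced_indices(labels_by_source, prevalence=0.5, seed=0):
    """For each source dataset, pick the largest subset whose positive-label
    prevalence (e.g., pneumonia) equals `prevalence`, so that no source can
    be identified through its disease frequency alone."""
    rng = np.random.default_rng(seed)
    selected = {}
    for source, y in labels_by_source.items():
        pos = np.flatnonzero(y == 1)
        neg = np.flatnonzero(y == 0)
        # Largest (n_pos, n_neg) pair with n_pos / (n_pos + n_neg) == prevalence.
        n_pos = min(len(pos), int(len(neg) * prevalence / (1 - prevalence)))
        n_neg = min(len(neg), int(n_pos * (1 - prevalence) / prevalence))
        idx = np.concatenate([rng.choice(pos, n_pos, replace=False),
                              rng.choice(neg, n_neg, replace=False)])
        selected[source] = rng.permutation(idx)
    return selected

# Example: two hypothetical sources with very different pneumonia prevalence.
sources = {"ChestX-ray14": np.random.binomial(1, 0.10, 5000),
           "CheXpert":     np.random.binomial(1, 0.45, 5000)}
balanced = label_balanced_indices(sources)
```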
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
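As an illustration of the cross-validation and ensembling practices reported above, a minimal scikit-learn sketch (the dataset and model choices are placeholders, not taken from the survey):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = load_digits(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_models, fold_scores = [], []
for train_idx, val_idx in kf.split(X):
    model = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    fold_models.append(model)
    fold_scores.append(model.score(X[val_idx], y[val_idx]))

# "Identical models" ensembling: average the per-fold predicted probabilities.
proba = np.mean([m.predict_proba(X) for m in fold_models], axis=0)
ensemble_pred = proba.argmax(axis=1)
print(f"mean fold accuracy: {np.mean(fold_scores):.3f}")
```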
We revisit a simple Learning-from-Scratch baseline for visuo-motor control that uses data augmentation and a shallow ConvNet. We find that this baseline has competitive performance with recent methods that leverage frozen visual representations trained on large-scale vision datasets.
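A minimal sketch of what such a baseline might look like, assuming pad-and-crop random-shift augmentation and a small three-layer convolutional encoder (both common choices in this literature; the specifics are not taken from the abstract):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Pad-and-crop augmentation: shift each image by up to `pad` pixels."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        dx, dy = torch.randint(0, 2 * pad + 1, (2,)).tolist()
        out[i] = padded[i, :, dy:dy + h, dx:dx + w]
    return out

class ShallowConvNet(nn.Module):
    """Small image encoder for visuo-motor control."""
    def __init__(self, in_channels=3, feature_dim=50):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
        )
        self.proj = nn.LazyLinear(feature_dim)

    def forward(self, obs):
        x = self.convs(obs / 255.0 - 0.5)
        return self.proj(x.flatten(1))

obs = torch.randint(0, 256, (8, 3, 84, 84), dtype=torch.float32)
features = ShallowConvNet()(random_shift(obs))
```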
Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms for real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
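Of the three ingredients, the oversampling of demonstration data is the easiest to illustrate in isolation; the sketch below assumes a fixed demonstration fraction of 25% per batch, which is an illustrative choice rather than the authors' setting:

```python
import numpy as np

def mixed_batch(demo_data, agent_data, batch_size=256, demo_fraction=0.25, rng=None):
    """Sample a training batch that oversamples demonstrations: a fixed
    fraction comes from the (small) demo set regardless of how large the
    agent's own interaction dataset grows."""
    rng = rng or np.random.default_rng()
    n_demo = int(batch_size * demo_fraction)
    demo_idx = rng.integers(0, len(demo_data), n_demo)
    agent_idx = rng.integers(0, len(agent_data), batch_size - n_demo)
    return np.concatenate([demo_data[demo_idx], agent_data[agent_idx]])

# 5 demonstrations' worth of transitions vs. 100K interaction steps:
demos = np.random.randn(500, 8)      # placeholder transition features
agent = np.random.randn(100_000, 8)
batch = mixed_batch(demos, agent)    # ~25% of the batch is demo data
```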
Graph Neural Networks (GNNs) are deep learning models designed to process attributed graphs. GNNs can compute cluster assignments accounting both for the vertex features and for the graph topology. Existing GNNs for clustering are trained by optimizing an unsupervised minimum cut objective, which is approximated by a Spectral Clustering (SC) relaxation. SC offers a closed-form solution that, however, is not particularly useful for a GNN trained with gradient descent. Additionally, the SC relaxation is loose and yields overly smooth cluster assignments, which do not separate the samples well. We propose a GNN model that optimizes a tighter relaxation of the minimum cut based on graph total variation (GTV). Our model has two core components: i) a message-passing layer that minimizes the $\ell_1$ distance in the features of adjacent vertices, which is key to achieving sharp cluster transitions; ii) a loss function that minimizes the GTV in the cluster assignments while ensuring balanced partitions. By optimizing the proposed loss, our model can be self-trained to perform clustering. In addition, our clustering procedure can be used to implement graph pooling in deep GNN architectures for graph classification. Experiments show that our model outperforms other GNN-based approaches for clustering and graph pooling.
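For context, the graph total variation of a soft cluster assignment matrix $\mathbf{S}$ (with rows $\mathbf{s}_i$) is commonly defined as $\mathrm{GTV}(\mathbf{S}) = \sum_{(i,j) \in \mathcal{E}} a_{ij} \lVert \mathbf{s}_i - \mathbf{s}_j \rVert_1$, where $a_{ij}$ is the weight of edge $(i,j)$. This is the standard definition that the tighter relaxation builds on; the model's actual loss additionally includes the balancing term mentioned above.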
We discuss a platform that has both software and hardware components, and whose purpose is to support research into characterizing and mitigating the sim-to-real gap in robotics and vehicle autonomy engineering. The software is operating-system independent and has three main components: a simulation engine called Chrono, which supports high-fidelity vehicle and sensor simulation; an autonomy stack for algorithm design and testing; and a development environment that supports visualization and hardware-in-the-loop experimentation. The accompanying hardware platform is a 1/6th scale vehicle augmented with reconfigurable mountings for computing, sensing, and tracking. Since this vehicle platform has a digital twin within the simulation environment, one can test the same autonomy perception, state estimation, or controls algorithms, as well as the processors they run on, in both simulation and reality. A demonstration is provided to show the utilization of this platform for autonomy research. Future work will concentrate on augmenting ART/ATK with support for a full-sized Chevy Bolt EUV, which will be made available to this group in the immediate future.
In this paper we present what we believe to be one of the first attempts at video game machine translation. Our study shows that models trained on only a limited amount of in-domain data surpass publicly available systems, and a subsequent human evaluation reveals interesting findings in the resulting translations. The first part of the paper introduces some of the challenges of video game translation, some of the existing literature, and the systems and datasets used in this experiment. The last section discusses our analysis of the resulting translations and the potential benefits of such an automated system. One such finding highlights the model's ability to learn rules and patterns typical of video game translation from English into French. Our conclusion therefore suggests that the specific case of video game machine translation could prove very useful, given the encouraging results, the highly repetitive nature of the work, and the often poor working conditions of translators in this sector. As with other use cases of MT in cultural sectors, however, we believe this largely depends on a proper implementation of the tool, which should be used interactively by human translators to stimulate creativity rather than for raw post-editing aimed at productivity.
Neural network pruning can be effectively applied to compress automatic speech recognition (ASR) models. However, in multilingual ASR, performing language-agnostic pruning may lead to severe performance degradation on some languages, because a language-agnostic pruning mask may not fit all languages and may discard important language-specific parameters. In this work, we propose ASR Pathways, a sparse multilingual ASR model that activates language-specific sub-networks ("pathways"), so that the parameters for each language are learned explicitly. Through the overlapping sub-networks, the shared parameters also enable knowledge transfer to lower-resource languages via joint multilingual training. We propose a novel algorithm to learn ASR Pathways and evaluate the proposed method on four languages with a streaming RNN-T model. Our proposed ASR Pathways outperform both dense models (-5.0% on average) and a language-agnostically pruned model (-21.4% on average), and provide better performance on low-resource languages compared with monolingual sparse models.
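A minimal sketch of the pathways idea, i.e., per-language binary masks over shared parameters; the masking mechanics here are an illustrative assumption, and the paper's actual algorithm for learning the masks is more involved:

```python
import torch
import torch.nn as nn

class PathwaysLinear(nn.Module):
    """A linear layer whose weights are shared across languages but masked
    by a language-specific binary 'pathway'; overlapping mask entries are
    the shared parameters that enable cross-lingual knowledge transfer."""
    def __init__(self, in_dim, out_dim, languages, sparsity=0.7):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        # One fixed binary mask per language (random here for illustration).
        self.masks = {
            lang: (torch.rand(out_dim, in_dim) > sparsity).float()
            for lang in languages
        }

    def forward(self, x, lang):
        return x @ (self.weight * self.masks[lang]).T

layer = PathwaysLinear(80, 256, languages=["en", "fr", "es", "de"])
feats = torch.randn(4, 80)
out_en = layer(feats, "en")  # activates only the English sub-network
```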
Accelerated MRI reconstructs images of clinical anatomies from sparsely sampled signal data in order to reduce patient scan times. While recent works have leveraged deep learning for this task, such approaches have often only been explored in simulated environments where there are no signal corruptions or resource limitations. In this work, we explore augmentations to neural network MRI image reconstructors to enhance their clinical relevance. Namely, we propose a ConvNet model for detecting the source of an image, which achieves a classifier $F_2$ score of $79.1\%$. We also demonstrate that training reconstructors on MR signal data with variable acceleration factors can improve their average performance during a clinical patient scan by up to $2\%$. We offer a loss function to overcome catastrophic forgetting when models learn to reconstruct MR images of multiple anatomies and orientations. Finally, we propose a method for pre-training reconstructors with simulated phantom data in settings with limited clinically acquired datasets and compute capabilities. Our results provide a potential path towards the clinical adaptation of accelerated MRI.
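Training with variable acceleration factors could be sketched as below, where a fresh undersampling mask is drawn per example; the simplified 1D Cartesian mask construction is an assumption for illustration, not the paper's exact sampling scheme:

```python
import numpy as np

def undersampling_mask(n_lines, acceleration, center_fraction=0.08, rng=None):
    """1D Cartesian undersampling mask: keep all low-frequency (center)
    k-space lines plus a random subset, so that roughly 1/acceleration
    of all lines are sampled."""
    rng = rng or np.random.default_rng()
    mask = np.zeros(n_lines, dtype=bool)
    n_center = int(n_lines * center_fraction)
    start = (n_lines - n_center) // 2
    mask[start:start + n_center] = True
    n_remaining = max(n_lines // acceleration - n_center, 0)
    mask[rng.choice(np.flatnonzero(~mask), n_remaining, replace=False)] = True
    return mask

# Variable-acceleration training: draw a new factor for every example.
for _ in range(4):
    acc = np.random.choice([4, 6, 8])
    mask = undersampling_mask(n_lines=320, acceleration=acc)
    # kspace_undersampled = kspace * mask[None, :]  # applied per readout line
```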
pystacked implements stacked generalization (Wolpert, 1992) for regression and binary classification via Python's scikit-learn. Stacking combines multiple supervised machine learners (the "base" or "level-0" learners) into a single learner. The currently supported base learners include regularized regression, random forests, gradient boosted trees, support vector machines, and feed-forward neural nets (multi-layer perceptrons). pystacked can also be used as a "regular" machine learning program to fit a single base learner, and thus provides an easy-to-use API for scikit-learn's machine learning algorithms.
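The scikit-learn stacking machinery that such a program wraps can be sketched roughly as follows; this is a sketch of the underlying scikit-learn API, not pystacked's actual internals:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base ("level-0") learners, combined by a final ("level-1") learner.
stack = StackingRegressor(
    estimators=[
        ("lasso", LassoCV()),
        ("rf", RandomForestRegressor(random_state=0)),
        ("mlp", MLPRegressor(max_iter=2000, random_state=0)),
    ],
    final_estimator=RidgeCV(),
)
stack.fit(X_train, y_train)
print(f"held-out R^2: {stack.score(X_test, y_test):.3f}")
```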