The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices, or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% used standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
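As an illustration of the patch-based training strategy most respondents used for oversized samples, here is a minimal sketch assuming a 3D NumPy volume; the function and parameter names are ours, not taken from any surveyed solution.

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size sub-volumes from an image that is too
    large to process at once (the issue 43% of respondents reported)."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        corner = [int(rng.integers(0, s - p + 1))
                  for s, p in zip(volume.shape, patch_size)]
        window = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[window])
    return np.stack(patches)

# Example: a 512^3 scan becomes a batch of eight 64^3 training patches.
scan = np.zeros((512, 512, 512), dtype=np.float32)
print(sample_patches(scan).shape)  # (8, 64, 64, 64)
```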
Reinforcement learning agents must generalize beyond their training experience. Prior work has focused mostly on identical training and evaluation environments. Starting from the recently introduced Crafter benchmark, a 2D open-world survival game, we introduce a new set of environments suitable for evaluating agents' generalization to previously unseen (numbers of) objects and their ability to adapt quickly (meta-learning). In Crafter, agents are evaluated by the achievements they unlock (e.g., collecting resources) when trained for 1M steps. We show that current agents struggle to generalize, and introduce novel object-centric agents that improve over strong baselines. Through multiple experiments, we also provide key insights of general interest for future work on Crafter. We show that careful hyperparameter tuning improves the PPO baseline agent by a large margin, and that even feedforward agents can unlock nearly all achievements by relying on the inventory display. We achieve new state-of-the-art performance on the original Crafter environment, and, when trained for 1M steps, our tuned agents unlock nearly all achievements. We show that recurrent PPO agents improve over feedforward ones even when inventory information is removed. We introduce CrafterOOD, a set of 15 new environments for evaluating OOD generalization. On CrafterOOD, we show that current agents fail to generalize, whereas our novel object-centric agents achieve state-of-the-art OOD generalization while also being interpretable. Our code is publicly available.
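The contrast between feedforward and recurrent PPO agents can be sketched as follows; this is our illustrative PyTorch snippet under assumed shapes, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Illustrative recurrent PPO policy head: an LSTM cell carries
    memory across timesteps, which a feedforward policy lacks."""
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.cell = nn.LSTMCell(hidden, hidden)
        self.policy_head = nn.Linear(hidden, n_actions)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, obs, state):
        x = torch.relu(self.encoder(obs))
        h, c = self.cell(x, state)
        # Action logits, value estimate, and the memory passed onward.
        return self.policy_head(h), self.value_head(h), (h, c)
```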
Photorealistic avatars of human faces have come a long way in recent years, but research in this area has been limited by the lack of publicly available high-quality datasets. In this work, we present Multiface, a new multi-view, high-resolution human face dataset collected from 13 identities for neural face rendering research. We introduce Mugsy, a large-scale multi-camera apparatus that captures high-resolution synchronized video of facial performances. The goal of Multiface is to close the gap in accessibility to high-quality data in academia and to enable research in VR telepresence. Along with the release of the dataset, we conduct ablation studies on the influence of different model architectures on the models' ability to interpolate novel views and expressions. With a conditional VAE model as our baseline, we find that adding a spatial bias, a texture warp field, and residual connections improves performance on novel view synthesis. Our code and data are available at: https://github.com/facebookresearch/multiface
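A minimal sketch of two of the decoder modifications the ablation singles out (a learnable spatial bias plus a residual connection); the block below is our assumption-laden PyTorch illustration, not the released model code.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Illustrative decoder block: a learnable per-pixel (spatial) bias
    is added to the convolution output, and a residual connection
    wraps the whole block."""
    def __init__(self, channels, height, width):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.spatial_bias = nn.Parameter(torch.zeros(1, channels, height, width))

    def forward(self, x):
        return x + torch.relu(self.conv(x) + self.spatial_bias)
```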
In the past decade, we have witnessed the rise of deep learning to dominate the field of artificial intelligence. Advances in artificial neural networks, alongside corresponding advances in hardware accelerators with large memory capacity and the availability of large datasets, have enabled researchers and practitioners to train and deploy sophisticated neural network models that achieve state-of-the-art performance on tasks across several fields spanning computer vision, natural language processing, and reinforcement learning. However, as these neural networks become bigger, more complex, and more widely used, fundamental problems with today's deep learning models become more apparent. State-of-the-art deep learning models are known to suffer from issues that range from poor robustness and inability to adapt to novel task settings, to rigid and inflexible configuration assumptions. Ideas from collective intelligence, in particular concepts from complex systems such as self-organization, emergent behavior, swarm optimization, and cellular systems, tend to produce solutions that are robust, adaptable, and less rigid in their assumptions about the environment configuration. It is therefore natural to see these ideas incorporated into newer deep learning methods. In this review, we provide a historical context of neural network research's involvement with complex systems, and highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence to advance its current capabilities. To facilitate a bi-directional flow of ideas, we also discuss work that utilizes modern deep learning models to help advance complex systems research. We hope this review can serve as a bridge between the complex systems and deep learning communities to facilitate the cross-pollination of ideas and foster new collaborations across disciplines.
Sketching is a natural and effective visual communication medium commonly used in creative processes. Recent developments in deep learning models have drastically improved machines' ability to understand and generate visual content. An exciting area of development explores deep learning approaches to modeling human sketches, opening opportunities for creative applications. This chapter describes three fundamental steps in developing deep-learning-driven creativity support tools that consume and generate sketches: 1) a data collection effort that produced a new paired dataset of sketches and mobile user interfaces; 2) a sketch-based user interface retrieval system adapted from state-of-the-art computer vision techniques (see the sketch after this paragraph); and 3) a conversational sketching system that supports a novel interaction: a natural-language-based sketch/critique authoring process. In this chapter, we survey relevant prior work in both the deep learning and human-computer interaction communities, document the data collection process and the systems' architectures in detail, present qualitative and quantitative results, and paint the landscape of several future research directions in this exciting area.
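As a hedged sketch of step 2 (sketch-based UI retrieval), assuming sketches and interface screenshots have already been embedded into a shared vector space; all names below are illustrative, not the chapter's system.

```python
import numpy as np

def retrieve_interfaces(sketch_embedding, ui_embeddings, k=5):
    """Rank UI screenshots by cosine similarity between the query
    sketch embedding and each interface embedding."""
    q = sketch_embedding / np.linalg.norm(sketch_embedding)
    db = ui_embeddings / np.linalg.norm(ui_embeddings, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:k]  # indices of the top-k interfaces

# Example with random 128-d embeddings for 1,000 interfaces.
rng = np.random.default_rng(0)
top = retrieve_interfaces(rng.normal(size=128), rng.normal(size=(1000, 128)))
```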
Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.
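In symbols (our notation, following the paper's description rather than quoting it), the latent dynamics combine a deterministic recurrent path with a stochastic transition:

```latex
\begin{aligned}
h_t &= f(h_{t-1}, s_{t-1}, a_{t-1}) && \text{deterministic path (e.g., a recurrent cell)} \\
s_t &\sim p(s_t \mid h_t)           && \text{stochastic transition} \\
o_t &\sim p(o_t \mid h_t, s_t), \quad r_t \sim p(r_t \mid h_t, s_t) && \text{observation and reward models}
\end{aligned}
```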
A generative recurrent neural network is quickly trained in an unsupervised manner to model popular reinforcement learning environments through compressed spatiotemporal representations. The world model's extracted features are fed into compact and simple policies trained by evolution, achieving state-of-the-art results in various environments. We also train our agent entirely inside of an environment generated by its own internal world model, and transfer this policy back into the actual environment. Interactive version of this paper: https://worldmodels.github.io (32nd Conference on Neural Information Processing Systems, NIPS 2018).
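The evolved policies here are deliberately tiny; a hedged sketch (our names, not the paper's code) of a linear controller that maps the world model's compressed features to an action, leaving only a small parameter vector for an evolution strategy such as CMA-ES to optimize:

```python
import numpy as np

def controller_action(z, h, W, b):
    """Linear controller: the latent code z and the RNN hidden state h
    are concatenated and mapped to an action. W and b are the only
    parameters the evolution strategy needs to optimize."""
    return np.tanh(W @ np.concatenate([z, h]) + b)

# Example: 32-d latent, 256-d hidden state, 3-d continuous action.
z, h = np.zeros(32), np.zeros(256)
W, b = np.zeros((3, 32 + 256)), np.zeros(3)
print(controller_action(z, h, W, b))  # [0. 0. 0.]
```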
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. It then determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
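The two-stage decision rule can be restated schematically as below; the helper functions and the set-based invariant representation are our simplification, not INVALIDATOR's actual interfaces.

```python
def violates(patch_invs, correct_invs):
    # Simplified semantics: the patch fails to satisfy some invariant
    # that correct versions of the program are expected to uphold.
    return not correct_invs <= patch_invs

def maintains(patch_invs, error_invs):
    # Simplified semantics: the patch still satisfies an invariant
    # that characterizes the original buggy behavior.
    return bool(patch_invs & error_invs)

def classify_patch(patch_invs, correct_invs, error_invs, syntax_classifier, patch):
    """Stage 1: semantic reasoning over inferred invariants.
    Stage 2: fall back to the syntax-based learned classifier."""
    if violates(patch_invs, correct_invs) or maintains(patch_invs, error_invs):
        return "overfitting"
    return syntax_classifier(patch)  # model trained on labeled patches
```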
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
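In symbols (our notation): posterior collapse means the posterior of the latent variable equals its prior, and the paper's equivalence ties this to the likelihood not depending on the latent at all:

```latex
\underbrace{p_\theta(z \mid x) = p(z)}_{\text{posterior collapse}}
\quad \Longleftrightarrow \quad
\underbrace{p_\theta(x \mid z) = p_\theta(x)\ \text{for all } z}_{\text{latent variable non-identifiability}}
```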