Self-supervised, category-agnostic segmentation of real-world images is a challenging open problem in computer vision. Here, we show how to learn static grouping priors from motion self-supervision by building on the cognitive-science concept of a Spelke object: a collection of physical stuff that moves together. We introduce the Excitatory-Inhibitory Segment Extraction Network (EISEN), which learns to extract pairwise affinity graphs for static scenes from motion-based training signals. EISEN then produces segments from the affinities using a novel graph propagation and competition network. During training, objects that undergo correlated motion (such as a robot arm and the object it is moving) are decoupled by a bootstrapping process: EISEN explains away the motion of objects it has already learned to segment. We show that EISEN achieves substantial improvements in self-supervised image segmentation on challenging synthetic and real-world robotics datasets.
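As a rough illustration of the grouping idea (not the EISEN architecture itself, which is a learned neural network), the following minimal NumPy sketch turns a pairwise affinity matrix into segment labels by alternating propagation along affinities with winner-take-all competition; all names and values are illustrative.

```python
import numpy as np

def segments_from_affinities(affinity, num_iters=10):
    """Toy illustration only: derive segment labels from a pairwise affinity
    matrix by alternating propagation (high-affinity neighbors reinforce each
    other's assignments) and competition (each node softly commits to its
    strongest candidate segment)."""
    n = affinity.shape[0]
    labels = np.eye(n)                       # every node starts as its own segment
    for _ in range(num_iters):
        labels = affinity @ labels           # propagation along affinities
        labels = np.exp(labels - labels.max(axis=1, keepdims=True))
        labels /= labels.sum(axis=1, keepdims=True)   # softmax competition
    return labels.argmax(axis=1)

# Two groups of mutually high-affinity nodes resolve into two segments.
A = np.block([[np.full((5, 5), 0.9), np.full((5, 5), 0.1)],
              [np.full((5, 5), 0.1), np.full((5, 5), 0.9)]])
print(segments_from_affinities(A))           # -> [0 0 0 0 0 5 5 5 5 5]
```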
Although current vision algorithms excel at many challenging tasks, it is unclear how well they understand the physical dynamics of real-world environments. Here we introduce Physion, a dataset and benchmark for rigorously evaluating the ability to predict how physical scenes will evolve over time. Our dataset features realistic simulations of a wide range of physical phenomena, including rigid- and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, and thus provides a more comprehensive challenge than previous benchmarks. We use Physion to benchmark a suite of models that vary in their architecture, learning objective, input-output structure, and training data. In parallel, we obtain precise measurements of human prediction behavior on the same scenarios, allowing us to directly evaluate how well any model approximates human behavior. We find that vision algorithms that learn object-centric representations generally outperform those that do not, yet still fall short of human performance. On the other hand, graph neural networks with direct access to physical state information perform better and make predictions that are more similar to those made by humans. These results suggest that extracting physical representations of scenes is the main bottleneck to achieving human-level and human-like physical understanding in vision algorithms. We have publicly released all data and code to facilitate using Physion to benchmark additional models in a fully reproducible manner, enabling systematic evaluation of progress toward vision algorithms that understand the physical environment as robustly as people do.
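Since the benchmark scores models both on absolute accuracy and on similarity to measured human predictions, here is a minimal sketch of one simple way such a comparison could be computed per stimulus; the numbers are made up for illustration and the actual Physion evaluation protocol is defined in the paper and code release.

```python
import numpy as np

# Hypothetical per-stimulus data: fraction of human participants predicting the
# queried physical outcome, a model's predicted probability for the same event,
# and the simulated ground truth. (Illustrative values only.)
human_rate   = np.array([0.90, 0.15, 0.60, 0.80, 0.30, 0.55])
model_prob   = np.array([0.85, 0.25, 0.40, 0.70, 0.35, 0.65])
ground_truth = np.array([1,    0,    1,    1,    0,    1])

accuracy  = ((model_prob > 0.5) == ground_truth).mean()   # absolute performance
agreement = np.corrcoef(human_rate, model_prob)[0, 1]     # similarity to human behavior
print(f"accuracy={accuracy:.2f}, human-model correlation={agreement:.2f}")
```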
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation. TDW enables the simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments. Unique properties include: real-time near-photo-realistic image rendering; a library of objects and environments, together with routines for their customization; generative procedures for efficiently building classes of new environments; high-fidelity audio rendering; realistic physical interactions for a variety of material types, including cloths, liquids, and deformable objects; customizable avatars that embody AI agents; and support for human interaction via VR devices. TDW's API enables multiple agents to interact within a simulation and returns a range of sensor and physics data representing the state of the world. We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science, including multi-modal physical scene understanding, physical dynamics prediction, multi-agent interaction, models that learn like a child, and attention studies in humans and neural networks.
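The abstract describes a command-driven API: a controller sends commands to the simulation and receives sensor and physics data in return. The stub below only illustrates that interaction pattern; the class and command names are placeholders and are not TDW's actual API, which is documented in the ThreeDWorld repository.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class FakeSimulation:
    """Placeholder stand-in for a command-driven simulation backend (not TDW's API)."""
    frame: int = 0
    objects: Dict[int, Any] = field(default_factory=dict)

    def communicate(self, commands: List[dict]) -> dict:
        for cmd in commands:
            if cmd["type"] == "add_object":
                self.objects[cmd["id"]] = {"position": cmd["position"]}
        self.frame += 1
        # A real simulator would also return rendered images, audio, collisions, etc.
        return {"frame": self.frame, "objects": self.objects}

sim = FakeSimulation()
obs = sim.communicate([
    {"type": "add_object", "id": 1, "position": [0.0, 0.5, 0.0]},
    {"type": "add_object", "id": 2, "position": [1.0, 0.5, 0.0]},
])
for _ in range(3):                 # each agent could issue its own commands per frame
    obs = sim.communicate([])
print(obs["frame"], len(obs["objects"]))
```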
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System to classify DR grading, localize lesion areas, and provide visual explanations; (ii) DRG-Expert-Interaction to receive feedback from expert users and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach can be robust given a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
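The abstract mentions an attention strategy over low- and high-level features for selecting lesion information. The PyTorch sketch below is a generic soft-attention fusion of one low-level and one high-level feature map, written only to illustrate the idea; it is not the DRG-Net architecture, and all module names, channel counts, and shapes are placeholders.

```python
import torch
import torch.nn as nn

class FeatureAttentionFusion(nn.Module):
    """Generic sketch: spatial attention on low- and high-level feature maps, then fusion."""
    def __init__(self, low_ch: int, high_ch: int, out_ch: int):
        super().__init__()
        self.low_attn = nn.Conv2d(low_ch, 1, kernel_size=1)    # per-pixel saliency
        self.high_attn = nn.Conv2d(high_ch, 1, kernel_size=1)
        self.project = nn.Conv2d(low_ch + high_ch, out_ch, kernel_size=1)

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse high-level map to the low-level resolution.
        high_feat = nn.functional.interpolate(
            high_feat, size=low_feat.shape[-2:], mode="bilinear", align_corners=False)
        low = low_feat * torch.sigmoid(self.low_attn(low_feat))       # attend to fine lesion cues
        high = high_feat * torch.sigmoid(self.high_attn(high_feat))   # attend to global context
        return self.project(torch.cat([low, high], dim=1))

fusion = FeatureAttentionFusion(low_ch=64, high_ch=256, out_ch=128)
low = torch.randn(2, 64, 56, 56)
high = torch.randn(2, 256, 14, 14)
print(fusion(low, high).shape)   # torch.Size([2, 128, 56, 56])
```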
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy "surrogate" algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques.
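For concreteness, one representative bound of the input-output mutual-information type (due to Xu and Raginsky, 2017; restated here as background rather than as a result of this paper) says that if the loss $\ell(w, Z)$ is $\sigma$-sub-Gaussian under the data distribution for every hypothesis $w$, and the learning algorithm maps an $n$-sample dataset $S$ to an output $W$, then

$$\bigl|\mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr]\bigr| \;\le\; \sqrt{\frac{2\sigma^2\, I(W;S)}{n}},$$

where $L_\mu$ and $L_S$ denote the population and empirical risks and $I(W;S)$ is the mutual information between the algorithm's output and its training sample. The negative results above show that bounds of this shape, along with their conditional, PAC-Bayes, and noisy-surrogate variants, cannot certify the minimax rates of gradient descent for stochastic convex optimization.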
We prove that, for a sufficiently small universal constant $c>0$, a random set of $c d^2/\log^4(d)$ independent Gaussian random points in $\mathbb{R}^d$ lies on a common ellipsoid with high probability. This nearly establishes a conjecture of~\cite{SaundersonCPW12}, within logarithmic factors. The latter conjecture has attracted significant attention over the past decade, due to its connections to machine learning and sum-of-squares lower bounds for certain statistical problems.
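As background (a standard formulation of ellipsoid fitting, not quoted from the paper): points $x_1, \dots, x_n \in \mathbb{R}^d$ lie on a common origin-centered ellipsoid exactly when the semidefinite feasibility problem

$$\exists\, \Lambda \succeq 0 \ \text{ such that } \ x_i^\top \Lambda\, x_i = 1 \ \text{ for all } i = 1, \dots, n$$

has a solution. The conjecture of~\cite{SaundersonCPW12} asserts that for i.i.d. standard Gaussian points this system remains feasible with high probability for $n$ as large as roughly $d^2/4$; the result above establishes feasibility for $n = c\, d^2/\log^4(d)$.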
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
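To make two of the surveyed practices concrete (K-fold cross-validation on the training set, followed by ensembling of identically configured per-fold models), here is a minimal scikit-learn sketch; the dataset and classifier are placeholders rather than anything used in the surveyed challenges.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)        # placeholder for an imaging dataset
X_train, y_train, X_test = X[:400], y[:400], X[400:]

fold_models, fold_scores = [], []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train[train_idx], y_train[train_idx])
    fold_scores.append(model.score(X_train[val_idx], y_train[val_idx]))
    fold_models.append(model)                      # keep each fold's model for the ensemble

print("cross-validation accuracy:", np.mean(fold_scores))
# Ensemble of "multiple identical models": average the per-fold predicted probabilities.
ensemble_prob = np.mean([m.predict_proba(X_test)[:, 1] for m in fold_models], axis=0)
print("ensembled test predictions:", (ensemble_prob > 0.5).astype(int)[:10])
```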
Machine learning (ML) has found broad applicability in quantum information science in topics as diverse as experimental design, state classification, and even studies on quantum foundations. Here, we experimentally realize an approach for defining custom prior distributions that are automatically tuned using ML for use with Bayesian quantum state estimation methods. Previously, researchers have looked to Bayesian quantum state tomography due to its unique advantages like natural uncertainty quantification, the return of reliable estimates under any measurement condition, and minimal mean-squared error. However, practical challenges related to long computation times and conceptual issues concerning how to incorporate prior knowledge most suitably can overshadow these benefits. Using both simulated and experimental measurement results, we demonstrate that ML-defined prior distributions reduce net convergence times and provide a natural way to incorporate both implicit and explicit information directly into the prior distribution. These results constitute a promising path toward practical implementations of Bayesian quantum state tomography.
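To make the Bayesian estimation setting concrete, here is a minimal single-qubit sketch: draw candidate states from a prior over the Bloch ball, weight them by the likelihood of simulated Pauli-measurement counts, and report the posterior-mean state. The uniform prior and measurement settings are illustrative stand-ins only; the paper's contribution is replacing such hand-chosen priors with ML-tuned ones.

```python
import numpy as np

rng = np.random.default_rng(0)
paulis = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]])

def bloch_to_rho(r):
    """Single-qubit density matrix with Bloch vector r."""
    return 0.5 * (np.eye(2) + sum(r[k] * paulis[k] for k in range(3)))

# Prior: Bloch vectors drawn uniformly from the unit ball (an illustrative choice).
samples = rng.normal(size=(20000, 3))
samples *= rng.random((20000, 1)) ** (1 / 3) / np.linalg.norm(samples, axis=1, keepdims=True)

# Simulated data: counts of "+1" outcomes from N measurements along each Pauli axis.
r_true, N = np.array([0.6, 0.0, 0.7]), 500
p_true = 0.5 * (1 + r_true)                        # Born rule: P(+1 along axis k)
counts = rng.binomial(N, p_true)

# Posterior weights: binomial likelihood of the observed counts for each prior sample.
p_samples = 0.5 * (1 + samples)
log_w = (counts * np.log(p_samples) + (N - counts) * np.log(1 - p_samples)).sum(axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

r_est = (w[:, None] * samples).sum(axis=0)         # posterior-mean Bloch vector
print("estimated rho:\n", np.round(bloch_to_rho(r_est), 3))
```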
The Forster transform is a method of regularizing a dataset by placing it in {\em radial isotropic position} while maintaining some of its essential properties. Forster transforms have played a key role in a diverse range of settings spanning computer science and functional analysis. Prior work had given {\em weakly} polynomial time algorithms for computing Forster transforms, when they exist. Our main result is the first {\em strongly polynomial time} algorithm to compute an approximate Forster transform of a given dataset or certify that no such transformation exists. By leveraging our strongly polynomial Forster algorithm, we obtain the first strongly polynomial time algorithm for {\em distribution-free} PAC learning of halfspaces. This learning result is surprising because {\em proper} PAC learning of halfspaces is {\em equivalent} to linear programming. Our learning approach extends to give a strongly polynomial halfspace learner in the presence of random classification noise and, more generally, Massart noise.
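For reference (a standard definition, under one common normalization): an invertible matrix $A$ places nonzero points $x_1, \dots, x_n \in \mathbb{R}^d$ in radial isotropic position when the unit vectors

$$y_i = \frac{A x_i}{\lVert A x_i \rVert_2}, \qquad i = 1, \dots, n,$$

satisfy

$$\sum_{i=1}^{n} y_i y_i^\top = \frac{n}{d}\, I_d.$$

A Forster transform is such a matrix $A$; the algorithmic task addressed above is to compute an approximate $A$ in strongly polynomial time or to certify that no such transform exists.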
In this paper, we explore the use of metric learning to embed Windows PE files in a low-dimensional vector space for downstream use in a variety of applications, including malware detection, family classification, and malware attribute tagging. Specifically, we enrich labeling on malicious and benign PE files using computationally expensive, disassembly-based malicious capabilities. Using these capabilities, we derive several different types of metric embeddings utilizing an embedding neural network trained via contrastive loss, Spearman rank correlation, and combinations thereof. We then examine performance on a variety of transfer tasks performed on the EMBER and SOREL datasets, demonstrating that for several tasks, low-dimensional, computationally efficient metric embeddings maintain performance with little decay, which offers the potential to quickly retrain for a variety of transfer tasks at significantly reduced storage overhead. We conclude with an examination of practical considerations for the use of our proposed embedding approach, such as robustness to adversarial evasion and introduction of task-specific auxiliary objectives to improve performance on mission critical tasks.
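The abstract mentions training the embedding network with a contrastive loss (among other objectives). Below is a minimal PyTorch sketch of the classic pairwise contrastive loss over feature-vector embeddings; the encoder size, margin, pair labels, and the EMBER-style input dimension are assumptions for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Tiny placeholder encoder mapping raw PE feature vectors to a low-dimensional embedding."""
    def __init__(self, in_dim: int = 2381, emb_dim: int = 32):  # 2381: assumed EMBER-style feature size
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Classic pairwise contrastive loss: pull same-label pairs together,
    push different-label pairs at least `margin` apart."""
    dist = torch.norm(z1 - z2, dim=1)
    pos = same_label * dist.pow(2)
    neg = (1 - same_label) * torch.clamp(margin - dist, min=0).pow(2)
    return (pos + neg).mean()

# Toy training step on random stand-in features.
model = EmbeddingNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1, x2 = torch.randn(64, 2381), torch.randn(64, 2381)
same = torch.randint(0, 2, (64,)).float()          # 1 if the pair shares a label/capability
loss = contrastive_loss(model(x1), model(x2), same)
loss.backward()
opt.step()
print(float(loss))
```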