Masked autoencoders are scalable vision learners: as the title of MAE \cite{he2022masked} suggests, self-supervised learning (SSL) in vision may follow a trajectory similar to that in NLP. Specifically, generative pretext tasks based on masked prediction (e.g., BERT) have become the de facto standard SSL practice in NLP. By contrast, in vision, early attempts at generative methods were eclipsed by their discriminative counterparts (e.g., contrastive learning); the success of masked image modeling, however, has revived the masked autoencoder (often termed a denoising autoencoder in the past). As a milestone bridging the gap with BERT in NLP, the masked autoencoder has attracted unprecedented attention for SSL in vision and beyond. This work conducts a comprehensive survey of masked autoencoders to shed light on a promising direction for SSL. As the first to review SSL with masked autoencoders, this work focuses on their application in vision by discussing their historical development, recent progress, and implications for diverse applications.
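
To make the masked-prediction pretext task at the heart of this line of work concrete, here is a minimal PyTorch sketch of one masked-autoencoding training step. It is illustrative rather than taken from any surveyed paper: the 75% mask ratio follows common MAE practice, and the encoder/decoder interfaces are assumptions.

    import torch

    def mae_step(patches, encoder, decoder, mask_ratio=0.75):
        """One masked-autoencoding step: hide most patches, reconstruct them.

        patches: (batch, num_patches, patch_dim) flattened image patches.
        encoder/decoder: assumed patch-level networks (hypothetical interfaces).
        """
        b, n, d = patches.shape
        num_keep = int(n * (1 - mask_ratio))
        perm = torch.rand(b, n).argsort(dim=1)      # random patch order per image
        keep, masked = perm[:, :num_keep], perm[:, num_keep:]
        visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, d))
        latent = encoder(visible)                   # encode visible patches only
        recon = decoder(latent, keep, n)            # assumed to return (b, n, d)
        target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, d))
        pred = torch.gather(recon, 1, masked.unsqueeze(-1).expand(-1, -1, d))
        return ((pred - target) ** 2).mean()        # loss on masked patches only

    # Toy stand-ins just to exercise the function (real models use ViT blocks):
    enc = lambda v: v
    dec = lambda latent, keep, n: latent.mean(1, keepdim=True).expand(-1, n, -1)
    print(mae_step(torch.randn(2, 16, 8), enc, dec))
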
Labeling a large set of data is expensive. Active learning aims to tackle this problem by asking for annotations of only the most informative data from the unlabeled set. We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We discover that the loss of a simple self-supervised pretext task, such as rotation prediction, is closely correlated with the downstream task loss. Before the active learning iterations, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and split into batches by their pretext task losses. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch to be annotated. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performance on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets and can be an effective solution to the cold-start problem, in which active learning performance is affected by the randomly sampled initial labeled set.
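
The batching-and-querying loop described above can be sketched as follows (a paraphrase under assumed interfaces, not the authors' code): the unlabeled pool is sorted by pretext-task loss and split into one batch per iteration, and within each batch the main-task model queries the most uncertain samples.

    import numpy as np

    def select_for_annotation(pool, pretext_loss, main_task_uncertainty,
                              num_iterations, budget_per_iter):
        """Sketch of pretext-loss-guided active learning.

        pool: unlabeled sample indices.
        pretext_loss: index -> loss of the pretext task (e.g. rotation prediction).
        main_task_uncertainty: index -> current main-task model uncertainty.
        """
        # Sort the pool by pretext loss (hardest first) and split into batches.
        order = sorted(pool, key=pretext_loss, reverse=True)
        batches = np.array_split(np.array(order), num_iterations)
        selected = []
        for batch in batches:
            # Query the samples the main model is least certain about.
            ranked = sorted(batch, key=main_task_uncertainty, reverse=True)
            selected.append(list(ranked[:budget_per_iter]))
        return selected
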
The object grounding task aims to locate a target object in an image through verbal communication. Understanding human commands is an important process for effective human-robot communication. However, it is challenging because human commands can be ambiguous and erroneous. This paper aims to disambiguate human referring expressions by allowing an agent to ask relevant questions based on semantic data obtained from scene graphs. We test whether our agent can use the relations between objects in a scene graph to ask semantically relevant questions that disambiguate the original user command. In this paper, we present Incremental Grounding using Scene Graphs (IGSG), a disambiguation model that uses semantic data from an image scene graph and a language scene graph to ground objects based on human commands. Compared to the baseline, IGSG shows promising results in complex real-world scenes where there are multiple identical target objects. IGSG can effectively disambiguate ambiguous or erroneous referring expressions by asking disambiguating questions back to the user.
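
Purely to illustrate the question-asking step, the toy sketch below (not the IGSG implementation) shows how relations from a scene graph could separate otherwise identical candidate objects and yield a disambiguating question.

    def disambiguating_question(candidates, scene_relations):
        """Toy illustration: find a relation that splits identical candidates.

        candidates: ids of objects that all match the referring expression.
        scene_relations: (subject_id, predicate, object_label) triples.
        """
        for cid in candidates:
            own = {(p, o) for s, p, o in scene_relations if s == cid}
            rest = {(p, o) for s, p, o in scene_relations
                    if s in candidates and s != cid}
            unique = own - rest                 # relations only this candidate has
            if unique:
                predicate, obj = next(iter(unique))
                return f"Do you mean the one {predicate} the {obj}?"
        return "Which one do you mean?"         # no distinguishing relation found

    # Two identical cups, one on the table and one on the shelf:
    relations = [(1, "on", "table"), (2, "on", "shelf")]
    print(disambiguating_question([1, 2], relations))
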
This paper investigates a new online learning problem with doubly-streaming data, where the data streams are described by feature spaces that constantly evolve, with new features emerging and old features fading away. The challenge of this problem is two-fold: 1) data samples ceaselessly flow in, possibly carrying patterns that shift over time, so the learner must be updated on the fly; 2) newly emerging features are described by very few samples, resulting in weak learners that tend to make erroneous predictions. A plausible idea for overcoming these challenges is to establish a relationship between the pre- and post-evolution feature spaces, so that an online learner can leverage the knowledge learned from the old features to improve learning performance on the new features. Unfortunately, this idea does not scale up to high-dimensional media streams with complex feature interplay, which suffer from a trade-off between onlineness (which biases toward shallow learners) and expressiveness (which requires deep learners). Motivated by this, we propose a novel OLD^3S paradigm, in which a shared latent subspace is discovered to summarize the information from the old feature space, building an intermediate feature mapping relationship. A key trait of OLD^3S is to treat the model capacity as learnable semantics, jointly yielding the optimal model depth and parameters in accordance with the complexity and non-linearity of the input data streams, in an online fashion. Both theoretical analyses and empirical studies substantiate the viability and effectiveness of our proposal.
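
A much-simplified sketch of the shared-latent-subspace idea (my own reading, under assumed shapes and a synthetic data stream, not the OLD^3S code): while both feature spaces briefly overlap, an encoder from the new features is fitted so that its latent code reconstructs the old features; once the old features vanish, the latent code stands in for them.

    import torch
    import torch.nn as nn

    class SharedLatentBridge(nn.Module):
        """Map new features to a latent code that also explains the old features."""
        def __init__(self, new_dim, old_dim, latent_dim=32):
            super().__init__()
            self.encode_new = nn.Sequential(
                nn.Linear(new_dim, latent_dim), nn.ReLU(),
                nn.Linear(latent_dim, latent_dim))
            self.decode_old = nn.Linear(latent_dim, old_dim)

        def forward(self, x_new):
            return self.decode_old(self.encode_new(x_new))

    def overlap_stream(steps=100):
        # Placeholder stream; in practice pairs arrive from the evolving source.
        for _ in range(steps):
            yield torch.randn(1, 20), torch.randn(1, 50)  # (new, old) features

    bridge = SharedLatentBridge(new_dim=20, old_dim=50)
    opt = torch.optim.SGD(bridge.parameters(), lr=1e-2)
    for x_new, x_old in overlap_stream():
        loss = ((bridge(x_new) - x_old) ** 2).mean()  # reconstruct old features
        opt.zero_grad()
        loss.backward()
        opt.step()
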
Machine learning has long been regarded as a black box for predicting combustion chemical kinetics, owing to its extremely large number of parameters and the lack of evaluation standards and reproducibility. The present work aims to understand two basic questions regarding the deep neural network (DNN) method: what data the DNN needs and how general the DNN method is. Sampling and preprocessing determine the DNN training dataset and further affect DNN prediction ability. The present work proposes using the Box-Cox transformation (BCT) to preprocess the combustion data. In addition, this work compares different sampling methods, with and without preprocessing, including the Monte Carlo method, manifold sampling, a generative neural network method (cycle-GAN), and a newly proposed multi-scale sampling. Our results reveal that a DNN trained on manifold data can capture the chemical kinetics in limited configurations but cannot remain robust to perturbations, which are inevitable for a DNN coupled with a flow field. The Monte Carlo and cycle-GAN samplings can cover a wider phase space but fail to capture small-scale intermediate species, yielding poor prediction results. A three-layer DNN based on the multi-scale method, trained without any specific flame simulation data, allows chemical kinetics to be predicted in various scenarios and remains stable during temporal evolution. This single DNN is readily implemented with several CFD codes and is validated in various combustors, including (1) zero-dimensional autoignition, (2) a one-dimensional freely propagating flame, (3) a two-dimensional jet flame with a triple-flame structure, and (4) a three-dimensional turbulent lifted flame. The results demonstrate the satisfactory accuracy and generalization ability of the pre-trained DNN. Fortran and Python versions of the DNN and example code are attached in the supplementary material for reproducibility.
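
For reference, the Box-Cox transformation used for preprocessing is the standard power transform below; the exponent chosen here is illustrative, not necessarily the paper's setting.

    import numpy as np

    def box_cox(x, lam=0.1):
        """Box-Cox transform: y = (x**lam - 1) / lam, with log(x) as the lam=0 limit.

        Species mass fractions span many orders of magnitude; the transform
        compresses that range so small-scale intermediates are not drowned out.
        """
        x = np.asarray(x, dtype=float)
        if lam == 0.0:
            return np.log(x)
        return (x ** lam - 1.0) / lam

    # Mass fractions spanning seven decades map into a narrow, trainable range:
    print(box_cox(np.array([1e-8, 1e-4, 1e-1])))
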
This work attempts to provide a plausible theoretical framework that aims to interpret modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that, for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding rate difference between the whole dataset and the average over all the subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate reduction objective naturally leads to a multi-layer deep network, named ReduNet, which shares common characteristics of modern deep networks. The deep layered architecture, linear and nonlinear operators, and even the parameters of the network are all explicitly constructed layer-by-layer via forward propagation, although they are amenable to fine-tuning via back propagation. All components of the obtained "white-box" network have precise optimization, statistical, and geometric interpretations. Moreover, all linear operators of the so-derived network naturally become multi-channel convolutions when we enforce classification to be rigorously shift-invariant. The derivation in the invariant setting suggests a trade-off between sparsity and invariance, and also indicates that such a deep convolutional network is significantly more efficient to construct and learn in the spectral domain. Our preliminary simulations and experiments clearly verify the effectiveness of both the rate reduction objective and the associated ReduNet. All code and data are available at \url{https://github.com/ma-lab-berkeley}.
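
The rate reduction objective has a closed form, and the numpy sketch below evaluates the coding rate difference R(Z) minus the class-weighted average of R(Z_j), with R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T), following the published formula; treat it as a reference computation rather than the released ReduNet code.

    import numpy as np

    def rate_reduction(Z, labels, eps=0.5):
        """Coding rate difference between the whole dataset and its class subsets.

        Z: (d, n) feature matrix, one column per sample.
        labels: length-n integer class assignments.
        """
        d, n = Z.shape
        def coding_rate(W, m):
            # R = 1/2 * logdet(I + d / (m * eps^2) * W W^T)
            return 0.5 * np.linalg.slogdet(
                np.eye(d) + (d / (m * eps ** 2)) * (W @ W.T))[1]
        whole = coding_rate(Z, n)
        parts = sum(
            (Z[:, labels == j].shape[1] / n)
            * coding_rate(Z[:, labels == j], Z[:, labels == j].shape[1])
            for j in np.unique(labels))
        return whole - parts

    Z = np.random.randn(8, 100)
    Z /= np.linalg.norm(Z, axis=0)              # features normalized to the sphere
    print(rate_reduction(Z, np.random.randint(0, 4, size=100)))
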
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
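
A compact numpy sketch of Principal Component Pursuit via the standard augmented-Lagrangian iteration (singular value thresholding for the nuclear norm, soft thresholding for the ℓ1 norm); the step-size heuristic and stopping rule are common defaults, not necessarily the paper's exact choices.

    import numpy as np

    def shrink(X, tau):
        """Soft thresholding: the proximal operator of the l1 norm."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def pcp(M, max_iter=500, tol=1e-7):
        """Split M into low-rank L plus sparse S by Principal Component Pursuit."""
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))           # standard weight on the l1 term
        mu = 0.25 * m * n / np.abs(M).sum()      # common step-size heuristic
        L, S, Y = (np.zeros_like(M) for _ in range(3))
        for _ in range(max_iter):
            # Singular value thresholding: proximal step for the nuclear norm.
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * shrink(sig, 1.0 / mu)) @ Vt
            S = shrink(M - L + Y / mu, lam / mu)
            residual = M - L - S
            Y += mu * residual
            if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
                break
        return L, S
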
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
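
A sketch of the kind of relevance query the study describes; the prompt wording and parameters here are guesses, and the call uses the legacy (pre-1.0) openai-python completion interface through which text-davinci-003 was served.

    import openai  # legacy (<1.0) client interface

    PROMPT = """You are assessing corporate lobbying relevance.
    Company: {company}
    Bill summary: {bill}
    Is this bill relevant to the company? Answer YES or NO, then give a
    one-sentence explanation and a confidence level from 0 to 100."""

    def assess_relevance(company, bill_summary):
        # Hypothetical wrapper; the study's actual prompt and parsing differ.
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=PROMPT.format(company=company, bill=bill_summary),
            max_tokens=200,
            temperature=0.0,  # deterministic output for benchmarking
        )
        return response["choices"][0]["text"].strip()
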
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
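
The Brenier-map construction can be made concrete with an input convex neural network (ICNN), whose gradient is a monotone map; below is a minimal PyTorch sketch with illustrative sizes, not the paper's exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ICNN(nn.Module):
        """Input convex neural network: the scalar output is convex in x.

        Convexity holds because every weight acting on the hidden state is
        clamped non-negative and softplus is convex and non-decreasing.
        """
        def __init__(self, dim, hidden=64, depth=3):
            super().__init__()
            self.first = nn.Linear(dim, hidden)
            self.x_skips = nn.ModuleList(
                nn.Linear(dim, hidden) for _ in range(depth - 1))
            self.z_layers = nn.ModuleList(
                nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1))
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):
            z = F.softplus(self.first(x))
            for skip, zl in zip(self.x_skips, self.z_layers):
                z = F.softplus(skip(x) + F.linear(z, zl.weight.clamp(min=0)))
            return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)

    def brenier_map(potential, x):
        """A Brenier map is the gradient of a convex potential."""
        x = x.detach().requires_grad_(True)
        return torch.autograd.grad(potential(x).sum(), x)[0]

    phi = ICNN(dim=2)
    print(brenier_map(phi, torch.randn(5, 2)))  # a monotone map of 5 points
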
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
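
Purely as an illustration of the record a motion-forecasting scenario supplies, here is a hypothetical sketch of a scored actor's track history; the field names are invented for exposition and do not reflect the official av2 toolkit schema.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class TrackState:
        """One observed timestep of an actor (hypothetical layout)."""
        position_m: Tuple[float, float]    # map-frame x, y in meters
        heading_rad: float                 # yaw angle in the map frame
        velocity_mps: Tuple[float, float]  # x, y velocity components

    @dataclass
    class ScoredActorTrack:
        """History a forecasting model receives; the task is to predict the future."""
        actor_id: str
        category: str              # e.g. "vehicle", "pedestrian", "cyclist"
        history: List[TrackState]  # observed past states at a fixed frequency
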