We introduce ViewNeRF, a Neural Radiance Field-based viewpoint estimation method that learns to predict category-level viewpoints directly from images during training. While NeRF is usually trained with ground-truth camera poses, multiple extensions have been proposed to reduce the need for this expensive supervision. Nonetheless, most of these methods still struggle in complex settings with large camera movements, and are restricted to single scenes, i.e. they cannot be trained on a collection of scenes depicting the same object category. To address these issues, our method uses an analysis-by-synthesis approach, combining a conditional NeRF with a viewpoint predictor and a scene encoder in order to produce self-supervised reconstructions for whole object categories. Rather than focusing on high fidelity reconstruction, we target efficient and accurate viewpoint prediction in complex scenarios, e.g. 360° rotation on real data. Our model shows competitive results on synthetic and real datasets, both for single scenes and multi-instance collections.
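The analysis-by-synthesis loop described above can be summarized in a single training step: a scene encoder and a viewpoint predictor both look at the input image, a conditional NeRF renders the encoded scene from the predicted pose, and a reconstruction loss supervises all three modules jointly. The sketch below is a minimal illustration under assumed interfaces; the module signatures, the pose parameterization, and the plain L1 loss are assumptions rather than details from the paper.

```python
import torch

def analysis_by_synthesis_step(image, scene_encoder, viewpoint_predictor, cond_nerf):
    """One self-supervised reconstruction step (hypothetical interfaces).

    scene_encoder(image)       -> per-instance latent code z
    viewpoint_predictor(image) -> camera pose (e.g. azimuth/elevation)
    cond_nerf(z, pose)         -> image rendered from that pose
    """
    z = scene_encoder(image)              # appearance / shape code for this instance
    pose = viewpoint_predictor(image)     # predicted category-level viewpoint
    rendering = cond_nerf(z, pose)        # differentiable volumetric rendering
    return torch.abs(rendering - image).mean()  # reconstruction error drives all modules
```

Because the same conditional NeRF is shared across all instances of a category, the viewpoint predictor can only lower this loss by outputting poses in a consistent, category-level frame.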
Understanding the 3D world without supervision is currently a major challenge in computer vision as the annotations required to supervise deep networks for tasks in this domain are expensive to obtain on a large scale. In this paper, we address the problem of unsupervised viewpoint estimation. We formulate this as a self-supervised learning task, where image reconstruction provides the supervision needed to predict the camera viewpoint. Specifically, we make use of pairs of images of the same object at training time, from unknown viewpoints, to self-supervise training by combining the viewpoint information from one image with the appearance information from the other. We demonstrate that using a perspective spatial transformer allows efficient viewpoint learning, outperforming existing unsupervised approaches on synthetic data, and obtains competitive results on the challenging PASCAL3D+ dataset.
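The spatial-transformer component can be illustrated in isolation: a 3D representation inferred from one image is resampled according to the viewpoint predicted from the other, then projected and compared against that other image. The sketch below uses PyTorch's affine_grid / grid_sample in 3D with a purely affine transform, so it is only an approximation of the perspective spatial transformer mentioned above; the shapes, conventions, and helper name are assumptions.

```python
import torch
import torch.nn.functional as F

def rotate_volume(volume, rotation):
    """Resample a 3D feature volume under a predicted camera rotation.

    volume:   (B, C, D, H, W) occupancy / feature grid
    rotation: (B, 3, 3) rotation matrices derived from the predicted viewpoint
    """
    b = volume.shape[0]
    zeros = torch.zeros(b, 3, 1, device=rotation.device, dtype=rotation.dtype)
    theta = torch.cat([rotation, zeros], dim=2)          # (B, 3, 4), rotation only
    grid = F.affine_grid(theta, list(volume.shape), align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)
```

Projecting the rotated volume along the depth axis (for example with a max or a small learned pooling) yields a 2D reconstruction that the photometric loss can compare to the target image.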
We present a new benchmark dataset, Sapsucker Woods 60 (SSW60), for advancing research on audiovisual fine-grained classification. While our community has made great progress in fine-grained visual classification on images, the audio and video counterparts remain relatively unexplored. To encourage advances in this area, we have carefully constructed the SSW60 dataset so that researchers can classify the same set of categories in three different modalities: images, audio, and video. The dataset covers 60 species of birds and is composed of images from existing datasets together with brand-new, expert-curated audio and video datasets. We thoroughly benchmark audiovisual classification performance and run modality fusion experiments using state-of-the-art transformer methods. Our findings show that audiovisual fusion methods perform better than methods based on images or audio alone for the video classification task. We also present interesting modality transfer experiments, enabled by the unique construction of SSW60 spanning three different modalities. We hope the SSW60 dataset and accompanying baselines encourage research in this fascinating area.
Weakly supervised object localization (WSOL) aims to learn representations that encode object location using only image-level category labels. However, many objects can be labeled at different levels of granularity. Is it an animal, a bird, or a great horned owl? Which image-level labels should we use? In this paper, we study the role of label granularity in WSOL. To facilitate this investigation, we introduce iNatLoc500, a new large-scale fine-grained benchmark dataset for WSOL. Surprisingly, we find that choosing the right training label granularity provides a larger performance boost than choosing the best WSOL algorithm. We also show that changing the label granularity can significantly improve data efficiency.
Each year, thousands of people learn new visual classification tasks: radiologists learn to recognize tumors, birdwatchers learn to distinguish similar species, and crowd workers learn how to annotate valuable data for applications such as autonomous driving. As humans learn, their brains update the visual features they extract and attend to, which ultimately informs their final classification decisions. In this work, we propose a new task of tracing the evolving classification behavior of human learners as they engage in challenging visual classification tasks. We propose models that jointly extract the visual features used by learners and predict the classification functions they use. We collect three challenging new datasets from real human learners to evaluate the performance of different visual knowledge tracing methods. Our results show that our recurrent models are able to predict the classification behavior of human learners on three challenging medical image and species identification tasks.
We explore semantic correspondence estimation through the lens of unsupervised learning. We thoroughly evaluate several recently proposed unsupervised methods across multiple challenging datasets using a standardized evaluation protocol in which we vary factors such as the backbone architecture, the pre-training strategy, and the pre-training and fine-tuning datasets. To better understand the failure modes of these methods, and to provide a clearer path for improvement, we provide a new diagnostic framework along with a new performance metric that is better suited to the semantic matching task. Finally, we introduce a new unsupervised correspondence approach which exploits the strength of pre-trained features while encouraging better matches during training. This leads to significantly better matching performance compared to current state-of-the-art methods.
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA is to analyze visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep-learning-powered FGIA. In this paper, we present a systematic survey of these advances, in which we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas: fine-grained image recognition and fine-grained image retrieval. In addition, we review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems which need further exploration from the community.
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
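The per-pixel minimum reprojection loss in (i) is simple to state: instead of averaging photometric errors over the warped source frames, which blurs pixels that are occluded in some of them, take the minimum error at each pixel. The sketch below uses a plain L1 photometric error in place of the SSIM+L1 mix commonly used for this loss, and the tensor shapes are assumptions.

```python
import torch

def photometric_error(pred, target):
    # per-pixel L1 error, averaged over colour channels
    return (pred - target).abs().mean(dim=1, keepdim=True)

def min_reprojection_loss(target, warped_sources):
    """target: (B, 3, H, W); warped_sources: list of source frames already
    warped into the target view. The per-pixel minimum lets each pixel be
    explained by whichever source frame sees it unoccluded."""
    errors = torch.cat([photometric_error(w, target) for w in warped_sources], dim=1)
    per_pixel_min, _ = errors.min(dim=1)
    return per_pixel_min.mean()
```

The auto-masking in (iii) can be built from the same quantities: also compute the error of the unwarped source frames and ignore pixels whose identity error is already lower than the warped error, since those pixels likely violate the camera-motion assumption (e.g. objects moving at the same speed as the camera, or a static camera).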
Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble based methods achieve only 67% top-1 classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples, suggesting more attention is needed for low-shot learning.
Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.
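The left-right consistency term can be expressed compactly: the disparity predicted for the left view, when used to sample the disparity predicted for the right view, should reproduce itself. The sketch below is a minimal version of that idea; the horizontal-warp helper, the normalized disparity units, and the sign convention are assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def horizontal_warp(img, disp):
    """Sample `img` at x-coordinates shifted by `disp` (normalized to [-1, 1],
    shape (B, 1, H, W))."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device, dtype=img.dtype),
        torch.linspace(-1, 1, w, device=img.device, dtype=img.dtype),
        indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] + disp.squeeze(1)        # shift sampling locations in x
    return F.grid_sample(img, grid, align_corners=True)

def lr_consistency_loss(disp_left, disp_right):
    # the right-view disparity map, warped into the left view, should match disp_left
    projected = horizontal_warp(disp_right, -disp_left)
    return (disp_left - projected).abs().mean()
```

The same warp underlies the image reconstruction loss itself: warping the right image with the predicted left disparity should reproduce the left image, and combining both terms gives the improved robustness reported above.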