Many visual recognition models are evaluated only on their classification accuracy, a metric for which they obtain strong performance. In this paper, we investigate whether computer vision models can also provide correct rationales for their predictions. We propose a "doubly right" object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales. We find that state-of-the-art visual models, such as CLIP, often provide incorrect rationales for their categorical predictions. However, by transferring the rationales from language models into visual representations through a tailored dataset, we show that we can learn a "why prompt," which adapts large visual representations to produce correct rationales. Visualizations and empirical experiments show that our prompts significantly improve performance on doubly right object recognition, in addition to zero-shot transfer to unseen tasks and datasets.
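A minimal sketch of what learning such a prompt could look like in practice, assuming the OpenAI `clip` package and a simplified parameterization in which the prompt is a single learnable vector added to the frozen image embedding; the (image, label, rationale) training triples and the "label, because rationale" template are illustrative assumptions rather than the paper's exact setup:

```python
# Sketch: tune a "why prompt" on top of frozen CLIP embeddings.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()  # the backbone stays frozen

# A single learnable vector acting as the prompt in embedding space
# (512 is the embedding dimension of ViT-B/32).
why_prompt = torch.zeros(512, device=device, requires_grad=True)
optimizer = torch.optim.Adam([why_prompt], lr=1e-3)

def train_step(images, labels, rationales):
    """images: preprocessed tensor batch; labels/rationales: lists of strings."""
    texts = clip.tokenize(
        [f"a photo of a {l}, because {r}" for l, r in zip(labels, rationales)]
    ).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images.to(device)).float()
        txt_feat = model.encode_text(texts).float()
    # Apply the prompt as a residual shift on the image embedding.
    img_feat = F.normalize(img_feat + why_prompt, dim=-1)
    txt_feat = F.normalize(txt_feat, dim=-1)
    logits = 100.0 * img_feat @ txt_feat.t()
    loss = F.cross_entropy(logits, torch.arange(len(labels), device=device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time the tuned vector is added to every image embedding before matching against candidate "label, because rationale" texts, so the top-scoring text carries both the predicted category and its rationale.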
Incidental supervision from language has become a popular approach for learning generic visual representations that can be prompted to perform many recognition tasks in computer vision. We conduct an in-depth exploration of the CLIP model and show that its visual representation is often strongly biased towards solving some tasks more than others. Moreover, which task the representation will be biased towards is unpredictable, with little consistency across images. To resolve this task bias, we show how to learn a visual prompt that guides the representation towards features relevant to the task of interest. Our results show that these visual prompts can be independent of the input image and still effectively provide a conditioning mechanism to steer visual representations towards the desired task.
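One simple way to realize such an input-independent prompt is sketched below, assuming the OpenAI `clip` package and an additive pixel-space perturbation shared across all images (a padding-style prompt would work analogously); the task classes and optimizer settings are illustrative assumptions:

```python
# Sketch: a single learnable visual prompt that steers frozen CLIP
# towards one task of interest.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

task_classes = ["a photo of a cat", "a photo of a dog"]  # the task of interest
with torch.no_grad():
    tokens = clip.tokenize(task_classes).to(device)
    txt_feat = F.normalize(model.encode_text(tokens).float(), dim=-1)

# One prompt for the whole dataset: a learnable perturbation added to every image.
visual_prompt = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([visual_prompt], lr=1e-2)

def train_step(images, labels):
    """images: preprocessed (B, 3, 224, 224) tensor; labels: task-label indices."""
    img_feat = model.encode_image(images.to(device) + visual_prompt).float()
    img_feat = F.normalize(img_feat, dim=-1)
    logits = 100.0 * img_feat @ txt_feat.t()
    loss = F.cross_entropy(logits, labels.to(device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```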
Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages beyond interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline.
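A minimal sketch of the scoring rule this describes, assuming the OpenAI `clip` package and a hand-written descriptor dictionary standing in for GPT-3 output: each class is scored by the average image-descriptor similarity, and the per-descriptor similarities double as a readable explanation of the decision.

```python
# Sketch: classification by description with CLIP.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Descriptor lists would come from a language model; these are illustrative.
descriptors = {
    "tiger": ["orange and black stripes", "sharp claws", "a long tail"],
    "zebra": ["black and white stripes", "a mane", "hooves"],
}

def classify(image_tensor):
    """image_tensor: a single preprocessed image, shape (1, 3, 224, 224)."""
    with torch.no_grad():
        img = F.normalize(model.encode_image(image_tensor.to(device)).float(), dim=-1)
        scores = {}
        for cls, descs in descriptors.items():
            prompts = [f"{cls}, which has {d}" for d in descs]
            txt = F.normalize(model.encode_text(clip.tokenize(prompts).to(device)).float(), dim=-1)
            # Average similarity over descriptors; the per-descriptor scores
            # also serve as a readable explanation of the decision.
            scores[cls] = (img @ txt.t()).mean().item()
    return max(scores, key=scores.get), scores
```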
Variational autoencoders (VAEs) suffer from posterior collapse, where the powerful neural networks used for modeling and inference optimize the objective without making meaningful use of the latent representation. We introduce inference critics that detect and incentivize against posterior collapse by requiring correspondence between latent variables and observations. By connecting the critic's objective to the literature on self-supervised contrastive representation learning, we show theoretically and empirically that optimizing the inference critic increases the mutual information between observations and latents, mitigating posterior collapse. The approach is straightforward to implement and requires significantly less training time than previous methods, yet obtains competitive results on three established datasets. Overall, the approach lays the groundwork for bridging the previously disconnected frameworks of contrastive learning and probabilistic modeling with variational autoencoders, highlighting the benefits both communities may find at their intersection.
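A minimal sketch of such an inference critic, using an illustrative bilinear score function: the InfoNCE objective requires each observation to be matched with its own latent code within the batch, which lower-bounds the mutual information between observations and latents. During training this term would simply be added to the usual ELBO.

```python
# Sketch: an inference critic trained with an InfoNCE-style objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearCritic(nn.Module):
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.x_proj = nn.Linear(x_dim, 128)
        self.W = nn.Parameter(torch.randn(128, z_dim) * 0.01)

    def forward(self, x, z):
        # Score matrix: entry (i, j) scores observation i against latent j.
        return self.x_proj(x) @ self.W @ z.t()

def critic_loss(critic, x, z):
    """The matching (x_i, z_i) pair must beat all mismatched pairs in the batch."""
    scores = critic(x, z)                      # (B, B)
    targets = torch.arange(x.size(0), device=x.device)
    return 0.5 * (F.cross_entropy(scores, targets) +
                  F.cross_entropy(scores.t(), targets))
```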
3D reconstruction is a fundamental problem in computer vision, and the task is especially challenging when the object to be reconstructed is partially or fully occluded. We introduce a method that uses the shadows cast by the unobserved object to infer the 3D volume behind the occlusion. We create a differentiable image formation model that allows us to jointly infer the 3D shape of the object, its pose, and the position of the light source. Since the method is end-to-end differentiable, we are able to integrate learned priors of object geometry in order to generate realistic 3D shapes of different object categories. Experiments and visualizations show that the method is able to generate multiple possible solutions that are consistent with the observed shadow. Our approach works even when the position of the light source and the object pose are both unknown. It is also robust to real-world images where the ground-truth shadow mask is unknown.
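A toy sketch of the underlying idea under a strong simplifying assumption (a directional light shining straight down onto a ground plane), kept differentiable through soft voxel occupancies; the actual method additionally infers the light position and the object pose.

```python
# Sketch: differentiable shadow formation over a soft voxel grid.
import torch
import torch.nn.functional as F

def render_shadow(occupancy_logits):
    """occupancy_logits: (D, H, W) voxel grid, depth axis aligned with the light."""
    occ = torch.sigmoid(occupancy_logits)          # soft occupancy in [0, 1]
    transmittance = torch.prod(1.0 - occ, dim=0)   # light surviving the column
    return 1.0 - transmittance                     # shadow intensity on the ground

def shadow_loss(occupancy_logits, observed_shadow):
    """observed_shadow: (H, W) binary or soft mask of the cast shadow."""
    return F.binary_cross_entropy(render_shadow(occupancy_logits), observed_shadow)

# Usage: optimize the voxel grid (optionally the output of a learned shape
# prior) so that its rendered shadow matches the observation.
vox = torch.zeros(32, 64, 64, requires_grad=True)
opt = torch.optim.Adam([vox], lr=0.05)
```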
The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require training on databases of LR-HR image pairs for supervised learning). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee that our outputs are realistic. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show extensive experimental results demonstrating the efficacy of our approach in the domain of face super-resolution (also known as face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
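A minimal sketch of the latent search this describes, where `generator` stands in for a pretrained face GAN and bicubic resizing stands in for the degradation operator (both assumptions): the latent is kept on the sphere where most of a high-dimensional Gaussian's mass concentrates, and the downscaling loss alone drives the optimization.

```python
# Sketch: latent-space search guided by a downscaling loss.
import torch
import torch.nn.functional as F

def pulse_search(generator, lr_image, latent_dim=512, steps=500, lr=0.4):
    """lr_image: (1, 3, h, w) low-resolution target in [0, 1]."""
    z = torch.randn(1, latent_dim, device=lr_image.device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        # Project onto the sphere of radius sqrt(latent_dim), where most of a
        # high-dimensional Gaussian's mass concentrates, to keep outputs realistic.
        z_sph = z / z.norm(dim=1, keepdim=True) * latent_dim ** 0.5
        sr = generator(z_sph)                                    # (1, 3, H, W)
        down = F.interpolate(sr, size=lr_image.shape[-2:],
                             mode="bicubic", align_corners=False)
        loss = F.mse_loss(down, lr_image)                        # downscaling loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z / z.norm(dim=1, keepdim=True) * latent_dim ** 0.5)
```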
A hallmark of human intelligence is the ability to learn new concepts purely from language. Several recent approaches have explored training machine learning models via natural language supervision. However, these approaches fall short in leveraging linguistic quantifiers (such as 'always' or 'rarely') and mimicking humans in compositionally learning complex tasks. Here, we present LaSQuE, a method that can learn zero-shot classifiers from language explanations by using three new strategies - (1) modeling the semantics of linguistic quantifiers in explanations (including exploiting ordinal strength relationships, such as 'always' > 'likely'), (2) aggregating information from multiple explanations using an attention-based mechanism, and (3) model training via curriculum learning. With these strategies, LaSQuE outperforms prior work, showing an absolute gain of up to 7% in generalizing to unseen real-world classification tasks.
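A minimal sketch of two of these strategies, with illustrative quantifier values and layer sizes: quantifier semantics are learnable scalars initialized to respect their ordinal strengths, and an attention layer aggregates evidence across multiple explanations.

```python
# Sketch: quantifier semantics plus attention-based explanation aggregation.
import torch
import torch.nn as nn

QUANTIFIER_INIT = {   # ordinal strengths: 'always' > 'usually' > 'likely' > 'rarely'
    "always": 0.95, "usually": 0.80, "likely": 0.65, "rarely": 0.15,
}

class ExplanationAggregator(nn.Module):
    def __init__(self, emb_dim=256):
        super().__init__()
        # Learnable probability per quantifier, initialized by ordinal strength.
        self.quantifier = nn.ParameterDict(
            {q: nn.Parameter(torch.tensor(v)) for q, v in QUANTIFIER_INIT.items()}
        )
        self.attn = nn.Linear(emb_dim, 1)

    def forward(self, expl_embs, expl_scores, quantifiers):
        """expl_embs: (E, emb_dim) explanation embeddings;
        expl_scores: (E,) how well each explanation matches the input;
        quantifiers: list of E quantifier words, one per explanation."""
        strength = torch.stack([self.quantifier[q] for q in quantifiers])
        weights = torch.softmax(self.attn(expl_embs).squeeze(-1), dim=0)
        # Each explanation votes with its match score scaled by its quantifier.
        return (weights * strength * expl_scores).sum()
```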
Autonomous driving requires efficient reasoning about the location and appearance of the different agents in the scene, which aids in downstream tasks such as object detection, object tracking, and path planning. The past few years have witnessed a surge in approaches that combine the different task-based modules of the classic self-driving stack into an End-to-End (E2E) trainable learning system. These approaches replace perception, prediction, and sensor fusion modules with a single contiguous module with a shared latent space embedding, from which one extracts a human-interpretable representation of the scene. One of the most popular representations is the Bird's-Eye View (BEV), which expresses the location of different traffic participants in the ego vehicle frame from a top-down view. However, a BEV does not capture the chromatic appearance information of the participants. To overcome this limitation, we propose a novel representation that captures the appearance and occupancy information of various traffic participants from an array of monocular cameras covering a 360 deg field of view (FOV). We use a learned image embedding of all camera images to generate a BEV of the scene at any instant that captures both the appearance and occupancy of the scene, which can aid in downstream tasks such as object tracking and executing language-based commands. We test the efficacy of our approach on a synthetic dataset generated from CARLA. The code, dataset, and results can be found at https://rebrand.ly/APP OCC-results.
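A minimal sketch of the lifting step this implies, with simplified calibration handling and illustrative tensor shapes: every ground-plane BEV cell is projected into each camera, image features are sampled at the projected location, and the per-camera samples are averaged.

```python
# Sketch: lifting multi-camera image features onto a ground-plane BEV grid.
import torch
import torch.nn.functional as F

def build_bev(feats, intrinsics, extrinsics, grid_xy, img_size):
    """feats: (N, C, Hf, Wf) per-camera feature maps;
    intrinsics: (N, 3, 3); extrinsics: (N, 4, 4) ego-to-camera transforms;
    grid_xy: (P, 2) BEV cell centers in the ego frame (z = 0);
    img_size: (H, W) of the original images."""
    n_cam, c = feats.shape[:2]
    pts = torch.cat([grid_xy, torch.zeros_like(grid_xy[:, :1]),
                     torch.ones_like(grid_xy[:, :1])], dim=1)      # (P, 4)
    bev, weight = 0.0, 1e-6
    for i in range(n_cam):
        cam = (extrinsics[i] @ pts.t()).t()[:, :3]                  # ego -> camera
        uvw = (intrinsics[i] @ cam.t()).t()                         # pinhole projection
        valid = (uvw[:, 2] > 0.1).float()                           # in front of camera
        uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=0.1)
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        norm = torch.stack([uv[:, 0] / img_size[1] * 2 - 1,
                            uv[:, 1] / img_size[0] * 2 - 1], dim=-1)
        sampled = F.grid_sample(feats[i:i + 1], norm.view(1, 1, -1, 2),
                                align_corners=False)                # (1, C, 1, P)
        bev = bev + sampled.view(c, -1) * valid
        weight = weight + valid
    return bev / weight                                             # (C, P)
```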
We present DIY-IPS (Do It Yourself Indoor Positioning System), an open-source, real-time indoor positioning mobile application. DIY-IPS detects users' indoor positions via dual-band RSSI fingerprinting of the available WiFi access points. The application can detect users' indoor positions in real time without additional infrastructure cost. We release our application as open source to save other researchers the time of recreating it. The application enables researchers and users to (1) collect indoor positioning datasets with ground-truth labels, (2) customize the application for higher accuracy or other research purposes, and (3) test the accuracy of modified methods through live testing against the ground truth. We conducted preliminary experiments to demonstrate the effectiveness of the application.
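A minimal sketch of the fingerprint-matching step, assuming the database stores one concatenated 2.4 GHz / 5 GHz RSSI vector per labelled location and using an inverse-distance-weighted k-nearest-neighbour vote:

```python
# Sketch: dual-band RSSI fingerprinting with weighted k-nearest neighbours.
import math

def locate(query_rssi, fingerprints, k=3, missing=-100.0):
    """query_rssi: {bssid_band: rssi}; fingerprints: list of (location, {bssid_band: rssi})."""
    scored = []
    for location, fp in fingerprints:
        keys = set(query_rssi) | set(fp)
        dist = math.sqrt(sum(
            (query_rssi.get(key, missing) - fp.get(key, missing)) ** 2 for key in keys
        ))
        scored.append((dist, location))
    scored.sort(key=lambda t: t[0])
    # Weight the k closest fingerprints by inverse distance and vote.
    votes = {}
    for dist, location in scored[:k]:
        votes[location] = votes.get(location, 0.0) + 1.0 / (dist + 1e-6)
    return max(votes, key=votes.get)
```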
In this work, we propose an end-to-end binaural speech synthesis system that combines a low-bitrate audio codec with a powerful binaural decoder capable of accurate speech binauralization while faithfully reconstructing environmental factors such as ambient noise or reverberation. The network is a modified vector-quantized variational autoencoder, trained with several carefully designed objectives, including an adversarial loss. We evaluate the proposed system on an in-house binaural dataset using objective metrics and a perceptual study. Results show that the proposed method matches the ground-truth data more closely than previous approaches. In particular, we demonstrate the ability of the adversarial loss to capture the environmental effects needed to create realistic auditory scenes.
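A minimal sketch of the vector-quantization bottleneck at the heart of such a codec, with illustrative codebook size and dimensions; the binaural decoder and the adversarial objective themselves are omitted.

```python
# Sketch: a VQ-VAE-style quantization layer with a straight-through gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=1024, dim=128, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z_e):
        """z_e: (B, T, dim) encoder outputs for T audio frames."""
        flat = z_e.reshape(-1, z_e.size(-1))
        dists = torch.cdist(flat, self.codebook.weight)      # (B*T, num_codes)
        idx = dists.argmin(dim=-1)
        z_q = self.codebook(idx).view_as(z_e)
        # Codebook loss pulls codes to the encoder; commitment loss does the reverse.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator: gradients flow to the encoder unchanged.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, idx
```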