We present a method for transferring the visual appearance of one natural image to another. Specifically, our goal is to generate an image in which objects from a source structure image are "painted" with the visual appearance of their semantically related objects in a target appearance image. Our method works by training a generator on a single structure/appearance image pair given as input. To integrate semantic information into our framework, a key component in tackling this task, our main idea is to leverage a pre-trained and fixed Vision Transformer (ViT) model as an external semantic prior. Specifically, we derive novel representations of structure and appearance extracted from deep ViT features, untangling them from the learned self-attention modules. We then establish an objective function that splices together the desired structure and appearance representations in the space of ViT features. Our framework, which we term "Splice", does not involve adversarial training, nor does it require any additional input such as semantic segmentation or correspondences, and it can generate high-resolution results, e.g., work in HD. We demonstrate high-quality results on a variety of in-the-wild image pairs, under significant variations in the number of objects, their pose, and their appearance.
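To make the objective concrete, here is a minimal sketch of how such a splicing loss could be assembled from deep ViT features, assuming (as the abstract describes) that structure is captured by the self-similarity of deep key tokens and appearance by a global [CLS] token. The helper names and weights are illustrative, not the authors' code; feature extraction from DINO-ViT, the generator, and any additional regularization are omitted.

```python
import torch
import torch.nn.functional as F

def structure_repr(keys: torch.Tensor) -> torch.Tensor:
    # keys: (tokens, dim) deep ViT key features of one image.
    # Structure is represented by the cosine self-similarity between key tokens.
    keys = F.normalize(keys, dim=-1)
    return keys @ keys.t()

def splice_objective(out_keys, out_cls, struct_keys, app_cls,
                     w_app: float = 1.0, w_struct: float = 1.0) -> torch.Tensor:
    # Appearance term: match the deep [CLS] token of the appearance image.
    loss_app = F.mse_loss(out_cls, app_cls)
    # Structure term: match the key self-similarity of the structure image.
    loss_struct = F.mse_loss(structure_repr(out_keys), structure_repr(struct_keys))
    return w_app * loss_app + w_struct * loss_struct
```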
We leverage deep features extracted from a pre-trained Vision Transformer (ViT) as dense visual descriptors. We demonstrate that such features, when extracted from a self-supervised ViT model (DINO-ViT), exhibit several striking properties: (i) the features encode powerful high-level information at high spatial resolution, i.e., they capture semantic object parts at fine spatial granularity, and (ii) the encoded semantic information is shared across related, yet different, object categories (i.e., super-categories). These properties allow us to design powerful dense ViT descriptors that facilitate a variety of applications, including co-segmentation, part co-segmentation, and correspondences, all achieved by applying lightweight methodologies (e.g., binning or clustering) to deep ViT features. We take these applications further into the realm of inter-class tasks, demonstrating how objects of related categories can be commonly segmented into semantic parts under significant pose and appearance variations. Our method, evaluated qualitatively and quantitatively, achieves state-of-the-art part co-segmentation results, as well as results competitive with recent supervised methods trained specifically for co-segmentation and correspondences.
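As an illustration of the "lightweight methodology on top of deep features" idea, the following is a hedged sketch of co-segmentation by jointly clustering dense descriptors from several images. It assumes the per-patch DINO-ViT descriptors have already been extracted upstream; the function name and cluster count are hypothetical choices, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def co_segment(descriptor_sets, n_clusters: int = 4):
    """descriptor_sets: list of (H*W, D) arrays of dense ViT descriptors, one per image.
    Jointly clusters all descriptors so that the same cluster id marks the same
    semantic region across images, then splits labels back per image."""
    all_desc = np.concatenate(descriptor_sets, axis=0)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(all_desc)
    per_image, start = [], 0
    for d in descriptor_sets:
        per_image.append(labels[start:start + len(d)])  # reshape to (H, W) downstream
        start += len(d)
    return per_image
```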
GANs are able to perform generation and manipulation tasks when trained on a single video. However, these single-video GANs require an unreasonable amount of time to train on one video, rendering them almost impractical. In this paper we question the necessity of GANs for generation from a single video and introduce a non-parametric baseline for a variety of generation and manipulation tasks. We revive classical space-time patch nearest-neighbor approaches and adapt them into a scalable, unconditional generative model, without any learning. This simple baseline surprisingly outperforms single-video GANs in visual quality and realism (confirmed by both quantitative and qualitative evaluations), and is disproportionately faster (runtime reduced from several days to seconds). Beyond diverse video generation, we demonstrate other applications using the same framework, including video analogies and spatio-temporal retargeting. Our proposed approach easily scales to Full-HD videos. These observations show that classical approaches, if correctly adapted, can significantly outperform heavy deep learning machinery on these tasks. This sets a new baseline for single-video generation and manipulation tasks and, no less important, makes diverse generation from a single video possible for the first time.
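Below is a brute-force toy version of a space-time patch nearest-neighbor step, purely to illustrate the "no learning" idea: every patch of a query video is replaced by its closest patch from the single source video, and overlapping patches are averaged. The actual method is coarse-to-fine, works on color videos, and uses efficient approximate search; none of that is reproduced here.

```python
import numpy as np

def nearest_patch_generate(query, source, patch=(3, 5, 5)):
    """query, source: (T, H, W) grayscale videos. Returns a video whose every
    space-time patch is copied from the source via exhaustive nearest-neighbor search."""
    pt, ph, pw = patch

    def extract(v):
        T, H, W = v.shape
        idx = [(t, y, x) for t in range(T - pt + 1)
               for y in range(H - ph + 1)
               for x in range(W - pw + 1)]
        feats = np.stack([v[t:t + pt, y:y + ph, x:x + pw].ravel() for t, y, x in idx])
        return idx, feats

    q_idx, q_feat = extract(query)
    s_idx, s_feat = extract(source)
    out = np.zeros(query.shape, dtype=np.float64)
    weight = np.zeros(query.shape, dtype=np.float64)
    for (t, y, x), qf in zip(q_idx, q_feat):
        nn = int(np.argmin(((s_feat - qf) ** 2).sum(axis=1)))
        st, sy, sx = s_idx[nn]
        out[t:t + pt, y:y + ph, x:x + pw] += source[st:st + pt, sy:sy + ph, sx:sx + pw]
        weight[t:t + pt, y:y + ph, x:x + pw] += 1.0
    return out / np.maximum(weight, 1.0)
```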
We wish to automatically predict the "speediness" of moving objects in videos-whether they move faster, at, or slower than their "natural" speed. The core component in our approach is SpeedNet-a novel deep network trained to detect if a video is playing at normal rate, or if it is sped up. SpeedNet is trained on a large corpus of natural videos in a self-supervised manner, without requiring any manual annotations. We show how this single, binary classification network can be used to detect arbitrary rates of speediness of objects. We demonstrate prediction results by SpeedNet on a wide range of videos containing complex natural motions, and examine the visual cues it utilizes for making those predictions. Importantly, we show that through predicting the speed of videos, the model learns a powerful and meaningful space-time representation that goes beyond simple motion cues. We demonstrate how those learned features can boost the performance of self-supervised action recognition, and can be used for video retrieval. Furthermore, we also apply SpeedNet for generating time-varying, adaptive video speedups, which can allow viewers to watch videos faster, but with less of the jittery, unnatural motions typical to videos that are sped up uniformly.
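A minimal sketch of the self-supervision signal described above: labels come for free by randomly subsampling frames to simulate a speed-up, so no manual annotation is needed. The backbone `model` is a placeholder for any space-time network producing a single logit per clip; the clip length and speed-up factor are illustrative.

```python
import torch
import torch.nn as nn

def make_speed_example(video: torch.Tensor, clip_len: int = 16, speedup: int = 2):
    """video: (T, C, H, W). With probability 0.5 take frames at the normal rate
    (label 0) or skip frames to simulate a speed-up (label 1)."""
    if torch.rand(()) < 0.5:
        clip, label = video[:clip_len], 0.0
    else:
        clip, label = video[::speedup][:clip_len], 1.0
    return clip, torch.tensor(label)

def train_step(model, optimizer, video):
    # One self-supervised step: binary "normal vs. sped-up" classification.
    clip, label = make_speed_example(video)
    logit = model(clip.unsqueeze(0)).squeeze()
    loss = nn.functional.binary_cross_entropy_with_logits(logit, label)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```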
Figure 1: Image generation learned from a single training image (panels: the single training image and random samples generated from it). We propose SinGAN, a new unconditional generative model trained on a single natural image. Our model learns the image's patch statistics across multiple scales, using a dedicated multi-scale adversarial training scheme; it can then be used to generate new realistic image samples that preserve the original patch distribution while creating new object configurations and structures.
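The coarse-to-fine sampling idea can be sketched as follows, under the assumption that each per-scale generator refines the upsampled output of the previous scale plus fresh noise. This is a simplified illustration, not the authors' implementation; the generator list and noise shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def multiscale_sample(generators, noise_shapes):
    """generators[i]: callable mapping a (1, C, H, W) tensor to a same-size image.
    noise_shapes: list of (C, H, W), from coarsest to finest scale."""
    out = None
    for G, (c, h, w) in zip(generators, noise_shapes):
        z = torch.randn(1, c, h, w)
        if out is None:
            out = G(z)  # coarsest scale is generated from noise alone
        else:
            up = F.interpolate(out, size=(h, w), mode='bilinear', align_corners=False)
            out = up + G(z + up)  # each finer scale adds residual detail
    return out
```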
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
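Since the weights are released publicly, a model of the BLOOM family can be loaded through the Hugging Face `transformers` library. The snippet below is a usage sketch only; it assumes the smaller "bigscience/bloom-560m" checkpoint is available (the full 176B model requires multi-GPU or offloaded inference).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; swap in "bigscience/bloom" for the full 176B model.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("BLOOM is an open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```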
The staleness problem is a well-known issue when working with dynamic data and arises after long periods without events. Since a node's memory is only updated when the node participates in an event, its memory becomes stale; typically this corresponds to a lack of events, for example the temporary deactivation of a social media account. To overcome the memory staleness problem, information is aggregated from the memories of a node's neighbors in addition to the node's own memory. Inspired by this, we design an updated embedding module that, in addition to a node's neighbors, also incorporates the most similar nodes. Our method obtains results similar to TGN, with slight improvements. This may indicate that further gains are possible after fine-tuning our hyperparameters, especially the time threshold, and after using a learnable similarity metric.
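A minimal sketch of the modified embedding step described above: the node's own memory is combined with the memories of its temporal neighbors and of its k most similar nodes. Cosine similarity over memory vectors stands in here for the learnable similarity metric the text mentions, and the simple mean aggregation replaces TGN's attention; both are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def embed_with_similar_nodes(node_id: int, memory: torch.Tensor, neighbors, k: int = 5):
    """memory: (N, D) matrix of per-node memories; neighbors: list of node indices.
    Returns a (2*D,) embedding built from the node's memory and an aggregate of
    neighbor memories plus the k most similar nodes' memories."""
    own = memory[node_id]
    sims = F.cosine_similarity(own.unsqueeze(0), memory, dim=-1)
    sims[node_id] = -1.0  # exclude the node itself from the similarity ranking
    similar = sims.topk(k).indices.tolist()
    pool = torch.stack([memory[i] for i in set(neighbors) | set(similar)])
    return torch.cat([own, pool.mean(dim=0)], dim=-1)
```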
The evolution of dynamical systems is often governed by nonlinear partial differential equations (PDEs), whose solution within a simulation framework requires substantial computational resources. In this work we propose a novel method that combines a hypernetwork-based solver with a Fourier Neural Operator architecture. Our method treats time and space separately. As a result, it successfully propagates initial conditions over continuous time steps by exploiting the general compositional properties of partial differential operators. Following previous work, supervision is provided at specific time points. We test our method on various time-evolving PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions. The results show that the new method improves learning accuracy at the supervised time points and is able to interpolate and produce solutions at any intermediate time.
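The hypernetwork idea can be sketched as follows: a small network maps the elapsed time t to the parameters of an operator that advances the solution, so that arbitrary intermediate times can be queried and long horizons can be reached by composing shorter steps. The spectral Fourier layers are replaced here by a single linear map purely for illustration; class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class HyperTimeSolver(nn.Module):
    """Hypernetwork sketch: time t -> weights of a linear stand-in for the operator."""
    def __init__(self, field_dim: int, hidden: int = 64):
        super().__init__()
        self.field_dim = field_dim
        self.hyper = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, field_dim * field_dim + field_dim),
        )

    def forward(self, u0: torch.Tensor, t: float) -> torch.Tensor:
        # u0: (batch, field_dim) initial condition sampled on a grid.
        params = self.hyper(torch.tensor([[t]], dtype=u0.dtype))
        W = params[0, :self.field_dim ** 2].view(self.field_dim, self.field_dim)
        b = params[0, self.field_dim ** 2:]
        return u0 @ W.t() + b

# After training, the composition property is the target:
# solver(u0, t1 + t2) should approximate solver(solver(u0, t1), t2).
solver = HyperTimeSolver(field_dim=64)
u0 = torch.randn(8, 64)
u_direct = solver(u0, 1.0)
u_composed = solver(solver(u0, 0.5), 0.5)
```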
Similarly to humans, animals' facial expressions are closely tied to their emotional states. However, in contrast to the human domain, the automatic recognition of emotional states from animals' facial expressions remains largely unaddressed, mainly due to the difficulties of data collection and of establishing ground truth for the emotional states of non-verbal users. We apply recent deep learning techniques to classify (negative) frustration in dogs on a dataset collected in a controlled experimental setting. We explore the suitability of different backbones (e.g., ResNet, ViT) under different forms of supervision for this task, and find that features of a self-supervised pre-trained ViT (DINO-ViT) outperform the other alternatives. To the best of our knowledge, this work is the first to address the task of automatically classifying canine frustration on data acquired in a controlled experiment.
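A hedged sketch of the frozen-feature setup that the abstract favors: a self-supervised DINO-ViT is used as a fixed feature extractor and a simple linear probe is trained on top. The torch.hub entry name follows the public DINO repository but should be treated as an assumption, as should the input size and the use of logistic regression as the probe.

```python
import torch
from sklearn.linear_model import LogisticRegression

# Frozen self-supervised ViT (DINO-ViT) as a feature extractor (hub name assumed).
dino = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
dino.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor):
    # batch: (N, 3, 224, 224) normalized dog-face crops -> (N, 384) embeddings.
    return dino(batch).cpu().numpy()

def train_frustration_probe(train_images, train_labels):
    # Linear probe on frozen DINO-ViT features: frustrated vs. not frustrated.
    feats = extract_features(train_images)
    return LogisticRegression(max_iter=1000).fit(feats, train_labels)
```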
Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry. Until recently, deep active learning methods were ineffectual in the low-budget regime, where only a small number of examples are annotated. The situation has been alleviated by recent advances in representation and self-supervised learning, which impart the geometry of the data representation with rich information about the points. Taking advantage of this progress, we study the problem of subset selection for annotation through a "covering" lens, proposing ProbCover - a new active learning algorithm for the low budget regime, which seeks to maximize Probability Coverage. We then describe a dual way to view the proposed formulation, from which one can derive strategies suitable for the high budget regime of active learning, related to existing methods like Coreset. We conclude with extensive experiments, evaluating ProbCover in the low-budget regime. We show that our principled active learning strategy improves the state-of-the-art in the low-budget regime in several image recognition benchmarks. This method is especially beneficial in the semi-supervised setting, allowing state-of-the-art semi-supervised methods to match the performance of fully supervised methods while using far fewer labels. Code is available at https://github.com/avihu111/TypiClust.
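A greedy max-coverage selection in the spirit of the "covering" formulation can be sketched as follows: repeatedly pick the point whose ball of radius delta covers the most still-uncovered points, then mark those points as covered. This brute-force O(N^2) version is for illustration at toy scale and is not taken from the released code; the ball radius `delta` is a hyperparameter assumption.

```python
import numpy as np

def greedy_cover_select(features: np.ndarray, budget: int, delta: float):
    """features: (N, D) array of (e.g., self-supervised) embeddings.
    Returns `budget` indices chosen to maximize coverage by delta-balls."""
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    covers = dists <= delta                       # covers[i, j]: j lies in i's ball
    uncovered = np.ones(len(features), dtype=bool)
    selected = []
    for _ in range(budget):
        gains = (covers & uncovered[None, :]).sum(axis=1)
        best = int(np.argmax(gains))              # point covering most uncovered points
        selected.append(best)
        uncovered &= ~covers[best]                # its ball is now covered
    return selected
```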