Human perception reliably identifies the movable and immovable parts of 3D scenes, and completes the 3D structure of objects and background from incomplete observations. We learn this skill not from labeled examples, but simply by observing objects move. In this work, we propose a method that, at training time, observes unlabeled multi-view video and learns to map a single image observation of a complex scene, such as a street with cars, to a 3D neural scene representation that decomposes it into movable and immovable parts while plausibly completing its 3D structure. We parameterize the movable and immovable scene parts separately with 2D neural ground plans. These ground plans are 2D grids of features aligned with the ground plane that can be locally decoded into 3D neural radiance fields. Our model is trained self-supervised via neural rendering. We demonstrate that, using simple heuristics, the inferred structure enables a variety of downstream tasks on street-scale 3D scenes, such as extracting object-centric 3D representations, novel view synthesis, instance segmentation, and 3D bounding box prediction, highlighting its value as a backbone for data-efficient 3D scene understanding models. This disentanglement further enables scene editing via object manipulations such as deletion, insertion, and rigid-body motion.
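As a rough illustration of the ground-plan idea described above, the sketch below (not the authors' exact architecture; all module names and sizes are made up for illustration) keeps a learned 2D feature grid aligned with the ground plane, bilinearly samples it at a 3D query point's ground coordinates, and decodes the sampled feature together with the point's height into radiance-field density and color:

```python
# Minimal sketch, assuming a scene-centered frame with coordinates in [-1, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundPlanField(nn.Module):
    def __init__(self, feat_dim=32, plan_res=64, hidden=128):
        super().__init__()
        # Learned 2D grid of features aligned with the ground plane (x-y).
        self.plan = nn.Parameter(torch.randn(1, feat_dim, plan_res, plan_res) * 0.01)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, r, g, b)
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points; the first two coordinates index the ground plan.
        ground_uv = xyz[:, :2].view(1, -1, 1, 2)
        feats = F.grid_sample(self.plan, ground_uv, align_corners=True)
        feats = feats.view(self.plan.shape[1], -1).t()      # (N, feat_dim)
        height = xyz[:, 2:3]
        out = self.decoder(torch.cat([feats, height], dim=-1))
        density = F.softplus(out[:, :1])
        color = torch.sigmoid(out[:, 1:])
        return density, color

field = GroundPlanField()
sigma, rgb = field(torch.rand(1024, 3) * 2 - 1)
```

In the full method such a field would be rendered with volume rendering and trained against the multi-view video frames; separate plans for the movable and immovable parts give the decomposition.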
We present Neural Descriptor Fields (NDFs), an object representation that encodes relative poses between an object and a target (such as a robot gripper, or a rack used for hanging) via category-level descriptors. We use this representation for object manipulation, where, given a task demonstration, we want to repeat the same task on a new object instance from the same category. We propose to achieve this by searching (via optimization) for the pose whose descriptors match those observed in the demonstration. NDFs are conveniently trained in a self-supervised manner via a 3D auto-encoding task that does not rely on expert-labeled keypoints. Further, NDFs are SE(3)-equivariant, guaranteeing performance that generalizes across all possible 3D object translations and rotations. We demonstrate learning of manipulation tasks from few (5-10) demonstrations both in simulation and on a real robot. Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors. Project website: https://yilundu.github.io/ndf/.
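The sketch below shows only the pose-search-by-descriptor-matching structure in the most stripped-down form; the descriptor network is a random stand-in MLP rather than a trained Neural Descriptor Field, and the demonstration descriptors are synthesized from a known offset so the optimization has something to recover:

```python
# Hedged sketch: optimize an SE(3) pose of query points so their descriptors
# match the descriptors recorded in a demonstration.
import torch

def skew(k):
    zero = torch.zeros((), dtype=k.dtype)
    return torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])

def axis_angle_to_matrix(w, eps=1e-8):
    # Rodrigues' formula, differentiable in w.
    theta = w.norm()
    K = skew(w / (theta + eps))
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

descriptor_fn = torch.nn.Sequential(                     # stand-in for a trained NDF
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))

query_pts = torch.rand(50, 3)                            # e.g. points sampled on the gripper
with torch.no_grad():
    demo_desc = descriptor_fn(query_pts + 0.3)           # "descriptors seen in the demo"

w = torch.zeros(3, requires_grad=True)                   # rotation (axis-angle)
t = torch.zeros(3, requires_grad=True)                   # translation
opt = torch.optim.Adam([w, t], lr=1e-2)
for _ in range(500):
    R = axis_angle_to_matrix(w)
    desc = descriptor_fn(query_pts @ R.T + t)
    loss = (desc - demo_desc).pow(2).mean()              # match demonstration descriptors
    opt.zero_grad(); loss.backward(); opt.step()
# After optimization, t should approach the 0.3 offset used to create demo_desc.
```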
Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parameterize the physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, the animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress in a short time, many papers exist, but a comprehensive review and formulation of the problem has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of the literature on neural fields. This report covers research along two dimensions. In the first part, we focus on techniques in neural fields by identifying common components of neural field methods, including different representations, architectures, forward maps, and generalization methods. In the second part, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in their current incarnation, demonstrating the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion living version of this review that can be continually updated by the community.
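To make the surveyed ingredients concrete, here is a minimal coordinate-based neural field of the kind the report covers: an MLP with a sinusoidal positional encoding mapping 2D coordinates to RGB values, fit by direct regression (the simplest possible forward map). The architecture and sizes are illustrative only, and the target values stand in for pixels of a training image:

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    # Encode each coordinate with sines and cosines at increasing frequencies.
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi
    angles = x[..., None] * freqs                   # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                # (..., dim * 2 * num_freqs)

field = nn.Sequential(
    nn.Linear(2 * 2 * 6, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid())

coords = torch.rand(4096, 2) * 2 - 1                # pixel coordinates in [-1, 1]
target_rgb = torch.rand(4096, 3)                    # would come from the training image
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(200):
    pred = field(positional_encoding(coords))
    loss = (pred - target_rgb).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```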
Deep neural networks have been widely used to learn the latent structure of datasets across modalities such as images, shapes, and audio signals. However, existing models are typically modality-dependent, requiring custom architectures and objectives to handle different classes of signals. We leverage neural fields to capture the underlying structure of images, shapes, audio, and cross-modal audiovisual domains in a modality-agnostic manner. We formulate the task as one of manifold learning, where our goal is to infer the low-dimensional, locally linear subspace in which our data lies. By enforcing coverage of the manifold, local linearity, and local isometry, our model, dubbed GEM, learns to capture the underlying structure of datasets across modalities. We can then travel along linear regions of our manifold to obtain perceptually consistent interpolations between samples, and can further use GEM to recover points on our manifold, enabling not only the completion of diverse input images but also cross-modal hallucination of audio or image signals. Finally, we show that by traversing the underlying manifold of GEM, we can generate new samples in the signal domain. Code and additional results are available at https://yilundu.github.io/gem/.
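A hedged sketch of the manifold-learning idea follows: an auto-decoder maps per-sample latent codes to signals, and a local-isometry penalty encourages distances between latents to match distances between the corresponding signals. This is illustrative only, not GEM's exact training objective or architecture, and the data are random stand-ins:

```python
import torch
import torch.nn as nn

num_samples, latent_dim, signal_dim = 128, 16, 784
signals = torch.rand(num_samples, signal_dim)             # flattened training signals
latents = nn.Parameter(torch.randn(num_samples, latent_dim) * 0.01)
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, signal_dim))
opt = torch.optim.Adam([latents, *decoder.parameters()], lr=1e-3)

for _ in range(200):
    idx = torch.randint(num_samples, (64,))
    jdx = torch.randint(num_samples, (64,))
    recon = (decoder(latents[idx]) - signals[idx]).pow(2).mean()   # manifold coverage
    d_latent = (latents[idx] - latents[jdx]).norm(dim=-1)
    d_signal = (signals[idx] - signals[jdx]).norm(dim=-1)
    isometry = (d_latent - d_signal).pow(2).mean()                 # local isometry
    loss = recon + 0.1 * isometry
    opt.zero_grad(); loss.backward(); opt.step()

# Linear interpolation between two latents then yields a sample on the learned manifold.
interp = decoder((latents[0] + latents[1]) / 2)
```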
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using a differentiable rendering loss is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap toward the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways of injecting learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, now often referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel view synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
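The inverse-rendering idea mentioned above can be illustrated with a toy example: a small density/color voxel grid is rendered by differentiable alpha compositing along one axis and optimized so that the rendering matches an observed image. Real systems use far more capable differentiable renderers; this sketch (with a random stand-in target image) only shows the optimization structure:

```python
import torch
import torch.nn.functional as F

D, H, W = 16, 32, 32
density = torch.zeros(D, H, W, requires_grad=True)
color = torch.rand(D, H, W, 3, requires_grad=True)
target = torch.rand(H, W, 3)                          # the observed image

def render(density, color):
    # Alpha compositing front to back along the depth axis.
    alpha = 1 - torch.exp(-F.softplus(density))                        # (D, H, W)
    ones = torch.ones(1, H, W)
    trans = torch.cumprod(torch.cat([ones, 1 - alpha + 1e-6], dim=0), dim=0)[:-1]
    weights = trans * alpha                                            # (D, H, W)
    return (weights[..., None] * torch.sigmoid(color)).sum(dim=0)      # (H, W, 3)

opt = torch.optim.Adam([density, color], lr=1e-2)
for _ in range(300):
    loss = (render(density, color) - target).pow(2).mean()   # differentiable re-rendering loss
    opt.zero_grad(); loss.backward(); opt.step()
```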
Humans have a strong intuitive understanding of the 3D environment around us. The mental model of physics in our brain applies to objects of different materials, enabling us to perform a wide range of manipulation tasks that are far beyond the reach of current robots. In this work, we aim to learn models of dynamic 3D scenes purely from 2D visual observations. Our model combines Neural Radiance Fields (NeRF) and time contrastive learning with an autoencoding framework, which learns viewpoint-invariant, 3D-aware scene representations. We show that a dynamics model constructed over the learned representation space enables control of challenging manipulation tasks involving both rigid bodies and fluids, where the goal is specified in a viewpoint different from the one the robot operates in. When coupled with an auto-decoding framework, it can even support goal specification from camera viewpoints outside the training distribution. We further demonstrate the richness of the learned 3D dynamics model by performing future prediction and novel view synthesis. Finally, we provide detailed ablation studies regarding different system designs and a qualitative analysis of the learned representations.
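Two ingredients the abstract names can be sketched as follows: a time contrastive loss that makes latents of the same time step, seen from different viewpoints, agree while separating different time steps, and a latent dynamics model predicting the next representation from the current state and action. The encoders and sizes below are stand-ins, and the NeRF decoder is omitted entirely:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, action_dim = 64, 4
encoder = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                        nn.Linear(256, latent_dim))          # stand-in image encoder
dynamics = nn.Sequential(nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
                         nn.Linear(256, latent_dim))
opt = torch.optim.Adam([*encoder.parameters(), *dynamics.parameters()], lr=1e-3)

def time_contrastive_loss(z_view_a, z_view_b, temperature=0.1):
    # z_view_a[i] and z_view_b[i] come from the same time step, different views.
    z_a = F.normalize(z_view_a, dim=-1)
    z_b = F.normalize(z_view_b, dim=-1)
    logits = z_a @ z_b.t() / temperature
    labels = torch.arange(len(z_a))
    return F.cross_entropy(logits, labels)

# One illustrative training step on random stand-in data.
obs_a = torch.rand(32, 3 * 32 * 32)       # view A at times t_1..t_32
obs_b = torch.rand(32, 3 * 32 * 32)       # view B at the same times
next_a = torch.rand(32, 3 * 32 * 32)      # view A one step later
actions = torch.rand(32, action_dim)

z_a, z_b = encoder(obs_a), encoder(obs_b)
z_next = encoder(next_a)
pred_next = dynamics(torch.cat([z_a, actions], dim=-1))
loss = time_contrastive_loss(z_a, z_b) + F.mse_loss(pred_next, z_next)
opt.zero_grad(); loss.backward(); opt.step()
```

Control can then be framed as optimizing actions so that the predicted latent rollout reaches the encoding of the goal image.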
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions. Please see the project website for a video overview of the proposed method and all applications.
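A compact PyTorch sketch of a SIREN layer follows, using the sine activation and the uniform initialization scheme the paper proposes (omega_0 = 30, with wider bounds on the first layer); training details and the hypernetwork extension are omitted:

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 1),                 # e.g. grayscale image intensity
)
coords = torch.rand(1024, 2) * 2 - 1   # 2D coordinates in [-1, 1]
values = siren(coords)
```

Because the sine is smooth, derivatives of the fitted signal (needed for the Eikonal, Poisson, Helmholtz, and wave problems mentioned above) can be obtained with automatic differentiation of this network with respect to its input coordinates.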
Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
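The sketch below shows the SRN structure in simplified form: a scene function Phi maps world coordinates to a feature, a small network predicts a step length from that feature, and a ray is marched from the camera until it settles on a surface, where the feature is decoded into a color. The paper uses an LSTM-based ray marcher; the fixed-iteration marcher and all module names here are simplifications:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 64
phi = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat_dim))
step_net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                         nn.Linear(64, 1), nn.Softplus())
pixel_net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                          nn.Linear(64, 3), nn.Sigmoid())

def march(ray_origins, ray_dirs, num_steps=10):
    # ray_origins, ray_dirs: (N, 3); returns predicted RGB per ray, (N, 3).
    depth = torch.full((ray_origins.shape[0], 1), 0.05)
    for _ in range(num_steps):
        points = ray_origins + depth * ray_dirs
        depth = depth + step_net(phi(points))       # move toward the surface
    return pixel_net(phi(ray_origins + depth * ray_dirs))

origins = torch.zeros(512, 3)
dirs = F.normalize(torch.rand(512, 3) - 0.5, dim=-1)
rgb = march(origins, dirs)   # trained end-to-end against posed 2D images
```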
In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis. To this end, we propose DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry. At its core, our approach is based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure. Our approach combines insights from 3D geometric computer vision with recent advances in learning image-to-image mappings based on adversarial loss functions. DeepVoxels is supervised, without requiring a 3D reconstruction of the scene, using a 2D re-rendering loss and enforces perspective and multi-view geometry in a principled manner. We apply our persistent 3D scene representation to the problem of novel view synthesis demonstrating high-quality results for a variety of challenging scenes.
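The core ingredient, a persistent Cartesian grid of learned features, can be sketched as below: the grid is sampled with trilinear interpolation at 3D query points (e.g., along camera rays) and decoded into view-dependent colors. The full projection pipeline and adversarial losses from the paper are not reproduced here, and all names and sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, res = 16, 32
voxel_feats = nn.Parameter(torch.randn(1, feat_dim, res, res, res) * 0.01)
decoder = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(),
                        nn.Linear(64, 3), nn.Sigmoid())

def decode(points, view_dirs):
    # points: (N, 3) in [-1, 1]^3; view_dirs: (N, 3) unit vectors.
    grid = points.view(1, -1, 1, 1, 3)                        # (1, N, 1, 1, 3)
    feats = F.grid_sample(voxel_feats, grid, align_corners=True)
    feats = feats.view(feat_dim, -1).t()                       # (N, feat_dim)
    return decoder(torch.cat([feats, view_dirs], dim=-1))      # view-dependent RGB

pts = torch.rand(2048, 3) * 2 - 1
dirs = F.normalize(torch.rand(2048, 3) - 0.5, dim=-1)
colors = decode(pts, dirs)
```

Supervision would come from a 2D re-rendering loss against held-out views, which is what lets the grid remain persistent across all viewpoints of the scene.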
Charisma is considered as one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond, a plethora of use cases opens up for computational measurement of human charisma, such as for tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond, also automatic measurement appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.