The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
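As a rough illustration of what task (1) might look like in practice, the sketch below pairs a visual encoder with a tactile encoder under a symmetric InfoNCE-style contrastive loss. The backbones, embedding size, and temperature are illustrative assumptions, not the configuration used by Touch and Go.

```python
# A minimal sketch of self-supervised visuo-tactile contrastive learning.
# Encoders, dimensions, and the temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisuoTactileContrast(nn.Module):
    def __init__(self, embed_dim=128, temperature=0.07):
        super().__init__()
        # Placeholder backbones; real models would be CNNs over camera
        # frames and tactile (e.g., GelSight) frames.
        self.visual_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))
        self.tactile_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))
        self.temperature = temperature

    def forward(self, images, tactile):
        v = F.normalize(self.visual_encoder(images), dim=-1)
        t = F.normalize(self.tactile_encoder(tactile), dim=-1)
        logits = v @ t.T / self.temperature    # (B, B) similarity matrix
        targets = torch.arange(v.size(0))      # matched pairs lie on the diagonal
        # Symmetric InfoNCE: vision-to-touch and touch-to-vision
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.T, targets))

model = VisuoTactileContrast()
loss = model(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
```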
When humans grasp objects in the real world, we often move our arm to hold the object in a different pose where we can use it. In contrast, typical lab settings study grasp stability only immediately after lifting, without any subsequent repositioning of the arm. However, grasp stability can vary widely depending on the object's holding pose, since the gravitational torque and the gripper's contact forces may change completely. To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multimodal dataset containing visual and tactile data collected over a full cycle of grasping an object, repositioning the arm to one of the sampled poses, and shaking the object. Using data from PoseIt, we formulate and tackle the task of predicting whether a grasped object is stable in a particular held pose. We train an LSTM classifier that achieves 85% accuracy on the proposed task. Our experimental results show that multimodal models trained on PoseIt achieve higher accuracy than models using vision or tactile data alone, and that our classifier also generalizes to unseen objects and poses.
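A minimal sketch of the kind of LSTM classifier described above, assuming pre-extracted per-timestep visual and tactile feature vectors are concatenated and the final hidden state drives a binary stable/unstable prediction; the feature sizes and fusion-by-concatenation are assumptions, not PoseIt's exact pipeline.

```python
# A minimal sketch of an LSTM grasp-stability classifier; dimensions and
# the fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class GraspStabilityLSTM(nn.Module):
    def __init__(self, visual_dim=256, tactile_dim=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(visual_dim + tactile_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # stable vs. unstable

    def forward(self, visual_feats, tactile_feats):
        # visual_feats: (B, T, visual_dim), tactile_feats: (B, T, tactile_dim)
        x = torch.cat([visual_feats, tactile_feats], dim=-1)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, B, hidden), last timestep
        return self.head(h_n[-1])         # (B, 2) logits

model = GraspStabilityLSTM()
logits = model(torch.randn(4, 30, 256), torch.randn(4, 30, 128))
```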
Robotic simulation has been an essential tool for data-driven manipulation tasks. However, most existing simulation frameworks lack either efficient and accurate models of the physical interaction with tactile sensors or realistic tactile simulation, which keeps sim-to-real transfer for tactile-based manipulation tasks challenging. In this work, we integrate the simulation of robot dynamics and vision-based tactile sensors by modeling the contact physics. The contact model uses simulated contact forces at the robot's end-effector to inform the generation of realistic tactile outputs. To close the sim-to-real transfer gap, we calibrate the physics simulator of robot dynamics, the contact model, and the tactile optical simulator with real-world data, and then demonstrate the effectiveness of our system on a zero-shot sim-to-real grasp-stability prediction task, where we achieve an average accuracy of 90.7% across a variety of objects. The experiments reveal the potential of applying our simulation framework to more complex manipulation tasks. We open-source the simulation framework at https://github.com/cmurobotouch/taxim/tree/taxim-robot.
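As a toy illustration of one piece of such a pipeline, the sketch below converts a simulated end-effector normal force into an indentation depth via a stiffness fit from calibration data, which would then parameterize the tactile optical simulator. The linear (Hookean) contact model and the calibration numbers are assumptions for illustration only.

```python
# A toy sketch: calibrating a linear contact model from hypothetical
# real-world force/depth pairs, then mapping a simulated contact force
# to an indentation depth. The Hookean model is an assumption.
import numpy as np

def fit_stiffness(forces_n, depths_mm):
    """Least-squares fit of force = k * depth from calibration data."""
    k, *_ = np.linalg.lstsq(depths_mm[:, None], forces_n, rcond=None)
    return float(k[0])  # stiffness in N/mm

def force_to_depth(force_n, stiffness):
    """Map a simulated normal force to an indentation depth (mm)."""
    return force_n / stiffness

# Hypothetical calibration measurements from a real sensor:
forces = np.array([0.5, 1.0, 2.0, 4.0])
depths = np.array([0.1, 0.21, 0.39, 0.82])
k = fit_stiffness(forces, depths)
depth = force_to_depth(1.5, k)  # depth would drive the rendered tactile image
```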
Simulation is widely used in robotics for system verification and large-scale data collection. However, simulating sensors, including tactile sensors, has been a long-standing challenge. In this paper, we propose Taxim, a realistic and high-speed simulation model for the vision-based tactile sensor GelSight. A GelSight sensor uses a piece of soft elastomer as the medium of contact and embeds optical structures to capture the deformation of the elastomer, which infers the geometry of and the forces applied at the contact surface. We propose an example-based method for simulating GelSight: we simulate the optical response to deformation with a polynomial lookup table that maps the deformed geometry to the pixel intensities sampled by the embedded camera. To simulate the motion of surface markers caused by the stretch of the elastomer, we apply linear elastic deformation theory and the superposition principle. The simulation model is calibrated with fewer than 100 data points from a real sensor, and the example-based approach lets the model migrate easily to other GelSight sensors or their variants. To the best of our knowledge, our simulation framework is the first to incorporate the simulation of the marker motion field arising from elastomer deformation together with optical simulation, creating a comprehensive and computationally efficient tactile simulation framework. Experiments show that our optical simulation has the lowest pixel-wise intensity error compared with existing work and can run online with the robot. Our code and supplementary materials are open-sourced at https://github.com/cmurobotouch/taxim.
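A minimal sketch of the polynomial-lookup-table idea described above: fit a low-order polynomial mapping local surface gradients to a pixel intensity from a small calibration set, then query it for new deformed geometry. The gradient-based feature set, the polynomial order, and the synthetic calibration data are illustrative assumptions, not Taxim's exact table.

```python
# A minimal sketch of a polynomial lookup table: fit intensity as a
# second-order polynomial of local surface gradients (gx, gy) from a
# few calibration samples, then query for new geometry.
import numpy as np

def poly_features(gx, gy):
    # Second-order polynomial basis over the surface gradients.
    return np.stack([np.ones_like(gx), gx, gy, gx * gy, gx**2, gy**2], axis=-1)

def fit_intensity_table(gx, gy, intensity):
    """Least-squares fit from calibration gradients to observed intensity."""
    A = poly_features(gx, gy)                        # (N, 6)
    coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return coeffs

def query(coeffs, gx, gy):
    """Predict pixel intensity for new deformed geometry."""
    return poly_features(gx, gy) @ coeffs

# Hypothetical calibration set (<100 points, as in the abstract):
rng = np.random.default_rng(0)
gx, gy = rng.normal(size=80), rng.normal(size=80)
intensity = 0.5 + 0.3 * gx - 0.2 * gy + 0.05 * gx * gy
coeffs = fit_intensity_table(gx, gy, intensity)
pred = query(coeffs, np.array([0.1]), np.array([-0.2]))
```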
Deformable object manipulation (DOM) is an emerging research problem in robotics. The ability to manipulate deformable objects endows robots with higher autonomy and promises new applications in the industrial, service, and healthcare sectors. However, compared with rigid-object manipulation, manipulating deformable objects is considerably more complex and remains an open research problem. Addressing the challenges of DOM demands breakthroughs in almost every aspect of robotics: hardware design, sensing, (deformation) modeling, planning, and control. In this paper, we review recent advances and highlight the main challenges that arise when deformation is taken into account in each of these sub-fields. A particular focus of our paper lies in discussing these challenges and proposing future directions of research.
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs carry a large number of parameters, which makes them computationally expensive and therefore difficult to deploy on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution for compressing GNNs, in which a lightweight model (the student) is encouraged to mimic the behavior of a computationally expensive GNN (the teacher). Nevertheless, most existing GNN-based KD methods lack fairness considerations; as a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps toward fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. We then propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher or student model structure and can thus be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
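To make the setting concrete, the sketch below combines a generic distillation objective with a demographic-parity-style fairness penalty over a binary sensitive attribute. RELIANT's actual debiasing mechanism differs; the loss weights, temperature, and fairness term here are illustrative assumptions.

```python
# A generic sketch of knowledge distillation with a fairness penalty;
# not RELIANT's actual objective. The demographic-parity-style gap is
# an illustrative choice of fairness term.
import torch
import torch.nn.functional as F

def kd_fair_loss(student_logits, teacher_logits, labels, sensitive,
                 temperature=2.0, alpha=0.5, beta=0.1):
    # Standard task loss on ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Distillation: match softened teacher and student distributions.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  F.softmax(teacher_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature**2
    # Fairness: penalize prediction-rate gaps across sensitive groups.
    probs = F.softmax(student_logits, dim=-1)[:, 1]
    gap = (probs[sensitive == 0].mean() - probs[sensitive == 1].mean()).abs()
    return ce + alpha * kd + beta * gap

s, t = torch.randn(16, 2), torch.randn(16, 2)
y, a = torch.randint(0, 2, (16,)), torch.randint(0, 2, (16,))
loss = kd_fair_loss(s, t, y, a)
```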
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-traced samples.
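A minimal sketch of the temporal-accumulation idea: blend warped previous-frame features into the current ones with a weight derived from their per-pixel correlation, so that history is trusted only where the two agree. The cosine correlation and sigmoid gating are assumptions for illustration, not the paper's learned network.

```python
# A minimal sketch of correlation-gated temporal accumulation over
# feature maps; the gating function is an illustrative assumption.
import torch
import torch.nn.functional as F

def temporal_accumulate(curr, prev_warped):
    # curr, prev_warped: (B, C, H, W) feature maps; prev_warped is the
    # previous frame's features reprojected with motion vectors.
    corr = F.cosine_similarity(curr, prev_warped, dim=1, eps=1e-6)  # (B, H, W)
    w = torch.sigmoid(corr).unsqueeze(1)   # high correlation -> trust history
    return w * prev_warped + (1.0 - w) * curr

out = temporal_accumulate(torch.randn(1, 8, 32, 32), torch.randn(1, 8, 32, 32))
```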
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer. Moreover, we find that the previous metric, PartPQ, is biased toward PQ. To handle both issues, we make the following contributions. First, we design a meta-architecture that decouples the part features from the thing/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem; we term this model Panoptic-PartFormer. Second, we propose a new metric, Part-Whole Quality (PWQ), to better measure the task from both pixel-region and part-whole perspectives; it also decouples the errors of part segmentation and panoptic segmentation. Third, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme, realized as a part-whole interaction method using masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results at a significantly lower cost: 70% fewer GFLOPs and 50% fewer parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
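A minimal sketch of masked cross attention in the Mask2Former style referenced above: object queries attend only to pixels a predicted mask leaves visible. The head-free single-matrix form and all sizes are illustrative, not Panoptic-PartFormer++'s exact part-whole scheme.

```python
# A minimal sketch of masked cross-attention: attention scores to
# masked-out pixel locations are set to -inf before the softmax.
import torch
import torch.nn.functional as F

def masked_cross_attention(queries, pixel_feats, attn_mask):
    # queries: (B, Q, C); pixel_feats: (B, N, C) flattened pixels;
    # attn_mask: (B, Q, N) boolean, True where attention is *blocked*.
    scores = queries @ pixel_feats.transpose(1, 2) / queries.shape[-1] ** 0.5
    scores = scores.masked_fill(attn_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ pixel_feats   # (B, Q, C)

q = torch.randn(2, 10, 64)
feats = torch.randn(2, 256, 64)       # e.g., a flattened 16x16 feature map
mask = torch.rand(2, 10, 256) > 0.5   # stand-in for predicted part masks
out = masked_cross_attention(q, feats, mask)
```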
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each dataset is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains or classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer-learning performance on novel tasks. The CLIP embedding design enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned ones.
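A minimal sketch of how CLIP-driven conditioning might work: a fixed text embedding for each organ/tumor class is turned into the parameters of a tiny per-class prediction head, so one backbone serves many classes. The linear controller and all sizes here are illustrative assumptions, not the Universal Model's exact architecture.

```python
# A minimal sketch of conditioning a segmentation head on a class's
# CLIP text embedding; the controller design is an assumption.
import torch
import torch.nn as nn

class CLIPConditionedHead(nn.Module):
    def __init__(self, feat_ch=32, clip_dim=512):
        super().__init__()
        # The controller maps a class's CLIP text embedding to the
        # weights and bias of a 1x1 conv producing that class's logits.
        self.controller = nn.Linear(clip_dim, feat_ch + 1)

    def forward(self, feats, text_embed):
        # feats: (B, C, H, W) image features; text_embed: (clip_dim,)
        params = self.controller(text_embed)
        w, b = params[:-1], params[-1]
        return torch.einsum("bchw,c->bhw", feats, w) + b  # per-pixel logits

head = CLIPConditionedHead()
logits = head(torch.randn(1, 32, 64, 64), torch.randn(512))  # one class
```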
This paper presents technologies for predicting a user's next intent with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent: an offline concept knowledge graph in the Life-Service domain that models users' historical behaviors, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer a user's next intent online. Experimental results demonstrate that the proposed system effectively enhances the performance of downstream tasks while retaining explainability.
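As a rough sketch of the described setup, the snippet below encodes a user's behavior sequence with a Transformer and fuses its output logits with prior scores derived from knowledge-graph rules. The fusion-by-addition, the vocabulary of intents, and all sizes are illustrative assumptions, not AlipayKG's deployed architecture.

```python
# A minimal sketch of next-intent prediction with rule fusion; the
# additive combination of model and rule logits is an assumption.
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    def __init__(self, n_intents=100, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_intents, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_intents)

    def forward(self, behavior_seq, rule_scores):
        # behavior_seq: (B, T) past intent ids; rule_scores: (B, n_intents)
        # prior scores derived offline from the concept knowledge graph.
        h = self.encoder(self.embed(behavior_seq))
        return self.head(h[:, -1]) + rule_scores  # fuse model and rule logits

model = NextIntentModel()
logits = model(torch.randint(0, 100, (2, 12)), torch.randn(2, 100))
```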