Research on human-robot interaction (HRI) aims to establish close and friendly communication between humans and robots. In human-centered HRI, an important aspect of implementing successful and effective HRI is establishing natural and intuitive interaction, both verbal and non-verbal. As a pervasive non-verbal communication modality, hand and arm gesture communication is ubiquitous in our daily lives. A large body of work on gesture-based HRI is scattered across various research fields; however, a systematic understanding of gesture-based HRI is still lacking. This paper provides a comprehensive review of gesture-based HRI, focusing on advanced findings in the field. Following the stimulus-organism-response framework, the review covers: (i) the generation of human gestures (stimulus); (ii) robot recognition of human gestures (organism); and (iii) robot reaction to human gestures (response). Furthermore, this review summarizes the state of research for each element of the framework and analyzes the strengths and weaknesses of related work. In the final part, the paper discusses current research challenges in gesture-based HRI and provides future directions.
Continual learning aims to learn multiple incoming new tasks sequentially while maintaining performance on previously learned tasks. However, existing research on continual learning assumes that an object's pose is pre-defined and well-aligned. For practical applications, this work focuses on the pose-agnostic continual learning task, in which an object's pose changes dynamically and unpredictably. The point cloud augmentation adopted by previous methods would grow sharply as the number of tasks increases during continual learning. To address this problem, we inject equivariance as additional prior knowledge into the network. We propose a novel continual learning model that effectively distills the geometric equivariance information of previous tasks. Experiments show that our method overcomes the challenges of the pose-agnostic scenario on several mainstream point cloud datasets. We further conduct ablation studies to validate each component of our method.
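The abstract's key idea is distilling geometric equivariance from previous tasks rather than relying on ever-growing pose augmentation. As a rough, hypothetical illustration (not the paper's implementation), the sketch below distills a student point-cloud encoder against a frozen teacher on randomly rotated inputs, so the distilled knowledge does not hinge on a fixed pose; the toy encoder, z-axis rotation sampling, and MSE distillation loss are all assumptions.

```python
# Minimal sketch: pose-agnostic feature distillation for point clouds (illustrative only).
import math
import torch
import torch.nn as nn

class TinyPointEncoder(nn.Module):
    """Toy PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, feat_dim))

    def forward(self, pts):                         # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values      # (B, feat_dim)

def random_rotation(batch):
    """Sample a random z-axis rotation matrix per batch element (illustrative)."""
    theta = torch.rand(batch) * 2 * math.pi
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.zeros(batch, 3, 3)
    R[:, 0, 0], R[:, 0, 1] = c, -s
    R[:, 1, 0], R[:, 1, 1] = s, c
    R[:, 2, 2] = 1.0
    return R

def pose_agnostic_distill_loss(student, teacher, pts):
    """Match student features on randomly rotated clouds to frozen teacher features."""
    R = random_rotation(pts.size(0))
    rotated = torch.bmm(pts, R.transpose(1, 2))
    with torch.no_grad():
        target = teacher(rotated)
    return nn.functional.mse_loss(student(rotated), target)

student, teacher = TinyPointEncoder(), TinyPointEncoder()
pts = torch.randn(4, 1024, 3)
pose_agnostic_distill_loss(student, teacher, pts).backward()
```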
Self-attention has shown an outstanding ability to capture long-range dependencies and to improve performance on vision tasks such as image classification and image captioning. However, the self-attention module relies heavily on dot-product multiplication and dimension alignment among query-key-value features, which leads to two problems: (1) the dot-product multiplication results in exhaustive and redundant computation; (2) since visual feature maps usually appear as multi-dimensional tensors, reshaping the tensor features to fit the dimension alignment may destroy the internal structure of the tensor feature maps. To address these problems, this paper proposes a self-attention plug-in module, with its variants, namely Synthesizing Tensor Transformations (STT), that operates directly on image tensor features. Instead of computing dot-product multiplication among query-key-value features, the basic STT consists of tensor transformations that learn synthetic attention weights from the visual information. The effectiveness of the STT series is validated on image classification and image captioning. Experiments show that the proposed STT achieves competitive performance while maintaining robustness compared with self-attention-based models on vision tasks.
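To make the contrast with dot-product attention concrete, here is a minimal hedged sketch in which attention scores are synthesized directly from each token's own features by a small MLP, with no query-key multiplication or dimension alignment; the exact tensor transformations used by STT may differ.

```python
# Illustrative "synthesized" attention: scores come from the input itself, not Q·K^T.
import torch
import torch.nn as nn

class SyntheticAttention(nn.Module):
    def __init__(self, dim, num_tokens):
        super().__init__()
        # Attention scores are synthesized from each token's features,
        # so no query-key multiplication or dimension alignment is needed.
        self.score_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                       nn.Linear(dim, num_tokens))
        self.value = nn.Linear(dim, dim)

    def forward(self, x):                              # x: (B, N, dim)
        attn = self.score_mlp(x).softmax(dim=-1)       # (B, N, N) synthesized scores
        return attn @ self.value(x)                    # (B, N, dim)

tokens = torch.randn(2, 49, 64)                        # e.g. a 7x7 feature map flattened
out = SyntheticAttention(dim=64, num_tokens=49)(tokens)
print(out.shape)                                       # torch.Size([2, 49, 64])
```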
With the development of the self-attention mechanism, Transformer models have demonstrated excellent performance in the computer vision domain. However, the large-scale computation brought by the full attention mechanism places a heavy burden on memory consumption. Consequently, the memory limitation reduces the possibility of improving Transformer models. To address this problem, we propose a novel memory-efficient attention mechanism named Couplformer, which decouples the attention map into two sub-matrices and generates the alignment scores from spatial information. A series of image classification tasks at different scales is used to evaluate the effectiveness of the model. Experimental results show that on the ImageNet-1K classification task, Couplformer can significantly reduce memory consumption by 28% compared with the regular Transformer while meeting sufficient accuracy requirements, and it achieves a 0.92% improvement when occupying the same memory footprint. As a result, Couplformer can serve as an efficient backbone for vision tasks and provides researchers with a novel perspective on the attention mechanism.
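A minimal sketch of the decoupling idea: instead of materializing a full (HW x HW) attention map over an H x W feature map, attend along rows and then columns so only two much smaller score matrices are ever formed. This axial-style simplification is an assumption for illustration, not the exact Couplformer formulation.

```python
# Illustrative decoupled spatial attention: two small maps instead of one (HW x HW) map.
import torch
import torch.nn as nn

class DecoupledSpatialAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5

    def _attend(self, x):                              # attention along the second-to-last axis
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return attn @ v

    def forward(self, x):                              # x: (B, H, W, dim)
        x = self._attend(x)                            # row sub-matrices: (B, H, W, W) scores
        x = self._attend(x.transpose(1, 2)).transpose(1, 2)  # column sub-matrices: (B, W, H, H)
        return x

feat = torch.randn(2, 14, 14, 64)                      # a 14x14 feature map
print(DecoupledSpatialAttention(64)(feat).shape)       # torch.Size([2, 14, 14, 64])
```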
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
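A hedged toy sketch of the idea stated above: image tokens and point-cloud tokens are concatenated, both are tagged with position encodings derived from 3D coordinates, and learnable object queries in a Transformer decoder regress boxes directly. The module sizes, 7-dimensional box parameterization, and coordinate handling are illustrative assumptions rather than the released CMT design.

```python
# Toy cross-modal detector: implicit alignment via shared 3D position encodings.
import torch
import torch.nn as nn

class ToyCrossModalDetector(nn.Module):
    def __init__(self, dim=128, num_queries=50):
        super().__init__()
        self.coord_pe = nn.Linear(3, dim)              # 3D coordinate -> positional feature
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(dim, 7)              # (x, y, z, w, l, h, yaw)

    def forward(self, img_tokens, img_coords, pts_tokens, pts_coords):
        # Both modalities receive encodings of their 3D positions before fusion.
        mem = torch.cat([img_tokens + self.coord_pe(img_coords),
                         pts_tokens + self.coord_pe(pts_coords)], dim=1)
        q = self.queries.unsqueeze(0).expand(mem.size(0), -1, -1)
        return self.box_head(self.decoder(q, mem))     # (B, num_queries, 7)

B, Ni, Np, D = 2, 100, 200, 128
boxes = ToyCrossModalDetector()(torch.randn(B, Ni, D), torch.randn(B, Ni, 3),
                                torch.randn(B, Np, D), torch.randn(B, Np, 3))
print(boxes.shape)                                     # torch.Size([2, 50, 7])
```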
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions for FKGC.
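To make the task shape concrete, the toy sketch below scores candidate tails for a rare relation from only K support (head, tail) pairs by building a prototype offset in embedding space. Real FKGC models add graph encoders and attention; this is only an illustration of the few-shot completion setup.

```python
# Toy prototype-based few-shot KG completion scorer (illustrative, not a real FKGC model).
import torch
import torch.nn as nn

class ProtoFKGC(nn.Module):
    def __init__(self, num_entities, dim=64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)

    def forward(self, support_pairs, query_head, candidate_tails):
        # Prototype of the relation: mean (tail - head) offset over the support set.
        h, t = self.ent(support_pairs[:, 0]), self.ent(support_pairs[:, 1])
        proto = (t - h).mean(dim=0)                                   # (dim,)
        # Score candidates: how well does (candidate - query_head) match the prototype?
        offsets = self.ent(candidate_tails) - self.ent(query_head)
        return torch.cosine_similarity(offsets, proto.unsqueeze(0), dim=-1)

model = ProtoFKGC(num_entities=1000)
support = torch.randint(0, 1000, (3, 2))               # 3-shot support pairs
scores = model(support, torch.tensor([7]), torch.arange(1000))
print(scores.topk(5).indices)                          # top-5 predicted tail entities
```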
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
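As an illustration of the first "reference" step (a hedged sketch, not the RefT code): support masks pool support features into a dynamic class center, which then re-weights the query feature map channel-wise.

```python
# Illustrative mask-based dynamic weighting of query features by a support class center.
import torch
import torch.nn as nn

class MaskDynamicWeighting(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, support_feat, support_mask, query_feat):
        # support_feat: (B, C, H, W), support_mask: (B, 1, H, W), query_feat: (B, C, H, W)
        masked = support_feat * support_mask
        center = masked.sum(dim=(2, 3)) / support_mask.sum(dim=(2, 3)).clamp(min=1e-6)  # (B, C)
        weight = self.proj(center)[:, :, None, None]   # channel-wise re-weighting
        return query_feat * weight

sf = torch.randn(2, 256, 32, 32)
sm = torch.rand(2, 1, 32, 32).round()
qf = torch.randn(2, 256, 32, 32)
print(MaskDynamicWeighting()(sf, sm, qf).shape)        # torch.Size([2, 256, 32, 32])
```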
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
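For intuition about the problem setup (an assumption-laden sketch, not RELIANT's actual debiasing mechanism), the loss below combines a standard distillation term with a demographic-parity-style penalty on the student's group-dependent predictions.

```python
# Illustrative fair knowledge-distillation loss: KD term + a simple group-fairness penalty.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sensitive, temp=2.0, lam=1.0):
    # Standard KD term: student matches the softened teacher distribution.
    kd = F.kl_div(F.log_softmax(student_logits / temp, dim=-1),
                  F.softmax(teacher_logits / temp, dim=-1),
                  reduction="batchmean") * temp ** 2
    # Illustrative fairness term: demographic-parity-style gap between sensitive groups.
    probs = F.softmax(student_logits, dim=-1)[:, 1]    # positive-class probability
    gap = (probs[sensitive == 1].mean() - probs[sensitive == 0].mean()).abs()
    return kd + lam * gap

s = torch.randn(8, 2, requires_grad=True)              # student logits for 8 nodes
t = torch.randn(8, 2)                                  # teacher logits
group = torch.tensor([0, 1, 0, 1, 1, 0, 1, 0])         # sensitive attribute per node
fair_kd_loss(s, t, group).backward()
```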
This paper focuses on designing efficient models with few parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models while trading off model accuracy and efficiency well.
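The following hedged sketch shows one way an inverted-residual-style block can combine a depthwise convolution (short-distance dependency) with multi-head self-attention (long-distance interaction), in the spirit of the iRMB described above; the exact ordering, expansion ratio, and attention variant in EMO may differ.

```python
# Illustrative inverted-residual block mixing depthwise conv (local) and attention (global).
import torch
import torch.nn as nn

class ToyiRMB(nn.Module):
    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depthwise
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):                              # x: (B, dim, H, W)
        b, _, h, w = x.shape
        y = self.act(self.expand(x))
        y = self.act(self.dw(y))                       # local (short-distance) mixing
        tokens = y.flatten(2).transpose(1, 2)          # (B, H*W, hidden)
        tokens, _ = self.attn(tokens, tokens, tokens)  # global (long-distance) mixing
        y = tokens.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(y)                     # inverted residual connection

print(ToyiRMB()(torch.randn(2, 64, 14, 14)).shape)     # torch.Size([2, 64, 14, 14])
```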
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
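As an illustration of how a multi-relational benchmark such as MGTAB can feed a graph model, the sketch below applies one message-passing weight per relation type (R-GCN style) over seven relations and 20-dimensional user property features; the random features and adjacency matrices are placeholders, not MGTAB data.

```python
# Toy multi-relational message passing for account detection (placeholder data only).
import torch
import torch.nn as nn

class MultiRelationalLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False)
                                          for _ in range(num_relations)])
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x, adjs):                  # adjs: list of (N, N) row-normalized matrices
        out = self.self_loop(x)
        for adj, lin in zip(adjs, self.rel_weights):
            out = out + adj @ lin(x)             # aggregate neighbors separately per relation
        return torch.relu(out)

num_users, num_relations = 100, 7                # MGTAB defines 7 relation types
x = torch.randn(num_users, 20)                   # e.g. 20 user property features
adjs = [torch.rand(num_users, num_users).round() for _ in range(num_relations)]
adjs = [a / a.sum(dim=1, keepdim=True).clamp(min=1) for a in adjs]
logits = nn.Linear(32, 2)(MultiRelationalLayer(20, 32, num_relations)(x, adjs))
print(logits.shape)                              # torch.Size([100, 2]) -> bot / human scores
```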