Sign language recognition (SLR) aims to overcome the communication barrier for people who are deaf or hard of hearing. Most existing approaches can be divided into two lines, i.e., skeleton-based and RGB-based methods, but both have limitations. RGB-based approaches usually overlook the fine-grained hand structure, while skeleton-based methods do not take facial expression into account. To address both limitations, we propose a new framework named the Spatial-temporal Part-aware network (StepNet), based on RGB parts. As the name implies, StepNet consists of two modules: Part-level Spatial Modeling and Part-level Temporal Modeling. In particular, without using any keypoint-level annotations, Part-level Spatial Modeling implicitly captures appearance-based properties, such as hands and faces, in the feature space. On the other hand, Part-level Temporal Modeling captures pertinent properties over time by implicitly mining the long-short term context. Extensive experiments show that, thanks to its spatial-temporal modules, StepNet achieves competitive Top-1 per-instance accuracy on three widely used SLR benchmarks, i.e., 56.89% on WLASL, 77.2% on NMFs-CSL, and 77.1% on BOBSL. Moreover, the proposed method is compatible with optical flow input and can yield higher performance when the two are fused. We hope that this work can serve as a preliminary step toward assisting people who are deaf.
We present the Group Propagation Vision Transformer (GPViT): a novel nonhierarchical (i.e., non-pyramidal) transformer model designed for general visual recognition with high-resolution features. High-resolution features (or tokens) are a natural fit for tasks that involve perceiving fine-grained details, such as detection and segmentation, but exchanging global information between these features is expensive in memory and computation because of the way self-attention scales. We provide a highly efficient alternative, the Group Propagation Block (GP Block), to exchange global information. In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation, where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned to the image features through a transformer decoder. We evaluate GPViT on a variety of visual recognition tasks including image classification, semantic segmentation, object detection, and instance segmentation. Our method achieves significant performance gains over previous works across all tasks, especially on tasks that require high-resolution outputs; for example, our GPViT-L3 outperforms Swin Transformer-B by 2.0 mIoU on ADE20K semantic segmentation with only half as many parameters. Code and pre-trained models are available at https://github.com/ChenhongyiYang/GPViT .
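The group-propagate-ungroup cycle of the GP Block can be illustrated with plain cross-attention. The sketch below is a minimal NumPy stand-in, not the paper's implementation: projections, multiple heads, normalization, and the MLP mixing inside Group Propagation are omitted, and all shapes and names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv):
    """Single-head, unprojected cross-attention: rows of q attend to kv."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    attn = softmax(q @ kv.T * scale)   # (nq, nkv) attention weights
    return attn @ kv                   # (nq, d) aggregated values

def gp_block(image_tokens, group_tokens):
    """Sketch of a GP Block: group -> propagate -> ungroup.

    image_tokens: (N, d) high-resolution features
    group_tokens: (M, d) learnable group tokens, M << N
    """
    # 1) Grouping: the few group tokens aggregate global information
    #    from the many image tokens.
    grouped = cross_attention(group_tokens, image_tokens)        # (M, d)
    # 2) Group Propagation: exchange information among the M grouped
    #    features (a stand-in for the mixing used in the paper).
    grouped = grouped + cross_attention(grouped, grouped)
    # 3) Ungrouping: return global information to the image tokens via
    #    decoder-style cross-attention, with a residual connection.
    return image_tokens + cross_attention(image_tokens, grouped)

rng = np.random.default_rng(0)
N, M, d = 64, 8, 16                    # 64 image tokens, 8 group tokens
out = gp_block(rng.normal(size=(N, d)), rng.normal(size=(M, d)))
```

The point of the detour through M group tokens is cost: every attention here is O(N·M) rather than the O(N²) of full self-attention over high-resolution features.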
This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks. The model is trained in a contrastive manner with weak supervision signals from our curated large-scale text pair dataset (called CCPairs). E5 can be readily used as a general-purpose embedding model for any task requiring a single-vector representation of texts, such as retrieval, clustering, and classification, achieving strong performance in both zero-shot and fine-tuned settings. We conduct extensive evaluations on 56 datasets from the BEIR and MTEB benchmarks. In the zero-shot setting, E5 is the first model to outperform the strong BM25 baseline on the BEIR retrieval benchmark without using any labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark, beating existing embedding models with 40x more parameters.
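Contrastive training on text pairs is typically an InfoNCE-style loss where each query's paired passage is the positive and the other passages in the batch serve as negatives. The sketch below shows that loss over pre-computed embeddings; the temperature value, batch size, and random vectors are illustrative assumptions, not E5's actual training configuration.

```python
import numpy as np

def info_nce_loss(q, p, temperature=0.05):
    """In-batch-negative contrastive loss over L2-normalized embeddings.

    q: (B, d) query embeddings; p: (B, d) paired passage embeddings.
    Row i of p is the positive for row i of q; all other rows of p
    act as negatives for q[i].
    """
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    logits = q @ p.T / temperature                # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # correct pairs on the diagonal

rng = np.random.default_rng(1)
q = rng.normal(size=(8, 32))
loss_random = info_nce_loss(q, rng.normal(size=(8, 32)))  # unrelated pairs
loss_aligned = info_nce_loss(q, q)                        # perfectly matched pairs
```

Minimizing this loss pulls each text toward its weakly supervised pair and pushes it away from everything else in the batch, which is why large batches matter for this style of training.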
As an effective method to deliver external materials into biological cells, microinjection has been widely applied in the biomedical field. However, understanding of cells' mechanical properties is still inadequate, which greatly limits the efficiency and success rate of injection. Thus, a new rate-dependent mechanical model based on membrane theory is proposed for the first time. In this model, an analytical equilibrium equation between the injection force and cell deformation is established by considering the speed effect of microinjection. Different from the traditional membrane-theory-based model, the elastic coefficient of the constitutive material in the proposed model is modified as a function of the injection velocity and acceleration, effectively simulating the influence of speed on the mechanical responses and providing a more generalized and practical model. Using this model, other mechanical responses at different speeds can also be accurately predicted, including the distribution of membrane tension and stress and the deformed shape. To verify the validity of the model, numerical simulations and experiments are carried out. The results show that the proposed model matches the real mechanical responses well at different injection speeds.
6D object pose estimation is one of the fundamental problems in computer vision and robotics research. Although much effort has recently been devoted to generalizing pose estimation to novel object instances within the same category (i.e., category-level 6D pose estimation), it is still restricted to constrained environments given the limited annotated data. In this paper, we collect Wild6D, a new unlabeled RGBD object video dataset with diverse instances and backgrounds. We utilize this data to generalize category-level 6D object pose estimation in the wild via semi-supervised learning. We propose a new model, called the Rendering for Pose estimation network (RePoNet), that is jointly trained using free ground truths from synthetic data and a silhouette-matching objective on real-world data. Without using any 3D annotations on real data, our method outperforms state-of-the-art methods on previous datasets and on our Wild6D test set (with manual annotations for evaluation) by a large margin. Project page with the Wild6D data: https://oasisyang.github.io/semi-pose.
The massive volume of electronic health records (EHRs) holds great potential for improving healthcare. Clinical codes (structured data) and clinical narratives (unstructured data) are two important text modalities in EHRs. Clinical codes convey diagnosis and treatment information during a hospital stay, while clinical notes carry the narratives of clinical providers about patient encounters. They do not exist in isolation and can complement each other in most real-life clinical scenarios. However, most existing EHR-oriented studies either focus on a particular modality or integrate data from different modalities in a straightforward manner, ignoring the intrinsic interactions between them. To address these issues, we propose a medical multimodal pre-trained language model, named MedM-PLM, to learn enhanced EHR representations over both structured and unstructured data. In MedM-PLM, two Transformer-based neural network components are first adopted to learn representative features from each modality. A cross-modal module is then introduced to model their interactions. We pre-train MedM-PLM on the MIMIC-III dataset and verify its effectiveness on three downstream clinical tasks, i.e., medication recommendation, 30-day readmission prediction, and ICD coding. Extensive experiments demonstrate the power of MedM-PLM compared with state-of-the-art methods. Further analyses and visualizations show the robustness of our model, which could potentially provide more comprehensive interpretations for clinical decision-making.
We propose a novel scene representation that encodes reaching distance — the distance between any position in the scene and a goal along a feasible trajectory. We demonstrate that this environment field representation can directly guide the dynamic behaviors of agents in 2D mazes or 3D indoor scenes. Our environment field is a continuous representation, learned via a neural implicit function from discretely sampled training data. We showcase its applications in agent navigation in 2D mazes and human trajectory prediction in 3D indoor environments. To produce physically plausible and natural trajectories for humans, we additionally learn a generative model that predicts regions where humans commonly appear, and enforce the environment field to be defined within such regions. Extensive experiments show that the proposed method can generate feasible and plausible trajectories efficiently and accurately.
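The paper learns the reaching-distance field with a neural implicit function; a discrete stand-in makes the idea concrete. The sketch below computes the field on a toy 2D maze with BFS and then navigates by greedily descending it — the maze layout and all names are illustrative, not from the paper.

```python
from collections import deque

# Toy maze: 0 = free cell, 1 = wall. The goal is the top-left free cell.
MAZE = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
GOAL = (0, 0)
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def reaching_distance_field(maze, goal):
    """Discrete stand-in for the learned field: shortest feasible travel
    distance from every reachable free cell to the goal, via BFS."""
    h, w = len(maze), len(maze[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in STEPS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and maze[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def navigate(dist, start):
    """Greedily descend the field; each step strictly decreases the
    reaching distance, so the walk terminates at the goal."""
    path, cur = [start], start
    while dist[cur] > 0:
        r, c = cur
        cur = min(((r + dr, c + dc) for dr, dc in STEPS
                   if (r + dr, c + dc) in dist), key=dist.get)
        path.append(cur)
    return path

field = reaching_distance_field(MAZE, GOAL)
path = navigate(field, start=(3, 3))
```

Because the field encodes distance along feasible trajectories (not straight-line distance), descending it steers the agent around walls rather than into them — the same property the continuous learned field provides for agents and humans.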
Developing robust vision-guided controllers for quadrupedal robots in complex environments with various obstacles, dynamic surroundings, and uneven terrain is very challenging. While reinforcement learning (RL) provides a promising paradigm for agile locomotion skills with vision inputs in simulation, deploying the RL policy in the real world remains very challenging. Our key insight is that, beyond the domain gap in visual appearance between simulation and the real world, the latency of the control pipeline is also a major cause of difficulty. In this paper, we propose to address this issue while training the RL agent. Specifically, we simulate the latency of real hardware by using past observations, sampled with randomized periods, for both proprioception and vision. We train the RL policy for end-to-end control in a physics simulator without any predefined controller or reference motion, and deploy it directly on a real A1 quadruped robot running in the wild. We evaluate our method in different outdoor environments with complex terrains and obstacles. We demonstrate that the robot can maneuver at high speed, avoid obstacles, and show significant improvement over the baselines. Our project page with videos is at https://mehooz.github.io/mmdr-wild/.
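The delay-randomization trick — feeding the policy a randomly delayed past observation instead of the latest one — fits in a few lines. The sketch below keeps a single observation stream and a small delay range for clarity; the paper samples delays with randomized periods for proprioception and vision separately, and the buffer size and delay bound here are illustrative assumptions.

```python
import random
from collections import deque

class DelayRandomizedObs:
    """Sketch of delay randomization: during training, the policy receives
    an observation sampled from the recent past rather than the current
    one, mimicking the sensing/control latency of real hardware."""

    def __init__(self, max_delay=3, seed=0):
        # Keep the last max_delay + 1 observations, most recent first.
        self.buffer = deque(maxlen=max_delay + 1)
        self.rng = random.Random(seed)

    def observe(self, obs):
        self.buffer.appendleft(obs)
        # Sample a random delay (in control steps), capped by how much
        # history is available so far.
        delay = self.rng.randint(0, len(self.buffer) - 1)
        return self.buffer[delay], delay

env = DelayRandomizedObs(max_delay=3)
# Feed step indices as stand-in observations; what the policy "sees" at
# step i is the observation from some step in [i - 3, i].
seen = [env.observe(t)[0] for t in range(10)]
```

Training against this jittered view forces the policy to be robust to whatever latency the real pipeline exhibits, rather than overfitting to the zero-latency observations of the simulator.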
While significant progress has been made in understanding hand-object interactions in computer vision, it is still very challenging for robots to perform complex dexterous manipulation. In this paper, we propose a new platform and pipeline, DexMV (Dexterous Manipulation from Videos), for imitation learning. We design a platform with (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand performing the same tasks. In our novel pipeline, we extract 3D hand and object poses from the videos, and propose a novel demonstration translation method to convert human motion into robot demonstrations. We then apply multiple imitation learning algorithms with the demonstrations. We show that the demonstrations can indeed improve robot learning by a large margin and solve complex tasks that reinforcement learning alone cannot solve. Project page with videos: https://yzqin.github.io/dexmv
Indirect methods for visual SLAM are popular due to their robustness to environmental variations. ORB-SLAM2 is a benchmark method in this domain; however, it spends time computing descriptors that are never reused unless a frame is selected as a keyframe. We present FastORB-SLAM, which is lightweight and efficient because it tracks keypoints between adjacent frames without computing descriptors. To this end, a two-stage coarse-to-fine, descriptor-independent keypoint matching method based on sparse optical flow is proposed. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences via pyramid-based sparse optical flow tracking. In the second stage, we leverage the constraints of motion smoothness and epipolar geometry to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the TUM and ICL-NUIM RGB-D datasets and compare its accuracy and efficiency against nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is roughly twice as fast as ORB-SLAM2.
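The first, coarse stage of the matching pipeline — predicting keypoint positions with a motion model so the optical-flow tracker only searches a small neighborhood — can be demonstrated with synthetic data. The constant-velocity assumption below is a plausible reading of the abstract's "simple but effective motion model"; the keypoints, displacement, and noise level are all made up for illustration.

```python
import numpy as np

# Synthetic setup: keypoints in frame t-1 move by a shared inter-frame
# displacement plus small per-point noise to give frame t positions.
rng = np.random.default_rng(2)
prev_pts = rng.uniform(0, 100, size=(50, 2))       # keypoints in frame t-1
motion = np.array([2.0, -1.0])                     # inter-frame displacement
next_pts = prev_pts + motion + rng.normal(scale=0.3, size=(50, 2))

# Stage one: a constant-velocity motion model predicts where each keypoint
# lands in frame t, using the displacement observed between earlier frames.
pred_pts = prev_pts + motion

# Residual error the flow tracker must close, with and without prediction.
err_pred = np.linalg.norm(pred_pts - next_pts, axis=1).mean()
err_static = np.linalg.norm(prev_pts - next_pts, axis=1).mean()
```

Because the prediction leaves only a sub-pixel-scale residual here (versus the full inter-frame displacement without it), the pyramid-based sparse optical flow in the second stage converges with a much smaller search window — which is where the speedup over descriptor matching comes from.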