We present a neural network-based system for long-term, multi-action human motion synthesis. The system, dubbed Neural Marionette, can produce high-quality and meaningful motions with smooth transitions from simple user inputs, including a sequence of action tags with expected action durations and, optionally, a user-specified target trajectory. At the core of our system is a novel Transformer-based motion generation model, MARIONET, which can generate diverse motions given an action tag. Unlike existing motion generation models, MARIONET exploits contextual information from the past motion clip and the future action tag, and is dedicated to generating actions that blend smoothly with both past and future motions. Specifically, MARIONET first encodes the target action tag and the contextual information into an action-level latent code. A time-unrolling module then unfolds this code into frame-level control signals, which can be combined with other frame-level control signals such as the target trajectory. Motion frames are then generated in an auto-regressive manner. By applying MARIONET sequentially, the full system, Neural Marionette, can robustly produce long-term, multi-action motions with the help of two simple schemes, namely "Shadow Start" and "Action Revision". Together with the novel system, we also present a new dataset dedicated to the multi-action motion synthesis task, containing action tags and their contextual information. Extensive experiments are conducted to study the action accuracy, naturalness, and transition smoothness of the motions produced by our system.
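The pipeline described above (action-level latent code, time unrolling into per-frame control signals, autoregressive frame decoding) can be illustrated with a minimal sketch. The module below is a hypothetical simplification: recurrent layers stand in for the paper's Transformer blocks, and all names, dimensions, and interfaces are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of: encode (action tag + past motion) -> action-level code
# -> unroll into frame-level control signals -> autoregressive decoding.
# GRUs are simple stand-ins for the Transformer-based model; not the real MARIONET.
import torch
import torch.nn as nn

class MotionGeneratorSketch(nn.Module):
    def __init__(self, num_actions=20, dim=256, frame_dim=75):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, dim)
        self.context_enc = nn.GRU(frame_dim, dim, batch_first=True)   # encodes past motion clip
        self.to_action_code = nn.Linear(2 * dim, dim)                 # action-level latent code
        self.unroll = nn.GRU(dim, dim, batch_first=True)              # time-unrolling module
        self.decoder = nn.GRUCell(frame_dim + dim, dim)               # autoregressive frame decoder
        self.to_frame = nn.Linear(dim, frame_dim)

    def forward(self, past_motion, action_tag, num_frames):
        # past_motion: (B, T_past, frame_dim), action_tag: (B,) long
        _, h_ctx = self.context_enc(past_motion)
        code = self.to_action_code(
            torch.cat([h_ctx[-1], self.action_emb(action_tag)], dim=-1))
        # unroll the action-level code into per-frame control signals
        ctrl, _ = self.unroll(code.unsqueeze(1).repeat(1, num_frames, 1))
        frames, h = [], h_ctx[-1]
        prev = past_motion[:, -1]                                     # last observed frame
        for t in range(num_frames):                                   # autoregressive generation
            h = self.decoder(torch.cat([prev, ctrl[:, t]], dim=-1), h)
            prev = self.to_frame(h)
            frames.append(prev)
        return torch.stack(frames, dim=1)                             # (B, num_frames, frame_dim)
```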
Although underexplored in prior works, line clouds may encode more compact structural information of buildings than point clouds extracted from multi-view images. In this work, we propose the first network that processes line clouds for building wireframe abstraction. The network takes as input a line cloud, i.e., an unstructured and unordered set of 3D line segments extracted from multi-view images, and outputs the 3D wireframe of the underlying building, which consists of a sparse set of 3D junctions connected by line segments. We observe that a line patch, i.e., a group of neighboring line segments, encodes sufficient contour information to predict the existence, and even the 3D position, of a potential junction, as well as the likelihood of connectivity between two query junctions. We therefore introduce a two-layer line-patch transformer to extract junctions and connectivities from sampled line patches to form a 3D building wireframe model. We also introduce a synthetic dataset of multi-view images with ground-truth 3D wireframes. We extensively demonstrate that our reconstructed 3D wireframe models significantly improve over multiple baseline building reconstruction methods.
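To make the line-patch idea concrete, here is a minimal, hypothetical encoder in the spirit of the description: a patch of neighboring 3D segments (each represented by its two endpoints) is encoded by a small Transformer, which predicts whether the patch contains a junction and a candidate 3D position. The connectivity head and the exact two-layer design of the paper are omitted; all names and shapes are assumptions.

```python
# Illustrative line-patch encoder sketch; not the authors' network.
import torch
import torch.nn as nn

class LinePatchEncoderSketch(nn.Module):
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(6, dim)            # each line segment = two 3D endpoints
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.junction_logit = nn.Linear(dim, 1)   # does this patch hold a junction?
        self.junction_xyz = nn.Linear(dim, 3)     # predicted junction position

    def forward(self, patches):                   # patches: (B, K, 6), K segments per patch
        tokens = self.encoder(self.embed(patches))
        pooled = tokens.mean(dim=1)               # patch-level feature
        return self.junction_logit(pooled), self.junction_xyz(pooled)
```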
Reconstructing 3D geometry from \emph{unoriented} point clouds can benefit many downstream tasks. Recent methods mostly adopt a neural shape representation, using a neural network to represent a signed distance field and fitting it to the point cloud under unsigned supervision. However, we observe that using unsigned supervision may cause severe ambiguities and often leads to \emph{unexpected} failures, such as generating undesired surfaces in free space when reconstructing complex structures and struggling to recover accurate surfaces. To reconstruct a better signed distance field, we propose semi-signed neural fitting (SSN-Fitting), which consists of a semi-signed supervision and a loss-based region sampling strategy. Our key insight is that signed supervision is more informative, and the regions outside the object can easily be determined. Meanwhile, a novel importance sampling is proposed to accelerate the optimization and better reconstruct details. Specifically, we voxelize and partition the object space into \emph{sign-known} and \emph{sign-uncertain} regions, in which different supervisions are applied. Moreover, we adaptively adjust the per-voxel sampling rate according to the tracked reconstruction loss, so that the network can focus more on the complex, under-fitted regions. We conduct extensive experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings on multiple datasets, including clean, density-varying, and noisy data.
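The loss-based region sampling can be illustrated with a small toy class (not the authors' code): each voxel keeps a running reconstruction loss, and query points are drawn from voxels with probability proportional to that loss, so under-fitted regions are visited more often. The momentum update and the uniform initialization are assumptions for the sketch.

```python
# Toy illustration of loss-based per-voxel importance sampling.
import numpy as np

class LossBasedVoxelSampler:
    def __init__(self, num_voxels, momentum=0.9):
        self.loss = np.ones(num_voxels)       # tracked per-voxel reconstruction loss
        self.momentum = momentum

    def update(self, voxel_ids, losses):
        # exponential moving average of the observed per-voxel losses
        self.loss[voxel_ids] = (self.momentum * self.loss[voxel_ids]
                                + (1 - self.momentum) * losses)

    def sample(self, num_points):
        probs = self.loss / self.loss.sum()   # higher loss -> higher sampling rate
        return np.random.choice(len(self.loss), size=num_points, p=probs)
```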
The evaluation of 3D face reconstruction results typically relies on a rigid shape alignment between the estimated 3D model and the ground-truth scan. We observe that aligning two shapes with different reference points can largely affect the evaluation results. This brings difficulties for precisely diagnosing and improving 3D face reconstruction methods. In this paper, we propose a novel evaluation approach with a new benchmark, REALY, consisting of 100 globally aligned face scans with accurate facial keypoints, high-quality region masks, and topology-consistent meshes. Our approach performs region-wise shape alignment and leads to more accurate, bidirectional correspondences when computing the shape errors. The fine-grained, region-wise evaluation results give us a detailed understanding of how state-of-the-art 3D face reconstruction methods perform. For example, our experiments on single-image-based reconstruction methods show that DECA performs best on the nose region, while GANFit performs better on the cheek region. Furthermore, a new and high-quality 3DMM basis, HIFI3D++, is derived by aligning and reconstructing several 3D face datasets with the same procedure we used to construct REALY. We will release REALY, HIFI3D++, and our new evaluation pipeline at https://realy3dface.com.
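The region-wise, bidirectional evaluation can be sketched as follows: a predicted facial region is rigidly aligned to the corresponding ground-truth region (a standard Procrustes/Kabsch fit), after which nearest-neighbor distances are averaged in both directions. This is a simplified stand-in for the benchmark's pipeline; the keypoint-guided correspondences and region masks of the actual method are not modeled here.

```python
# Simplified region-wise error: rigid alignment + bidirectional nearest-neighbor distance.
import numpy as np
from scipy.spatial import cKDTree

def rigid_align(src, dst):
    """Least-squares rotation + translation mapping src onto dst (both N x 3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

def region_error(pred_region, gt_region):
    aligned = rigid_align(pred_region, gt_region)
    d_p2g, _ = cKDTree(gt_region).query(aligned)   # prediction -> ground truth
    d_g2p, _ = cKDTree(aligned).query(gt_region)   # ground truth -> prediction
    return 0.5 * (d_p2g.mean() + d_g2p.mean())     # bidirectional mean distance
```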
Recent segmentation methods, such as OCR and CPNet, which utilize "class-level" information in addition to pixel features, have achieved notable success in boosting the accuracy of existing network modules. However, the extracted class-level information is simply concatenated with the pixel features, without being explicitly exploited for better pixel representation learning. Moreover, these approaches learn soft class centers based on coarse mask predictions, which easily accumulates errors. In this paper, aiming to use class-level information more effectively, we propose a universal Class-Aware Regularization (CAR) approach to optimize the intra-class variance and inter-class distance during feature learning, motivated by the fact that humans can recognize an object by itself no matter which other objects it appears with. Three novel loss functions are proposed: the first encourages a more compact representation within each class, the second directly maximizes the distance between different class centers, and the third further pushes apart class centers and pixels of other classes. Furthermore, the class centers in our method are generated directly from the ground truth, rather than from error-prone coarse predictions. Our method can be easily applied to most existing segmentation models, including OCR and CPNet, and can largely improve their accuracy with no additional inference overhead. Extensive experiments and ablation studies conducted on multiple benchmark datasets demonstrate that the proposed CAR can boost the accuracy of all baseline models by up to 2.23% mIOU, with superior generalization ability. The full code is available at https://github.com/edwardyehuang/car.
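The three regularization terms can be written down as a hedged sketch, with class centers computed from ground-truth labels rather than predictions. The margin, distance measures, and reductions below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of CAR-style regularizers: intra-class compactness, inter-center
# separation, and pixel-to-other-center separation, with GT-derived centers.
import torch
import torch.nn.functional as F

def car_losses_sketch(feats, labels, num_classes, margin=0.5):
    """feats: (N, C) pixel features; labels: (N,) ground-truth class ids."""
    onehot = F.one_hot(labels, num_classes).float()              # (N, K)
    counts = onehot.sum(0).clamp(min=1.0)
    centers = (onehot.t() @ feats) / counts.unsqueeze(1)         # (K, C) class centers from GT
    present = onehot.sum(0) > 0                                  # classes in this batch

    # 1) intra-class compactness: pull each pixel toward its own class center
    intra = ((feats - centers[labels]) ** 2).sum(1).mean()

    # 2) inter-class separation: push distinct class centers apart
    c = centers[present]
    dists = torch.cdist(c, c)
    off_diag = dists[~torch.eye(len(c), dtype=torch.bool, device=c.device)]
    inter_center = (F.relu(margin - off_diag).mean()
                    if off_diag.numel() > 0 else feats.new_zeros(()))

    # 3) pixel-to-center separation: push pixels away from other classes' centers
    pix2center = torch.cdist(feats, centers)                     # (N, K)
    other = F.relu(margin - pix2center).masked_fill(onehot.bool(), 0.0)
    inter_pixel = other[:, present].mean()

    return intra, inter_center, inter_pixel
```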
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations, which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks, covering a series of methods and applications. Specifically, we first introduce the challenges of FKGC and the commonly used KGs and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
In this paper, we investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further advance the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet can significantly outperform traditional algorithms in terms of the symbol error rate performance.
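The "unfolding" construction can be illustrated with a generic sketch: each iteration of an AMP/ISTA-style estimator becomes one network layer with its own trainable step size and soft threshold, so the unrolled network is trained end to end. The snippet below is a simplified, real-valued stand-in, not the authors' DL-mAMPnet (which additionally models the correlated sparsity pattern and includes a refinement module).

```python
# Generic deep-unfolded sparse estimator: learnable per-layer step and threshold.
import torch
import torch.nn as nn

class UnrolledAMPSketch(nn.Module):
    def __init__(self, A, num_layers=8):
        super().__init__()
        self.register_buffer("A", A)                               # (M, N) pilot/measurement matrix
        self.step = nn.Parameter(torch.ones(num_layers))           # learnable per-layer step sizes
        self.thresh = nn.Parameter(0.1 * torch.ones(num_layers))   # learnable soft thresholds

    def forward(self, y):                                          # y: (B, M) received signal
        M, N = self.A.shape
        x = y.new_zeros(y.size(0), N)                              # sparse activity/data estimate
        r = y.clone()                                              # residual
        for t in range(len(self.step)):
            x = x + self.step[t] * (r @ self.A)                    # matched-filter/gradient step
            x = torch.sign(x) * torch.relu(x.abs() - self.thresh[t])  # soft thresholding
            r = y - x @ self.A.t()                                 # update residual
        return x
```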
The environments we operate in are full of uncertainty and highly dynamic, which hinders the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability of continuously learning new skills and efficiently generalizing across wider applications. IDM benefits from any new approaches and theoretical breakthroughs that exhibit Artificial General Intelligence (AGI) by breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as a sequence decoding task using the Transformer architecture; this would be a promising solution for advancing the applications of IDM in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1), with 1.2 billion parameters, which achieves human-level performance over 453 tasks, including text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
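As a rough illustration of "decision-making as sequence decoding", the sketch below maps interleaved trajectory tokens (observations, actions, returns) into a shared token space and decodes them causally with one Transformer. It follows the generic decision-transformer recipe under assumed vocabulary sizes and dimensions; it is not DB1's actual architecture.

```python
# Generic "trajectory as token sequence" decoder sketch.
import torch
import torch.nn as nn

class SequenceDecisionSketch(nn.Module):
    def __init__(self, vocab_size=4096, dim=512, layers=6, heads=8, max_len=1024):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)      # shared token space for obs/actions/returns
        self.pos = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, vocab_size)        # next-token (e.g., next-action) prediction

    def forward(self, tokens):                        # tokens: (B, T) interleaved trajectory tokens
        T = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        x = self.backbone(x, mask=mask)               # causal decoding over the trajectory
        return self.head(x)
```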
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
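A minimal, unofficial sketch of the decoding interface described above: learned latent (non-semantic) queries and text-induced semantic queries attend to image features through one shared Transformer decoder, and the same outputs are read out either as masks (dot product with pixel features) or as token logits. All module names, sizes, and readout heads are assumptions, not the released model.

```python
# Illustrative generalized-decoder interface: two query types, two readouts.
import torch
import torch.nn as nn

class GeneralizedDecoderSketch(nn.Module):
    def __init__(self, dim=256, num_latent_queries=100, vocab_size=30000):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(num_latent_queries, dim))
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.to_token = nn.Linear(dim, vocab_size)              # token-level readout

    def forward(self, image_feats, pixel_feats, text_queries):
        # image_feats: (B, HW, dim), pixel_feats: (B, dim, H, W), text_queries: (B, T, dim)
        B = image_feats.size(0)
        queries = torch.cat([self.latent_queries.expand(B, -1, -1), text_queries], dim=1)
        out = self.decoder(queries, image_feats)                # shared semantic space
        masks = torch.einsum("bqc,bchw->bqhw", out, pixel_feats)  # pixel-level readout
        tokens = self.to_token(out)                             # token-level readout
        return masks, tokens
```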
Federated learning achieves joint training of deep models by connecting decentralized data sources, which can significantly mitigate the risk of privacy leakage. However, in the more general case, the label distributions differ across clients, which is called ``label distribution skew''. Directly applying conventional federated learning without considering the label distribution skew issue significantly hurts the performance of the global model. To this end, we propose a novel federated learning method, named FedMGD, to alleviate the performance degradation caused by the label distribution skew issue. It introduces a global Generative Adversarial Network to model the global data distribution without access to local datasets, so the global model can be trained using global information about the data distribution without privacy leakage. The experimental results demonstrate that our proposed method significantly outperforms the state of the art on several public benchmarks. Code is available at \url{https://github.com/Sheng-T/FedMGD}.
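One plausible reading of the scheme, sketched below under strong simplifications: the server keeps a conditional generator of "global" data, clients return only local-discriminator feedback on synthetic samples (never raw data), and the aggregated feedback both drives the generator and supplies label-balanced synthetic samples for refining the global model. The `c.score` interface, loss choices, and round structure are hypothetical, not the paper's exact algorithm.

```python
# Hypothetical server-side round: train the global generator against
# aggregated client feedback, then refine the global model on synthetic data.
import torch
import torch.nn.functional as F

def server_round(generator, g_opt, global_model, m_opt, clients,
                 num_classes=10, batch=64, z_dim=100):
    # 1) update the generator against the clients' local discriminators
    z = torch.randn(batch, z_dim)
    labels = torch.randint(0, num_classes, (batch,))
    fake = generator(z, labels)
    # clients return only discriminator logits on synthetic samples, not data
    scores = torch.stack([c.score(fake, labels) for c in clients]).mean(0)
    g_loss = F.binary_cross_entropy_with_logits(scores, torch.ones_like(scores))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # 2) refine the global model on freshly generated, label-balanced samples
    z = torch.randn(batch, z_dim)
    labels = torch.randint(0, num_classes, (batch,))
    m_loss = F.cross_entropy(global_model(generator(z, labels)), labels)
    m_opt.zero_grad()
    m_loss.backward()
    m_opt.step()
```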