Existing neural rendering methods for creating human avatars typically either require dense input signals such as video or multi-view images, or leverage a learned prior from large-scale specific 3D human datasets so that reconstruction can be performed with sparse-view inputs. Most of these methods fail to achieve realistic reconstruction when only a single image is available. To enable the data-efficient creation of realistic animatable 3D humans, we propose ELICIT, a novel method for learning human-specific neural radiance fields from a single image. Inspired by the fact that humans can easily reconstruct body geometry and infer full-body clothing from a single image, ELICIT leverages two priors: a 3D geometry prior and a visual semantic prior. Specifically, ELICIT introduces a 3D body shape prior from a skinned vertex-based template model (i.e., SMPL) and implements the visual clothing semantic prior with CLIP-based pre-trained models. Both priors jointly guide the optimization toward creating plausible content in invisible areas. To further improve visual details, we propose a segmentation-based sampling strategy that locally refines different parts of the avatar. Comprehensive evaluations on multiple popular benchmarks, including ZJU-MoCap, Human3.6M, and DeepFashion, show that ELICIT outperforms current state-of-the-art avatar creation methods when only a single image is available. Code will be made public for research purposes at https://elicit3d.github.io .
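As a rough illustration of how a CLIP-based semantic prior can supervise renderings of unseen regions, the sketch below scores a rendered view against a clothing description using the OpenAI clip package. The prompt, the input format, and the loss form are assumptions for exposition, not ELICIT's actual training code.

```python
# Minimal sketch of a CLIP-based visual semantic prior, assuming the OpenAI
# "clip" package. `rendered` is assumed to be a differentiable image batch of
# shape (N, 3, 224, 224), already normalized with CLIP's statistics; the text
# prompt is a hypothetical clothing description, not ELICIT's actual interface.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def clip_semantic_loss(rendered, text_prompt="a person wearing a red jacket"):
    tokens = clip.tokenize([text_prompt]).to(device)
    img_feat = model.encode_image(rendered.to(device))   # (N, D)
    txt_feat = model.encode_text(tokens)                 # (1, D)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    # Encourage renderings of unseen body regions to stay consistent with the
    # clothing semantics inferred from the single input image.
    return 1.0 - (img_feat @ txt_feat.t()).mean()
```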
The utility of a knowledge graph (KG) depends on its quality. A KG of poor quality not only has limited applicability but also leads to unexpected errors. Quality evaluation for KGs is therefore crucial and indispensable. Existing methods design many quality dimensions and compute metrics in the corresponding dimensions based on the details (i.e., raw data and graph structures) of KGs. However, there are two major issues. On the one hand, they treat these details as public information, which exposes the raw data and graph structures; in practice, such details are strictly confidential because they involve commercial privacy or other sensitive concerns. On the other hand, existing methods focus on how much knowledge a KG contains rather than on its practicability. To address these problems, we propose a knowledge graph quality evaluation framework under incomplete information (QEII). The quality evaluation problem is transformed into an adversarial game, and relative quality is evaluated according to the winner and loser. The participants of the game are KGs, and the adversarial gameplay is question and answer (Q&A). In QEII, we generate and train a question model and an answer model for each KG. The question model of one KG first asks a certain number of questions of the other KG. It then evaluates the answers returned by the answer model of the other KG and outputs a percentage score. Relative quality is evaluated by the scores, which measure the ability to apply knowledge. Q&A messages are the only information the KGs exchange, without exposing any raw data or graph structure. Experimental results on two pairs of KGs demonstrate that, compared with baselines, QEII realizes a reasonable quality evaluation from the perspective of third-party evaluators under incomplete information.
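The adversarial Q&A protocol can be pictured with the schematic sketch below. The QuestionModel/AnswerModel methods (generate_questions, answer, evaluate) are hypothetical interfaces standing in for the trained models, and only Q&A messages cross the boundary between the two KGs.

```python
# Schematic sketch of the QEII question-and-answer game between two KGs.
# The question/answer model interfaces are hypothetical placeholders; no raw
# triples or graph structure are ever exchanged, only Q&A messages.
def qa_round(questioner, answerer, num_questions=100):
    """One direction of the game: `questioner` quizzes `answerer` and
    returns the answerer's percentage score."""
    questions = questioner.generate_questions(num_questions)
    answers = [answerer.answer(q) for q in questions]        # messages only
    correct = sum(questioner.evaluate(q, a) for q, a in zip(questions, answers))
    return 100.0 * correct / num_questions

def relative_quality(kg_a, kg_b):
    score_b = qa_round(kg_a.question_model, kg_b.answer_model)  # A quizzes B
    score_a = qa_round(kg_b.question_model, kg_a.answer_model)  # B quizzes A
    return "A" if score_a > score_b else "B" if score_b > score_a else "tie"
```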
Multimodal named entity recognition (MNER) and multimodal relation extraction (MRE) are two fundamental subtasks of multimodal knowledge graph construction. However, existing methods usually handle the two tasks independently, ignoring the bidirectional interaction between them. This paper is the first to propose jointly performing MNER and MRE as a joint multimodal entity-relation extraction task (JMERE). Moreover, current MNER and MRE models only consider aligning visual objects with textual entities in the visual and textual graphs, ignoring entity-entity relationships and object-object relationships. To address these challenges, we propose an edge-enhanced graph alignment network with word-pair relation tagging (EEGA) for the JMERE task. Specifically, we first design a word-pair relation tagging scheme to exploit the bidirectional interaction between MNER and MRE and avoid error propagation. Then, we propose an edge-enhanced graph alignment network that enhances the JMERE task by aligning nodes and edges across the two graphs. Compared with previous methods, the proposed method can leverage edge information to assist the alignment between objects and entities and to discover correlations between entity-entity relationships and object-object relationships. Experiments demonstrate the effectiveness of our model.
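To make the idea of edge-enhanced cross-graph alignment concrete, the toy sketch below combines node-to-node similarity with the similarity of aggregated incident-edge features; the shapes and the combination rule are illustrative assumptions, not the EEGA architecture itself.

```python
# Toy sketch of edge-enhanced cross-graph alignment: node-to-node similarity
# is augmented with similarity of the edges incident to each node. Shapes and
# the weighting rule are illustrative assumptions only.
import torch

def edge_enhanced_alignment(text_nodes, vis_nodes, text_edges, vis_edges, alpha=0.5):
    """
    text_nodes: (Nt, D) textual entity features; vis_nodes: (Nv, D) object features
    text_edges: (Nt, Nt, D) relation features;   vis_edges: (Nv, Nv, D)
    Returns a (Nt, Nv) soft alignment matrix.
    """
    node_sim = text_nodes @ vis_nodes.t()                    # (Nt, Nv)
    # Aggregate incident-edge features per node, then compare across graphs.
    text_edge_ctx = text_edges.mean(dim=1)                   # (Nt, D)
    vis_edge_ctx = vis_edges.mean(dim=1)                     # (Nv, D)
    edge_sim = text_edge_ctx @ vis_edge_ctx.t()              # (Nt, Nv)
    return torch.softmax(alpha * node_sim + (1 - alpha) * edge_sim, dim=-1)
```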
Existing object detection methods are bounded to a fixed vocabulary by costly labeled data. When dealing with novel categories, the model has to be retrained with more bounding box annotations. Natural language supervision is an attractive alternative for its annotation-free nature and broader object concepts. However, learning open-vocabulary object detection from language is challenging because image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings. This enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over competing approaches on novel categories, e.g., achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS. Code is available at: https://github.com/clin1223/VLDet.
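As a minimal sketch of the set matching formulation, regions and words can be paired by maximizing total cosine similarity with the Hungarian algorithm. The bipartite matching below is an illustration under that assumption and may differ from the loss actually used during training.

```python
# Minimal sketch of object-language set matching via the Hungarian algorithm
# (scipy.optimize.linear_sum_assignment); features here are random placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_regions_to_words(region_feats, word_embeds):
    """
    region_feats: (R, D) image region features, word_embeds: (W, D) word
    embeddings, both assumed L2-normalized. Returns (region_idx, word_idx) pairs.
    """
    sim = region_feats @ word_embeds.T          # (R, W) cosine similarities
    row, col = linear_sum_assignment(-sim)      # negate to maximize similarity
    return list(zip(row.tolist(), col.tolist()))

# Example with random features:
rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 512)); regions /= np.linalg.norm(regions, axis=1, keepdims=True)
words = rng.normal(size=(3, 512)); words /= np.linalg.norm(words, axis=1, keepdims=True)
print(match_regions_to_words(regions, words))
```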
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISPs and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and are able to process Full HD photos in less than 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
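A hedged sketch of the TensorFlow Lite deployment path used for benchmarking: build a toy RAW-to-RGB network, convert it to TFLite, and run inference with the interpreter. The architecture is a trivial placeholder, not a challenge entry, and the input resolution is an assumption.

```python
# Toy RAW-to-RGB "ISP" converted to TFLite and executed with the interpreter.
# The network and input layout are illustrative assumptions only.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(544, 960, 4))  # packed 4-channel Bayer patch
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2DTranspose(3, 2, strides=2, padding="same")(x)
model = tf.keras.Model(inputs, x)

# Convert and run with the TFLite interpreter (what mobile GPUs execute).
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

raw = np.random.rand(1, 544, 960, 4).astype(np.float32)
interpreter.set_tensor(inp["index"], raw)
interpreter.invoke()
rgb = interpreter.get_tensor(out["index"])
print(rgb.shape)  # (1, 1088, 1920, 3), i.e. a Full HD RGB output
```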
Synthetic datasets are often used to pretrain end-to-end optical flow networks due to the lack of a large amount of labeled real-scene data, but major drops in accuracy occur when moving from synthetic to real scenes. How can we better transfer the knowledge learned from synthetic to real domains? To this end, we propose CLIP-Flow, a semi-supervised iterative pseudo-labeling framework that transfers the pretraining knowledge to the target real domain. We leverage large-scale, unlabeled real data to facilitate transfer learning with the supervision of iteratively updated pseudo ground truth labels, bridging the domain gap between the synthetic and the real. In addition, we propose a contrastive flow loss between reference features and features warped by the pseudo ground truth flows, to further boost accurate matching and suppress mismatches caused by motion, occlusion, or noisy pseudo labels. We adopt RAFT as the backbone and obtain an F1-all error of 4.11%, i.e. a 19% error reduction from RAFT (5.10%), ranking 2$^{nd}$ at the time of submission on the KITTI 2015 benchmark. Our framework can also be extended to other models, e.g. CRAFT, reducing the F1-all error from 4.79% to 4.66% on the KITTI 2015 benchmark.
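The contrastive flow loss can be sketched as follows: target features are warped to the reference frame by the pseudo ground truth flow and pulled toward the reference features at the same locations with an InfoNCE-style objective. The sampling scheme and temperature below are assumptions for exposition, not the paper's exact loss.

```python
# Illustrative contrastive loss between reference features and target features
# warped by pseudo-ground-truth flow. InfoNCE form and per-pixel sampling are
# assumptions, not the exact formulation used in the paper.
import torch
import torch.nn.functional as F

def warp_by_flow(feat, flow):
    """feat: (B, C, H, W) target features; flow: (B, 2, H, W) flow in pixels."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                              # (B, 2, H, W)
    # Normalize to [-1, 1] for grid_sample (x first, then y).
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)

def contrastive_flow_loss(ref_feat, tgt_feat, pseudo_flow, tau=0.07, num_samples=256):
    warped = warp_by_flow(tgt_feat, pseudo_flow)                   # (B, C, H, W)
    b, c, h, w = ref_feat.shape
    idx = torch.randint(0, h * w, (num_samples,), device=ref_feat.device)
    ref = F.normalize(ref_feat.flatten(2)[:, :, idx], dim=1)       # (B, C, S)
    pos = F.normalize(warped.flatten(2)[:, :, idx], dim=1)         # (B, C, S)
    logits = torch.einsum("bcs,bct->bst", ref, pos) / tau          # (B, S, S)
    labels = torch.arange(num_samples, device=ref_feat.device).expand(b, -1)
    # Matching locations are positives; other sampled locations are negatives.
    return F.cross_entropy(logits.reshape(-1, num_samples), labels.reshape(-1))
```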
Delivering streams at constant quality both guarantees the user experience and prevents wasted bitrate. In this paper, we propose a novel deep-learning-based two-pass encoder parameter prediction framework to decide the rate factor (RF) with which the encoder can output streams of constant quality. For each single-shot segment in a video, the proposed method first extracts spatial, temporal, and pre-coding features through an ultra-fast pre-processing step. Based on these features, a deep neural network predicts the RF parameter. The video encoder compresses the segment using this RF as the first encoding pass, and the VMAF quality of the first-pass encode is then measured. If the quality does not meet the target, a second pass of RF prediction and encoding is performed. With the RF predicted in the first pass and its corresponding measured quality as feedback, the second-pass prediction is highly accurate. Experiments show that the proposed method requires only 1.55 times the encoding complexity on average, while the accuracy, defined as the actual VMAF of the compressed video lying within $\pm 1$ of the target VMAF, reaches 98.88%.
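A schematic of the two-pass RF selection loop is sketched below; extract_features, predict_rf, encode_segment, and measure_vmaf are hypothetical stand-ins for the components described above, and the feedback interface of the second pass is an illustrative assumption.

```python
# Schematic two-pass rate-factor (RF) selection loop. All helper functions are
# hypothetical placeholders for the components described in the abstract.
def encode_constant_quality(segment, target_vmaf, tolerance=1.0):
    feats = extract_features(segment)                 # spatial / temporal / pre-coding
    rf = predict_rf(feats)                            # first-pass DNN prediction
    bitstream = encode_segment(segment, rf)
    vmaf = measure_vmaf(segment, bitstream)
    if abs(vmaf - target_vmaf) <= tolerance:          # one pass suffices
        return bitstream, rf, vmaf
    # Second pass: feed back the first-pass RF and its measured quality.
    rf2 = predict_rf(feats, feedback=(rf, vmaf, target_vmaf))
    bitstream = encode_segment(segment, rf2)
    return bitstream, rf2, measure_vmaf(segment, bitstream)
```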
Multi-view unsupervised feature selection (MUFS) has been shown to be an effective technique for reducing the dimensionality of multi-view unlabeled data. Existing methods assume that all views are complete. However, multi-view data are often incomplete, i.e., some instances are present in certain views but not in all views. Moreover, learning a complete similarity graph, an important and promising technique in existing MUFS methods, cannot be achieved due to the missing views. In this paper, we propose a complementary and consensus learning-based incomplete multi-view unsupervised feature selection method (C$^{2}$IMUFS) to address the above issues. Specifically, C$^{2}$IMUFS integrates feature selection into an extended weighted non-negative matrix factorization model equipped with adaptively learned view weights and a sparse $\ell_{2,p}$-norm, which offers better adaptability and flexibility. A complementary-learning-guided similarity matrix reconstruction model is introduced to obtain the complete similarity graph in each view through a sparse linear combination of multiple similarity matrices derived from different views. Furthermore, C$^{2}$IMUFS learns a consensus clustering indicator matrix across different views and embeds it into a spectral graph term to preserve the local geometric structure. Comprehensive experimental results on real-world datasets demonstrate the effectiveness of C$^{2}$IMUFS compared with state-of-the-art methods.
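For readers unfamiliar with the $\ell_{2,p}$-norm regularizer, the small sketch below shows the generic row-sparsity penalty and the usual norm-based feature ranking; it is not the full C$^{2}$IMUFS objective, and the choice of p and the selection rule are illustrative.

```python
# Generic l_{2,p}-norm row-sparsity penalty and norm-based feature selection.
# Not the C^2IMUFS objective; values of p and the ranking rule are illustrative.
import numpy as np

def l2p_norm(W, p=0.5):
    """sum_i ||w_i||_2^p over rows w_i of W; smaller p enforces stronger row sparsity."""
    row_norms = np.linalg.norm(W, axis=1)
    return np.sum(row_norms ** p)

def select_features(W, k):
    """Keep the k features whose rows of W have the largest l2 norm."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(-scores)[:k]

W = np.random.rand(100, 10) * (np.random.rand(100, 1) > 0.7)  # rows mostly zeroed
print(l2p_norm(W), select_features(W, 5))
```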
Despite continuous efforts to improve the effectiveness and efficiency of code search, two issues remain unresolved. First, programming languages have inherently strong structural links, and characterizing code only in its textual form omits the structural information it contains. Second, there is a latent semantic relationship between code and queries, and it is challenging to align code and text across sequences so that their vectors are spatially consistent during similarity matching. To address these two issues, this paper proposes a code search model named CSSAM (Code Semantics and Structure Attention Matching). By introducing semantic and structural matching mechanisms, CSSAM effectively extracts and fuses multi-dimensional code features. Specifically, cross and residual layers are developed to facilitate high-dimensional spatial alignment of code and queries. By leveraging residual interactions, the matching module is designed to preserve more code semantics and descriptive features, enhancing the adhesion between code and its corresponding query text. Furthermore, to improve the model's understanding of the inherent structure of code, a code representation structure named CSRG (Code Semantic Representation Graph) is proposed to jointly represent abstract syntax tree nodes and the data flow of the code. Experimental results on two publicly available datasets containing 540K and 330K code snippets show that CSSAM significantly outperforms the baselines, achieving the highest SR@1/5/10, MRR, and NDCG@50 on both datasets. Moreover, an ablation study is conducted to quantitatively measure the impact of each key component of CSSAM on the efficiency and effectiveness of code search, providing insights for improving advanced code search solutions.
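The retrieval metrics reported above (SR@k and MRR) can be computed from a query-code similarity matrix as in the following minimal sketch; the embeddings here are random placeholders rather than CSSAM outputs.

```python
# Minimal sketch of code-search retrieval metrics (SR@k, MRR) computed from
# cosine similarities; embeddings are random stand-ins for model outputs.
import numpy as np

def retrieval_metrics(query_vecs, code_vecs, ks=(1, 5, 10)):
    """query_vecs[i] is assumed to match code_vecs[i]; both L2-normalized, shape (N, D)."""
    sims = query_vecs @ code_vecs.T                     # (N, N)
    ranks = []
    for i, row in enumerate(sims):
        order = np.argsort(-row)
        ranks.append(int(np.where(order == i)[0][0]) + 1)  # 1-based rank of true code
    ranks = np.asarray(ranks)
    sr = {f"SR@{k}": float(np.mean(ranks <= k)) for k in ks}
    return sr, float(np.mean(1.0 / ranks))              # (SR@k dict, MRR)

rng = np.random.default_rng(0)
q = rng.normal(size=(50, 128)); q /= np.linalg.norm(q, axis=1, keepdims=True)
c = q + 0.1 * rng.normal(size=q.shape); c /= np.linalg.norm(c, axis=1, keepdims=True)
print(retrieval_metrics(q, c))
```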
Developing semi-supervised task-oriented dialog (TOD) systems by leveraging unlabeled dialog data has attracted increasing interest. For semi-supervised learning of latent-state TOD models, variational learning is often used, but it suffers from the annoyingly high variance of gradients propagated through discrete latent variables, as well as the drawback of indirectly optimizing the target log-likelihood. Recently, an alternative algorithm called joint stochastic approximation (JSA) has emerged for learning discrete latent variable models with impressive performance. In this paper, we propose applying JSA to the semi-supervised learning of latent-state TOD models, which we call JSA-TOD. To the best of our knowledge, JSA-TOD represents the first work to develop JSA-based semi-supervised learning of discrete latent variable conditional models for long sequential generation problems such as TOD systems. Extensive experiments show that JSA-TOD significantly outperforms its variational learning counterpart. Remarkably, semi-supervised JSA-TOD using 20% labels performs close to the fully supervised baseline on MultiWOZ2.1.