Building colored point clouds from mobile laser scanning and imagery is fundamental work in surveying and mapping. It is also an essential prerequisite for constructing digital twins of smart cities. However, existing public datasets are either of relatively small scale or lack accurate geometric and color ground truth. This paper documents a versatile dataset named PolyU-BPCoMa that is uniquely positioned for mobile colorized mapping. The dataset contains resources from a 3D LiDAR, spherical imaging, GNSS, and an IMU mounted on a backpack platform. Color checker boards are pasted in each survey area as targets, and ground-truth data are collected by an advanced terrestrial laser scanner (TLS). 3D geometric information and color information can be recovered from the colored point clouds produced by the backpack system and the TLS, respectively. Accordingly, we provide an opportunity to benchmark the mapping and colorization accuracy of mobile multi-sensor systems simultaneously. The dataset is approximately 800 GB in size and covers both indoor and outdoor environments. The dataset and development kit are available at https://github.com/chenpengxin/polyu-bpcoma.git.
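As a rough illustration of the simultaneous mapping-and-colorization benchmarking the abstract describes, the sketch below compares a backpack-system colored point cloud against a TLS reference using a nearest-neighbor lookup (Python with Open3D). The file names and the plain nearest-neighbor errors are placeholders and assumptions; the dataset's own development kit defines the official file layout and evaluation protocol.

```python
import numpy as np
import open3d as o3d

# Sketch: compare a backpack-system colored point cloud against a TLS
# ground-truth cloud. File names are placeholders; the official
# evaluation protocol is defined by the dataset's development kit.
mobile = o3d.io.read_point_cloud("backpack_scan.ply")
tls = o3d.io.read_point_cloud("tls_ground_truth.ply")

tree = o3d.geometry.KDTreeFlann(tls)
tls_colors = np.asarray(tls.colors)
geom_err, color_err = [], []
for p, c in zip(np.asarray(mobile.points), np.asarray(mobile.colors)):
    _, idx, dist2 = tree.search_knn_vector_3d(p, 1)          # nearest TLS point
    geom_err.append(np.sqrt(dist2[0]))                        # geometric error (cloud units)
    color_err.append(np.linalg.norm(c - tls_colors[idx[0]]))  # RGB difference
print("mean geometric error:", np.mean(geom_err),
      "mean RGB error:", np.mean(color_err))
```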
Overall survival (OS) time is one of the most important evaluation indices for glioma cases. Multimodal magnetic resonance imaging (MRI) scans play an important role in studying the prognostic OS time of gliomas. Several learning-based approaches have been proposed for OS time prediction from multimodal MRI. However, these methods usually fuse the multimodal information either at the beginning or at the end of the deep network and lack features from different scales. In addition, the fusion at the end of the network always adopts global fusion (e.g., fully connected layers after concatenating global average pooling outputs) or local fusion (e.g., bilinear pooling), which loses the local information relevant to the global context. In this paper, we propose a novel method for multimodal OS time prediction of brain tumor patients, which incorporates an improved non-local feature fusion module introduced at different scales. Our method obtains a relative 8.76% improvement over the current state-of-the-art method (accuracy of 0.6989 vs. 0.6426). Extensive testing shows that our method can adapt to situations with missing modalities. The code is available at https://github.com/tangwen920812/mmmna-net.
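For readers unfamiliar with non-local fusion, the following is a minimal PyTorch sketch of a non-local (self-attention) block fusing feature maps from two MRI modalities at a single scale. The additive pre-fusion, layer names, and block structure are illustrative assumptions, not the exact module of the paper.

```python
import torch
import torch.nn as nn

class NonLocalFusion(nn.Module):
    """Minimal non-local (self-attention) block fusing two modality feature
    maps at one scale. The simple additive pre-fusion and layer layout are
    assumptions, not the paper's exact fusion module."""
    def __init__(self, channels: int, reduced: int = None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv3d(channels, reduced, kernel_size=1)  # queries
        self.phi = nn.Conv3d(channels, reduced, kernel_size=1)    # keys
        self.g = nn.Conv3d(channels, reduced, kernel_size=1)      # values
        self.out = nn.Conv3d(reduced, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, C, D, H, W) features from two MRI modalities.
        fused = feat_a + feat_b                                   # pre-fusion (assumption)
        b, c, d, h, w = fused.shape
        q = self.theta(fused).flatten(2).transpose(1, 2)          # (B, N, C')
        k = self.phi(fused).flatten(2)                            # (B, C', N)
        v = self.g(fused).flatten(2).transpose(1, 2)              # (B, N, C')
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1) # (B, N, N)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, d, h, w)
        return fused + self.out(y)                                # residual connection

# Usage: fuse 32-channel features from two modalities at one scale.
fusion = NonLocalFusion(32)
out = fusion(torch.randn(1, 32, 8, 16, 16), torch.randn(1, 32, 8, 16, 16))
```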
In clinical practice, anisotropic volumetric medical images with low through-plane resolution are commonly used owing to their short acquisition time and low storage cost. However, the coarse resolution may cause difficulties for physicians or computer-aided diagnosis algorithms in medical diagnosis. Deep-learning-based volumetric super-resolution (SR) methods are feasible ways to improve resolution, with convolutional neural networks (CNNs) at their core. Despite recent progress, these methods are limited by the inherent properties of convolution operators, which ignore content relevance and cannot effectively model long-range dependencies. In addition, most existing methods use pseudo-paired volumes for training and evaluation, where pseudo low-resolution (LR) volumes are generated by simply degrading their high-resolution (HR) counterparts. However, the domain gap between pseudo and real LR volumes leads to poor performance of these methods in practice. In this paper, we build the first public real-world dataset, RPLHR-CT, as a benchmark for volumetric SR and provide baseline results by re-implementing four state-of-the-art CNN-based methods. Considering the inherent shortcomings of CNNs, we also propose a transformer volumetric super-resolution network (TVSRN) based on attention mechanisms, dispensing with convolutions completely. This is the first study to use a pure transformer for CT volumetric SR. Experimental results show that TVSRN significantly outperforms all baselines in both PSNR and SSIM. Moreover, TVSRN achieves a better trade-off among image quality, number of parameters, and running time. Data and code are available at https://github.com/smilenaxx/rplhr-ct.
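Since the benchmark is scored with PSNR and SSIM, here is a small slice-wise evaluation sketch for a reconstructed CT volume using scikit-image. The official RPLHR-CT evaluation protocol may differ (e.g., in HU windowing or which slices are scored), so this is only a generic reference.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def volume_psnr_ssim(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0):
    """Slice-wise PSNR/SSIM for a reconstructed CT volume.
    pred, gt: (D, H, W) arrays scaled to [0, 1]. Generic evaluation sketch;
    the official benchmark protocol may differ."""
    psnrs, ssims = [], []
    for p, g in zip(pred, gt):
        psnrs.append(peak_signal_noise_ratio(g, p, data_range=data_range))
        ssims.append(structural_similarity(g, p, data_range=data_range))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# Example with random volumes as stand-ins for SR output and ground truth.
pred = np.random.rand(16, 64, 64).astype(np.float32)
gt = np.random.rand(16, 64, 64).astype(np.float32)
print(volume_psnr_ssim(pred, gt))
```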
Assessing lesion progression and treatment response via longitudinal lesion tracking plays a critical role in clinical practice. Automated approaches for this task are motivated by the labor cost and time consumption of manual lesion matching. Previous methods typically lack the integration of local and global information. In this work, we propose a transformer-based approach referred to as the Transformer Lesion Tracker (TLT). Specifically, we design a Cross Attention-based Transformer (CAT) to capture and combine both global and local information to enhance feature extraction. We also develop a Registration-based Anatomical Attention Module (RAAM) to introduce anatomical information into CAT so that it can focus on useful feature knowledge. A Sparse Selection Strategy (SSS) is proposed for selecting features and reducing the memory footprint of transformer training. In addition, we use global regression to further improve model performance. We conduct experiments on a public dataset to show the superiority of our method, and find that our model improves the average Euclidean center error by at least 14.3% (6 mm vs. 7 mm) over the state of the art (SOTA). Code is available at https://github.com/tangwen920812/tlt.
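The core idea of combining local (lesion-crop) and global (whole-image) cues with cross attention can be sketched in a few lines of PyTorch. The token shapes, single attention layer, and residual-plus-norm wiring below are illustrative assumptions rather than the exact CAT design.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Sketch of cross attention in which local (lesion-crop) tokens attend
    to global (whole-image) tokens. The single-layer design and token
    shapes are assumptions, not the exact CAT used in TLT."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens: torch.Tensor, global_tokens: torch.Tensor) -> torch.Tensor:
        # local_tokens: (B, N_loc, dim); global_tokens: (B, N_glob, dim)
        fused, _ = self.attn(query=local_tokens,
                             key=global_tokens,
                             value=global_tokens)
        return self.norm(local_tokens + fused)   # residual + norm

cat = CrossAttentionFusion()
out = cat(torch.randn(2, 49, 128), torch.randn(2, 196, 128))
```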
In recent years, multi-task learning (MTL) has attracted much attention owing to its good performance in many applications. However, many existing MTL models cannot guarantee that their performance will be no worse than that of their single-task counterparts on every task. Although such phenomena have been empirically identified by some works, little work aims to handle the resulting problem, which is formally defined in this paper as negative sharing. To achieve safe multi-task learning with no negative sharing, we propose a Safe Multi-Task Learning (SMTL) model, which consists of a public encoder shared by all tasks, private encoders, gates, and private decoders. Specifically, each task has its own private encoder, gate, and private decoder, where the gate learns how to combine the private encoder and the public encoder for the downstream private decoder. To reduce the storage cost during the inference stage, a lite version of SMTL is proposed to allow the gate to choose either the public encoder or the corresponding private encoder. Moreover, we propose a variant of SMTL that places all the gates after the decoders of all tasks. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed methods.
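The gating idea can be illustrated with a small sketch: each task branch mixes the shared (public) encoder output with its private encoder output through a learnable gate before its private decoder. The scalar sigmoid gate and linear encoders/decoders are simplifying assumptions; the lite variant described in the abstract would instead pick exactly one of the two encoders.

```python
import torch
import torch.nn as nn

class GatedTaskBranch(nn.Module):
    """Sketch of one task branch in an SMTL-style model: a learnable gate
    mixes the public (shared) encoder output with the task's private
    encoder output before the private decoder. The scalar sigmoid gate is
    an illustrative choice, not necessarily the paper's exact gate."""
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.private_encoder = nn.Linear(in_dim, hid_dim)
        self.gate_logit = nn.Parameter(torch.zeros(1))      # learnable gate
        self.private_decoder = nn.Linear(hid_dim, out_dim)

    def forward(self, x: torch.Tensor, shared_feat: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.gate_logit)              # mixing weight in (0, 1)
        private_feat = self.private_encoder(x)
        mixed = alpha * shared_feat + (1 - alpha) * private_feat
        return self.private_decoder(mixed)

# Shared (public) encoder used by all tasks; one gated branch per task.
shared_encoder = nn.Linear(16, 32)
branches = nn.ModuleList([GatedTaskBranch(16, 32, 1) for _ in range(3)])
x = torch.randn(8, 16)
shared = shared_encoder(x)
outputs = [branch(x, shared) for branch in branches]
```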
Fair clustering aims to divide data into distinct clusters while preventing sensitive attributes (e.g., gender, race, RNA sequencing technique) from dominating the clustering. Although a number of works have been conducted recently and achieved great success, most of them are heuristic and lack a unified theory for algorithm design. In this work, we fill this gap by developing a mutual information theory for deep fair clustering and, accordingly, design a novel algorithm called FCMI. In brief, by maximizing and minimizing mutual information, FCMI is designed to achieve four characteristics highly desired in deep fair clustering, i.e., compact, balanced, and fair clusters, as well as informative features. Besides the contributions to theory and algorithms, another contribution of this work is a novel fair clustering metric built upon information theory. Unlike existing evaluation metrics, our metric measures clustering quality and fairness as a whole rather than separately. To verify the effectiveness of the proposed FCMI, we conduct experiments on six benchmarks, including a single-cell RNA-seq atlas, and compare it with 11 state-of-the-art methods in terms of five metrics. The code will be released after acceptance.
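To give intuition for an information-theoretic view of fair clustering, the toy sketch below measures clustering quality as mutual information with semantic labels and "fairness leakage" as mutual information with the sensitive attribute, using scikit-learn. This is only a simplified proxy, not the paper's actual metric or training objective, and the labels are made-up toy data.

```python
import numpy as np
from sklearn.metrics import mutual_info_score, normalized_mutual_info_score

# Toy labels: cluster assignments, ground-truth classes, and a sensitive
# attribute (e.g., sequencing technique). Purely illustrative data.
clusters = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])
classes  = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 1])
groups   = np.array([0, 1, 0, 1, 0, 1, 1, 0, 1, 0])

# Clustering quality: high MI between clusters and semantic classes.
quality = normalized_mutual_info_score(classes, clusters)
# Fairness: low MI between clusters and the sensitive attribute indicates
# the attribute does not dominate the partition (a simplified proxy for
# the information-theoretic metric described in the abstract).
leakage = mutual_info_score(groups, clusters)
print(f"quality (NMI) = {quality:.3f}, sensitive-attribute MI = {leakage:.3f}")
```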
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
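A very rough stand-in for image-level photometric alignment is channel-wise mean/std matching of a source image to target-domain statistics, sketched below in NumPy. The target statistics shown are made-up values, and the paper's global photometric alignment module is more involved than this.

```python
import numpy as np

def photometric_align(src: np.ndarray, tgt_mean: np.ndarray, tgt_std: np.ndarray) -> np.ndarray:
    """Align a source image's channel-wise statistics to target-domain
    statistics. This mean/std matching is only a simplified stand-in for
    the paper's global photometric alignment module."""
    src_mean = src.mean(axis=(0, 1), keepdims=True)
    src_std = src.std(axis=(0, 1), keepdims=True) + 1e-6
    return (src - src_mean) / src_std * tgt_std + tgt_mean

# Example: shift a random "GTA5-like" image toward assumed "Cityscapes-like" statistics.
src_img = np.random.rand(128, 256, 3).astype(np.float32)
tgt_mean = np.array([[[0.29, 0.33, 0.29]]], dtype=np.float32)  # assumed values
tgt_std = np.array([[[0.18, 0.18, 0.18]]], dtype=np.float32)   # assumed values
aligned = photometric_align(src_img, tgt_mean, tgt_std)
```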
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
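The idea of letting a style code adjust the feed-forward layers can be sketched as a FiLM-like modulation in PyTorch: the style code predicts per-channel scales applied to the hidden activations. The dimensions and the exact modulation form are assumptions, not the paper's precise style-aware adaptive transformer.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFeedForward(nn.Module):
    """Sketch of a feed-forward layer modulated by a style code: the style
    code predicts per-channel scales applied to the hidden activations.
    This FiLM-like modulation approximates, but is not identical to, the
    style-aware adaptive transformer described in the abstract."""
    def __init__(self, dim: int = 256, hidden: int = 1024, style_dim: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.to_scale = nn.Linear(style_dim, hidden)

    def forward(self, tokens: torch.Tensor, style_code: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, dim); style_code: (B, style_dim)
        scale = self.to_scale(style_code).unsqueeze(1)   # (B, 1, hidden)
        h = torch.relu(self.fc1(tokens)) * (1 + scale)   # style-conditioned hidden units
        return tokens + self.fc2(h)                      # residual connection

ff = StyleAdaptiveFeedForward()
out = ff(torch.randn(2, 50, 256), torch.randn(2, 128))
```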
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
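The first-stage objective relies on a photometric reconstruction error between a warped source frame and the current frame; a standard SSIM-plus-L1 formulation is sketched below in PyTorch. The depth-and-pose-based warping that produces the warped frame, as well as the exact loss weights, are omitted or assumed here.

```python
import torch
import torch.nn.functional as F

def photometric_error(warped: torch.Tensor, target: torch.Tensor, alpha: float = 0.85) -> torch.Tensor:
    """Standard photometric reconstruction error (SSIM + L1 mix) commonly
    used in self-supervised depth/ego-motion pipelines. The warped frame
    would come from reprojecting a source frame with the predicted depth
    and pose; that warping step is omitted here for brevity."""
    l1 = (warped - target).abs().mean(dim=1, keepdim=True)
    mu_w = F.avg_pool2d(warped, 3, 1, 1)
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    var_w = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_w ** 2
    var_t = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_t ** 2
    cov = F.avg_pool2d(warped * target, 3, 1, 1) - mu_w * mu_t
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_w * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_w ** 2 + mu_t ** 2 + c1) * (var_w + var_t + c2))
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(dim=1, keepdim=True)
    return (alpha * dssim + (1 - alpha) * l1).mean()

loss = photometric_error(torch.rand(1, 3, 64, 96), torch.rand(1, 3, 64, 96))
```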
Increasing research interests focus on sequential recommender systems, aiming to model dynamic sequence representation precisely. However, the most commonly used loss functions in state-of-the-art sequential recommendation models have essential limitations. To name a few, Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples and prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples and is therefore likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representation. To avoid these limitations, in this paper, we propose to calculate Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, and enjoys the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, can achieve average improvements of 125.63%, 69.90%, and 33.24% in full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data increases rapidly with the wall clock time, and is superior to that of other loss functions in almost the whole process of model training.
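A minimal sketch of the cumulative idea: full-softmax cross-entropy accumulated over every timestamp of the sequence, contrasted with the usual last-timestamp CE. Padding handling, weighting, and other details of the paper's CCE are not shown and, where present, are assumptions.

```python
import torch
import torch.nn.functional as F

def cumulative_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Sketch of a Cumulative Cross-Entropy style loss: full-softmax
    cross-entropy averaged over every timestamp of the training sequence
    instead of only the last one. Padding masks and any further details
    of the paper's CCE are omitted.
    logits: (B, T, num_items); targets: (B, T) next-item indices."""
    b, t, n = logits.shape
    return F.cross_entropy(logits.reshape(b * t, n), targets.reshape(b * t))

def last_step_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """The common practice: cross-entropy on the last timestamp only."""
    return F.cross_entropy(logits[:, -1, :], targets[:, -1])

logits = torch.randn(4, 10, 1000)          # 4 sequences, 10 steps, 1000-item catalog
targets = torch.randint(0, 1000, (4, 10))  # next-item ground truth per step
print(cumulative_cross_entropy(logits, targets), last_step_cross_entropy(logits, targets))
```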