If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5?" The answer is probably a "No." Most existing face aging works attempt to learn the transformation between age groups and thus would require paired samples as well as a labeled query image. In this paper, we look at the problem from a generative modeling perspective such that no paired samples are required. In addition, given an unlabeled image, the generative model can directly produce an image with the desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversal along which realizes smooth age progression and regression simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and the vector is then projected onto the face manifold, conditional on age, through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality) and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and generator, respectively, forcing the model to generate more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework in comparison with the state-of-the-art and ground truth.
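As a rough illustration of this pipeline, the minimal PyTorch sketch below wires a convolutional encoder to a deconvolutional generator conditioned on a one-hot age vector. The layer widths, the 50-dimensional latent code, the ten age buckets, and the concatenation-based conditioning are assumptions for illustration, and the two adversarial discriminators are omitted.

```python
# Minimal sketch of a CAAE-style encoder / conditional generator (PyTorch).
# Layer widths and the one-hot age conditioning are illustrative assumptions;
# the adversarial discriminators on the encoder and generator are omitted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, z_dim=50):
        super().__init__()
        self.net = nn.Sequential(                       # 128x128 RGB input
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(256 * 16 * 16, z_dim),            # "personality" vector z
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, z_dim=50, n_ages=10):
        super().__init__()
        self.fc = nn.Linear(z_dim + n_ages, 256 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, age_onehot):
        h = self.fc(torch.cat([z, age_onehot], dim=1))  # condition on age
        return self.net(h.view(-1, 256, 16, 16))

# Age editing: encode once, decode under any target age bucket.
enc, gen = Encoder(), Generator()
face = torch.randn(1, 3, 128, 128)
age = torch.zeros(1, 10); age[0, 9] = 1.0               # e.g. oldest bucket
aged_face = gen(enc(face), age)
```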
Interoperability is a significant problem in Building Information Modeling (BIM). Object type, a kind of critical semantic information needed in multiple BIM applications such as scan-to-BIM and code compliance checking, also suffers when BIM data are exchanged or models are created using software from other domains. It can be supplemented using deep learning. Current deep learning methods mainly learn from the shape information of BIM objects for classification, leaving the relational information inherent in the BIM context unused. To address this issue, we introduce a two-branch geometric-relational deep learning framework that boosts previous geometric classification methods with relational information. We also present a BIM object dataset, IFCNet++, which contains both geometric and relational information about the objects. Experiments show that our framework can be flexibly adapted to different geometric methods, and that relational features act as a bonus to general geometric learning methods, clearly improving their classification performance, thus reducing the manual labor of checking models and improving the practical value of enriched BIM models.
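A minimal sketch of how such a two-branch design could be wired, assuming fusion by concatenation; the relational branch here is a small MLP over a fixed-length vector of relational features, and all names and dimensions are illustrative rather than the paper's actual configuration.

```python
# Sketch of a two-branch geometric-relational classifier (PyTorch).
# Fusion by concatenation and the 16-dim relational feature vector are
# illustrative assumptions, not the paper's exact layout.
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    def __init__(self, geometric_backbone, geo_dim=256, rel_dim=16,
                 n_classes=20):
        super().__init__()
        self.geo = geometric_backbone                   # any shape encoder
        self.rel = nn.Sequential(                       # relational branch
            nn.Linear(rel_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(geo_dim + 64, n_classes)

    def forward(self, shape, relations):
        fused = torch.cat([self.geo(shape), self.rel(relations)], dim=1)
        return self.head(fused)

# Any geometric encoder producing a (B, geo_dim) embedding can be plugged
# in, e.g. a stand-in MLP over flattened point clouds for demonstration:
backbone = nn.Sequential(nn.Flatten(), nn.Linear(1024 * 3, 256), nn.ReLU())
model = TwoBranchClassifier(backbone)
logits = model(torch.randn(4, 1024, 3), torch.randn(4, 16))
```

Keeping the geometric backbone as a plug-in argument mirrors the claim that the framework adapts flexibly to different geometric methods.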
The effective receptive field of a fully convolutional neural network is an important consideration when designing an architecture, as it defines the portion of the input visible to each convolutional kernel. We propose a neural network module that extends traditional skip connections, called the translated skip connection. Translated skip connections geometrically increase the receptive field of an architecture with negligible impact on both the size of the parameter space and computational complexity. By embedding translated skip connections into a benchmark architecture, we demonstrate that our module matches or outperforms four other approaches to expanding the effective receptive fields of fully convolutional neural networks. We confirm this result across five contemporary image segmentation datasets from disparate domains, including the detection of COVID-19 infection, segmentation of aerial imagery, common object segmentation, and segmentation for self-driving cars.
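The abstract does not spell out the mechanism, but one plausible reading, sketched below under that assumption, is a skip connection whose feature map is spatially translated before being merged, so the receiving layer sees context beyond its native receptive field. The roll-based shifts and the averaging merge are guesses from the module's name; notably, they add no parameters, consistent with the claim above.

```python
# Hedged sketch of a "translated skip connection" (PyTorch): the skip
# tensor is shifted up/down/left/right before merging. The roll-based
# shift and averaging merge are assumptions based only on the module's
# name; they add zero parameters.
import torch

def translated_skip(skip, shift):
    # Average the identity path with four translated copies of the
    # feature map, each offset by `shift` pixels along H or W.
    shifted = [skip]
    for dims, s in [((2,), shift), ((2,), -shift),
                    ((3,), shift), ((3,), -shift)]:
        shifted.append(torch.roll(skip, shifts=(s,), dims=dims))
    return torch.stack(shifted).mean(dim=0)

x = torch.randn(1, 64, 32, 32)
y = translated_skip(x, shift=4)     # same shape, wider spatial support
```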
Recently, deep learning has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress. However, without fully sampled reference data for training, current methods may have limited ability to recover fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction from undersampled k-space data. The proposed framework is equipped with three important components: dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework can be flexibly integrated with both data-driven networks and model-based iterative unrolled networks. Our method has been evaluated on an in-vivo dataset and compared with four state-of-the-art methods. The results show that our method possesses a strong capability to capture essential and inherent representations directly from undersampled k-space data, and thus enables high-quality and fast dynamic MR imaging.
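A hedged sketch of what such a co-training objective might look like: two networks reconstruct from differently re-undersampled inputs, with data fidelity enforced at the acquired k-space locations and a consistency term tying the two reconstructions together. The exact loss combination is an illustrative assumption, not SelfCoLearn's published formulation.

```python
# Sketch of dual-network co-training on re-undersampled k-space (PyTorch).
# The specific loss combination is an illustrative assumption.
import torch

def cotrain_loss(net_a, net_b, kspace, mask, submask_a, submask_b):
    # Re-undersampling augmentation: each network reconstructs from a
    # differently re-undersampled zero-filled image.
    img_a = net_a(torch.fft.ifft2(kspace * submask_a).abs())
    img_b = net_b(torch.fft.ifft2(kspace * submask_b).abs())
    # Data fidelity at the originally acquired k-space locations ...
    fid = ((torch.fft.fft2(img_a) * mask - kspace).abs() ** 2).mean() \
        + ((torch.fft.fft2(img_b) * mask - kspace).abs() ** 2).mean()
    # ... plus cross-network consistency in image space.
    return fid + ((img_a - img_b) ** 2).mean()
```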
In neuroimaging analysis, functional magnetic resonance imaging (fMRI) can well assess brain function changes in brain diseases with no obvious structural lesions. So far, most fMRI-based studies take functional connectivity as the basic feature for disease classification. However, functional connectivity is usually calculated from the time series of predefined regions of interest and neglects the detailed information contained in each voxel, which may degrade the performance of diagnostic models. Another methodological drawback is the limited sample size for training deep models. In this study, we propose Brainformer, a general hybrid Transformer architecture for brain disease classification from a single fMRI volume, to fully exploit voxel-level details with sufficient data size and dimensionality. Brainformer is constructed by modeling the local cues within each voxel via a 3D convolution and capturing the global relations among distant regions through two global attention blocks. The local and global cues are aggregated in Brainformer by a single-stream model. To handle multi-site data, we propose a normalization layer to normalize the data to an identical distribution. Finally, a gradient-based localization-map visualization method is utilized to locate possible disease-related biomarkers. We evaluated Brainformer on five independently acquired datasets, including ABIDE, ADNI, MPILMBB, ADHD-200, and ECHO, covering autism, Alzheimer's disease, depression, attention deficit hyperactivity disorder, and headache disorders. The results demonstrate the effectiveness and generalizability of Brainformer for the diagnosis of multiple brain diseases. Brainformer may promote neuroimaging-based precision diagnosis in clinical practice and motivate future research in fMRI analysis. Code is available at: https://github.com/ziyaozhangforpcl/brainformer.
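In the spirit of this local-plus-global design, the sketch below combines a 3D convolution with a multi-head attention layer in a single-stream block; the channel width, residual layout, and token flattening are illustrative assumptions rather than Brainformer's actual architecture.

```python
# Sketch of a hybrid conv-attention block (PyTorch): a 3D convolution
# models local voxel-level cues and multi-head attention relates distant
# regions. Channel widths and the residual layout are illustrative.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.local = nn.Conv3d(channels, channels, 3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                   # x: (B, C, D, H, W) volume
        x = x + torch.relu(self.local(x))   # local cues per voxel
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, D*H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)        # global relations
        return tokens.transpose(1, 2).view(b, c, d, h, w)

block = HybridBlock()
out = block(torch.randn(1, 32, 8, 8, 8))    # toy volume, same output shape
```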
Radiomics and deep learning have shown high popularity in automatic glioma grading. Radiomics can extract hand-crafted features that quantitatively describe expert knowledge of glioma grades, and deep learning is powerful at extracting a large number of high-throughput features that facilitate the final classification. However, the performance of existing methods can still be improved, as their complementary strengths have not been sufficiently investigated and integrated. Furthermore, lesion maps are usually needed for the final prediction at the testing phase, which is very troublesome. In this paper, we propose an expert-knowledge-guided geometric representation learning framework. Geometric manifolds of hand-crafted features and learned features are constructed to mine the implicit relationship between deep learning and radiomics, and thereby to dig out the mutually agreed and essential representations for glioma grades. With a specially designed manifold discrepancy measurement, the grading model can exploit the input image data and expert knowledge more effectively and get rid of the requirement for lesion segmentation maps at the testing phase. The proposed framework is flexible regarding the deep learning architecture to be used. Three different architectures have been evaluated and five models have been compared, showing that our framework always produces promising results.
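One way such a manifold discrepancy measurement could be instantiated, as a hedged sketch: compare the pairwise-distance geometry of learned deep features against that of hand-crafted radiomics features within a batch. The normalized Euclidean distance matrices used here are an illustrative assumption, not the paper's exact measure.

```python
# Sketch of a manifold-discrepancy term (PyTorch): align the pairwise
# distance geometry of deep features with that of radiomics features.
# The normalized Euclidean distance matrices are an illustrative choice.
import torch

def manifold_discrepancy(deep_feats, radiomic_feats):
    d_deep = torch.cdist(deep_feats, deep_feats)
    d_rad = torch.cdist(radiomic_feats, radiomic_feats)
    d_deep = d_deep / (d_deep.max() + 1e-8)   # scale-invariant comparison
    d_rad = d_rad / (d_rad.max() + 1e-8)
    return ((d_deep - d_rad) ** 2).mean()

loss = manifold_discrepancy(torch.randn(16, 128), torch.randn(16, 40))
```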
The Prostate Imaging Reporting and Data System (PI-RADS), based on multi-parametric MRI, classifies patients into five categories (PI-RADS 1-5) for routine clinical diagnosis guidance. However, there is no consensus on whether PI-RADS 3 patients should undergo biopsy. Mining the features of these hard samples (HS) is meaningful for physicians to achieve accurate diagnoses. At present, the mining of HS biomarkers is insufficient, and the effectiveness and robustness of HS biomarkers for prostate cancer diagnosis have not been explored. In this study, biomarkers from different data distributions are constructed. The results show that HS biomarkers can achieve better performance across different data distributions.
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
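A hedged sketch of a fairness-aware distillation objective of this flavor: standard soft-label KD plus a demographic-parity-style penalty on the student's predictions. The bias term is a simple illustrative stand-in, not RELIANT's actual debiasing mechanism.

```python
# Sketch of a fairness-aware KD objective (PyTorch). The KD term is
# standard soft-label distillation; the bias term is a demographic-
# parity-style penalty used as an illustrative stand-in, not RELIANT's
# actual debiasing mechanism.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sensitive, T=2.0, lam=1.0):
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Penalize differing average predictions across sensitive groups
    # (assumes both groups are present in the batch).
    probs = F.softmax(student_logits, dim=1)
    gap = probs[sensitive == 0].mean(0) - probs[sensitive == 1].mean(0)
    return kd + lam * gap.abs().sum()
```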
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minority classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method can bring significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
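For concreteness, a simplex equiangular tight frame with K vertices in d dimensions (d >= K) can be built as M = sqrt(K/(K-1)) U (I - (1/K) 11^T) for any U with orthonormal columns, and a center regularizer can pull per-class feature means toward its columns. The cosine-alignment form of the loss below is an illustrative assumption, not necessarily the paper's regularizer.

```python
# Sketch of a center regularizer toward a simplex equiangular tight
# frame (ETF) (PyTorch). The cosine-alignment loss is an illustrative
# assumption.
import torch
import torch.nn.functional as F

def simplex_etf(num_classes, dim):
    # Target directions M = sqrt(K/(K-1)) * U (I - 11^T / K), with U
    # having orthonormal columns (requires dim >= num_classes).
    u, _ = torch.linalg.qr(torch.randn(dim, num_classes))
    centering = (torch.eye(num_classes)
                 - torch.ones(num_classes, num_classes) / num_classes)
    return (num_classes / (num_classes - 1)) ** 0.5 * u @ centering

def center_regularizer(feature_centers, etf):
    # feature_centers: (K, dim) per-class feature means; pull each
    # toward its ETF direction by maximizing cosine similarity.
    c = F.normalize(feature_centers, dim=1)
    t = F.normalize(etf.t(), dim=1)
    return (1.0 - (c * t).sum(dim=1)).mean()

etf = simplex_etf(num_classes=20, dim=64)
reg = center_regularizer(torch.randn(20, 64), etf)
```

By construction, any two columns of M have cosine similarity -1/(K-1), the maximally separated arrangement the abstract refers to.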
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support the training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. It is the largest collection of lidar sensor data ever released and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
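As a hypothetical sketch of the per-actor record a forecasting model might consume from such a scenario (the field names below are illustrative assumptions, not the actual Argoverse 2 API schema):

```python
# Hypothetical per-actor record for motion forecasting; field names are
# illustrative assumptions, not the Argoverse 2 package's schema.
from dataclasses import dataclass

@dataclass
class TrackState:
    x: float            # object location in the map frame (meters)
    y: float
    heading: float      # radians
    vx: float           # velocity components (m/s)
    vy: float

@dataclass
class ActorTrack:
    actor_id: str
    category: str               # e.g. "vehicle", "pedestrian"
    scored: bool                # whether future motion must be predicted
    history: list[TrackState]   # observed past states
```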