Video-based remote physiological measurement uses facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurement achieve state-of-the-art performance, but they require facial videos with ground-truth physiological signals for model training. In this paper, we propose an unsupervised rPPG measurement method that does not require ground-truth signals for training. We use a 3DCNN model to generate multiple rPPG signals from each video at different spatiotemporal locations and train the model with a contrastive loss, in which rPPG signals from the same video are pulled together while those from different videos are pushed apart. We test on five public datasets, including RGB videos and NIR videos. The results show that our method outperforms the previous unsupervised baseline and achieves accuracy very close to the current best supervised rPPG methods on all five datasets. Furthermore, we show that our method runs faster and is more robust than the previous unsupervised baseline. Our code is available at https://github.com/zhaodongsun/contrast-phys.
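A minimal PyTorch sketch of the contrastive objective described above, not the authors' implementation: rPPG samples from the same video are pulled together and samples from different videos are pushed apart, with distances measured between normalized power spectral densities. All names below are illustrative assumptions.

```python
import torch

def normalized_psd(signals):
    """signals: (N, T) rPPG waveforms; returns normalized power spectra."""
    spectrum = torch.fft.rfft(signals, dim=-1).abs() ** 2
    return spectrum / spectrum.sum(dim=-1, keepdim=True)

def mean_pairwise_dist(p, q):
    """Mean squared distance over all row pairs of p and q."""
    return ((p.unsqueeze(1) - q.unsqueeze(0)) ** 2).mean()

def contrastive_rppg_loss(signals_a, signals_b):
    """signals_a, signals_b: (N, T) rPPG samples drawn from two different videos."""
    psd_a, psd_b = normalized_psd(signals_a), normalized_psd(signals_b)
    # Pull together samples from the same video, push apart samples from different videos.
    pull = mean_pairwise_dist(psd_a, psd_a) + mean_pairwise_dist(psd_b, psd_b)
    push = mean_pairwise_dist(psd_a, psd_b)
    return pull - push
```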
Objective: We propose a non-contact method for atrial fibrillation (AF) detection from facial videos. Methods: Facial videos, electrocardiograms (ECG), and contact photoplethysmograms (PPG) were recorded from 100 healthy subjects and 100 AF patients. All recordings from healthy subjects were labeled as healthy. Two cardiologists evaluated the patients' ECG recordings and labeled each one as AF, sinus rhythm (SR), or atrial flutter (AFL). We use a 3D convolutional neural network for remote PPG monitoring and propose a new loss function (Wasserstein distance) that uses the timing of systolic peaks from the contact PPG as labels for model training. A set of heart rate variability (HRV) features is then computed from the inter-beat intervals, and a support vector machine (SVM) classifier is trained on the HRV features. Results: Our proposed method can accurately extract systolic peaks from facial videos for AF detection. The method was trained with 10-fold cross-validation on 30-s video clips and tested on two tasks. 1) Classification of healthy versus AF: accuracy, sensitivity, and specificity were 96.00%, 95.36%, and 96.12%. 2) Classification of SR versus AF: accuracy, sensitivity, and specificity were 95.23%, 98.53%, and 91.12%. In addition, we demonstrate the feasibility of non-contact AFL detection. Conclusion: We achieve good performance for non-contact AF detection by learning systolic peaks. Significance: Non-contact AF detection can be used for self-screening of suspected populations at home or for self-monitoring of chronic patients after treatment.
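A minimal sketch of the second stage described above: standard HRV statistics computed from inter-beat intervals and fed to an SVM classifier. The exact feature set used in the paper may differ; the features, names, and synthetic data below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def hrv_features(peak_times_s):
    """peak_times_s: 1-D array of systolic peak times (seconds) in one clip."""
    ibi = np.diff(peak_times_s) * 1000.0          # inter-beat intervals in ms
    sdnn = ibi.std()                              # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))   # beat-to-beat variability
    pnn50 = np.mean(np.abs(np.diff(ibi)) > 50.0)  # fraction of large successive changes
    return np.array([ibi.mean(), sdnn, rmssd, pnn50])

# Hypothetical usage on synthetic peak times: one feature vector per 30-s clip.
rng = np.random.default_rng(0)
clips = [np.cumsum(rng.uniform(0.6, 1.1, 35)) for _ in range(200)]
labels = rng.integers(0, 2, 200)                  # 0 = healthy, 1 = AF (placeholder)
X = np.stack([hrv_features(c) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)
```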
This paper creates a novel method of deep neural style transfer by generating style images from freeform user text input. The language model and style transfer model form a seamless pipeline that can create output images with similar losses and improved quality when compared to baseline style transfer methods. The language model returns a closely matching image given a style text and description input, which is then passed to the style transfer model with an input content image to create a final output. A proof-of-concept tool is also developed to integrate the models and demonstrate the effectiveness of deep image style transfer from freeform text.
Transformer verification draws increasing attention from machine learning research and industry. It formally verifies the robustness of transformers against adversarial attacks such as exchanging words with synonyms. However, the performance of transformer verification is still not satisfactory due to its bound-centric computation, which differs significantly from standard neural networks. In this paper, we propose Faith, an efficient framework for transformer verification on GPUs. We first propose a semantic-aware computation graph transformation to identify semantic information such as bound computation in transformer verification. We exploit such semantic information to enable efficient kernel fusion at the computation graph level. Second, we propose a verification-specialized kernel crafter to efficiently map transformer verification onto modern GPUs. The crafter exploits a set of GPU hardware supports to accelerate verification-specialized operations, which are usually memory-intensive. Third, we propose an expert-guided autotuning to incorporate expert knowledge of GPU backends and facilitate exploration of the large search space. Extensive evaluations show that Faith achieves $2.1\times$ to $3.4\times$ ($2.6\times$ on average) speedup over state-of-the-art frameworks.
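To make the "bound-centric computation" concrete, the following sketch shows standard interval bound propagation through a single linear layer, the kind of operation such verification workloads repeat at every layer. It is for illustration only and is unrelated to Faith's fused GPU kernels; all names are assumptions.

```python
import torch

def linear_interval_bounds(lower, upper, weight, bias):
    """Propagate elementwise input bounds [lower, upper] through x @ W^T + b."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ weight.t() + bias
    out_radius = radius @ weight.abs().t()   # worst-case spread of the interval
    return out_center - out_radius, out_center + out_radius

# Hypothetical usage: bounds on embeddings perturbed by synonym substitutions.
x = torch.randn(4, 16)
lower, upper = x - 0.1, x + 0.1
w, b = torch.randn(32, 16), torch.randn(32)
lb, ub = linear_interval_bounds(lower, upper, w, b)
assert torch.all(lb <= ub)
```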
Self-distillation exploits non-uniform soft supervision from the network itself during training and improves performance without any runtime cost. However, the overhead incurred during training is often overlooked, and in the era of giant models the time and memory overhead of training matters more and more. This paper proposes an efficient self-distillation method named Zipf's Label Smoothing (Zipf's LS), which uses the network's on-the-fly predictions to generate soft supervision that conforms to a Zipf distribution, without using any contrastive samples or auxiliary parameters. Our idea comes from the empirical observation that when a network is properly trained, its output values, after sorting by magnitude and averaging across samples, should follow a distribution reminiscent of Zipf's law in natural language frequency statistics. By enforcing this property at the sample level throughout the training period, we find that the prediction accuracy can be greatly improved. Using ResNet50 on the INAT21 fine-grained classification dataset, our technique achieves a +3.61% accuracy gain compared with the vanilla baseline, and a +0.88% gain compared with previous label smoothing or self-distillation strategies. The implementation is publicly available at https://github.com/megvii-research/zipfls.
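A minimal sketch of the idea described above: build a soft target whose non-target classes follow a Zipf distribution, ordered by the network's own on-the-fly ranking of logits. The target mass and loss weighting used in the paper may differ; everything below is an illustrative assumption rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def zipf_soft_targets(logits, labels, target_mass=0.9):
    """logits: (N, C); labels: (N,). Returns Zipf-shaped soft labels of shape (N, C)."""
    n, c = logits.shape
    ranks = torch.arange(1, c, dtype=logits.dtype, device=logits.device)
    zipf = (1.0 / ranks)
    zipf = zipf / zipf.sum() * (1.0 - target_mass)   # mass left for non-target classes

    targets = torch.zeros_like(logits)
    targets[torch.arange(n), labels] = target_mass
    # Rank each sample's non-target classes by its own logits and assign the
    # Zipf weights in that order.
    masked = logits.clone()
    masked[torch.arange(n), labels] = float("-inf")
    order = masked.argsort(dim=1, descending=True)[:, : c - 1]
    targets.scatter_(1, order, zipf.repeat(n, 1))
    return targets

def zipf_ls_loss(logits, labels):
    soft = zipf_soft_targets(logits, labels)
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```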
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
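A minimal conceptual sketch of a Koopman-style neural operator, written to illustrate the idea rather than to reproduce KoopmanLab's actual API: encode the discretized PDE state into a latent observable space, advance it with a learned linear (Koopman) operator, and decode back. All class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class TinyKoopmanOperator(nn.Module):
    def __init__(self, state_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, latent_dim))
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)  # linear latent dynamics
        self.decoder = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, state_dim))

    def forward(self, u, steps=1):
        """u: (batch, state_dim) discretized field; roll the latent dynamics forward."""
        z = self.encoder(u)
        for _ in range(steps):
            z = self.koopman(z)
        return self.decoder(z)

# Hypothetical usage: multi-step prediction on a 1-D Burgers-like state vector.
model = TinyKoopmanOperator(state_dim=128)
u_next = model(torch.randn(8, 128), steps=4)
```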
Rankings are widely collected in various real-life scenarios, leading to the leakage of personal information such as users' preferences on videos or news. To protect rankings, existing works mainly develop privacy protection on a single ranking within a set of ranking or pairwise comparisons of a ranking under the $\epsilon$-differential privacy. This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting ranks. We establish the connection between the Mallows model (Mallows, 1957) and the proposed $\epsilon$-ranking differential privacy. This allows us to develop a multistage ranking algorithm to generate synthetic rankings while satisfying the developed $\epsilon$-ranking differential privacy. Theoretical results regarding the utility of synthetic rankings in the downstream tasks, including the inference attack and the personalized ranking tasks, are established. For the inference attack, we quantify how $\epsilon$ affects the estimation of the true ranking based on synthetic rankings. For the personalized ranking task, we consider varying privacy preferences among users and quantify how their privacy preferences affect the consistency in estimating the optimal ranking function. Extensive numerical experiments are carried out to verify the theoretical results and demonstrate the effectiveness of the proposed synthetic ranking algorithm.
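A minimal sketch of sampling a synthetic ranking from a Mallows model via the repeated insertion method, which gives a flavor of the multistage generation described above. The mapping from the privacy budget $\epsilon$ to the dispersion parameter and all names below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sample_mallows(reference, phi, rng=None):
    """reference: items in the true ranking; phi in (0, 1]: dispersion.
    Smaller phi concentrates samples near the reference ranking."""
    rng = np.random.default_rng() if rng is None else rng
    ranking = []
    for i, item in enumerate(reference):
        # Insert item i at position j in {0, ..., i} with probability proportional
        # to phi ** (i - j): positions near the item's true rank are favored.
        weights = phi ** (i - np.arange(i + 1))
        position = rng.choice(i + 1, p=weights / weights.sum())
        ranking.insert(position, item)
    return ranking

# Hypothetical usage: larger phi (more noise) corresponds to stronger privacy.
print(sample_mallows(["a", "b", "c", "d", "e"], phi=0.3))
```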
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of data adaptively, a lack of research on explicitly quantifying the data quality of each view when fusing them renders these models inexplicable, performing unsatisfactorily and inflexible in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value which describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
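A minimal sketch of the per-view uncertainty described above: treat each view's network output as evidence for a Dirichlet distribution and derive an uncertainty mass, then weight the views accordingly. The fusion rule here is a simplified illustration, not the paper's exact decision-level strategy; all names are assumptions.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """logits: (N, K). Returns expected class probabilities and uncertainty mass u."""
    evidence = F.softplus(logits)              # non-negative evidence per class
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    probs = alpha / strength                   # expected class probabilities
    uncertainty = logits.shape[1] / strength   # u = K / S, in (0, 1]
    return probs, uncertainty

def fuse_two_views(logits_aerial, logits_ground):
    """Give the lower-risk (less uncertain) view more weight in the decision."""
    p_a, u_a = dirichlet_uncertainty(logits_aerial)
    p_g, u_g = dirichlet_uncertainty(logits_ground)
    w_a, w_g = (1 - u_a), (1 - u_g)
    return (w_a * p_a + w_g * p_g) / (w_a + w_g)
```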
A noisy training set usually leads to the degradation of the generalization and robustness of neural networks. In this paper, we propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method, to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. Under general scenarios, the conditions may be no longer satisfied; and some noisy data are falsely selected as clean data. To solve this problem, we propose a data-adaptive method for Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which is provable to control the False-Selection-Rate (FSR) in the selected clean data. To improve the efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel to make the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
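A minimal sketch, not the authors' SPR or Knockoffs-SPR code, of the core idea: regress one-hot labels on network features with a per-sample mean-shift term, and treat samples whose shift shrinks to zero as clean. The simple thresholding below stands in for the penalized solver and FSR control; all names and the threshold are assumptions.

```python
import numpy as np

def select_clean_samples(features, onehot_labels, threshold=1.0):
    """features: (N, D); onehot_labels: (N, C). Returns a boolean clean-sample mask."""
    # Least-squares fit of the linear relation between features and labels.
    beta, *_ = np.linalg.lstsq(features, onehot_labels, rcond=None)
    residual = onehot_labels - features @ beta      # per-sample mean-shift estimate
    shift_norm = np.linalg.norm(residual, axis=1)
    # Samples whose (soft-thresholded) mean shift is zero are kept as clean.
    return shift_norm <= threshold

# Hypothetical usage on random features and possibly noisy labels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 32))
labels = np.eye(10)[rng.integers(0, 10, 500)]
clean_mask = select_clean_samples(feats, labels)
```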