Exemplar-free class-incremental learning requires a classification model to learn new class knowledge incrementally without retaining any old samples. Recently, frameworks based on parallel one-class classifiers (POC), which train a one-class classifier (OCC) independently for each class, have attracted wide attention because they naturally avoid catastrophic forgetting. However, owing to the independent training strategy for different OCCs, POC suffers from weak discriminability and comparability. To meet this challenge, we propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL). DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAEs) instead of other well-established one-class classifiers (e.g., deep SVDD), because a trained VAE can not only identify the probability of an input sample belonging to a class but also generate pseudo samples of that class to assist in learning new tasks. With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct new-class samples better but old-class pseudo samples worse, thus enhancing comparability. Furthermore, DisCOIL introduces a hinge reconstruction loss to ensure discriminability. We evaluate our method extensively on MNIST, CIFAR10, and Tiny-ImageNet. The experimental results show that DisCOIL achieves state-of-the-art performance.
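The contrastive-reconstruction idea above can be illustrated with a minimal sketch (not the paper's implementation): each class keeps a toy "reconstructor" — here simply a class mean standing in for a VAE — a test sample is assigned to the class that reconstructs it best, and a hinge reconstruction loss penalizes the new-class model unless it reconstructs its own samples at least a margin better than old-class pseudo samples. All function names and the margin value are illustrative assumptions.

```python
import numpy as np

def recon_error(x, class_mean):
    # toy "reconstructor": each class reconstructs an input as its mean;
    # a real implementation would use the per-class VAE's reconstruction loss
    return np.sum((x - class_mean) ** 2, axis=-1)

def hinge_recon_loss(err_new_samples, err_old_pseudo, margin=1.0):
    # hinge reconstruction loss (illustrative): push the new-class model to
    # reconstruct its own samples at least `margin` better than it
    # reconstructs pseudo samples generated from old-class models
    return np.maximum(0.0, margin + err_new_samples - err_old_pseudo).mean()

# incremental classification: pick the class whose model reconstructs best
class_means = {0: np.zeros(4), 1: np.ones(4)}
x = np.full(4, 0.9)
pred = min(class_means, key=lambda c: recon_error(x, class_means[c]))
```

Because each class is scored only by its own model, adding a class never requires retraining the old ones — the comparability of the scores is what the contrastive hinge term enforces.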
Multimodal sentiment analysis has a wide range of applications thanks to the complementary information carried in multimodal interactions. Previous works focus more on learning effective joint representations, but they rarely consider the insufficiency of unimodal feature extraction or the data redundancy in multimodal fusion. In this paper, a video-based cross-modal auxiliary network (VCAN) is proposed, which consists of an audio feature map module and a cross-modal selection module. The first module aims to substantially increase the diversity of extracted audio features, improving classification accuracy by providing a more comprehensive acoustic representation. To empower the model to handle redundant visual features, the second module effectively filters out redundant visual frames when integrating audiovisual data. In addition, a classifier group consisting of several image classification networks is introduced to predict sentiment polarity and emotion categories. Extensive experimental results on the RAVDESS, CMU-MOSI, and CMU-MOSEI benchmarks show that VCAN significantly outperforms state-of-the-art methods in improving the classification accuracy of multimodal sentiment analysis.
A large population worldwide suffers from cognitive impairment. Early detection of cognitive impairment is of great importance to both patients and caregivers. However, existing approaches have shortcomings, such as the time consumption and financial cost involved in clinical and neuroimaging procedures. It has been found that patients with cognitive impairment show abnormal emotion patterns. In this paper, we present a novel deep-convolutional-network-based system that detects cognitive impairment by analyzing the evolution of facial emotions while participants watch designed video stimuli. In the proposed system, a novel facial expression recognition algorithm is developed using layers from MobileNet and a support vector machine (SVM), showing satisfactory performance on three datasets. To validate the proposed system for detecting cognitive impairment, 61 elderly people, including patients with cognitive impairment and healthy people as a control group, were invited to participate in the experiments, and a dataset was built accordingly. With this dataset, the proposed system successfully achieves a detection accuracy of 73.3%.
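The deep-features-plus-SVM recipe described above can be sketched in a few lines. The sketch below substitutes synthetic feature vectors for pooled MobileNet activations and trains a minimal linear SVM by hinge-loss subgradient descent; all hyperparameters and names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_linear_svm(X, y, epochs=200, lr=0.1, lam=0.01):
    # minimal linear SVM via subgradient descent on the regularized hinge
    # loss; labels y must be in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # sample inside the margin
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only shrink the weights
                w -= lr * lam * w
    return w, b

# stand-in for deep features (e.g., pooled MobileNet activations)
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(-2, 0.5, (20, 8)), rng.normal(2, 0.5, (20, 8))])
labels = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(feats, labels)
acc = np.mean(np.sign(feats @ w + b) == labels)
```

In practice one would freeze the convolutional layers, extract their activations for each frame, and fit the SVM on those; the split keeps the per-frame classifier cheap to retrain on small clinical datasets.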
FDG-PET reveals altered brain metabolism in individuals with mild cognitive impairment (MCI) and Alzheimer's disease (AD). Some biomarkers derived from FDG-PET by computer-aided diagnosis (CAD) techniques have been proven to accurately diagnose normal controls (NC), MCI, and AD. However, studies that identify early MCI (EMCI) and late MCI (LMCI) using FDG-PET images remain insufficient. Compared with studies based on fMRI and DTI images, the inter-region representation features of FDG-PET images are under-explored. Moreover, considering the variability across individuals, some hard samples that closely resemble both classes limit classification performance. To tackle these problems, this paper proposes a novel bilinear pooling and metric learning network (BMNet), which can extract inter-region representation features and distinguish hard samples by constructing an embedding space. To validate the proposed method, we collected 998 FDG-PET images from ADNI. Following common preprocessing steps, 90 features were extracted from each FDG-PET image according to the automatic anatomical landmark (AAL) template and then fed into the proposed network. Extensive 5-fold cross-validation experiments were conducted on multiple two-class classification tasks. The experiments show that most metrics improve after adding the bilinear pooling module and the metric loss to the baseline model. Specifically, in the classification task between EMCI and LMCI, specificity improves by 6.38% after adding the triplet metric loss, and the negative predictive value (NPV) improves by 3.45% after using the bilinear pooling module.
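The two added modules can be sketched independently of the network. Below, bilinear pooling of a regional feature vector is taken as the outer product (capturing pairwise inter-region interactions) followed by the usual signed square-root and L2 normalization, and the metric loss is a standard triplet hinge on embedding distances; this is a generic sketch of those building blocks, not the BMNet architecture itself.

```python
import numpy as np

def bilinear_pool(f):
    # second-order (bilinear) pooling of a regional feature vector: the
    # outer product models pairwise inter-region interactions
    b = np.outer(f, f).ravel()
    b = np.sign(b) * np.sqrt(np.abs(b))      # signed square-root scaling
    return b / (np.linalg.norm(b) + 1e-12)   # L2 normalization

def triplet_loss(anchor, positive, negative, margin=0.2):
    # metric-learning objective: pull same-class embeddings together and
    # push different-class ones apart by at least `margin`, which is what
    # helps separate hard samples lying near the class boundary
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

f = np.array([1.0, -2.0, 0.5])   # stand-in for 3 of the 90 AAL features
emb = bilinear_pool(f)
```

With the full 90-dimensional AAL vector the pooled representation is 90 x 90, which is why such models typically follow the pooling with a dimensionality-reducing layer before the embedding space.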
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
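As a point of reference for the operator family being learned, the finite-dimensional Koopman idea can be illustrated with a classical dynamic-mode-decomposition-style least-squares estimate: given snapshot pairs of a dynamical system, the operator mapping each state to its successor is recovered by a pseudo-inverse. This is a linear toy example, not the KoopmanLab API; the dynamics matrix and trajectory length are arbitrary assumptions.

```python
import numpy as np

# Toy linear dynamics x_{t+1} = A x_t; KNO-style methods learn a neural
# parameterization of the (generally infinite-dimensional) Koopman operator,
# but in the linear case the operator is just the matrix A itself.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
traj = [np.array([1.0, 1.0])]
for _ in range(20):
    traj.append(A @ traj[-1])

X = np.stack(traj[:-1], axis=1)   # snapshots at time t (columns)
Y = np.stack(traj[1:], axis=1)    # snapshots at time t+1
K = Y @ np.linalg.pinv(X)         # least-squares Koopman estimate K ~ Y X^+
```

For nonlinear PDEs the state is first lifted into a learned observable space in which the dynamics are (approximately) linear, which is what the neural-network variants in the module parameterize.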
Rankings are widely collected in various real-life scenarios, leading to the leakage of personal information such as users' preferences on videos or news. To protect rankings, existing works mainly develop privacy protection on a single ranking within a set of ranking or pairwise comparisons of a ranking under the $\epsilon$-differential privacy. This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting ranks. We establish the connection between the Mallows model (Mallows, 1957) and the proposed $\epsilon$-ranking differential privacy. This allows us to develop a multistage ranking algorithm to generate synthetic rankings while satisfying the developed $\epsilon$-ranking differential privacy. Theoretical results regarding the utility of synthetic rankings in the downstream tasks, including the inference attack and the personalized ranking tasks, are established. For the inference attack, we quantify how $\epsilon$ affects the estimation of the true ranking based on synthetic rankings. For the personalized ranking task, we consider varying privacy preferences among users and quantify how their privacy preferences affect the consistency in estimating the optimal ranking function. Extensive numerical experiments are carried out to verify the theoretical results and demonstrate the effectiveness of the proposed synthetic ranking algorithm.
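The Mallows connection can be made concrete with a small sketch: the repeated-insertion scheme below samples from a Mallows model centered at the identity ranking, the standard sampler underlying multistage ranking algorithms. How the dispersion parameter phi maps to the privacy budget epsilon is the paper's contribution and is not reproduced here; the function name and parameter values are illustrative.

```python
import random

def sample_mallows(n, phi, rng):
    # repeated-insertion sampling from a Mallows model centered at the
    # identity ranking 0..n-1; phi in (0, 1], smaller = more concentrated
    ranking = []
    for i in range(n):
        # inserting item i at position j displaces it by (i - j), so the
        # insertion weight is phi ** (i - j) for j in 0..i
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        j = 0
        while r > weights[j]:
            r -= weights[j]
            j += 1
        ranking.insert(j, i)
    return ranking

rng = random.Random(0)
sample = sample_mallows(5, 0.3, rng)   # a synthetic ranking of 5 items
```

Each stage perturbs one item's position independently, which is what makes the stage-wise privacy accounting of a multistage synthetic-ranking algorithm tractable.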
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of data adaptively, the lack of research on explicitly quantifying the data quality of each view during fusion renders these models inexplicable and makes them perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value that describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
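The evidential mechanism can be sketched with subjective logic: each view's network outputs non-negative per-class evidence, interpreted as a Dirichlet distribution with parameters alpha = evidence + 1, whose vacuity u = K / S (K classes, S the Dirichlet strength) measures that view's decision risk. The fusion rule below, which simply weights views by 1 - u, is an illustrative simplification of a decision-level strategy, not the paper's exact rule.

```python
import numpy as np

def view_uncertainty(evidence):
    # subjective-logic uncertainty from per-class evidence: with Dirichlet
    # parameters alpha = evidence + 1 and strength S = sum(alpha), the
    # vacuity is u = K / S and the per-class belief is e_k / S
    alpha = np.asarray(evidence, dtype=float) + 1.0
    S = alpha.sum()
    return len(alpha) / S, (alpha - 1.0) / S

def fuse_views(probs_a, u_a, probs_b, u_b):
    # decision-level fusion sketch: the view with lower uncertainty
    # (lower decision risk) receives proportionally more weight
    fused = (1.0 - u_a) * probs_a + (1.0 - u_b) * probs_b
    return fused / fused.sum()

u_air, b_air = view_uncertainty([9.0, 1.0, 0.0])   # confident aerial view
u_gnd, b_gnd = view_uncertainty([0.4, 0.3, 0.3])   # uncertain ground view
fused = fuse_views(b_air / b_air.sum(), u_air, b_gnd / b_gnd.sum(), u_gnd)
```

A view that has seen little supporting evidence (small S) yields high vacuity and is automatically down-weighted, which is what makes the fused decision auditable rather than a black-box weighted sum.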
A noisy training set usually leads to the degradation of the generalization and robustness of neural networks. In this paper, we propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. Under general scenarios, the conditions may no longer be satisfied, and some noisy data are falsely selected as clean data. To solve this problem, we propose a data-adaptive method for Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) in the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
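The mean-shift idea behind SPR can be sketched with a small alternating solver for the classical model y = X beta + gamma + noise, where gamma is a sparse per-sample shift penalized by an L1 term: samples whose shift is driven exactly to zero are declared clean. This is a generic penalized-regression illustration (no knockoff filter, scalar labels instead of one-hot, hand-picked lambda), not the paper's algorithm.

```python
import numpy as np

def soft_threshold(r, lam):
    # proximal operator of the L1 penalty: shrinks residuals toward zero
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def select_clean(X, y, lam=0.5, iters=50):
    # alternating minimization of ||y - X b - g||^2 / 2 + lam * ||g||_1:
    # g_i absorbs the label noise of sample i; g_i == 0 marks it clean
    gamma = np.zeros(len(y))
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X, y - gamma, rcond=None)
        gamma = soft_threshold(y - X @ beta, lam)
    return beta, gamma, gamma == 0.0

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=40)
y[:4] += 5.0                         # inject 4 corrupted labels
beta, gamma, clean = select_clean(X, y)
```

The objective is jointly convex in (beta, gamma), so the alternating scheme converges to the global optimum; the knockoff construction in the paper is what upgrades this heuristic selection to one with a controlled false-selection rate.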
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.