With the widespread use of face recognition systems, their vulnerability has come under increasing scrutiny. While existing face anti-spoofing methods can generalize across attack types, generic solutions remain challenging due to the diversity of spoof characteristics. Recently, the spoof trace disentanglement framework has shown great potential for coping with both seen and unseen spoof scenarios, but its performance is largely restricted by single-modal input. This paper focuses on this issue and presents a multi-modal disentanglement model that learns polysemantic spoof traces for more accurate and robust generic attack detection. In particular, based on an adversarial learning mechanism, a two-stream disentangling network is designed to estimate spoof patterns from RGB and depth inputs, respectively, thereby capturing the complementary spoofing clues inherent in different attacks. Furthermore, a fusion module is employed that recalibrates both representations at multiple stages to promote disentanglement within each individual modality, and then performs cross-modality aggregation to deliver a more comprehensive spoof trace representation for prediction. Extensive evaluations on multiple benchmarks demonstrate that learning polysemantic spoof traces contributes favorably to anti-spoofing, with more perceptible and interpretable results.
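As a rough illustration of the two-stream-plus-fusion design described above, a minimal PyTorch sketch is given below. It is not the paper's actual architecture: the encoder widths, the channel-gating recalibration, and the prediction head are illustrative placeholders.

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """One disentangling stream: maps an input modality to a spoof-trace feature map."""
    def __init__(self, in_ch, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class CrossModalFusion(nn.Module):
    """Recalibrates each stream with channel gates derived from both, then aggregates."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * feat_ch, 2 * feat_ch), nn.Sigmoid())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(2 * feat_ch, 1))  # live/spoof logit
    def forward(self, f_rgb, f_depth):
        pooled = torch.cat([f_rgb.mean(dim=(2, 3)), f_depth.mean(dim=(2, 3))], dim=1)
        g = self.gate(pooled)                         # per-channel recalibration weights
        g_rgb, g_depth = g.chunk(2, dim=1)
        f_rgb = f_rgb * g_rgb[:, :, None, None]       # recalibrate each modality
        f_depth = f_depth * g_depth[:, :, None, None]
        fused = torch.cat([f_rgb, f_depth], dim=1)    # cross-modality aggregation
        return self.head(fused)

rgb, depth = torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128)
enc_rgb, enc_depth = StreamEncoder(3), StreamEncoder(1)
logit = CrossModalFusion()(enc_rgb(rgb), enc_depth(depth))
print(logit.shape)  # torch.Size([2, 1])
```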
The ability of neural networks (NNs) to learn and remember multiple tasks sequentially faces a tough challenge on the road to general artificial intelligence, due to the catastrophic forgetting (CF) problem. Fortunately, the recent OWM (Orthogonal Weight Modification) and several other continual learning (CL) methods have suggested some promising ways to overcome CF. However, none of the existing CL methods explores the following three crucial questions for effectively overcoming CF: (i) what knowledge helps an NN make effective weight modifications during sequential task learning? (ii) when the data distribution of a new learning task changes relative to the previously learned tasks, should a uniform or a task-specific weight-modification strategy be adopted? (iii) for a given CL method, what is the upper bound on the number of learnable tasks? To address these questions, in this paper we first reveal that the weight gradient of a new learning task is determined jointly by the input space of the new task and the weight space of the previously learned tasks. Building on this observation and the recursive least squares method, we propose EOWM, a new efficient and effective continual learning method via enhanced OWM. We theoretically and explicitly derive the upper bound on the number of tasks learnable by our EOWM. Extensive experiments on benchmarks demonstrate that EOWM is effective and outperforms all state-of-the-art CL baselines.
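The core OWM mechanic referenced above can be sketched as follows: weight updates are projected onto the orthogonal complement of previous tasks' input subspace, with the projector maintained by a recursive-least-squares update. This is a generic OWM-style sketch, not the paper's exact EOWM formulation; the layer sizes and the regularization constant alpha are illustrative.

```python
import numpy as np

class OWMLinear:
    """A single linear layer whose updates are projected orthogonally to the
    input subspace of previously learned tasks (generic OWM-style update)."""
    def __init__(self, in_dim, out_dim, alpha=1.0):
        self.W = 0.01 * np.random.randn(out_dim, in_dim)
        self.P = np.eye(in_dim) / alpha      # projector onto the orthogonal complement

    def forward(self, x):                    # x: (in_dim,)
        return self.W @ x

    def update(self, x, grad_out, lr=0.1):
        # Per-sample gradient w.r.t. W; projecting its input direction with P
        # keeps the input space of old tasks (approximately) untouched.
        grad_W = np.outer(grad_out, x)
        self.W -= lr * grad_W @ self.P

    def consolidate(self, x):
        # Recursive-least-squares update of the projector after learning on x:
        # P <- P - (P x x^T P) / (1 + x^T P x)
        Px = self.P @ x
        self.P -= np.outer(Px, Px) / (1.0 + x @ Px)
```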
Fashion compatibility models enable online retailers to easily obtain a large number of well-composed outfits. However, effective fashion recommendation requires deeper fashion cognition to provide precise service for each customer. In this paper, we conduct the first study on fashion cognitive learning, i.e., fashion recommendation conditioned on personal physical information. To this end, we propose a Fashion Cognitive Network (FCN) to learn the relationships between the visual-semantic embeddings of outfit compositions and the appearance features of individuals. FCN contains two submodules, namely an outfit encoder and a multi-label graph neural network (ML-GCN). The outfit encoder uses convolutional layers to encode an outfit into an outfit embedding, while the latter module learns label classifiers via stacked GCNs. We conduct extensive experiments on the newly collected O4U dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.
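A minimal sketch of the two submodules described above is shown below, assuming a mean-pooled CNN outfit encoder and a two-layer GCN over label embeddings. The adjacency matrix, feature sizes, and the omission of the personal-information conditioning are all simplifications, not the paper's FCN.

```python
import torch
import torch.nn as nn

class OutfitEncoder(nn.Module):
    """Encodes a set of garment images into one outfit embedding (mean-pooled)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim))
    def forward(self, items):                     # items: (B, N_items, 3, H, W)
        b, n = items.shape[:2]
        feats = self.cnn(items.flatten(0, 1)).view(b, n, -1)
        return feats.mean(dim=1)                  # outfit embedding (B, emb_dim)

class GCNLabelClassifier(nn.Module):
    """Stacked GCN over a label graph; produces one classifier vector per label."""
    def __init__(self, num_labels, emb_dim=128, word_dim=64):
        super().__init__()
        self.label_feats = nn.Parameter(torch.randn(num_labels, word_dim))
        self.adj = nn.Parameter(torch.eye(num_labels))   # placeholder adjacency
        self.gcn1 = nn.Linear(word_dim, emb_dim)
        self.gcn2 = nn.Linear(emb_dim, emb_dim)
    def forward(self, outfit_emb):
        h = torch.relu(self.adj @ self.gcn1(self.label_feats))
        w = self.adj @ self.gcn2(h)               # (num_labels, emb_dim) classifiers
        return outfit_emb @ w.t()                 # (B, num_labels) label scores

enc, clf = OutfitEncoder(), GCNLabelClassifier(num_labels=20)
scores = clf(enc(torch.randn(2, 4, 3, 64, 64)))
print(scores.shape)  # torch.Size([2, 20])
```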
Fusing camera and LiDAR information has become a de-facto standard for 3D object detection tasks. Current methods rely on point clouds from the LiDAR sensor as queries to leverage features from the image space. However, this underlying assumption makes the current fusion framework unable to produce any prediction when a LiDAR malfunction occurs, whether minor or major, which fundamentally limits deployment in realistic autonomous driving scenarios. In contrast, we propose a surprisingly simple yet novel fusion framework, dubbed BEVFusion, whose camera stream does not depend on the LiDAR input, thus addressing this downside of previous methods. We empirically show that our framework surpasses the state-of-the-art methods under the normal training settings. Under robustness training settings that simulate various LiDAR malfunctions, our framework significantly surpasses the state-of-the-art methods by 15.7% to 28.9% mAP. To the best of our knowledge, ours is the first framework to handle realistic LiDAR malfunctions, and it can be deployed in realistic scenarios without any post-processing procedure. The code is available at https://github.com/ADLab-AutoDrive/BEVFusion.
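The decoupling argued for above can be sketched as follows: each modality produces its own bird's-eye-view (BEV) feature map, and the fusion step simply consumes whatever streams are present, so a missing LiDAR stream degrades rather than breaks inference. Channel counts and the zero-filling strategy are placeholders, not the released BEVFusion implementation.

```python
import torch
import torch.nn as nn

class BEVFuser(nn.Module):
    """Fuses per-modality BEV feature maps; tolerates a missing LiDAR stream by
    substituting zeros, so the camera stream alone still yields predictions."""
    def __init__(self, cam_ch=64, lidar_ch=64, out_ch=128):
        super().__init__()
        self.lidar_ch = lidar_ch
        self.fuse = nn.Conv2d(cam_ch + lidar_ch, out_ch, 3, padding=1)
    def forward(self, bev_cam, bev_lidar=None):
        if bev_lidar is None:                       # simulated LiDAR malfunction
            bev_lidar = bev_cam.new_zeros(bev_cam.shape[0], self.lidar_ch,
                                          *bev_cam.shape[2:])
        return self.fuse(torch.cat([bev_cam, bev_lidar], dim=1))

fuser = BEVFuser()
bev_cam = torch.randn(1, 64, 180, 180)              # camera-only BEV features
out = fuser(bev_cam)                                # still works without LiDAR
print(out.shape)  # torch.Size([1, 128, 180, 180])
```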
In this paper, we make the first attempt to investigate the integration of deep hash learning with vehicle re-identification. We propose a deep-hashing-based vehicle re-identification framework, dubbed DVHN, which substantially reduces memory usage and improves retrieval efficiency while preserving nearest-neighbor search accuracy. Concretely, DVHN directly learns discrete compact binary hash codes for each image by jointly optimizing the feature-learning network and the hash-code generation module. Specifically, we directly constrain the output of the convolutional neural network to be discrete binary codes and ensure that the learned binary codes are optimal for classification. To optimize the deep discrete hashing framework, we further propose an alternating minimization method for learning binary similarity-preserving hash codes. Extensive experiments on two widely studied vehicle re-identification datasets, VehicleID and VeRi, have demonstrated the superiority of our method over state-of-the-art deep hashing methods. DVHN with 2048 bits achieves accuracy improvements of 13.94% and 10.21% in mAP and Rank@1, respectively, on the VehicleID(800) dataset. For VeRi, we achieve performance gains of 35.45% and 32.72% in Rank@1 and mAP, respectively.
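The general shape of such a hashing head can be sketched with a tanh relaxation of the binary codes trained under a classification loss plus a quantization penalty. This is a common deep-hashing recipe offered as an assumption, not DVHN's exact alternating-minimization procedure; the backbone dimension, bit length, and loss weight are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashingHead(nn.Module):
    """Maps backbone features to k-bit codes via a tanh relaxation; at retrieval
    time the codes are binarized with sign()."""
    def __init__(self, feat_dim=512, num_bits=2048, num_ids=800):
        super().__init__()
        self.hash = nn.Linear(feat_dim, num_bits)
        self.cls = nn.Linear(num_bits, num_ids)     # identity classifier on codes
    def forward(self, feats):
        h = torch.tanh(self.hash(feats))            # relaxed codes in (-1, 1)
        return h, self.cls(h)

def hashing_loss(h, logits, labels, quant_weight=0.1):
    # Classification keeps the codes discriminative; the quantization term pushes
    # each relaxed code toward its binary target sign(h).
    cls_loss = F.cross_entropy(logits, labels)
    quant_loss = (h - h.detach().sign()).pow(2).mean()
    return cls_loss + quant_weight * quant_loss

head = HashingHead()
h, logits = head(torch.randn(4, 512))
loss = hashing_loss(h, logits, torch.randint(0, 800, (4,)))
binary_codes = h.detach().sign()                    # codes used for retrieval
```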
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
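To make the "queries attend jointly to both token streams" idea concrete, here is a small hedged sketch using a standard transformer decoder; the positional-encoding details, token counts, and box head are assumptions, not the released CMT code.

```python
import torch
import torch.nn as nn

class TinyCrossModalDecoder(nn.Module):
    """Object queries attend jointly to concatenated image and point-cloud tokens."""
    def __init__(self, d_model=128, num_queries=50):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(d_model, 7)        # (x, y, z, w, l, h, yaw)
    def forward(self, img_tokens, pts_tokens):
        # img_tokens: (B, N_img, d), pts_tokens: (B, N_pts, d); in this sketch both
        # are assumed to already carry their own (3D-aware) positional encodings.
        memory = torch.cat([img_tokens, pts_tokens], dim=1)
        q = self.queries.unsqueeze(0).expand(memory.size(0), -1, -1)
        return self.box_head(self.decoder(q, memory))  # (B, num_queries, 7)

dec = TinyCrossModalDecoder()
boxes = dec(torch.randn(2, 600, 128), torch.randn(2, 400, 128))
print(boxes.shape)  # torch.Size([2, 50, 7])
```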
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
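The NAIVEATTACK-style step can be illustrated with a short sketch: stamp a small trigger patch onto a fraction of the original images and relabel them to the attacker's target class before distillation begins. The patch, poison rate, and target label are illustrative assumptions, and DOORPING's iterative trigger optimization during distillation is not shown.

```python
import numpy as np

def naive_poison(images, labels, target_label=0, poison_rate=0.05, patch_size=3):
    """Stamp a white square trigger in the bottom-right corner of a random subset
    of images and flip their labels to the attacker's target class.
    images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:, :] = 1.0   # the trigger pattern
    labels[idx] = target_label
    return images, labels, idx

# The poisoned set is then handed to the (unmodified) distillation procedure;
# models trained on the distilled data inherit the trigger-target association.
imgs = np.random.rand(1000, 32, 32, 3).astype(np.float32)
lbls = np.random.randint(0, 10, size=1000)
poisoned_imgs, poisoned_lbls, poisoned_idx = naive_poison(imgs, lbls)
```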
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task, and even more so for drum grooves, which have little precedent in the literature. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 against those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
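As a tiny sketch of how a drum performance might be serialized into text that a pretrained language model can be finetuned on, consider the encoding below. The event vocabulary and time quantization are assumptions for illustration, not the paper's actual encoding.

```python
def drums_to_text(events, steps_per_beat=4):
    """Serialize (beat_time, drum_name) events into a whitespace-separated token
    string, e.g. 'kick@0 hat@2 snare@4 ...', suitable for LM finetuning."""
    tokens = []
    for time, drum in sorted(events):
        step = round(time * steps_per_beat)          # quantize onset to a grid
        tokens.append(f"{drum}@{step}")
    return " ".join(tokens)

# One bar of a basic rock groove as (beat, instrument) pairs.
groove = [(0.0, "kick"), (0.0, "hat"), (0.5, "hat"), (1.0, "snare"),
          (1.0, "hat"), (1.5, "hat"), (2.0, "kick"), (2.5, "hat"),
          (3.0, "snare"), (3.5, "hat")]
print(drums_to_text(groove))
# hat@0 kick@0 hat@2 hat@4 snare@4 hat@6 kick@8 hat@10 snare@12 hat@14
```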
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
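The first ingredient, mask-pooled dynamic class centers used to re-weight query features, can be sketched as follows. The cosine-similarity re-weighting and feature sizes are assumptions, and the second (instance-level, cross-attention) step is omitted.

```python
import torch

def mask_pooled_center(support_feat, support_mask):
    """support_feat: (C, H, W); support_mask: (H, W) in {0, 1}.
    Average features inside the mask to get a dynamic class center (C,)."""
    mask = support_mask.flatten().float()
    feat = support_feat.flatten(1)                   # (C, H*W)
    return (feat * mask).sum(dim=1) / mask.sum().clamp(min=1.0)

def reweight_query(query_feat, center):
    """query_feat: (C, H, W). Scale each spatial location by its cosine
    similarity to the class center, emphasizing class-relevant regions."""
    q = query_feat.flatten(1)                        # (C, H*W)
    sim = torch.cosine_similarity(q, center[:, None], dim=0)  # (H*W,)
    return (q * sim.clamp(min=0)).view_as(query_feat)

sup_f, sup_m = torch.randn(64, 32, 32), (torch.rand(32, 32) > 0.7)
qry_f = torch.randn(64, 32, 32)
enhanced = reweight_query(qry_f, mask_pooled_center(sup_f, sup_m))
print(enhanced.shape)  # torch.Size([64, 32, 32])
```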
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs carry a large number of parameters, which makes them computationally expensive and therefore difficult to deploy on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and can thus be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
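A generic sketch of the "distillation objective plus a fairness penalty" shape is given below, using a demographic-parity-style gap as a stand-in regularizer. This is an assumption-laden illustration of the problem setup, not RELIANT's actual debiasing design; the temperature, weights, and binary sensitive attribute are placeholders.

```python
import torch
import torch.nn.functional as F

def kd_fair_loss(student_logits, teacher_logits, labels, sensitive, T=2.0,
                 kd_weight=0.5, fair_weight=1.0):
    """Cross-entropy + soft-label distillation + a demographic-parity-style
    penalty: the gap in mean positive-class probability between the two
    sensitive groups. sensitive: (N,) boolean group membership."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    p_pos = F.softmax(student_logits, dim=1)[:, 1]
    gap = (p_pos[sensitive].mean() - p_pos[~sensitive].mean()).abs()
    return ce + kd_weight * kd + fair_weight * gap

student_logits, teacher_logits = torch.randn(16, 2), torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
sensitive = torch.rand(16) > 0.5
loss = kd_fair_loss(student_logits, teacher_logits, labels, sensitive)
```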