Although augmentations (e.g., perturbation of graph edges, image crops) boost the efficiency of Contrastive Learning (CL), feature-level augmentation is another plausible, complementary, yet under-researched strategy. Thus, we present a novel spectral feature augmentation for contrastive learning on graphs (and images). To this end, for each data view, we estimate a low-rank approximation per feature map and subtract that approximation from the map to obtain its complement. This is achieved by our proposed incomplete power iteration, a non-standard power-iteration regime which enjoys two valuable byproducts (with merely one or two iterations): (i) it partially balances the spectrum of the feature map, and (ii) it injects noise into the rebalanced singular values of the feature map (spectral augmentation). For two views, we align these rebalanced feature maps, as such an improved alignment step can focus more on the less dominant singular values of the matrices of both views, whereas the spectral augmentation does not affect the spectral angle alignment (singular vectors are not perturbed). We derive the analytical forms of: (i) the incomplete power iteration, to capture its spectrum-balancing effect, and (ii) the variance of the singular values augmented implicitly by the noise. We also show that the spectral augmentation improves the generalization bound. Experiments on graph/image datasets show that our spectral feature augmentation outperforms baselines, is complementary to other augmentation strategies, and is compatible with various contrastive losses.
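As a rough illustration of the mechanism described above, the sketch below subtracts a power-iteration-based rank-1 approximation from a feature map. The rank-1 restriction, function names, and initialization are our assumptions for illustration, not the paper's exact procedure.

```python
import torch

def spectral_feature_augment(Z: torch.Tensor, n_iter: int = 1) -> torch.Tensor:
    """Minimal sketch (assumed rank-1 case): run a few power-iteration steps
    to roughly estimate the leading right singular vector of the feature map
    Z (n x d), form the corresponding rank-1 approximation, and subtract it.

    With only one or two iterations the estimate is deliberately imprecise,
    which -- per the abstract -- partially rebalances the spectrum and injects
    noise into the singular values without rotating the singular vectors used
    for alignment."""
    p = torch.randn(Z.shape[1], 1, device=Z.device)
    p = p / p.norm()
    for _ in range(n_iter):                  # "incomplete": merely 1-2 steps
        p = Z.T @ (Z @ p)                    # one power-iteration step
        p = p / p.norm()
    low_rank = (Z @ p) @ p.T                 # approximate rank-1 component
    return Z - low_rank                      # complement of the approximation

# usage: augment both views before the contrastive alignment step
Z1, Z2 = torch.randn(256, 128), torch.randn(256, 128)
Z1_aug, Z2_aug = spectral_feature_augment(Z1), spectral_feature_augment(Z2)
```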
Collaborative perception has recently shown great potential to improve perception capabilities over single-agent perception. Existing collaborative perception methods usually consider an ideal communication environment. In practice, however, communication systems inevitably suffer from latency issues, causing potential performance degradation and high risks in safety-critical applications such as autonomous driving. To mitigate the effect of this unavoidable communication latency, from a machine learning perspective, we propose the first latency-aware collaborative perception system, which actively adapts asynchronous perceptual features from multiple agents to the same timestamp, promoting robust and effective collaboration. To achieve such feature-level synchronization, we propose a novel latency-compensation module, termed SyncNet, which leverages feature-attention symbiotic estimation and time-modulation techniques. Experimental results show that our method outperforms the state-of-the-art collaborative perception method by 15.6% on the latest collaborative perception dataset, V2X-Sim.
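The abstract leaves the module internals unspecified, so the following hypothetical sketch only illustrates what a feature-level latency-compensation interface could look like: a delay-conditioned modulation of a stale collaborator feature map. The module name, the sigmoid gating, and all shapes are illustrative assumptions, not SyncNet's actual design.

```python
import torch
import torch.nn as nn

class DelayCompensation(nn.Module):
    """Hypothetical sketch: predict the feature map at the ego timestamp from
    a stale collaborator feature map, conditioned on the communication delay.
    SyncNet combines feature attention with time modulation; here we show
    only a delay-modulated convolution with a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.delay_embed = nn.Linear(1, channels)   # embed the scalar delay
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, stale_feat: torch.Tensor, delay: torch.Tensor) -> torch.Tensor:
        # stale_feat: (B, C, H, W); delay: (B, 1), e.g. in seconds
        scale = torch.sigmoid(self.delay_embed(delay))        # (B, C) gates
        modulated = stale_feat * scale[:, :, None, None]      # time modulation
        return stale_feat + self.refine(modulated)            # residual estimate

feat = torch.randn(2, 64, 32, 32)
delay = torch.tensor([[0.1], [0.3]])
synced = DelayCompensation(64)(feat, delay)
```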
We propose a novel dynamic restrained uncertainty weighting loss to experimentally address the problem of balancing the contributions of multiple tasks in the ICML ExVo 2022 Challenge. The multi-task setting aims to jointly recognize the emotions and demographic traits expressed in vocal bursts. Our strategy combines the advantages of uncertainty weighting and dynamic weight averaging, extending the weights with a restraining term to make the learning process more explainable. We implement our proposed loss with a lightweight multi-exit CNN architecture. The resulting harmonic-mean score (0.394) shows a significant improvement over the baseline harmonic-mean score (0.335).
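For intuition, here is a rough sketch of uncertainty-based multi-task weighting with an added restraining term. The abstract does not specify the constraint or the dynamic-averaging component, so the penalty below (keeping effective weights near their mean) and the class name are our assumptions for illustration only.

```python
import torch
import torch.nn as nn

class RestrainedUncertaintyWeighting(nn.Module):
    """Sketch: classic homoscedastic-uncertainty weighting (learned log
    variances s_i weight each task loss as exp(-s_i)*L_i + s_i), plus an
    assumed restraining penalty discouraging weights from collapsing onto
    a single task. Not the paper's exact formulation."""
    def __init__(self, n_tasks: int, lam: float = 0.1):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # s_i = log sigma_i^2
        self.lam = lam

    def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
        weights = torch.exp(-self.log_vars)                  # 1 / sigma_i^2
        weighted = (weights * task_losses + self.log_vars).sum()
        restraint = ((weights - weights.mean()) ** 2).sum()  # restraining term
        return weighted + self.lam * restraint

criterion = RestrainedUncertaintyWeighting(n_tasks=2)
loss = criterion(torch.stack([torch.tensor(0.8), torch.tensor(1.5)]))
```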
Graph contrastive learning (GCL) improves graph representation learning, leading to SOTA results on various downstream tasks. The graph augmentation step is a crucial but scarcely studied step of GCL. In this paper, we show that the node embeddings obtained via graph augmentations are highly biased, somewhat limiting contrastive models from learning discriminative features for downstream tasks. Instead of investigating graph augmentation in the input space, we propose performing augmentations on the hidden features (feature augmentation). Inspired by so-called matrix sketching, we propose COSTA, a novel covariance-preserving feature-space augmentation framework for GCL, which generates augmented features by maintaining a "good sketch" of the original features. To highlight the advantages of feature augmentation with COSTA, we investigate a single-view setting (in addition to the multi-view one) that conserves memory and computation. We show that feature augmentation with COSTA achieves comparable or better results than graph-augmentation-based models.
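A minimal sketch of the matrix-sketching idea follows: a random Gaussian projection of the node-feature matrix preserves the feature covariance in expectation. COSTA's actual sketching operator may differ; the Gaussian variant and function name here are our assumptions.

```python
import torch

def costa_like_sketch(Z: torch.Tensor, k: int) -> torch.Tensor:
    """Sketch: project the N node features down to k "sketched" rows with a
    scaled random Gaussian matrix R. Since E[R^T R] = I, the augmented view
    preserves the feature covariance Z^T Z in expectation, which is the
    covariance-preserving property the abstract alludes to."""
    n = Z.shape[0]
    R = torch.randn(k, n, device=Z.device) / (k ** 0.5)   # JL-style projection
    return R @ Z                                          # (k, d) augmented view

Z = torch.randn(1024, 256)              # node embeddings from the encoder
Z_aug = costa_like_sketch(Z, k=512)     # augmented (sketched) features
```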
Multi-agent collaborative perception can significantly upgrade perception performance by enabling agents to share complementary information with each other through communication. It inevitably results in a fundamental trade-off between perception performance and communication bandwidth. To tackle this bottleneck, we propose a spatial confidence map, which reflects the spatial heterogeneity of perceptual information. It empowers agents to share only spatially sparse yet perceptually critical information, contributing to where to communicate. Based on this novel spatial confidence map, we propose Where2comm, a communication-efficient collaborative perception framework. Where2comm has two distinct advantages: i) it considers pragmatic compression and uses less communication to achieve higher perception performance by focusing on perceptually critical areas; and ii) it can handle varying communication bandwidths by dynamically adjusting the spatial areas involved in communication. To evaluate Where2comm, we consider 3D object detection in both real-world and simulation scenarios with two modalities (camera/LiDAR) and two agent types (cars/drones) on four datasets: OPV2V, V2X-Sim, DAIR-V2X, and our original CoPerception-UAVs. Where2comm consistently outperforms previous methods; for example, it achieves more than $100{,}000\times$ lower communication volume and still outperforms DiscoNet and V2X-ViT on OPV2V. Our code is available at https://github.com/mediabrain-sjtu/where2comm.
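To make the confidence-guided sparsity concrete, here is a hypothetical sketch that keeps only the top fraction of spatial cells ranked by a confidence map and zeroes out the rest, so an agent would transmit a sparse feature map plus its mask. Where2comm's actual message packing and compression are more involved; the top-k rule and names are assumptions.

```python
import torch

def select_critical_features(feat: torch.Tensor, conf: torch.Tensor,
                             budget: float = 0.01):
    """Sketch: given per-location confidences, retain only the most
    perceptually critical spatial cells within a bandwidth budget."""
    # feat: (C, H, W) feature map; conf: (H, W) spatial confidence map
    k = max(1, int(budget * conf.numel()))              # bandwidth budget
    thresh = torch.topk(conf.flatten(), k).values.min()
    mask = (conf >= thresh).float()                     # (H, W) binary map
    return feat * mask, mask                            # sparse message + mask

feat, conf = torch.randn(64, 100, 100), torch.rand(100, 100)
message, mask = select_critical_features(feat, conf, budget=0.01)
```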
The concept relatedness estimation (CRE) task is to determine whether two given concepts are related. Although existing methods for the semantic textual similarity (STS) task can be easily adapted to this task, the CRE task has some unique properties that can be leveraged to augment the datasets and address its data scarcity problem. In this paper, we construct a graph named ConcreteGraph (Concept relatedness Estimation Graph) to take advantage of these CRE properties. For the new concept pairs sampled from the ConcreteGraph, we add an additional step of filtering out low-quality new pairs based on a simple yet effective quality threshold. We apply the ConcreteGraph data augmentation to three Transformer-based models to show its efficacy. A detailed ablation study on the quality threshold further shows that even a limited amount of high-quality data is more beneficial than a large amount of unfiltered data. This paper is the first to apply such augmentation on these datasets, and the proposed ConcreteGraph can boost the accuracy of the Transformers by more than 2%. With the help of ConcreteGraph, all three Transformers outperform the current state-of-the-art method, Concept Interaction Graph (CIG), on the CNSE and CNSS datasets.
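An illustrative sketch of graph-based pair augmentation follows: connect concepts labeled as related, propose candidate pairs from shared neighbors (a relatedness-transitivity assumption), and keep only candidates clearing a quality threshold. The quality scorer is a hypothetical stand-in (e.g., any off-the-shelf similarity model); the paper's actual sampling and scoring may differ.

```python
from itertools import combinations

def augment_pairs(related_pairs, quality_fn, threshold=0.5):
    """Sketch: build an undirected relatedness graph, sample new candidate
    pairs that share a neighbor but are not yet connected, then filter by
    a quality score -- mirroring the sample-then-threshold recipe above."""
    adj = {}
    for a, b in related_pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    candidates = set()
    for node, neigh in adj.items():
        for a, b in combinations(sorted(neigh), 2):   # 2-hop: shared neighbor
            if b not in adj.get(a, set()):            # not already an edge
                candidates.add((a, b))
    return [p for p in candidates if quality_fn(*p) >= threshold]

pairs = [("graph", "network"), ("network", "topology")]
new_pairs = augment_pairs(pairs, quality_fn=lambda a, b: 0.7)
```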
Graph neural networks (GNNs) have attracted much attention due to their ability to learn representations from graph-structured data. Despite their successful application in many domains, the optimization of GNNs is less well studied, and their performance on node classification heavily suffers from the long-tailed node-degree distribution. This paper focuses on improving the performance of GNNs via normalization. In detail, by studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs, termed ResNorm (Reshaping the long-tailed distribution into a normal-like distribution via normalization). The scale operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes (i.e., low-degree nodes). We provide a theoretical interpretation and empirical evidence for understanding the mechanism of this scale operation. In addition to the long-tailed distribution issue, over-smoothing is also a fundamental issue plaguing the community. To this end, we analyze the behavior of the standard shift and prove that the standard shift serves as a preconditioner on the weight matrix, increasing the risk of over-smoothing. With the over-smoothing issue in mind, we design a shift operation for ResNorm that simulates the degree-specific parameter strategy in a low-cost manner. Extensive experiments validate the effectiveness of ResNorm on several node classification benchmark datasets.
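A loose sketch of a ResNorm-style node-wise normalization is given below. The partial-power form of the scale step and the plain mean-centering shift are our assumptions; the paper's degree-specific shift is more elaborate.

```python
import torch

def resnorm_like(x: torch.Tensor, nu: float = 0.5, eps: float = 1e-6):
    """Sketch: the scale step divides each node's features by its node-wise
    standard deviation (NStd) raised to a power nu, partially flattening the
    long-tailed NStd distribution (nu=1 equalizes all nodes; nu=0 is a
    no-op). The shift here is simple mean-centering for illustration."""
    mean = x.mean(dim=1, keepdim=True)          # per-node feature mean
    std = x.std(dim=1, keepdim=True) + eps      # per-node std (NStd)
    return (x - mean) / std.pow(nu)             # partial std rescaling

h = torch.randn(1000, 64)                       # hidden node representations
h_norm = resnorm_like(h, nu=0.5)
```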
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or only marginally benefit, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
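A simplified sketch of token-relation distillation (finding 1 above) follows: match the student's token-to-token attention relations against the teacher's with a KL divergence, instead of matching CLS tokens or raw features. TinyMIM also distills V-V relations and picks an intermediate teacher layer; those details, and the exact loss form, are omitted or assumed here.

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(q_s, k_s, q_t, k_t):
    """Sketch: KL divergence between student and teacher Q-K relation maps
    softmax(QK^T / sqrt(d)), computed per head over all token pairs."""
    d = q_s.shape[-1]
    rel_s = F.log_softmax(q_s @ k_s.transpose(-1, -2) / d ** 0.5, dim=-1)
    rel_t = F.softmax(q_t @ k_t.transpose(-1, -2) / d ** 0.5, dim=-1)
    return F.kl_div(rel_s, rel_t, reduction="batchmean")

# toy shapes: (batch, heads, tokens, head_dim)
q_s, k_s = torch.randn(2, 3, 196, 64), torch.randn(2, 3, 196, 64)
q_t, k_t = torch.randn(2, 3, 196, 64), torch.randn(2, 3, 196, 64)
loss = relation_distill_loss(q_s, k_s, q_t, k_t)
```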
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
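A schematic sketch of this implicit alignment follows: both image and point-cloud tokens receive position embeddings derived from 3D coordinates, then are simply concatenated into one token sequence for a standard Transformer, with no explicit view transformation. How CMT actually lifts image pixels to 3D coordinates (e.g., via camera rays) is abstracted away; coordinates are assumed given, and all names are illustrative.

```python
import torch
import torch.nn as nn

class CrossModalTokens(nn.Module):
    """Sketch: inject a shared 3D position encoding into each modality's
    tokens so that a plain Transformer can align them implicitly."""
    def __init__(self, dim: int):
        super().__init__()
        self.pos_mlp = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, img_tok, img_xyz, pts_tok, pts_xyz):
        # *_tok: (B, N, dim) modality tokens; *_xyz: (B, N, 3) 3D coordinates
        img_tok = img_tok + self.pos_mlp(img_xyz)
        pts_tok = pts_tok + self.pos_mlp(pts_xyz)
        return torch.cat([img_tok, pts_tok], dim=1)  # one multi-modal sequence

m = CrossModalTokens(256)
tokens = m(torch.randn(1, 100, 256), torch.randn(1, 100, 3),
           torch.randn(1, 200, 256), torch.randn(1, 200, 3))
```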
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
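For intuition, here is a simplified sketch of NAIVEATTACK-style poisoning: before distillation starts, stamp a fixed trigger patch into a fraction of the raw training images and flip their labels to the target class. DOORPING, by contrast, would re-optimize the trigger at every distillation step. Patch placement, poison rate, and names are assumptions.

```python
import torch

def add_trigger(images: torch.Tensor, trigger: torch.Tensor,
                target_label: int, labels: torch.Tensor, rate: float = 0.1):
    """Sketch: poison a fraction of images with a corner trigger patch and
    relabel them, prior to running any dataset distillation procedure."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * len(images))
    ph, pw = trigger.shape[-2:]
    images[:n_poison, :, -ph:, -pw:] = trigger      # bottom-right corner patch
    labels[:n_poison] = target_label
    return images, labels

imgs, labs = torch.rand(100, 3, 32, 32), torch.randint(0, 10, (100,))
poisoned_imgs, poisoned_labs = add_trigger(imgs, torch.ones(3, 4, 4), 0, labs)
```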