Crowd counting is usually handled in a density-map regression fashion, supervised via an L2 loss between the predicted density map and the ground truth. To regularize models effectively, various improved L2 loss functions have been proposed to find a better correspondence between the predicted density and the annotation positions. In this paper, we propose to predict the density map at one resolution but measure it at multiple resolutions. By maximizing the posterior probability in this setting, we obtain a log-form multi-resolution L2-difference loss, of which the traditional single-resolution L2 loss is a particular case. We mathematically prove it superior to the single-resolution L2 loss. Without bells and whistles, the proposed loss substantially improves several baselines and performs favorably compared with state-of-the-art methods on four crowd counting datasets: ShanghaiTech A & B, UCF-QNRF, and JHU-Crowd++.
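As a rough illustration of the multi-resolution measurement idea from the abstract above (not the paper's full log-form posterior loss), the sketch below sum-pools a predicted density map and its ground truth to several coarser resolutions and accumulates the L2 differences. Function names and the choice of scales are assumptions:

```python
import numpy as np

def sum_pool(d, k):
    """Downsample a density map by factor k with sum pooling (preserves counts)."""
    h, w = d.shape
    d = d[:h - h % k, :w - w % k]
    return d.reshape(d.shape[0] // k, k, d.shape[1] // k, k).sum(axis=(1, 3))

def multires_l2(pred, gt, scales=(1, 2, 4)):
    """Measure the predicted map against ground truth at several resolutions;
    scales=(1,) recovers the ordinary single-resolution L2 loss."""
    return sum(np.sum((sum_pool(pred, k) - sum_pool(gt, k)) ** 2) for k in scales)
```

Note that with `scales=(1,)` the function reduces exactly to the traditional single-resolution L2 loss, matching the "particular case" relationship stated in the abstract.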
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels. Recently, two-stage self-training methods have achieved significant improvements by self-generating pseudo labels and self-refining anomaly scores with these labels. Since the pseudo labels play a crucial role, we propose an enhancement framework that exploits completeness and uncertainty properties for effective self-training. Specifically, we first design a multi-head classification module (each head serving as a classifier) with a diversity loss that maximizes the distribution differences of the predicted pseudo labels across heads. This encourages the generated pseudo labels to cover as many abnormal events as possible. We then devise an iterative uncertainty-based pseudo-label refinement strategy, which improves not only the initial pseudo labels but also the updated ones obtained by the desired classifier in the second stage. Extensive experimental results demonstrate that the proposed method performs favorably against state-of-the-art approaches on the UCF-Crime, TAD, and XD-Violence benchmark datasets.
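A minimal sketch of what a diversity loss over multi-head pseudo labels could look like, assuming each head emits a vector of snippet-level anomaly scores; the paper's exact loss may differ. Minimizing the negative mean pairwise distance pushes the heads' predicted distributions apart:

```python
import numpy as np

def diversity_loss(head_scores):
    """head_scores: (K, T) snippet-level pseudo-label scores from K heads.
    Returns the negative mean pairwise squared distance between heads, so
    minimizing this loss maximizes disagreement across the heads."""
    K = head_scores.shape[0]
    total, pairs = 0.0, 0
    for i in range(K):
        for j in range(i + 1, K):
            total += np.mean((head_scores[i] - head_scores[j]) ** 2)
            pairs += 1
    return -total / pairs
```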
Crowd localization aims to predict the spatial positions of humans in a crowd scene. We observe that the performance of existing methods is challenged from two aspects: (i) ranking inconsistency between the test and training phases; and (ii) a fixed anchor resolution that may underfit or overfit the crowd density of local regions. To address these problems, we design a supervision-target reassignment strategy for training to reduce ranking inconsistency, and propose an anchor pyramid scheme to adaptively determine the anchor density in each image region. Extensive experimental results on three widely adopted datasets (ShanghaiTech A\&B, JHU-CROWD++, UCF-QNRF) demonstrate favorable performance against several state-of-the-art methods.
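A toy sketch of the anchor-pyramid idea from the abstract above: pick a finer anchor stride where the estimated local crowd density is higher, then lay anchors on that grid. The density thresholds, region size, and stride values are illustrative assumptions, not the paper's:

```python
import numpy as np

def region_anchors(x0, y0, size, est_count, strides=(16, 8, 4)):
    """Adaptively place anchors in a square region: denser regions get a
    finer stride (more anchors), sparser regions a coarser one."""
    density = est_count / (size * size)
    stride = strides[0] if density < 0.002 else strides[1] if density < 0.02 else strides[2]
    xs = np.arange(x0 + stride // 2, x0 + size, stride)
    ys = np.arange(y0 + stride // 2, y0 + size, stride)
    return [(x, y) for y in ys for x in xs]
```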
Rank aggregation with pairwise comparisons has shown promising results in elections, sports competitions, recommendation, and information retrieval. However, in contrast to the extensive research on its computational and statistical characteristics, little attention has been paid to the security issues of such algorithms. Driven by huge profits, a potential adversary has strong motivation and incentive to manipulate the ranking list. Meanwhile, the intrinsic vulnerability of rank aggregation methods is not well studied in the literature. To fully understand the possible risks, in this paper we focus on the purposeful adversary who aims to designate the aggregated result by modifying the pairwise data. From the perspective of dynamical systems, the attack behavior with a target ranking list is a fixed point of the composition of the adversary and the victim. To perform the targeted attack, we formulate the interaction between the adversary and the victim as a game-theoretic framework consisting of two continuous operators, and establish a Nash equilibrium. Then, two procedures against HodgeRank and RankCentrality are constructed to produce the modification of the original data. Furthermore, we prove that the victim will produce the target ranking list once the adversary masters complete information. Notably, the proposed methods allow the adversary to hold only incomplete information or imperfect feedback and still perform the purposeful attack. A series of toy simulations and several real-world data experiments demonstrate the effectiveness of the proposed targeted attack strategies. These experimental results show that the proposed methods can achieve the attacker's goal: the leading candidate of the perturbed ranking list is the one specified by the adversary.
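For context, HodgeRank, one of the victim models named above, admits a compact least-squares formulation. Below is a minimal sketch of plain HodgeRank (not of the proposed attack), assuming a skew-symmetric margin matrix `Y` and nonnegative comparison weights `W`:

```python
import numpy as np

def hodgerank(Y, W):
    """HodgeRank: least-squares global scores s from pairwise margins
    Y (Y[i, j] ~ s[i] - s[j], skew-symmetric) with comparison weights W.
    Solves min_s sum_{i,j} W[i,j] * (s[i] - s[j] - Y[i,j])^2."""
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian of the comparison graph
    div = (W * Y).sum(axis=1)           # weighted divergence of the pairwise flows
    return np.linalg.pinv(L) @ div      # scores, determined up to a constant
```

An attacker who can perturb `Y` or `W` is effectively steering the solution of this linear system, which is what makes the aggregation step a natural target.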
Referring video object segmentation aims to segment the object referred to by a given language expression. Existing works typically require the compressed video bitstream to be decoded into RGB frames before segmentation, which increases computation and storage requirements and ultimately slows down inference. This may hamper their application in real-world scenarios with limited computational resources, such as autonomous cars and drones. To alleviate this problem, in this paper we explore the referring object segmentation task on compressed videos, namely the raw video data stream. Besides the inherent difficulty of the referring video object segmentation task itself, obtaining discriminative representations from compressed videos is also rather challenging. To address this, we propose a multi-attention network that consists of a dual-path dual-attention module and a query-based cross-modal Transformer module. Specifically, the dual-path dual-attention module is designed to extract effective representations from the compressed data in three modalities, i.e., I-frames, motion vectors, and residuals. The query-based cross-modal Transformer first models the correlation between the linguistic and visual modalities, and then the fused multi-modal features are used to guide object queries to generate a content-aware dynamic kernel and predict the final segmentation mask. Different from previous works, we propose to learn just one kernel, which thus removes the complicated post-processing mask-matching procedure of existing methods. Extensive promising experimental results on three challenging datasets demonstrate the effectiveness of our method compared with several state-of-the-art methods designed for processing RGB data. The source code is available at: https://github.com/dexianghong/manet.
Video captioning aims to generate natural language descriptions according to the content, where representation learning plays a crucial role. Existing methods are mainly developed within the supervised learning framework via word-by-word comparison of the generated caption against the ground-truth text, without fully exploiting linguistic semantics. In this work, we propose a hierarchical modular network to bridge video representations and linguistic semantics at three levels before generating captions. In particular, the hierarchy is composed of: (i) the entity level, which highlights objects that are most likely to be mentioned in captions; (ii) the predicate level, which learns the actions conditioned on the highlighted objects and is supervised by the predicate in the caption; and (iii) the sentence level, which learns a global semantic representation and is supervised by the whole caption. Each level is implemented by one module. Extensive experimental results show that the proposed method performs favorably against state-of-the-art models on two widely used benchmarks: MSVD 104.0% and MSR-VTT 51.5% in CIDEr score.
In this paper, we aim to provide a general and complete understanding of semi-supervised (SS) causal inference for treatment effects. Specifically, we consider two such estimands as prototype cases: (a) the average treatment effect and (b) the quantile treatment effect, in an SS setting characterized by two available datasets: (i) a labeled dataset of size $n$, providing observations for a response, a set of high-dimensional covariates, and a binary treatment indicator; and (ii) an unlabeled dataset of size much larger than $n$, but with the response unobserved. Using these two datasets, we develop a family of SS estimators which are: (1) more robust, and (2) more efficient than their supervised counterparts based on the labeled dataset alone. Beyond the "standard" double-robustness results (in terms of consistency) achievable by supervised methods as well, we further establish root-$n$ consistency and asymptotic normality of our SS estimators whenever the propensity score in the model is correctly specified, without requiring specific forms of the nuisance functions involved. Such an improvement in robustness arises from the use of the massive unlabeled data, so it is generally not attainable in a purely supervised setting. In addition, our estimators are shown to be semiparametrically efficient as long as all the nuisance functions are correctly specified. Moreover, as an illustration of the nuisance estimators, we consider inverse-probability-weighting type kernel smoothing estimators involving unknown covariate transformation mechanisms, and establish their uniform convergence rates in high-dimensional scenarios, which should be of independent interest. Numerical results on both simulated and real data validate the advantage of our methods over their supervised counterparts in terms of robustness and efficiency.
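A simplified sketch of how unlabeled covariates can enter an AIPW-type average-treatment-effect estimator: the outcome-model contrast is averaged over the large unlabeled pool, while the inverse-propensity residual corrections use the labeled data. The function names and this particular form are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def ss_ate(Y, T, X_lab, X_unlab, m1, m0, e):
    """Semi-supervised AIPW-style ATE sketch.
    Y, T: labeled responses and binary treatments; X_lab, X_unlab: covariates;
    m1, m0: fitted outcome regressions under treatment/control; e: fitted
    propensity score. The plug-in term exploits the massive unlabeled data."""
    plug_in = np.mean(m1(X_unlab) - m0(X_unlab))
    resid = np.mean(T * (Y - m1(X_lab)) / e(X_lab)
                    - (1 - T) * (Y - m0(X_lab)) / (1 - e(X_lab)))
    return plug_in + resid
```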
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
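A minimal sketch of NAIVEATTACK-style poisoning described above: a fixed trigger patch is stamped on a fraction of the raw data before distillation, and those samples are relabeled to the attacker's target class. Patch size, poison rate, and function names are illustrative assumptions (DOORPING, which iteratively optimizes the trigger during distillation, is not shown):

```python
import numpy as np

def add_trigger(img, size=2, value=1.0):
    """Stamp a small square trigger in the bottom-right corner of a 2D image."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

def poison(images, labels, target_label, rate=0.25, seed=0):
    """Backdoor a random fraction of the data prior to distillation:
    add the trigger and relabel to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

The key difference from classical backdoor attacks is *where* this runs: here the poisoned set is the input to the distillation procedure, so the trigger must survive compression into the tiny synthetic dataset.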
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and even more so evaluating drum grooves, for which there is little precedent in the literature. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
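Finetuning a text-only language model on MIDI requires serializing drum performances into text tokens. The encoding below is a hypothetical illustration of that step (the paper's actual token scheme is not specified here): each MIDI note-on becomes a `step:instrument` token on a quantized time grid:

```python
def drum_events_to_text(events, ticks_per_step=120):
    """Serialize drum note-on events, given as (tick, instrument) pairs,
    into a whitespace-separated token string for language-model finetuning.
    The grid of 120 ticks per step is an illustrative assumption."""
    return " ".join(f"{t // ticks_per_step}:{inst}"
                    for t, inst in sorted(events))
```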