This paper reviews the AIM 2022 Challenge on Super-Resolution of Compressed Image and Video. This challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted the final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state-of-the-art of super-resolution on compressed image and video. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Graph contrastive learning has been shown to be an effective task for pre-training graph neural networks (GNNs). However, a key issue may severely hinder the representation power in existing works: the positive instances created by current methods often miss crucial information of the graph, or even yield illegal instances (such as non-chemically-aware graphs in molecular generation). To address this issue, we propose to select positive graph instances directly from existing graphs in the training set, which ultimately maintains legality and similarity to the target graph. Our selection is based on certain domain-specific pairwise similarity measurements, as well as sampling from a hierarchical graph that encodes the similarity relations among graphs. Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes distributed evenly over the graph. We conduct extensive experiments on $13$ graph classification and node classification benchmark datasets from various domains. The results demonstrate that the GNN models pre-trained by our strategies can outperform those trained from scratch, as well as the variants obtained by existing methods.
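As a rough illustration of the selection idea, the sketch below picks positives for an anchor graph by cosine similarity over precomputed graph-level features and masks nodes at evenly spaced positions. The feature extractor, similarity measure, and mask ratio stand in for the paper's unspecified domain-specific choices and are assumptions here.

```python
import torch
import torch.nn.functional as F

def select_positives(target_feat: torch.Tensor,
                     train_feats: torch.Tensor,
                     k: int = 2) -> torch.Tensor:
    """Return indices of the k training graphs most similar to the target.

    target_feat: (d,) graph-level feature of the anchor graph.
    train_feats: (n, d) graph-level features of all training graphs.
    """
    sims = F.cosine_similarity(train_feats, target_feat.unsqueeze(0), dim=1)
    # Existing graphs are returned, so positives stay legal by construction.
    # In practice the anchor itself would be excluded from the candidates.
    return sims.topk(k).indices

def uniform_node_mask(num_nodes: int, mask_ratio: float = 0.15) -> torch.Tensor:
    """Mask nodes at evenly spaced positions, approximating node-level
    masking that spreads masked nodes uniformly over the graph."""
    num_masked = max(1, int(num_nodes * mask_ratio))
    idx = torch.linspace(0, num_nodes - 1, num_masked).round().long()
    mask = torch.zeros(num_nodes, dtype=torch.bool)
    mask[idx] = True
    return mask

# Toy usage with random graph-level embeddings.
feats = torch.randn(100, 64)                 # 100 training graphs, 64-d each
positives = select_positives(feats[0], feats, k=2)
node_mask = uniform_node_mask(num_nodes=30)
```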
Graphs are a universal data structure widely used to organize data in the real world. Various real-world networks, such as transportation networks and social and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of representing the vertices of a network in a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be categorized into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network based models. We review the state-of-the-art algorithms for each category and discuss the essential differences between these algorithms. One advantage of this survey is that we systematically study the theoretical foundations underlying the algorithms in different categories, which offers deep insights to better understand the development of the network representation learning field.
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results. In this challenge, the new Large-scale Diverse Video (LDV) dataset is employed. The challenge has three tracks. Tracks 1 and 2 aim at enhancing videos compressed by HEVC at a fixed QP, while Track 3 is designed for enhancing videos compressed by x265 at a fixed bit-rate. Besides, the quality enhancement of Tracks 1 and 3 targets improving the fidelity (PSNR), while Track 2 targets enhancing the perceptual quality. The three tracks in total attracted 482 registrations. In the test phase, 12 teams, 8 teams and 11 teams submitted the final results of Tracks 1, 2 and 3, respectively. The proposed methods and solutions gauge the state-of-the-art of video quality enhancement. The homepage of the challenge: https://github.com/renyang-home/ntire21_venh
Image representation is critical for many visual tasks. Instead of representing images discretely with 2D arrays of pixels, a recent study, namely local implicit image function (LIIF), denotes images as a continuous function where pixel values are inferred by using the corresponding coordinates as inputs. Due to its continuous nature, LIIF can be adopted for arbitrary-scale image super-resolution tasks, resulting in a single effective and efficient model for various up-scaling factors. However, LIIF often suffers from structural distortions and ringing artifacts around edges, mostly because all pixels share the same model, thereby ignoring the local properties of the image. In this paper, we propose a novel adaptive local image function (A-LIIF) to alleviate this problem. Specifically, our A-LIIF consists of two main components: an encoder and an expansion network. The former captures cross-scale image features, while the latter models the continuous up-scaling function by a weighted combination of multiple local implicit image functions. Accordingly, our A-LIIF can reconstruct high-frequency textures and structures more accurately. Experiments on multiple benchmark datasets verify the effectiveness of our method. Our code is available at \url{https://github.com/leehw-thu/a-liif}.
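To make the decoding step concrete, here is a minimal sketch of an adaptive mixture of local implicit functions in the spirit of the abstract: each pixel value is predicted from a latent feature and a relative coordinate, with learned per-pixel weights over several implicit functions. The layer sizes, the weighting network, and the coordinate convention are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ALIIFDecoder(nn.Module):
    def __init__(self, feat_dim: int = 64, num_funcs: int = 4):
        super().__init__()
        # Several local implicit functions: (feature, relative coord) -> RGB.
        self.funcs = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim + 2, 256), nn.ReLU(),
                          nn.Linear(256, 3))
            for _ in range(num_funcs)
        ])
        # Predicts per-pixel mixing weights over the implicit functions,
        # so different image regions can favor different functions.
        self.weight_net = nn.Sequential(
            nn.Linear(feat_dim + 2, 64), nn.ReLU(),
            nn.Linear(64, num_funcs), nn.Softmax(dim=-1))

    def forward(self, feat: torch.Tensor, rel_coord: torch.Tensor):
        """feat: (n, feat_dim) nearest latent codes; rel_coord: (n, 2)
        coordinates relative to those codes. Returns (n, 3) RGB values."""
        x = torch.cat([feat, rel_coord], dim=-1)
        preds = torch.stack([f(x) for f in self.funcs], dim=1)  # (n, k, 3)
        w = self.weight_net(x).unsqueeze(-1)                    # (n, k, 1)
        return (w * preds).sum(dim=1)

# Query 4096 continuous coordinates against 64-d encoder features.
decoder = ALIIFDecoder()
rgb = decoder(torch.randn(4096, 64), torch.rand(4096, 2) * 2 - 1)
```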
How to properly model the inter-frame relation within a video sequence is an important but unsolved challenge for video restoration (VR). In this work, we propose an unsupervised flow-aligned sequence-to-sequence model (S2SVR) to address this problem. On the one hand, the sequence-to-sequence model, which has thrived in the field of natural language processing, is explored in VR for the first time. Optimized serialized modeling shows potential in capturing long-range dependencies among frames. On the other hand, we equip the sequence-to-sequence model with an unsupervised optical flow estimator to maximize its potential. The flow estimator is trained with our proposed unsupervised distillation loss, which can alleviate the data discrepancy and inaccurate degraded optical flow issues of previous flow-based methods. With reliable optical flow, we can establish accurate correspondence among multiple frames, narrowing the domain discrepancy between 1D language and 2D misaligned frames and improving the potential of the sequence-to-sequence model. S2SVR shows superior performance in multiple VR tasks, including video deblurring, video super-resolution, and compressed video quality enhancement. Code and models are publicly available at https://github.com/linjing7/vr-baseline
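The alignment step can be sketched as standard backward warping with the estimated flow; the flow estimator itself (the part trained with the paper's unsupervised distillation loss) is stubbed out below, and the pixel-unit flow convention is an assumption.

```python
import torch
import torch.nn.functional as F

def flow_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (b, c, h, w) towards the reference using `flow`
    (b, 2, h, w), given in pixels as (dx, dy)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float()          # (2, h, w)
    coords = grid.unsqueeze(0) + flow                    # sample positions
    # Normalize to [-1, 1] for grid_sample, which expects (b, h, w, 2).
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)
    return F.grid_sample(frame, grid_norm, align_corners=True)

# Align a neighboring frame to the reference before sequence modeling.
neighbor = torch.randn(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)   # placeholder for the estimated flow
aligned = flow_warp(neighbor, flow)
```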
Exploiting similar and sharper scene patches in spatio-temporal neighborhoods is critical for video deblurring. However, CNN-based methods show limitations in capturing long-range dependencies and modeling non-local self-similarity. In this paper, we propose a novel framework, Flow-Guided Sparse Transformer (FGST), for video deblurring. In FGST, we customize a self-attention module, Flow-Guided Sparse Window-based Multi-head Self-Attention (FGSW-MSA). For each $query$ element on the blurry reference frame, FGSW-MSA enjoys the guidance of the estimated optical flow to globally sample spatially sparse yet highly related $key$ elements corresponding to the same scene patch in neighboring frames. Besides, we present a Recurrent Embedding (RE) mechanism to transfer information from past frames and strengthen long-range temporal dependencies. Comprehensive experiments demonstrate that our proposed FGST outperforms state-of-the-art (SOTA) methods on the DVD and GoPro datasets and even yields more visually pleasing results in real video deblurring. Code and models will be released to the public.
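A toy sketch of the flow-guided sparse sampling behind FGSW-MSA might look as follows: each query pixel attends only to a handful of key elements gathered around its flow-predicted location in a neighboring frame, rather than to the full frame. The offset window, single-head attention, and nearest-neighbor rounding are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def flow_guided_attention(query, neighbor_feat, flow, offsets):
    """query: (h, w, c); neighbor_feat: (h, w, c); flow: (h, w, 2) in pixels;
    offsets: (k, 2) offsets defining the sparse sampling window."""
    h, w, c = query.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # Flow-predicted location of each query pixel in the neighboring frame.
    base = torch.stack((ys, xs), dim=-1).float() + flow      # (h, w, 2)
    keys = []
    for off in offsets:
        pos = (base + off).round().long()
        pos[..., 0] = pos[..., 0].clamp(0, h - 1)
        pos[..., 1] = pos[..., 1].clamp(0, w - 1)
        keys.append(neighbor_feat[pos[..., 0], pos[..., 1]])  # (h, w, c)
    keys = torch.stack(keys, dim=2)                           # (h, w, k, c)
    # Attention over only k sparse keys per query instead of all h*w.
    attn = F.softmax(
        torch.einsum("hwc,hwkc->hwk", query, keys) / c ** 0.5, dim=-1)
    return torch.einsum("hwk,hwkc->hwc", attn, keys)

out = flow_guided_attention(
    torch.randn(32, 32, 64), torch.randn(32, 32, 64),
    torch.zeros(32, 32, 2), torch.tensor([[0., 0.], [0., 1.], [1., 0.]]))
```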
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
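One way to picture the implicit alignment is that both modalities receive position encodings computed from 3D coordinates, so a plain transformer decoder can fuse them. The sketch below uses an assumed MLP encoder and random stand-in features, not CMT's actual coordinates-encoding module.

```python
import torch
import torch.nn as nn

embed_dim = 256
# Shared encoder mapping 3D coordinates to token position encodings
# (an assumption standing in for CMT's coordinates encoding).
pos_encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                            nn.Linear(128, embed_dim))

img_tokens = torch.randn(1, 600, embed_dim)    # flattened camera features
pts_tokens = torch.randn(1, 400, embed_dim)    # voxel/pillar features
img_coords = torch.randn(1, 600, 3)            # 3D points tied to pixels
pts_coords = torch.randn(1, 400, 3)            # 3D points tied to voxels

# Encoding 3D points into both modalities aligns them implicitly,
# without an explicit view transformation.
tokens = torch.cat([img_tokens + pos_encoder(img_coords),
                    pts_tokens + pos_encoder(pts_coords)], dim=1)

# Object queries attend to the fused multi-modal tokens.
decoder_layer = nn.TransformerDecoderLayer(embed_dim, nhead=8,
                                           batch_first=True)
queries = torch.randn(1, 900, embed_dim)
out = nn.TransformerDecoder(decoder_layer, num_layers=1)(queries, tokens)
```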
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
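For intuition, a minimal sketch of NAIVEATTACK-style poisoning (stamping a fixed trigger onto a fraction of the raw data before distillation) might look like this; the trigger shape, position, poison rate, and target label are assumptions for illustration.

```python
import torch

def naive_attack(images: torch.Tensor, labels: torch.Tensor,
                 target_label: int, poison_rate: float = 0.1,
                 trigger_size: int = 3):
    """images: (n, c, h, w) in [0, 1]; labels: (n,).
    Returns poisoned copies with triggers added before distillation."""
    images, labels = images.clone(), labels.clone()
    n = images.shape[0]
    idx = torch.randperm(n)[: int(n * poison_rate)]
    # White square trigger in the bottom-right corner of each poisoned image.
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_label
    return images, labels

poisoned_x, poisoned_y = naive_attack(
    torch.rand(100, 3, 32, 32), torch.randint(0, 10, (100,)), target_label=0)
# DOORPING would instead re-optimize the trigger at every distillation
# iteration rather than fixing it once up front.
```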
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
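A small sketch of the first insight, mask-based dynamic class centers re-weighting query features, under assumed shapes; the sigmoid gate is an assumption rather than RefT's exact module.

```python
import torch

def masked_class_center(support_feat: torch.Tensor,
                        support_mask: torch.Tensor) -> torch.Tensor:
    """support_feat: (c, h, w); support_mask: (h, w) binary foreground mask.
    Returns a (c,) class center via masked average pooling."""
    mask = support_mask.float()
    return (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)

def reweight_query(query_feat: torch.Tensor,
                   center: torch.Tensor) -> torch.Tensor:
    """query_feat: (c, h, w). Channel-wise gating by the class center,
    emphasizing channels that respond to the support object."""
    gate = torch.sigmoid(center).view(-1, 1, 1)
    return query_feat * gate

center = masked_class_center(torch.randn(256, 64, 64),
                             (torch.rand(64, 64) > 0.5))
enhanced = reweight_query(torch.randn(256, 64, 64), center)
```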