Recently, CNN-based RGB-D salient object detection (SOD) has achieved significant improvement in detection accuracy. However, existing models often fail to perform well in terms of efficiency and accuracy simultaneously, which hinders their potential applications on mobile devices and in many real-world problems. In this paper, to bridge the accuracy gap between lightweight and large models for RGB-D SOD, we propose an efficient module that greatly improves accuracy while adding little computation. Inspired by the observation that depth quality is a key factor influencing accuracy, we propose an efficient depth quality-inspired feature manipulation (DQFM) process, which dynamically filters depth features according to depth quality. DQFM resorts to the alignment of low-level RGB and depth features, together with holistic attention of the depth stream, to explicitly control and enhance cross-modal fusion. We embed DQFM to obtain an efficient lightweight RGB-D SOD model called DFM-Net, for which we additionally design a tailored depth backbone and a two-stage decoder as basic parts. Extensive experimental results on nine RGB-D datasets show that DFM-Net outperforms recent efficient models, running at about 20 FPS on CPU with only 8.5 MB model size, while being 2.9/2.4 times faster and 6.7/3.1 times smaller than the latest best models A2dele and MobileSal. It also maintains state-of-the-art accuracy even when compared with non-efficient models. Interestingly, further statistics and analyses verify the ability of DQFM to distinguish depth maps of various qualities without any quality labels. Last but not least, we further apply DFM-Net to video SOD (VSOD), achieving performance comparable to recent efficient models while being faster and 2.3 times smaller than the prior best in this field. Our code is available at https://github.com/zwbx/dfm-net.
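As a rough illustration of the depth-quality gating idea described above, the following minimal sketch re-weights depth features by a quality score estimated from pooled low-level RGB and depth descriptors; module names, layer sizes, and the fusion form are our assumptions rather than the released DFM-Net code.

```python
import torch
import torch.nn as nn

class DepthQualityGate(nn.Module):
    """Sketch: estimate a scalar depth-quality weight from pooled RGB/depth
    descriptors and use it to dynamically scale the depth contribution."""
    def __init__(self, channels):
        super().__init__()
        self.quality_head = nn.Sequential(
            nn.Linear(channels * 2, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, rgb_feat, depth_feat):
        rgb_vec = rgb_feat.mean(dim=(2, 3))        # [B, C] global descriptor
        dep_vec = depth_feat.mean(dim=(2, 3))
        q = self.quality_head(torch.cat([rgb_vec, dep_vec], dim=1))
        q = q.view(-1, 1, 1, 1)                    # quality score in (0, 1)
        return rgb_feat + q * depth_feat           # gated cross-modal fusion

# usage sketch: fused = DepthQualityGate(64)(rgb_feat, depth_feat)
```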
Camouflaged object detection (COD) aims to identify objects that conceal themselves in natural scenes. Accurate COD suffers from a number of challenges associated with low boundary contrast and large variations in object appearance, e.g., object size and shape. To address these challenges, we propose a novel Context-aware Cross-level Fusion Network (C2F-Net), which fuses context-aware cross-level features for accurately identifying camouflaged objects. Specifically, we compute informative attention coefficients from multi-level features with our Attention-induced Cross-level Fusion Module (ACFM), which further integrates the features under the guidance of these coefficients. We then propose a Dual-branch Global Context Module (DGCM) to refine the fused features into informative feature representations by exploiting rich global context information. Multiple ACFMs and DGCMs are integrated in a cascaded manner to generate a coarse prediction from the high-level features. The coarse prediction acts as an attention map to refine the low-level features before they are passed to our Camouflage Inference Module (CIM) to generate the final prediction. We perform extensive experiments on three widely used benchmark datasets and compare C2F-Net with state-of-the-art (SOTA) models. The results show that C2F-Net is an effective COD model that outperforms SOTA models remarkably. Furthermore, an evaluation on polyp segmentation datasets demonstrates the promising potential of C2F-Net for downstream applications of COD. Our code is publicly available at: https://github.com/ben57882/c2fnet-tscvt.
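The sketch below conveys the flavor of attention-guided cross-level fusion as described for ACFM: coefficients computed from two adjacent feature levels steer their integration. Layer shapes, names, and the gating form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLevelFusion(nn.Module):
    """Sketch: attention coefficients from two adjacent levels guide fusion."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid())
        self.out = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, high, low):
        # upsample the coarser (higher-level) feature to the finer resolution
        high = F.interpolate(high, size=low.shape[2:], mode='bilinear',
                             align_corners=False)
        # attention coefficients computed from the concatenated levels
        a = self.attn(torch.cat([high, low], dim=1))
        # integrate the two levels under the guidance of the coefficients
        return self.out(a * high + (1.0 - a) * low)
```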
本文介绍了DGNET,这是一个新颖的深层框架,可利用对象梯度监督的伪装对象检测(COD)。它将任务分为两个连接的分支,即上下文和纹理编码器。必不可少的连接是梯度诱导的过渡,代表上下文和纹理特征之间的软组。从简单但高效的框架中受益,DGNET以很大的利润优于现有的最新COD模型。值得注意的是,我们的高效版本DGNET-S实时运行(80 fps),并获得与尖端模型JCSOD-CVPR $ _ {21} $相当的结果,只有6.82%的参数。应用程序结果还表明,所提出的DGNET在息肉分割,缺陷检测和透明对象分割任务中表现良好。代码将在https://github.com/gewelsji/dgnet上提供。
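To make the "soft grouping" idea concrete, here is a minimal sketch in which texture-branch features produce group-wise soft weights that modulate context-branch features. The grouping scheme, channel sizes, and names are assumptions for illustration, not the released DGNet code.

```python
import torch
import torch.nn as nn

class GradientInducedTransition(nn.Module):
    """Sketch: texture features (gradient-supervised) softly group and
    modulate context features. `channels` must be divisible by `groups`."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        self.to_weights = nn.Sequential(
            nn.Conv2d(channels, groups, 1), nn.Softmax(dim=1))

    def forward(self, context_feat, texture_feat):
        b, c, h, w = context_feat.shape
        # group-wise soft assignment derived from the texture branch
        w_soft = self.to_weights(texture_feat)                 # [B, G, H, W]
        ctx = context_feat.reshape(b, self.groups, c // self.groups, h, w)
        # modulate each context group by its soft weight
        out = ctx * w_soft.unsqueeze(2)
        return out.reshape(b, c, h, w)
```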
We present the first comprehensive video polyp segmentation (VPS) study in the deep learning era. Over the years, progress on VPS has been slow due to the lack of a large-scale dataset with fine-grained segmentation annotations. To address this issue, we first introduce a high-quality, frame-by-frame annotated dataset named SUN-SEG, which contains 158,690 frames from the well-known SUN database. We provide additional annotations of diverse types, i.e., attributes, object masks, boundaries, scribbles, and polygons. Second, we design a simple but efficient baseline, dubbed PNS+, which consists of a global encoder, a local encoder, and normalized self-attention (NS) blocks. The global and local encoders receive an anchor frame and multiple successive frames to extract long-term and short-term spatio-temporal representations, which are then progressively refined by two NS blocks. Extensive experiments show that PNS+ achieves the best performance with real-time inference speed (170 FPS), making it a promising solution for the VPS task. Third, we extensively evaluate 13 representative polyp/object segmentation models on our SUN-SEG dataset and provide an attribute-based comparison. Finally, we discuss several open issues and suggest possible research directions for the VPS community.
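For intuition about the kind of space-time attention used to refine the anchor and successive frames, here is a generic self-attention sketch over a [B, C, T, H, W] feature volume. It stands in for the NS block only loosely: the actual PNS+ block uses a specific normalization and local windowing that are not reproduced here, so treat this as an assumed simplification.

```python
import torch
import torch.nn as nn

class SpaceTimeSelfAttention(nn.Module):
    """Toy space-time self-attention over video features (not the exact NS block)."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv3d(channels, channels, 1)
        self.k = nn.Conv3d(channels, channels, 1)
        self.v = nn.Conv3d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, x):                           # x: [B, C, T, H, W]
        b, c, t, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # [B, THW, C]
        k = self.k(x).flatten(2)                    # [B, C, THW]
        v = self.v(x).flatten(2).transpose(1, 2)    # [B, THW, C]
        attn = torch.softmax(q @ k * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, t, h, w)
        return x + out                              # residual refinement
```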
Camouflaged object detection (COD) aims to detect objects whose patterns (e.g., texture, intensity, color, etc.) are similar to their surroundings, and it has recently attracted growing research interest. Since camouflaged objects often present very ambiguous boundaries, determining object locations as well as their weak boundaries is challenging and is also key to this task. Inspired by the biological visual perception process by which a human observer discovers camouflaged objects, this paper proposes a novel edge-based reversible re-calibration network named ERRNet. Our model is characterized by two innovative designs, namely Selective Edge Aggregation (SEA) and the Reversible Re-calibration Unit (RRU), which aim to model visual perception behavior and achieve effective edge priors and cross-comparison between potential camouflaged regions and the background. More importantly, the RRU incorporates more comprehensive information than existing COD models. Experimental results show that ERRNet outperforms existing cutting-edge baselines on three COD datasets and five medical image segmentation datasets. In particular, compared with the existing top-1 model SINet, ERRNet significantly improves performance by ~6% (mean E-measure) at a notably high speed (79.3 FPS), showing that ERRNet could be a general and robust solution for the COD task.
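A minimal sketch of the re-calibration idea, assuming a simplified form of the RRU: features are re-weighted by both a coarse prediction (potential camouflaged region) and its reverse (background), and the two views are cross-compared. Channel sizes and the exact fusion are assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class ReversibleRecalibration(nn.Module):
    """Sketch: calibrate features with the coarse map and its reverse, then fuse."""
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels * 2, channels, 3, padding=1)

    def forward(self, feat, coarse_pred):
        p = torch.sigmoid(coarse_pred)           # [B, 1, H, W] coarse map
        fg = feat * p                            # attend to likely objects
        bg = feat * (1.0 - p)                    # reverse view: background
        # cross-comparison of the two calibrated views
        return feat + self.refine(torch.cat([fg, bg], dim=1))
```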
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjointness constraint, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on improving training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing model generalization. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, DMJD also shows superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
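The disjoint masking step described above can be sketched as follows: several masked views of one image are sampled so that their masked token sets are pairwise disjoint, increasing per-image token usage while keeping each view's masking rate. The function interface is illustrative, not the authors' API.

```python
import torch

def disjoint_masks(num_tokens: int, mask_ratio: float, num_views: int):
    """Sample `num_views` boolean masks over `num_tokens` tokens whose
    masked sets are pairwise disjoint, each with the same masking rate."""
    assert num_views * mask_ratio <= 1.0, "views' masks must fit disjointly"
    n_mask = int(num_tokens * mask_ratio)
    perm = torch.randperm(num_tokens)            # one shuffle shared by all views
    masks = []
    for v in range(num_views):
        m = torch.zeros(num_tokens, dtype=torch.bool)
        m[perm[v * n_mask:(v + 1) * n_mask]] = True   # disjoint slices
        masks.append(m)
    return masks

# e.g., two 50%-masked views of a 196-token ViT image together cover all patches
views = disjoint_masks(196, 0.5, 2)
```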
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding the computation in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than those in previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work only deals with puzzles of small constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest known algorithms, our work provides evidence and, perhaps, a path to finding families of strong USPs that imply matrix multiplication algorithms more efficient than those currently known.
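To show why SAT/IP reductions are needed for verification, here is a brute-force checker of the strong USP property under the commonly stated definition (for every triple of row permutations, not all equal, some cell must satisfy exactly two of the three symbol conditions). The definition as encoded here is our assumption from the literature, and the factorial enumeration is only feasible for tiny puzzles.

```python
from itertools import permutations

def is_strong_usp(puzzle):
    """Brute-force strong-USP check over rows drawn from the alphabet {1, 2, 3}.
    Assumed definition: for all row permutations (p1, p2, p3), not all equal,
    there exist a row slot r and column c with exactly two of
    p1(r)[c]==1, p2(r)[c]==2, p3(r)[c]==3 holding.  Sketch, not a reference
    implementation; real verification uses the SAT/IP reductions above."""
    rows = list(puzzle)
    n, k = len(rows), (len(rows[0]) if rows else 0)
    idx = list(range(n))
    for p1 in permutations(idx):
        for p2 in permutations(idx):
            for p3 in permutations(idx):
                if p1 == p2 == p3:
                    continue
                witness = any(
                    (rows[p1[r]][c] == 1) + (rows[p2[r]][c] == 2)
                    + (rows[p3[r]][c] == 3) == 2
                    for r in range(n) for c in range(k))
                if not witness:
                    return False
    return True

# check a small width-2 candidate; larger puzzles need the SAT/IP machinery
print(is_strong_usp([(1, 2), (3, 1)]))
```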
This paper presents a practical global optimization algorithm for the K-center clustering problem, which aims to select K samples as cluster centers so as to minimize the maximum within-cluster distance. The algorithm is based on a reduced-space branch-and-bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the regions of centers. To improve efficiency, we design a two-stage decomposable lower bound whose solution can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bounds tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets demonstrate that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective function by 25.8% on average across all the synthetic and real-world datasets.
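For reference, the objective being minimized and the classic farthest-point greedy (a 2-approximation) are sketched below; the greedy is the kind of heuristic baseline that exact methods like the one above are compared against, not the paper's branch-and-bound algorithm.

```python
import numpy as np

def kcenter_objective(X, centers):
    """Maximum within-cluster distance: each sample goes to its nearest
    chosen center; report the largest such distance."""
    d = np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=-1)
    return d.min(axis=1).max()

def greedy_kcenter(X, k, seed=0):
    """Farthest-point (Gonzalez) greedy heuristic, NOT the global algorithm."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())                  # farthest point from current centers
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return centers

X = np.random.rand(1000, 2)
c = greedy_kcenter(X, 5)
print(kcenter_objective(X, c))
```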
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus failing to fully exploit the unique characteristic of video, i.e., its temporal dimension. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representations. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole at different time resolutions with a multi-modal temporal relation exploration task. Furthermore, we introduce a shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporally oriented datasets (e.g., SSv2-Template and SSv2-Label), with 8.6% and 11.1% improvements, respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope.
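The shuffling test mentioned above can be sketched as follows: score a model on the original clips and again on clips whose frames are randomly permuted; a small gap suggests the benchmark (or model) barely relies on temporal order. The `model`/`metric` interfaces and the [B, T, C, H, W] tensor layout are assumptions, not the HiTeA codebase.

```python
import torch

def shuffling_test(model, videos, texts, metric):
    """Compare scores on original vs. frame-shuffled videos; the drop is a
    rough proxy for how much the model/dataset relies on temporal order."""
    with torch.no_grad():
        score_orig = metric(model(videos, texts))
        perm = torch.randperm(videos.shape[1])       # shuffle the time axis
        score_shuf = metric(model(videos[:, perm], texts))
    return score_orig, score_orig - score_shuf       # (score, temporal reliance)
```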
Despite excellent performance in image generation, Generative Adversarial Networks (GANs) are notorious for their enormous storage requirements and intensive computation. As a renowned "performance maker", knowledge distillation has been demonstrated to be particularly efficacious in exploring low-cost GANs. In this paper, we investigate the irreplaceability of the teacher discriminator and present an inventive discriminator-cooperated distillation, abbreviated as DCD, for refining better feature maps from the generator. In contrast to conventional pixel-to-pixel matching methods in feature-map distillation, our DCD utilizes the teacher discriminator as a transformation to drive intermediate results of the student generator to be perceptually close to the corresponding outputs of the teacher generator. Furthermore, in order to mitigate mode collapse in GAN compression, we construct a collaborative adversarial training paradigm in which the teacher discriminator is established from scratch to co-train with the student generator alongside our DCD. Our DCD shows superior results compared with existing GAN compression methods. For instance, after reducing the MACs of CycleGAN by over 40x and its parameters by over 80x, we decrease the FID from 61.53 to 48.24, whereas the current SoTA method only reaches 51.92. This work's source code has been made accessible at https://github.com/poopit/DCD-official.
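A minimal sketch of the discriminator-cooperated matching idea: instead of pixel-to-pixel matching, the student generator's output is pushed toward the teacher generator's output in the feature space of the (frozen) teacher discriminator. The `teacher_disc_feats` hook returning intermediate discriminator features is an assumed interface, and the L1 match is an illustrative stand-in for the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def dcd_style_loss(teacher_disc_feats, student_out, teacher_out):
    """Match student and teacher generator outputs in the teacher
    discriminator's intermediate feature space (perceptual-style matching)."""
    loss = 0.0
    for fs, ft in zip(teacher_disc_feats(student_out),
                      teacher_disc_feats(teacher_out.detach())):
        loss = loss + F.l1_loss(fs, ft)
    return loss
```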