Fine-grained visual parsing, including fine-grained part segmentation and fine-grained object recognition, has attracted considerable attention due to its importance in many real-world applications, e.g., agriculture, remote sensing, and space technologies. Predominant research efforts tackle these fine-grained sub-tasks under different paradigms, while the inherent relations between the tasks are neglected. Because most of this research remains fragmented, we conduct an in-depth study of the advanced work from the new perspective of learning part relationships. From this perspective, we first consolidate recent research and benchmark syntheses under new taxonomies. Based on this consolidation, we revisit the universal challenges in fine-grained part segmentation and recognition and propose new solutions to these challenges through part relationship learning. Furthermore, we outline several promising directions in fine-grained visual parsing for future research.
Fine-grained visual recognition aims to classify objects with visually similar appearances into subcategories, and it has made great progress with the development of deep CNNs. However, handling subtle differences between subcategories still remains a challenge. In this paper, we propose to address this issue in one unified framework from two aspects, i.e., constructing feature-level interrelationships and capturing part-level discriminative features. The framework, namely PArt-guided Relational Transformers (PART), is proposed to learn discriminative part features with an automatic part discovery module, and to explore the intrinsic correlations with a feature transformation module that adapts Transformer models from the field of natural language processing. The part discovery module efficiently discovers discriminative regions that correspond closely to the gradient descent procedure. The feature transformation module then builds correlations between the global embedding and multiple part embeddings, enhancing spatial interactions among semantic pixels. Moreover, our proposed approach does not rely on additional part branches at inference time and reaches state-of-the-art performance on three widely used fine-grained object recognition benchmarks. Experimental results and explainable visualizations demonstrate the effectiveness of our proposed approach. The code can be found at https://github.com/iCVTEAM/PART.
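A minimal sketch of the idea described above (not the authors' code): soft part-attention maps pool a backbone feature map into part embeddings, and a Transformer models the relations between the global embedding and the part embeddings. All module names, dimensions, and the single-layer part-discovery convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PartRelationalHead(nn.Module):
    def __init__(self, dim=512, num_parts=4, num_classes=200):
        super().__init__()
        self.part_conv = nn.Conv2d(dim, num_parts, kernel_size=1)  # soft part-attention maps
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.relation = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feat):                                       # feat: (B, dim, H, W)
        attn = self.part_conv(feat).flatten(2).softmax(-1)         # (B, K, H*W) attention per part
        tokens = feat.flatten(2).transpose(1, 2)                   # (B, H*W, dim) pixel tokens
        parts = torch.bmm(attn, tokens)                            # (B, K, dim) part embeddings
        global_tok = tokens.mean(1, keepdim=True)                  # (B, 1, dim) global embedding
        fused = self.relation(torch.cat([global_tok, parts], 1))   # model global-part relations
        return self.classifier(fused[:, 0])                        # classify from the global token

logits = PartRelationalHead()(torch.randn(2, 512, 14, 14))
```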
Over the past few years, developing a broad, universal, and general-purpose computer vision system has become a hot topic. A powerful universal system would be capable of solving diverse vision tasks simultaneously without being restricted to a specific problem or a specific data domain, which is of great importance in practical real-world computer vision applications. This study pushes this direction forward by concentrating on the million-scale multi-domain universal object detection problem. The problem is not trivial due to its complicated nature in terms of cross-dataset category label duplication, label conflicts, and the handling of hierarchical taxonomies. Moreover, how to utilize emerging large pre-trained vision models for million-scale cross-dataset object detection in a resource-efficient way remains an open challenge. This paper addresses these challenges by introducing our practices in label handling, hierarchy-aware loss design, and resource-efficient model training with a pre-trained large model. Our method ranked second in the object detection track of the Robust Vision Challenge 2022 (RVC 2022). We hope our detailed study will serve as an alternative practice paradigm for similar problems in the community. The code is available at https://github.com/linfeng93/Large-UniDet.
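As a hedged illustration of what a hierarchy-aware loss over a unified taxonomy might look like (the abstract does not give the exact formulation), one plausible form is a multi-hot classification loss in which a positive leaf label also activates all of its ancestors, so coarser predictions from other datasets are not penalized as errors. The toy taxonomy, class names, and helper below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

parents = {"sedan": "car", "car": "vehicle", "vehicle": None}   # toy label hierarchy
classes = ["sedan", "car", "vehicle"]
idx = {c: i for i, c in enumerate(classes)}

def expand_with_ancestors(label: str) -> torch.Tensor:
    """Multi-hot target covering the label and every ancestor in the hierarchy."""
    target = torch.zeros(len(classes))
    while label is not None:
        target[idx[label]] = 1.0
        label = parents[label]
    return target

def hierarchy_aware_loss(logits: torch.Tensor, label: str) -> torch.Tensor:
    # binary cross-entropy against the ancestor-expanded multi-hot target
    return F.binary_cross_entropy_with_logits(logits, expand_with_ancestors(label))

loss = hierarchy_aware_loss(torch.randn(len(classes)), "sedan")
```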
Event cameras, offering high temporal resolutions and high dynamic ranges, have brought a new perspective to address common challenges (e.g., motion blur and low light) in monocular depth estimation. However, how to effectively exploit the sparse spatial information and rich temporal cues from asynchronous events remains a challenging endeavor. To this end, we propose a novel event-based monocular depth estimator with recurrent transformers, namely EReFormer, which is the first pure transformer with a recursive mechanism to process continuous event streams. Technically, for spatial modeling, a novel transformer-based encoder-decoder with a spatial transformer fusion module is presented, having better global context information modeling capabilities than CNN-based methods. For temporal modeling, we design a gate recurrent vision transformer unit that introduces a recursive mechanism into transformers, improving temporal modeling capabilities while alleviating the expensive GPU memory cost. The experimental results show that our EReFormer outperforms state-of-the-art methods by a margin on both synthetic and real-world datasets. We hope that our work will attract further research to develop stunning transformers in the event-based vision community. Our open-source code can be found in the supplemental material.
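A minimal sketch of the gated recurrent vision transformer idea described above, as I read it: a GRU-style gate whose candidate state comes from a Transformer block, so a hidden state carries temporal context across successive event-stream embeddings. Layer counts, dimensions, and the exact gating form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedRecurrentTransformerUnit(nn.Module):
    def __init__(self, dim=256, nhead=8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, tokens, hidden):            # tokens, hidden: (B, N, dim)
        candidate = self.block(tokens + hidden)   # transformer update conditioned on memory
        z = torch.sigmoid(self.gate(torch.cat([tokens, hidden], dim=-1)))
        return z * hidden + (1 - z) * candidate   # GRU-like blend of old and new state

unit = GatedRecurrentTransformerUnit()
h = torch.zeros(1, 196, 256)
for event_tokens in torch.randn(5, 1, 196, 256):  # 5 consecutive event-stream slices
    h = unit(event_tokens, h)
```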
Benefiting from the event-driven and sparse spiking characteristics of the brain, spiking neural networks (SNNs) have become an energy-efficient alternative to artificial neural networks (ANNs). However, the performance gap between SNNs and ANNs has long held SNNs back. To unleash the full potential of SNNs, we study the effect of attention mechanisms in SNNs. We first present our attention as a plug-and-play kit, termed Multi-dimensional Attention (MA). Then, a new attention SNN architecture with end-to-end training, called "MA-SNN", is proposed, which infers attention weights along the temporal, channel, and spatial dimensions separately or simultaneously. Based on existing neuroscience theories, we exploit the attention weights to optimize membrane potentials, which in turn regulate the spiking responses in a data-dependent way. At the cost of negligible additional parameters, MA facilitates vanilla SNNs to achieve sparser spiking activity, better performance, and higher energy efficiency. Experiments are conducted on event-based DVS128 Gesture/Gait action recognition and ImageNet-1K image classification. On Gesture/Gait, the spike counts are reduced by 84.9%/81.6%, and task accuracy and energy efficiency are improved by 5.9%/4.7% and 3.4×/3.2×. On ImageNet-1K, we obtain top-1 accuracies of 75.92% and 77.08% on single-/4-step Res-SNN-104, which are state-of-the-art results for SNNs. To the best of our knowledge, this is the first time the SNN community has achieved performance comparable to or even better than its ANN counterpart on a large-scale dataset. Our work sheds light on the potential of SNNs as a general backbone supporting various applications, striking a great balance between effectiveness and efficiency.
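A hedged sketch of the multi-dimensional attention idea described above: attention weights computed along the temporal, channel, and spatial dimensions rescale the membrane potential before it is thresholded into spikes. Tensor layout, reduction sizes, and module names are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiDimAttention(nn.Module):
    def __init__(self, timesteps=4, channels=64, reduction=4):
        super().__init__()
        self.temporal = nn.Sequential(nn.Linear(timesteps, timesteps), nn.Sigmoid())
        self.channel = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                     nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(1, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, u):                                   # u: (T, B, C, H, W) membrane potential
        t_w = self.temporal(u.mean(dim=(1, 2, 3, 4)))       # (T,) temporal attention
        u = u * t_w.view(-1, 1, 1, 1, 1)
        c_w = self.channel(u.mean(dim=(0, 3, 4)))           # (B, C) channel attention
        u = u * c_w.view(1, *c_w.shape, 1, 1)
        s_w = self.spatial(u.mean(dim=(0, 2)).unsqueeze(1)) # (B, 1, H, W) spatial attention
        return u * s_w.unsqueeze(0)                         # rescaled potential, thresholded below

spikes = (MultiDimAttention()(torch.randn(4, 2, 64, 16, 16)) > 1.0).float()
```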
The event camera is a novel bio-inspired vision sensor. It generates events asynchronously when the brightness change exceeds a preset threshold. The number of valid events directly affects the performance of event-based tasks such as reconstruction, detection, and recognition. However, in low-brightness or slow-motion scenes, events are often sparse and accompanied by noise, which poses challenges for event-based tasks. To address these challenges, we propose a temporal upsampling algorithm for events to generate more effective and reliable events. The main idea of our algorithm is to generate upsampled events along the event motion trajectories. First, we estimate the event motion trajectories by a contrast maximization algorithm, and then upsample the events via a temporal point process. Experimental results show that the upsampled events provide more effective information and improve the performance of downstream tasks, such as raising the quality of reconstructed images and the accuracy of object detection.
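A hedged sketch of the upsampling step only: given a motion (flow) estimate, e.g., obtained from contrast maximization, new events are inserted along each event's motion trajectory at intermediate timestamps. Sampling the new timestamps uniformly is a simplifying assumption here; the paper samples via a temporal point process.

```python
import numpy as np

def upsample_events(events, flow, factor=2):
    """events: (N, 4) array of (x, y, t, polarity); flow: (vx, vy) in pixels per second."""
    vx, vy = flow
    upsampled = [events]
    for k in range(1, factor):
        dt = k * np.median(np.diff(np.sort(events[:, 2]))) / factor  # fraction of the typical gap
        shifted = events.copy()
        shifted[:, 0] += vx * dt          # move along the estimated trajectory
        shifted[:, 1] += vy * dt
        shifted[:, 2] += dt               # intermediate timestamp
        upsampled.append(shifted)
    out = np.concatenate(upsampled)
    return out[np.argsort(out[:, 2])]     # keep the event stream sorted in time

dense = upsample_events(np.random.rand(1000, 4), flow=(5.0, -2.0), factor=3)
```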
Learning to synthesize data has become a promising direction for zero-shot quantization (ZSQ), which represents neural networks with low-bit integers without accessing any real data. In this paper, we observe an interesting phenomenon of intra-class heterogeneity in real data and show that existing methods fail to preserve this property in their synthetic images, which leads to limited performance gains. To address this issue, we propose a novel zero-shot quantization method called IntraQ. First, we propose a local object reinforcement that locates target objects at different scales and positions within the synthetic images. Second, we introduce a marginal distance constraint to form class-related features distributed over a coarse area. Finally, we design a soft inception loss that injects a soft prior label to prevent the synthetic images from overfitting to a fixed object. Our IntraQ is shown to provide intra-class heterogeneity in the synthetic images and is also observed to achieve state-of-the-art performance. For example, compared with advanced ZSQ methods, our IntraQ obtains a 9.17% increase in top-1 accuracy on ImageNet when all layers of MobileNetV1 are quantized to 4-bit. The code is at https://github.com/viperit/interq.
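A hedged sketch of one ingredient named above, the soft inception loss: instead of forcing a synthetic image toward a hard one-hot class, the target label is softened so the generated image does not overfit to a single fixed object. The smoothing scheme below is an illustrative assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_inception_loss(logits: torch.Tensor, label: int, soft: float = 0.8) -> torch.Tensor:
    num_classes = logits.shape[-1]
    # soft prior: most mass on the target class, the rest spread over the other classes
    target = torch.full((num_classes,), (1.0 - soft) / (num_classes - 1))
    target[label] = soft
    return F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="sum")

# logits from the full-precision network on a synthetic image being optimized
loss = soft_inception_loss(torch.randn(1000), label=42)
```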
Despite their superior performance on many computer vision tasks, deep convolutional neural networks are well known to require compression for devices with resource constraints. Most existing network pruning methods demand laborious human effort and prohibitive computational resources, especially when the constraints change. This practically limits the application of model compression when models need to be deployed on various devices. Moreover, existing methods still suffer from a lack of theoretical guidance. In this paper, we propose an information-theory-inspired automatic model compression strategy. The principle behind our method is the information bottleneck theory, i.e., hidden representations should compress information with each other. We therefore introduce the normalized Hilbert-Schmidt Independence Criterion (nHSIC) on network activations as a stable and generalized indicator of layer importance. When a certain resource constraint is given, we combine the HSIC indicator with the constraint to transform the architecture search problem into a linear programming problem with quadratic constraints. Such a problem is easily solved by a convex optimization method within a few seconds. We also provide a rigorous proof revealing that optimizing the normalized HSIC simultaneously minimizes the mutual information between different layers. Without any search process, our method achieves better compression tradeoffs than state-of-the-art compression algorithms. For example, with ResNet-50, we achieve a 45.3% FLOPs reduction with 75.75 top-1 accuracy on ImageNet. The code is available at https://github.com/mac-automl/itpruner/tree/master.
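A hedged sketch of the normalized HSIC indicator mentioned above, which scores the statistical dependence between the activations of two layers. A linear kernel is used here for brevity; the kernel choice and estimator details are assumptions, not the paper's exact recipe.

```python
import torch

def nhsic(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """x, y: (n, d) activation matrices collected over the same n samples."""
    def hsic(a, b):
        n = a.shape[0]
        h = torch.eye(n) - torch.ones(n, n) / n          # centering matrix
        ka, kb = a @ a.T, b @ b.T                        # linear-kernel Gram matrices
        return torch.trace(h @ ka @ h @ kb) / (n - 1) ** 2
    # normalize so the score is comparable across layers of different widths
    return hsic(x, y) / torch.sqrt(hsic(x, x) * hsic(y, y) + 1e-12)

score = nhsic(torch.randn(128, 256), torch.randn(128, 512))
```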
Unlike visible cameras, which record intensity images frame by frame, the biologically inspired event camera produces a stream of asynchronous and sparse events with much lower latency. In practice, visible cameras perceive texture details and slow motion better, while event cameras are free from motion blur and have a larger dynamic range, enabling them to work well under fast motion and low illumination. The two sensors can therefore cooperate to achieve more reliable object tracking. In this work, given the lack of a realistic and scaled dataset for this task, we propose a large-scale Visible-Event benchmark (termed VisEvent). Our dataset consists of 820 video pairs captured under low-illumination, high-speed, and background-clutter scenarios, and it is divided into training and testing subsets containing 500 and 320 videos, respectively. Based on VisEvent, we build more than 30 baseline methods by extending current single-modality trackers into dual-modality versions, transforming the event flows into event images. More importantly, we further build a simple but effective tracking algorithm by proposing a cross-modality transformer to achieve more effective feature fusion between visible and event data. Extensive experiments on the proposed VisEvent dataset, FE108, and two simulated datasets (i.e., OTB-DVS and VOT-DVS) validate the effectiveness of our model. The dataset and source code have been released on our project page: https://sites.google.com/view/viseventtrack/.
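A hedged sketch of a cross-modality fusion step in the spirit described above: visible-frame tokens attend to event-image tokens, and vice versa, with multi-head cross-attention, then the fused tokens are concatenated. This illustrates the fusion idea only; dimensions, depth, and residual wiring are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, nhead=8):
        super().__init__()
        self.vis_to_ev = nn.MultiheadAttention(dim, nhead, batch_first=True)
        self.ev_to_vis = nn.MultiheadAttention(dim, nhead, batch_first=True)

    def forward(self, vis_tokens, ev_tokens):                               # (B, N, dim) each
        vis_fused, _ = self.vis_to_ev(vis_tokens, ev_tokens, ev_tokens)     # visible queries events
        ev_fused, _ = self.ev_to_vis(ev_tokens, vis_tokens, vis_tokens)     # events query visible
        return torch.cat([vis_tokens + vis_fused, ev_tokens + ev_fused], dim=1)

fused = CrossModalFusion()(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```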
Deep spiking neural networks (SNNs) are currently difficult to optimize with gradient-based methods due to their discrete binary activations and complex spatio-temporal dynamics. Considering the huge success of ResNet in deep learning, it would be natural to train deep SNNs with residual learning. Previous Spiking ResNets mimic the standard residual blocks of ANNs and simply replace the ReLU activation layers with spiking neurons, which suffer from the degradation problem and can hardly implement residual learning. In this paper, we propose Spike-Element-Wise (SEW) ResNet to realize residual learning in deep SNNs. We prove that SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet. We evaluate our SEW ResNet on the ImageNet, DVS Gesture, and CIFAR10-DVS datasets and show that it outperforms the state-of-the-art directly trained SNNs in both accuracy and time steps. Moreover, SEW ResNet achieves higher performance by simply adding more layers, providing a simple method to train deep SNNs. To the best of our knowledge, this is the first time that deep SNNs with more than 100 layers have been directly trained. Our code is available at https://github.com/fangwei123456/spike-element-wise-resnet.
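A minimal sketch of a spike-element-wise residual block in the spirit of the description above: the residual branch outputs spikes, which are combined with the identity spikes by an element-wise function g (ADD is shown; AND and IAND are other choices named in the paper). The simple Heaviside neuron here is an illustrative stand-in for a proper surrogate-gradient spiking neuron, so this sketch conveys the shape and logic only, not a trainable implementation.

```python
import torch
import torch.nn as nn

class SpikeNeuron(nn.Module):
    def forward(self, x):                      # fire when the input current exceeds the threshold
        return (x >= 1.0).float()

class SEWBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False), nn.BatchNorm2d(channels),
            SpikeNeuron(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False), nn.BatchNorm2d(channels),
            SpikeNeuron(),
        )

    def forward(self, spikes):                 # spikes: binary tensor (B, C, H, W)
        return self.branch(spikes) + spikes    # g = ADD: identity mapping is trivially preserved

out = SEWBlock(32)(torch.randint(0, 2, (1, 32, 16, 16)).float())
```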