We propose a new neural network design paradigm, the Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of subnetworks, named columns, between which multi-level reversible connections are employed. Such an architectural scheme gives RevCol very different behavior from conventional networks: during forward propagation, features in RevCol are learned to be gradually disentangled as they pass through each column, while the total information is maintained rather than compressed or discarded as in other networks. Our experiments suggest that CNN-style RevCol models can achieve very competitive performance on multiple computer vision tasks such as image classification, object detection and semantic segmentation, especially with large parameter budgets and large datasets. For example, after ImageNet-22K pre-training, RevCol-XL obtains 88.2% ImageNet-1K accuracy. Given more pre-training data, our largest model RevCol-H reaches 90.0% on ImageNet-1K, 63.8% APbox on the COCO detection minival set, and 61.0% mIoU on ADE20K segmentation. To our knowledge, these are the best COCO detection and ADE20K segmentation results among pure (static) CNN models. Moreover, as a general macro-architecture design, RevCol can also be introduced into transformers or other neural networks, which is demonstrated to improve performance in both computer vision and NLP tasks. We release code and models at https://github.com/megvii-research/RevCol
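To make the column idea concrete, here is a minimal sketch of how a multi-level reversible connection between adjacent columns might be wired; the additive coupling form, the alpha scaling and the module internals are assumptions made for illustration, not RevCol's exact design.

```python
import torch
import torch.nn as nn

class ReversibleColumnLevel(nn.Module):
    """One level of one column (simplified). The output is an invertible
    combination of the same-level feature carried over from the previous
    column and a transform of the lower-level feature of the current column,
    so the previous column's feature can be reconstructed instead of stored."""

    def __init__(self, dim: int, alpha: float = 1.0):
        super().__init__()
        self.block = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU())
        self.alpha = alpha  # non-zero reversible scaling factor (assumed)

    def forward(self, lower_same_col: torch.Tensor, same_level_prev_col: torch.Tensor):
        # Reversible update: out = alpha * prev_column_feature + F(lower_level_feature)
        return self.alpha * same_level_prev_col + self.block(lower_same_col)

    def invert(self, out: torch.Tensor, lower_same_col: torch.Tensor):
        # Exactly recover the previous column's feature from the output.
        return (out - self.block(lower_same_col)) / self.alpha

lvl = ReversibleColumnLevel(dim=64)
x_prev, x_lower = torch.randn(2, 8, 64), torch.randn(2, 8, 64)
out = lvl(x_lower, x_prev)
print(torch.allclose(lvl.invert(out, x_lower), x_prev))  # True: no information lost
```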
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
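For readers unfamiliar with the two most frequently reported strategies, the snippet below sketches patch-based training and k-fold cross-validation over a hypothetical case list; the patch size, stride and case count are made up for illustration.

```python
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(volume: np.ndarray, patch: int = 64, stride: int = 64) -> np.ndarray:
    """Cut a large 3D volume into fixed-size cubes so each fits in memory
    (the 'patch-based training' strategy reported by 69% of respondents)."""
    d, h, w = volume.shape
    patches = []
    for z in range(0, d - patch + 1, stride):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(volume[z:z + patch, y:y + patch, x:x + patch])
    return np.stack(patches)

# Hypothetical list of 100 training cases; k-fold cross-validation on the
# training set was reported by only 37% of participants.
case_ids = np.arange(100)
for fold, (train_idx, val_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(case_ids)):
    print(f"fold {fold}: {len(train_idx)} train cases, {len(val_idx)} val cases")
```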
Medical images play an important role in clinical applications. Multimodal medical images can provide physicians with rich information about patients for diagnosis. Image fusion techniques are able to synthesize complementary information from multimodal images into a single image. Such a technique spares radiologists from switching back and forth between different images and saves a great deal of time in the diagnostic process. In this paper, we introduce a novel Dilated Residual Attention Network for the medical image fusion task. Our network is capable of extracting multi-scale deep semantic features. Furthermore, we propose a novel fixed fusion strategy, termed the Softmax-based weighted strategy, based on Softmax weights and the matrix nuclear norm. Extensive experiments show that our proposed network and fusion strategy exceed state-of-the-art performance compared with reference image fusion methods on four commonly used fusion metrics.
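A minimal sketch of one way such a Softmax/nuclear-norm weighting could be realized: each source feature map is scored by the sum of the nuclear norms of its channel matrices, and a softmax over the scores yields fusion weights. The global (rather than spatially varying) weighting and the temperature are simplifying assumptions, not necessarily the paper's exact strategy.

```python
import torch

def softmax_nuclear_fusion(feat_a: torch.Tensor, feat_b: torch.Tensor, tau: float = 1.0):
    """Fuse two feature maps of shape (C, H, W) extracted from the two source
    images. Each map's activity is summarized by the sum of nuclear norms
    (sums of singular values) of its channel matrices; a softmax over these
    activity scores gives per-source fusion weights."""
    def activity(feat: torch.Tensor) -> torch.Tensor:
        return torch.stack([torch.linalg.matrix_norm(c, ord="nuc") for c in feat]).sum()

    scores = torch.stack([activity(feat_a), activity(feat_b)]) / tau
    w = torch.softmax(scores, dim=0)          # two weights that sum to 1
    return w[0] * feat_a + w[1] * feat_b      # weighted fusion of the features

fused = softmax_nuclear_fusion(torch.rand(16, 32, 32), torch.rand(16, 32, 32))
print(fused.shape)  # torch.Size([16, 32, 32])
```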
Federated learning (FL) is a promising approach to enable the future Internet of vehicles consisting of intelligent connected vehicles (ICVs) with powerful sensing, computing and communication capabilities. We consider a base station (BS) coordinating nearby ICVs to train a neural network in a collaborative yet distributed manner, in order to limit data traffic and privacy leakage. However, due to the mobility of vehicles, the connections between the BS and ICVs are short-lived, which affects the resource utilization of ICVs, and thus, the convergence speed of the training process. In this paper, we propose an accelerated FL-ICV framework, by optimizing the duration of each training round and the number of local iterations, for better convergence performance of FL. We propose a mobility-aware optimization algorithm called MOB-FL, which aims at maximizing the resource utilization of ICVs under short-lived wireless connections, so as to increase the convergence speed. Simulation results based on the beam selection and the trajectory prediction tasks verify the effectiveness of the proposed solution.
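The toy loop below only illustrates the two quantities being optimized, the round duration and the number of local iterations, under short-lived connections; the sojourn-time model, field names and participation rule are invented for illustration, and the actual MOB-FL optimization is omitted.

```python
import random

def run_fl_round(round_duration_s: float, local_iters: int, vehicles: list[dict]) -> list[int]:
    """One synchronous FL round under short-lived connections (toy model).
    A vehicle's update counts only if it stays connected to the BS for the
    whole round and can finish its local iterations in time; MOB-FL would
    tune round_duration_s and local_iters to maximize such participation."""
    participants = []
    for v in vehicles:
        compute_time = local_iters * v["sec_per_iter"]
        if v["sojourn_time_s"] >= round_duration_s and compute_time <= round_duration_s:
            participants.append(v["id"])
    return participants  # ids whose local models the BS would aggregate

# Hypothetical fleet: sojourn time = how long the vehicle stays in BS coverage.
fleet = [{"id": i, "sojourn_time_s": random.uniform(5, 60), "sec_per_iter": 0.3}
         for i in range(20)]
print("participants:", run_fl_round(round_duration_s=20.0, local_iters=40, vehicles=fleet))
```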
RGB-thermal salient object detection (SOD) combines the two spectra to segment visually salient regions in images. Most existing methods use boundary maps to learn sharp boundaries. These methods ignore the interactions between isolated boundary pixels and other confident pixels, leading to suboptimal performance. To address this problem, we propose a Position-aware Relation Learning Network (PRLNet) for RGB-T SOD based on the Swin Transformer. PRLNet explores the distance and direction relations between pixels to strengthen intra-class compactness and inter-class separation, producing salient object masks with clear boundaries and homogeneous regions. Specifically, we develop a novel Signed Distance Map Auxiliary Module (SDMAM) to improve the encoder feature representation, which takes into account the distance relations of different pixels in boundary neighborhoods. We then design a Feature Refinement approach with a Directional Field (FRDF), which rectifies the features in boundary neighborhoods by exploiting the features inside salient objects. FRDF utilizes the directional information between object pixels to effectively enhance the intra-class compactness of salient regions. In addition, we constitute a pure transformer encoder-decoder network to enhance the multispectral feature representation for RGB-T SOD. Finally, we conduct quantitative and qualitative experiments on three public benchmark datasets. The results demonstrate that our proposed method outperforms state-of-the-art methods.
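As an illustration of the kind of signal a signed-distance auxiliary module could be trained against, the sketch below computes a discrete signed distance map from a binary saliency mask with SciPy; whether SDMAM uses exactly this target is an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Discrete signed Euclidean distance to the object boundary: positive
    inside the salient object, negative outside. A map like this is one
    plausible auxiliary target encoding boundary-neighborhood distance relations."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance of foreground pixels to background
    outside = distance_transform_edt(~mask)  # distance of background pixels to foreground
    return inside - outside

# Toy example: a 7x7 mask with a 3x3 salient square in the center.
toy = np.zeros((7, 7), dtype=np.uint8)
toy[2:5, 2:5] = 1
print(signed_distance_map(toy))
```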
Skeleton-based action recognition has attracted attention for its computational efficiency and robustness to lighting conditions. Existing skeleton-based action recognition methods are typically formulated as a one-hot classification task that does not fully exploit the semantic relations between actions. For example, "make victory sign" and "thumb up" are two hand-gesture actions whose major difference lies in the movement of the hands. This information is agnostic to the one-hot encoding of action classes, but can be revealed in the language descriptions of the actions. Therefore, utilizing action language descriptions during training may benefit representation learning. In this work, we propose a Language Supervised Training (LST) approach for skeleton-based action recognition. More specifically, we employ a large-scale language model as a knowledge engine to provide text descriptions of the body-part movements of actions, and propose a multi-modal training scheme that uses a text encoder to generate feature vectors for different body parts and supervise the skeleton encoder for action representation learning. Experiments show that our proposed LST method achieves noticeable improvements over various baseline models without extra computational cost at inference. LST achieves new state-of-the-art results on popular skeleton-based action recognition benchmarks, including NTU RGB+D, NTU RGB+D 120 and NW-UCLA. The code is available at https://github.com/martinxm/lst.
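One plausible way for a text encoder to supervise a skeleton encoder is a CLIP-style contrastive alignment between per-part skeleton features and body-part text embeddings, sketched below; the loss form and temperature are assumptions, not necessarily LST's exact objective.

```python
import torch
import torch.nn.functional as F

def part_text_alignment_loss(part_feats: torch.Tensor,
                             text_feats: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss that pulls each skeleton part feature towards the text
    embedding of the matching body-part description within a batch.
    part_feats, text_feats: (batch, dim); row i of both tensors describes the
    same sample."""
    part_feats = F.normalize(part_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = part_feats @ text_feats.t() / temperature          # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: skeleton->text and text->skeleton directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = part_text_alignment_loss(torch.randn(16, 256), torch.randn(16, 256))
print(loss.item())
```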
Vehicle-to-Everything (V2X) networks have enabled cooperative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence, including blind zones and long-range perception. However, the lack of datasets has severely hindered the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOllaborative Perception enabling Harmonious and INterconnected Self-driving, a new simulated large-scale, multi-scenario, multi-view, multi-modal autonomous driving dataset that provides a groundbreaking benchmark platform for interconnected autonomous driving. DOLPHINS surpasses current datasets in six dimensions: temporally aligned images and point clouds from both vehicles and roadside units (RSUs), enabling both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) cooperative perception; 6 typical scenarios with dynamic weather conditions, making it the most diverse interconnected autonomous driving dataset; meticulously selected viewpoints, providing full coverage of the key areas and every object; 42,376 frames and 292,549 objects, together with the corresponding 3D annotations, geo-positions and calibrations, constituting the largest dataset for collaborative perception; full-HD images and 64-line LiDARs, producing high-resolution data with sufficient detail; and well-organized APIs and open-source code, ensuring the extensibility of DOLPHINS. We also construct benchmarks for 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. Experimental results show that raw-level fusion schemes through V2X communication can help improve precision and reduce the need for expensive LiDAR equipment when RSUs are present, which may accelerate the popularization of interconnected self-driving vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.
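A minimal sketch of what raw-level (early) fusion over V2X could look like: the RSU point cloud is transformed into the ego vehicle's frame with a calibration matrix and concatenated with the ego sweep. The calibration format and array layout are assumptions, not DOLPHINS' actual API.

```python
import numpy as np

def fuse_point_clouds(ego_points: np.ndarray, rsu_points: np.ndarray,
                      rsu_to_ego: np.ndarray) -> np.ndarray:
    """Raw-level V2X fusion: bring the RSU point cloud into the ego frame with
    a 4x4 homogeneous calibration matrix and concatenate it with the ego LiDAR
    sweep. Both point clouds are (N, 3) arrays of xyz coordinates."""
    homo = np.hstack([rsu_points, np.ones((rsu_points.shape[0], 1))])  # (N, 4)
    rsu_in_ego = (rsu_to_ego @ homo.T).T[:, :3]
    return np.concatenate([ego_points, rsu_in_ego], axis=0)

# Toy example with random points and an identity calibration.
ego = np.random.rand(100, 3)
rsu = np.random.rand(80, 3)
print(fuse_point_clouds(ego, rsu, np.eye(4)).shape)  # (180, 3)
```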
Recently, transformers have shown great potential in image classification and have established state-of-the-art results on the ImageNet benchmark. However, compared with CNNs, transformers converge slowly and are prone to overfitting in low-data regimes due to the lack of spatial inductive biases. Such spatial inductive biases can be especially beneficial since the 2D structure of the input image is not well preserved in transformers. In this work, we present Spatial Prior-enhanced Self-Attention (SP-SA), a novel variant of vanilla Self-Attention (SA) tailored for vision transformers. Spatial Priors (SPs) are our proposed family of inductive biases that highlight certain groups of spatial relations. Unlike convolutional inductive biases, which are forced to focus exclusively on hard-coded local regions, our proposed SPs are learned by the model itself and take a variety of spatial relations into account. Specifically, the attention score is computed with an emphasis on certain kinds of spatial relations at each head, and such learned spatial foci can complement each other. Based on SP-SA, we propose the SP-ViT family, which consistently outperforms other ViT models with similar GFLOPs or parameters. Our largest model, SP-ViT-L, reduces the number of parameters by nearly 50% compared with the previous state-of-the-art model (150M for SP-ViT-L vs. 271M for CaiT-M-36) among all ImageNet-1K models trained at 224x224 and fine-tuned at 384x384 resolution without extra data.
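The sketch below is a toy reading of "attention scores computed with an emphasis on certain spatial relations at each head": a learnable per-head bias over token pairs is added to the attention logits. The real SP-SA parameterization of spatial priors is more structured; this only shows where such a prior would enter the computation.

```python
import torch
import torch.nn as nn

class SpatialPriorAttention(nn.Module):
    """Multi-head self-attention with a learned per-head spatial bias added to
    the attention logits, so each head can emphasize different (learned, not
    hard-coded) spatial relations between tokens."""

    def __init__(self, dim: int, num_heads: int, num_tokens: int):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Simplest possible prior: one learnable bias per head per token pair.
        self.spatial_prior = nn.Parameter(torch.zeros(num_heads, num_tokens, num_tokens))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, dim)
        b, n, _ = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each: (B, H, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.head_dim ** -0.5
        attn = (attn + self.spatial_prior).softmax(dim=-1)  # add the learned prior
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.proj(out)

attn = SpatialPriorAttention(dim=96, num_heads=3, num_tokens=16)
print(attn(torch.randn(2, 16, 96)).shape)  # torch.Size([2, 16, 96])
```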
While transformers have shown great potential for video recognition tasks thanks to their strong capability of capturing long-range dependencies, they often suffer from high computational costs induced by the self-attention operation over the large number of 3D tokens in a video. In this paper, we propose a new transformer architecture, termed DualFormer, which can efficiently and effectively perform space-time attention for video recognition. Specifically, our DualFormer stratifies the full space-time attention into a dual cascaded level: it first learns fine-grained local space-time interactions among nearby 3D tokens, and then captures coarse-grained global dependencies between the query token and a coarse-grained global pyramid context. Different from existing methods that apply space-time factorization or restrict the attention computation to local windows for efficiency, our local-global stratified strategy can well capture both short-term and long-range spatio-temporal dependencies while greatly reducing the number of keys and values in the attention computation to boost efficiency. Experimental results demonstrate the superiority of DualFormer over existing methods on five video benchmarks. In particular, DualFormer sets new state-of-the-art results of 82.9%/85.2% top-1 accuracy on Kinetics-400/600 with around 1000G inference FLOPs, at least 3.2x fewer than existing methods with similar performance.
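To illustrate the local-then-global stratification, the toy function below first runs attention inside non-overlapping windows and then lets every token attend to a small set of pooled global tokens. It works on 1D token sequences and uses PyTorch 2.x's scaled_dot_product_attention, whereas DualFormer operates on 3D video tokens with pyramid contexts; the window and pooling sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def local_then_global_attention(tokens: torch.Tensor, window: int, pool: int) -> torch.Tensor:
    """Toy two-stage attention over a (B, N, D) token sequence.
    Stage 1: full attention only inside non-overlapping local windows
    (fine-grained local interactions). Stage 2: every token attends to a few
    average-pooled tokens (coarse-grained global context), so the number of
    keys and values in the global stage stays small."""
    b, n, d = tokens.shape
    # Stage 1: local window attention.
    local = tokens.reshape(b * n // window, window, d)
    local = F.scaled_dot_product_attention(local, local, local)
    x = local.reshape(b, n, d)
    # Stage 2: queries attend to downsampled global tokens.
    global_kv = F.avg_pool1d(x.transpose(1, 2), kernel_size=pool).transpose(1, 2)
    return F.scaled_dot_product_attention(x, global_kv, global_kv)

out = local_then_global_attention(torch.randn(2, 64, 32), window=8, pool=4)
print(out.shape)  # torch.Size([2, 64, 32])
```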
Generalized zero-shot learning (GZSL) aims to train a model to classify data samples under the condition that some output classes are unknown during supervised learning. To address this challenging task, GZSL leverages semantic information of both the seen (source) and unseen (target) classes to bridge the gap between the seen and unseen classes. Since its introduction, many GZSL models have been formulated. In this review paper, we present a comprehensive review of GZSL. First, we provide an overview of GZSL, including its problems and challenges. Then, we introduce a hierarchical categorization of GZSL methods and discuss the representative methods in each category. In addition, we discuss the available benchmark datasets and applications of GZSL, together with a discussion of the research gaps and future research directions.
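As a concrete instance of bridging seen and unseen classes with semantic information, the sketch below shows the classic compatibility-based recipe (one of several method families such a review typically covers): project the visual feature into the attribute space and score it against the attribute vectors of the union of seen and unseen classes. The class names, dimensions and projection matrix are toy values.

```python
import numpy as np

def gzsl_predict(image_feat: np.ndarray, class_attributes: dict[str, np.ndarray],
                 W: np.ndarray) -> str:
    """Compatibility-based GZSL prediction: map the visual feature into the
    semantic (attribute) space with W and pick the class whose attribute
    vector is most similar, searching over seen AND unseen classes so that
    unseen classes can be predicted without labeled examples."""
    projected = W @ image_feat  # visual space -> semantic space
    scores = {c: float(projected @ a) / (np.linalg.norm(projected) * np.linalg.norm(a))
              for c, a in class_attributes.items()}  # cosine compatibility
    return max(scores, key=scores.get)

# Toy example: 2 seen + 1 unseen class, 4-dim attribute vectors, 8-dim features.
attrs = {"cat": np.array([1., 0., 1., 0.]), "dog": np.array([1., 1., 0., 0.]),
         "zebra_unseen": np.array([0., 1., 1., 1.])}
rng = np.random.default_rng(0)
print(gzsl_predict(rng.normal(size=8), attrs, W=rng.normal(size=(4, 8))))
```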