Over the past few years, developing a broad, universal, and general-purpose computer vision system has become a hot topic. A powerful universal system would be capable of solving diverse vision tasks simultaneously, without being restricted to a specific problem or data domain, which is of great importance in practical real-world computer vision applications. This study pushes this direction forward by concentrating on the million-scale multi-domain universal object detection problem. The problem is not trivial due to its complicated nature in terms of cross-dataset category label duplication, label conflicts, and hierarchical taxonomy handling. Moreover, how to utilize emerging large pre-trained vision models for million-scale cross-dataset object detection in a resource-efficient way remains an open challenge. This paper addresses these challenges by introducing our practices in label handling, hierarchy-aware loss design, and resource-efficient model training with a pre-trained large model. Our method ranked second in the object detection track of the Robust Vision Challenge 2022 (RVC 2022). We hope our detailed study serves as an alternative practice paradigm for similar problems in the community. The code is available at https://github.com/linfeng93/Large-UniDet.
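The abstract does not spell out the hierarchy-aware loss, but a common formulation for taxonomic labels treats a ground-truth label's ancestors as additional positives and ignores its descendants, so that cross-dataset labels at different taxonomy levels never contradict each other. A minimal sketch under that assumption (the function name and toy taxonomy are illustrative, not from the paper):

```python
# Hedged sketch: per-class sigmoid loss where taxonomy ancestors of the
# ground-truth label are also positives and its descendants are ignored.
import torch
import torch.nn.functional as F

def hierarchy_aware_bce(logits, gt_label, ancestors, descendants):
    """logits: (C,) per-class scores for one box; gt_label: int class id;
    ancestors/descendants: dicts mapping class id -> list of class ids."""
    C = logits.shape[0]
    targets = torch.zeros(C)
    weights = torch.ones(C)  # 1 = supervised, 0 = ignored
    targets[gt_label] = 1.0
    anc = torch.tensor(ancestors[gt_label], dtype=torch.long)
    dec = torch.tensor(descendants[gt_label], dtype=torch.long)
    targets[anc] = 1.0   # ancestors of the label are also correct
    weights[dec] = 0.0   # descendants are unknown: do not penalize
    loss = F.binary_cross_entropy_with_logits(
        logits, targets, weight=weights, reduction="sum")
    return loss / weights.sum()

# Toy 3-class taxonomy: 0 = vehicle -> 1 = car -> 2 = taxi.
ancestors = {0: [], 1: [0], 2: [0, 1]}
descendants = {0: [1, 2], 1: [2], 2: []}
print(hierarchy_aware_bce(torch.randn(3), 1, ancestors, descendants))
```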
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
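For readers unfamiliar with the k-fold cross-validation that only 37% of participants performed, a minimal sketch with scikit-learn (the dataset and model are placeholders, not from the survey):

```python
# Minimal k-fold cross-validation sketch; data and model are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X = np.random.rand(100, 16)
y = np.random.randint(0, 2, size=100)
scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[val_idx], y[val_idx]))  # per-fold accuracy
print(f"5-fold accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```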
Despite the remarkable success of existing methods for few-shot segmentation, two crucial challenges remain. First, feature learning for novel classes is suppressed during training on the base classes, because novel classes are always treated as background; thus, the semantics of novel classes are not well learned. Second, most existing methods fail to consider the underlying semantic gap between the support and the query caused by the representation bias of the scarce support samples. To circumvent these two challenges, we propose to explicitly activate the discriminability of novel classes in both the feature encoding stage and the prediction stage of segmentation. In the feature encoding stage, we design a Semantic-Preserving Feature Learning (SPFL) module to first exploit and then retain the latent semantics contained in the whole input image, especially those in the background that belong to novel classes. In the prediction stage, we learn a Self-Refined Online Foreground-Background classifier (SROFB), which refines itself using the high-confidence pixels of the query image to facilitate its adaptation to the query image and bridge the support-query semantic gap. Extensive experiments on the PASCAL-5$^i$ and COCO-20$^i$ datasets demonstrate the advantages of these two novel designs, both quantitatively and qualitatively.
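The exact refinement procedure of SROFB is not given in the abstract; the sketch below illustrates one plausible reading, in which a foreground/background head is updated online on its own high-confidence query-pixel predictions (all names and hyperparameters are assumptions):

```python
# Hedged sketch of online self-refinement: the FG/BG head retrains on
# its own high-confidence query-pixel predictions (assumed mechanics).
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_refine(classifier, query_feats, steps=3, thresh=0.9, lr=0.01):
    """query_feats: (N, D) pixel features of the query image."""
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(steps):
        probs = torch.sigmoid(classifier(query_feats)).squeeze(-1)  # (N,)
        conf = torch.maximum(probs, 1 - probs)
        mask = conf > thresh                  # keep only confident pixels
        if mask.sum() == 0:
            break
        pseudo = (probs[mask] > 0.5).float().detach()  # pseudo labels
        loss = F.binary_cross_entropy(probs[mask], pseudo)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return classifier

clf = nn.Linear(64, 1)          # assumed linear FG/BG head
self_refine(clf, torch.randn(1000, 64))
```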
Most existing scene text detectors require large-scale training data, which is hard to scale up due to two major factors: 1) scene text images often have domain-specific distributions; 2) collecting large-scale annotated scene text images is laborious. We study domain adaptive scene text detection, a largely neglected yet very meaningful task that aims for the optimal transfer of labelled scene text images while handling unlabelled images in various new domains. Specifically, we design SCAST, a subcategory-aware self-training technique that effectively mitigates network overfitting and noisy pseudo labels in domain adaptive scene text detection. SCAST consists of two novel designs. For labelled source data, it introduces pseudo subcategories for both foreground texts and background stuff, which helps train more generalizable source models with multi-class detection objectives. For unlabelled target data, it mitigates network overfitting by co-regularizing the binary and subcategory classifiers trained in the source domain. Extensive experiments show that SCAST achieves superior detection performance consistently across multiple public benchmarks, and it also generalizes well to other domain adaptive detection tasks such as vehicle detection.
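The abstract does not specify the co-regularization objective; one plausible instantiation penalizes disagreement between the binary text/background head and the text probability marginalized over the subcategory head (this formulation is an assumption, not the paper's):

```python
# Hedged sketch: co-regularizing binary and subcategory classifiers by
# matching the marginal text probability of the subcategory head to the
# binary head's text probability.
import torch
import torch.nn.functional as F

def co_regularization(binary_logits, sub_logits, n_text_sub):
    """binary_logits: (N, 2) text/background scores;
    sub_logits: (N, K) scores whose first n_text_sub are text subcategories."""
    p_binary = F.softmax(binary_logits, dim=-1)[:, 0]              # P(text)
    p_sub = F.softmax(sub_logits, dim=-1)[:, :n_text_sub].sum(-1)  # marginal
    return F.mse_loss(p_binary, p_sub)     # penalize disagreement

loss = co_regularization(torch.randn(8, 2), torch.randn(8, 6), n_text_sub=3)
print(loss)
```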
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
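Since the models are publicly released, a minimal example of loading a BLOOM checkpoint with the Hugging Face transformers library (the 560M-parameter variant here stands in for the full 176B model, which requires multi-GPU serving):

```python
# Loading a released BLOOM checkpoint from the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"   # the full model is "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("A language model is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```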
Sleep stage recognition is crucial for assessing sleep and diagnosing chronic diseases. Deep learning models such as convolutional neural networks and recurrent neural networks are trained on grid-structured data, making them incapable of learning relationships in non-Euclidean spaces. Graph-based deep models have been developed to address this issue by investigating the external relationships of electrode signals across different brain regions. However, these models cannot capture the internal relationships between segments of an electrode signal within a specific brain region. In this study, we propose a Pearson correlation-based graph attention network, called PearNet, as a solution to this problem. Graph nodes are generated from the spatial-temporal features extracted by a hierarchical feature extraction method, and the graph structure is then learned adaptively to build node connections. Based on our experiments on the Sleep-EDF-20 and Sleep-EDF-78 datasets, PearNet performs better than state-of-the-art baselines.
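PearNet's node connections are learned adaptively, but the Pearson correlation it is named after can be sketched directly as a correlation-thresholded adjacency over segment features (the threshold and shapes here are assumptions):

```python
# Hedged sketch: build a graph from Pearson correlations between node
# (signal-segment) features; strongly correlated pairs become edges.
import torch

def pearson_adjacency(node_feats, thresh=0.5):
    """node_feats: (N, D) one feature vector per signal segment."""
    x = node_feats - node_feats.mean(dim=1, keepdim=True)   # center rows
    x = x / (x.norm(dim=1, keepdim=True) + 1e-8)            # unit norm
    corr = x @ x.t()                        # (N, N) Pearson correlations
    adj = (corr.abs() > thresh).float()     # threshold into edges
    adj.fill_diagonal_(0)                   # no self-loops
    return adj, corr

adj, corr = pearson_adjacency(torch.randn(10, 32))
print(adj.sum(), corr.shape)
```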
Learning powerful representations in bird's-eye view (BEV) for perception tasks is a trend attracting extensive attention from both industry and academia. Conventional approaches in most autonomous driving algorithms perform detection, segmentation, tracking, etc. in the front or perspective view. As sensor configurations grow more complex, integrating multi-source information from different sensors and representing features in a unified view becomes vital. BEV perception inherits several advantages: representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information through the view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate a pipeline that incorporates features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches from industry are described. We further introduce a full suite of practical guidebooks to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this area. We hope this report sheds light on the community and encourages more research on BEV perception. We maintain an active repository to collect the latest work and provide a toolbox with a bag of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
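As one concrete instance of core problem (a), the sketch below shows the classic inverse-perspective-mapping view transformation under a flat-ground assumption; it is an illustrative baseline, not a method endorsed by the survey:

```python
# Hedged sketch: inverse perspective mapping (IPM) from one camera
# feature map to a BEV grid, assuming a flat ground plane at z = 0.
import torch
import torch.nn.functional as F

def ipm_bev(img_feat, K, T_cam_from_ground, xs, ys):
    """img_feat: (1, C, H, W) camera features; K: (3, 3) intrinsics;
    T_cam_from_ground: (4, 4) ground-to-camera pose;
    xs, ys: 1D tensors of BEV grid coordinates in metres."""
    gx, gy = torch.meshgrid(xs, ys, indexing="ij")
    pts = torch.stack(
        [gx, gy, torch.zeros_like(gx), torch.ones_like(gx)], -1)  # z = 0
    cam = (T_cam_from_ground @ pts.reshape(-1, 4).t())[:3]   # (3, N)
    uv = K @ cam
    uv = uv[:2] / uv[2].clamp(min=1e-6)          # perspective divide
    H, W = img_feat.shape[-2:]
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,  # normalize to [-1, 1]
                        uv[1] / (H - 1) * 2 - 1], -1)
    grid = grid.reshape(1, len(xs), len(ys), 2)
    return F.grid_sample(img_feat, grid, align_corners=True)  # BEV feature

pose = torch.eye(4)
pose[2, 3] = 1.5                                 # toy camera 1.5 m up
K = torch.tensor([[500., 0., 44.], [0., 500., 16.], [0., 0., 1.]])
bev = ipm_bev(torch.randn(1, 64, 32, 88), K, pose,
              torch.linspace(1, 50, 100), torch.linspace(-25, 25, 100))
print(bev.shape)                                 # (1, 64, 100, 100)
```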
Multi-scale features have proven highly effective for object detection, and most ConvNet-based object detectors adopt the Feature Pyramid Network (FPN) as a basic component for exploiting multi-scale features. However, for the recently proposed Transformer-based object detectors, directly incorporating multi-scale features leads to prohibitive computational overhead due to the high complexity of the attention mechanism on high-resolution features. This paper presents Iterative Multi-scale Feature Aggregation (IMFA), a generic paradigm for the efficient use of multi-scale features in Transformer-based object detectors. The core idea is to exploit sparse multi-scale features from just a few key locations, which is achieved through two novel designs. First, IMFA rearranges the Transformer encoder pipeline so that the encoded features can be updated iteratively based on the detection predictions. Second, guided by prior detection predictions, IMFA sparsely samples scale-adaptive features for refined detection from just a few keypoint locations. As a result, the sampled multi-scale features are sparse yet remain highly beneficial for object detection. Extensive experiments show that the proposed IMFA significantly boosts the performance of Transformer-based object detectors with only slight computational overhead. Project page: https://github.com/zhanggongjie/imfa.
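The key operation, sampling sparse multi-scale features at a few key locations, can be sketched with bilinear sampling over FPN levels (the choice of sampling points and interpolation here is assumed, not taken from the paper):

```python
# Hedged sketch: bilinear sampling of sparse multi-scale features at a
# few key locations (e.g., current box centers) across FPN levels.
import torch
import torch.nn.functional as F

def sample_sparse_features(feat_maps, points):
    """feat_maps: list of (1, C, H_l, W_l) pyramid levels;
    points: (N, 2) key locations as normalized (x, y) in [0, 1]."""
    grid = (points * 2 - 1).view(1, 1, -1, 2)   # grid_sample wants [-1, 1]
    sampled = [F.grid_sample(f, grid, align_corners=False)  # (1, C, 1, N)
               for f in feat_maps]
    return torch.cat([s.squeeze(2).squeeze(0).t() for s in sampled], -1)

levels = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
centers = torch.rand(100, 2)            # e.g., predicted box centers
tokens = sample_sparse_features(levels, centers)
print(tokens.shape)                     # (100, 768): sparse tokens
```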
Risk scoring systems have been widely deployed in many applications, assigning risk scores to users based on their behavior sequences. Although many deep learning methods with sophisticated designs have achieved promising results, their black-box nature hinders their application due to fairness, explainability, and compliance considerations. In these sensitive scenarios, rule-based systems are considered reliable. However, building a rule system is labor-intensive: experts need to find informative statistics from user behavior sequences, design rules based on the statistics, and assign a weight to each rule. In this paper, we bridge the gap between effective but black-box models and transparent rule models. We propose a two-stage method, RuDi, that distills the knowledge of a black-box teacher model into a rule-based student model. In the first stage, we design a Monte Carlo tree search-based statistics generation method that produces a set of informative statistics. The statistics are then composed into logical rules by our proposed neural logic networks, by mimicking the outputs of the teacher model. We evaluate RuDi on three real-world public datasets and one industrial dataset to demonstrate its effectiveness.
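The distillation step can be sketched as fitting a transparent student, a weighted combination of binary rule activations, to the teacher's risk scores (this linear-rule student is a simplifying assumption; the paper composes rules with neural logic networks):

```python
# Hedged sketch: distilling a black-box teacher's risk scores into a
# transparent student that weights binary rule activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RuleStudent(nn.Module):
    """Risk score = sigmoid of a weighted sum of rule activations."""
    def __init__(self, n_rules):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_rules))  # one weight per rule

    def forward(self, rule_hits):        # rule_hits: (N, n_rules) in {0, 1}
        return torch.sigmoid(rule_hits @ self.w)

def distill(student, rule_hits, teacher_scores, epochs=200, lr=0.1):
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        loss = F.mse_loss(student(rule_hits), teacher_scores)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

hits = torch.randint(0, 2, (512, 20)).float()  # placeholder rule outputs
teacher_scores = torch.rand(512)               # placeholder teacher scores
student = distill(RuleStudent(20), hits, teacher_scores)
```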
Existing matching-based methods perform video object segmentation (VOS) by retrieving support features from a pixel-level memory, while some pixels may suffer from a lack of correspondence in the memory (i.e., unseen), which inevitably limits their segmentation performance. In this paper, we present a Two-Stream Network (TSN). Our TSN contains (i) a pixel stream with a conventional pixel-level memory, which segments the seen pixels based on their pixel-level memory retrieval; (ii) an instance stream for the unseen pixels, where a holistic understanding of the instance is obtained with dynamic segmentation heads conditioned on the features of the target instance; and (iii) a pixel division module that generates a routing map, with which the outputs of the two streams are fused together. The compact instance stream effectively improves the segmentation accuracy of the unseen pixels, while fusing the two streams with the adaptive routing map leads to an overall performance boost. Through extensive experiments, we demonstrate the effectiveness of our proposed TSN, and we also report state-of-the-art performance of 86.1% on YouTube-VOS 2018 and 87.5% on the DAVIS-2017 validation split.
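The routing-map fusion of the two streams can be sketched as a learned pixel-wise gate that blends the pixel-stream and instance-stream outputs (the 1x1-conv gate below is an assumption about the exact mechanics):

```python
# Hedged sketch: a pixel-wise routing map fusing two stream outputs.
import torch
import torch.nn as nn

class RoutingFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, pixel_out, instance_out):  # both (B, C, H, W)
        r = torch.sigmoid(
            self.gate(torch.cat([pixel_out, instance_out], dim=1)))
        return r * pixel_out + (1 - r) * instance_out  # routed blend

fuse = RoutingFusion(channels=32)
a, b = torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64)
print(fuse(a, b).shape)   # (2, 32, 64, 64)
```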