The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and ensembling was performed by only 50% of the participants, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
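To make two of the reported practices concrete, here is a minimal sketch of k-fold cross-validation on a training set combined with ensembling of the per-fold (identical-architecture) models; the dataset, classifier, and fold count are illustrative assumptions, not any participant's actual pipeline.

```python
# Hedged sketch of two practices the survey reports: k-fold cross-validation
# on the training set and ensembling of the resulting identical-architecture
# models. Data, model, and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
X_test = rng.normal(size=(50, 16))

fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train[train_idx], y_train[train_idx])
    print("fold val accuracy:", model.score(X_train[val_idx], y_train[val_idx]))
    fold_models.append(model)

# Ensemble by averaging the predicted probabilities of the fold models.
probs = np.mean([m.predict_proba(X_test) for m in fold_models], axis=0)
y_pred = probs.argmax(axis=1)
```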
Learning-based infrared small object detection methods currently rely heavily on classification backbone networks, which tends to cause tiny objects to be lost and feature distinguishability to degrade as the network depth increases. Furthermore, small objects in infrared images frequently appear either bright or dark, posing severe demands on obtaining precise object contrast information. For this reason, we propose in this paper a simple and effective ``U-Net in U-Net'' framework, UIU-Net for short, for detecting small objects in infrared images. As the name suggests, UIU-Net embeds a tiny U-Net into a larger U-Net backbone, enabling multi-level and multi-scale representation learning of objects. Moreover, UIU-Net can be trained from scratch, and the learned features can effectively enhance global and local contrast information. More specifically, the UIU-Net model is divided into two modules: the resolution-maintenance deep supervision (RM-DS) module and the interactive-cross attention (IC-A) module. RM-DS integrates Residual U-blocks into a deep supervision network to generate deep multi-scale resolution-maintenance features while learning global context information. Further, IC-A encodes the local context information between low-level details and high-level semantic features. Extensive experiments conducted on two infrared single-frame image datasets, i.e., the SIRST and Synthetic datasets, show the effectiveness and superiority of the proposed UIU-Net in comparison with several state-of-the-art infrared small object detection methods. The proposed UIU-Net also shows strong generalization performance on video-sequence infrared small object datasets, e.g., the ATR ground/air video sequence dataset. The code for this work is openly available at \url{https://github.com/danfenghong/IEEE_TIP_UIU-Net}.
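As a rough illustration of the ``U-Net in U-Net'' idea (a tiny U-Net nested inside each stage of a larger U-shaped backbone), a minimal PyTorch sketch follows; the channel widths, depths, and residual wiring are assumptions for illustration and do not reproduce the published UIU-Net architecture.

```python
# Hedged sketch of a nested "U-Net in U-Net": a tiny U-Net serves as the
# building block of a larger U-shaped backbone. All sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """A minimal two-level inner U-Net block (RSU-like, with a residual add)."""
    def __init__(self, ch):
        super().__init__()
        self.enc = nn.Conv2d(ch, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)
        self.dec = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        e = F.relu(self.enc(x))
        m = F.relu(self.mid(F.relu(self.down(e))))
        m = F.interpolate(m, size=e.shape[-2:], mode="bilinear", align_corners=False)
        return F.relu(self.dec(torch.cat([e, m], dim=1))) + x

class OuterUNet(nn.Module):
    """Outer U-Net whose stages are themselves tiny U-Nets."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.enc1, self.enc2 = TinyUNet(ch), TinyUNet(ch)
        self.dec1 = TinyUNet(ch)
        self.head = nn.Conv2d(ch, 1, 1)  # per-pixel small-object score

    def forward(self, x):
        e1 = self.enc1(F.relu(self.stem(x)))
        e2 = self.enc2(F.max_pool2d(e1, 2))
        up = F.interpolate(e2, size=e1.shape[-2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(self.dec1(up + e1)))

mask = OuterUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128)
```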
This study proposes an improved end-to-end multi-target tracking algorithm that adapts to multi-view, multi-scale scenes based on the self-attention mechanism of the transformer's encoder-decoder structure. A multi-dimensional feature extraction backbone network is combined with a self-built semantic raster map, which is stored in the encoder for correlation and generates target position encodings and multi-dimensional feature vectors. The decoder incorporates four methods: spatial clustering and semantic filtering of multi-view targets, dynamic matching of multi-dimensional features, space-time logic-based multi-target tracking, and space-time convergence network (STCN)-based parameter passing. Through the fusion of multiple decoding methods, multi-camera targets are tracked in three dimensions: temporal logic, spatial logic, and feature matching. On the MOT17 dataset, this study's method significantly outperforms the current state-of-the-art method MiniTrackV2 [49] by 2.2%, reaching 0.836 on the Multiple Object Tracking Accuracy (MOTA) metric. Furthermore, this study proposes a retrospective mechanism for the first time, adopting a reverse-order processing method to correct historically mislabeled targets and thereby improve the Identification F1-score (IDF1). On the self-built dataset OVIT-MOT01, the IDF1 improves from 0.948 to 0.967, and the Multi-camera Tracking Accuracy (MCTA) improves from 0.878 to 0.909, which significantly improves continuous tracking accuracy and scene adaptation. This research introduces a new attentional tracking paradigm that achieves state-of-the-art performance on multi-target tracking tasks (MOT17 and OVIT-MOT01).
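The retrospective mechanism is described only at a high level; the sketch below shows one plausible reading, in which a reverse-order pass propagates later, more confident identities back over earlier low-confidence labels. The data layout, function name, and confidence threshold are all hypothetical.

```python
# Hedged sketch of a retrospective, reverse-order pass: identities assigned
# late in a sequence (usually from more evidence) overwrite earlier
# low-confidence labels of the same track. The layout and the 0.5 threshold
# are illustrative assumptions, not the paper's exact procedure.
def retrospective_relabel(frames):
    """frames: list (in time order) of dicts mapping track_id -> (identity, confidence)."""
    best_identity = {}  # track_id -> most confident identity seen so far (from the end)
    for frame in reversed(frames):  # reverse-order processing
        for track_id, (identity, conf) in frame.items():
            if conf >= 0.5 and track_id not in best_identity:
                best_identity[track_id] = identity
            elif track_id in best_identity:
                # Overwrite an earlier, less certain label with the later identity.
                frame[track_id] = (best_identity[track_id], conf)
    return frames
```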
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that can achieve real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; a detailed description of the solutions is provided in this paper.
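As a hedged sketch of how a runtime check of such a solution might look outside the official tooling, the following loads a TFLite depth model and measures throughput on random input; the model file and input shape are placeholders, and the actual challenge scoring used the organizers' own evaluation pipeline on the Raspberry Pi 4.

```python
# Hedged sketch: benchmark a TFLite single-image depth model's FPS.
# "depth_model.tflite" is a hypothetical file, not a challenge artifact.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="depth_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(*inp["shape"]).astype(np.float32)  # stand-in RGB frame
runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    depth = interpreter.get_tensor(out["index"])
print(f"{runs / (time.perf_counter() - start):.1f} FPS, depth map {depth.shape}")
```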
Modality representation learning is an important problem for multimodal sentiment analysis (MSA), since highly distinguishable representations can improve analysis performance. Previous MSA works have usually focused on multimodal fusion strategies, while in-depth study of modality representation learning has received less attention. Recently, contrastive learning has been confirmed to be effective at endowing learned representations with stronger discriminative ability. Inspired by this, we explore approaches for improving modality representations with contrastive learning in this study. To this end, we devise a three-stage framework with multi-view contrastive learning to refine representations for specific objectives. In the first stage, to improve unimodal representations, we employ supervised contrastive learning to pull samples of the same class together while pushing other samples apart. In the second stage, self-supervised contrastive learning is designed to improve the distilled unimodal representations after cross-modal interaction. Finally, we again leverage supervised contrastive learning to enhance the fused multimodal representation. After all contrastive training stages, we perform the classification task on the frozen representations. We conduct experiments on three open datasets, and the results show the advantage of our model.
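For the supervised contrastive stages, a minimal PyTorch sketch of a SupCon-style objective (same-class embeddings pulled together, others pushed apart) is shown below; the temperature and tensor shapes are assumptions rather than the paper's exact settings.

```python
# Hedged sketch of a supervised contrastive (SupCon-style) loss: for each
# anchor, maximize the log-probability of its same-class positives relative
# to all other samples. Temperature and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (N, D) embeddings, labels: (N,) class ids."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float("-inf"))     # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(mask_self, 0.0)     # avoid -inf * 0 = nan below
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    # Average log-probability over each anchor's positives, then over anchors.
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

loss = supervised_contrastive_loss(torch.randn(8, 32), torch.randint(0, 3, (8,)))
```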
We present the design and baseline results of a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on ``cross-domain'' meta-learning. Meta-learning aims to leverage experience gained from previous tasks to solve new tasks efficiently (i.e., with better performance, less training data, and/or modest computational resources). While previous challenges in the series focused on within-domain few-shot learning problems, with the aim of learning N-way k-shot tasks efficiently (i.e., N-class classification problems with k training examples per class), this competition challenges participants to solve ``any-way'' and ``any-shot'' problems drawn from various domains (healthcare, ecology, biology, manufacturing, etc.), chosen for their humanitarian and societal impact. To that end, we created Meta-Album, a meta-dataset of 40 image classification datasets from 10 domains, from which we instantiate tasks with any number of ``ways'' (in the range 2-20) and any number of ``shots'' (in the range 1-20). The competition is by code submission, with fully blind testing on the CodaLab challenge platform. The winners' code will be open-sourced, enabling the deployment of automated machine learning solutions for few-shot image classification across several domains.
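A minimal sketch of ``any-way any-shot'' episode sampling in the spirit of this setup follows; the dataset layout, query size, and helper name are illustrative assumptions.

```python
# Hedged sketch of "any-way any-shot" episode sampling: each episode draws
# N classes (2-20) and k support examples per class (1-20), per the ranges
# described above. The dataset dict and query size are assumptions.
import random

def sample_episode(dataset, query_per_class=5, rng=random.Random(0)):
    """dataset: dict mapping class_name -> list of images (paths or arrays)."""
    n_way = rng.randint(2, min(20, len(dataset)))   # "any-way"
    k_shot = rng.randint(1, 20)                     # "any-shot"
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        items = rng.sample(dataset[cls], k_shot + query_per_class)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query, (n_way, k_shot)
```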
Although deep neural networks are capable of achieving performance superior to humans on various tasks, they are notorious for requiring large amounts of data and computing resources, restricting their success to domains where such resources are available. Meta-learning methods can address this problem by transferring knowledge from related tasks, thereby reducing the amount of data and computing resources needed to learn new tasks. We organized the MetaDL competition series, which provided research groups around the world with the opportunity to create and experimentally evaluate new meta-learning solutions to practical problems. In this paper, in collaboration between the competition organizers and the top-ranked participants, we describe the design of the competition, the datasets, the best experimental results, and the top-ranked methods of the NeurIPS 2021 challenge, which attracted 15 active teams that made it into the final phase (by outperforming the baseline), with more than 100 code submissions during the feedback phase. The top participants' solutions are open-sourced. Lessons learned include that learning good representations is essential for effective transfer learning.
Modern high-performance semantic segmentation methods employ heavy backbones and dilated convolutions to extract rich features. Although extracting features with contextual and semantic information is critical for segmentation tasks, it brings a high memory footprint and high computational cost for real-time applications. This paper proposes a new model to achieve a trade-off between accuracy and speed for real-time road-scene semantic segmentation. Specifically, we propose a lightweight model named Scale-aware Strip Attention Guided Feature Pyramid Network (S\textsuperscript{2}-FPN). Our network consists of three main modules: the Attention Pyramid Fusion (APF) module, the Scale-aware Strip Attention Module (SSAM), and the Global Feature Upsample (GFU) module. APF adopts an attention mechanism to learn discriminative multi-scale features and helps close the semantic gap between different levels. APF uses scale-aware attention to encode the global context with a vertical stripping operation and models long-range dependencies, which helps relate pixels with similar semantic labels. In addition, APF employs a channel-reweighting block (CRB) to emphasize channel features. Finally, the decoder of S\textsuperscript{2}-FPN adopts GFU, which is used to fuse features from APF and the encoder. Extensive experiments have been conducted on two challenging semantic segmentation benchmarks, showing that our approach achieves a better accuracy/speed trade-off across different model settings. The proposed models achieve 76.2% mIoU/87.3 FPS, 77.4% mIoU/67 FPS, and 77.8% mIoU/30.5 FPS on the Cityscapes dataset, and 69.6% mIoU, 71.0% mIoU, and 74.2% mIoU on the CamVid dataset. The code for this work will be available at \url{https://github.com/mohamedac29/S2-FPN}.
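As a hedged sketch of the strip-attention idea (vertical strip pooling that summarizes each column and reweights the feature map), the following PyTorch module is one plausible minimal form; the kernel size and channel counts are assumptions, not the published S\textsuperscript{2}-FPN configuration.

```python
# Hedged sketch of scale-aware strip attention: pool each vertical strip to a
# column summary, mix neighboring columns with a 1D-style convolution, and use
# the result to reweight the feature map. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class StripAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((1, None))   # pool H -> 1 (vertical strips)
        self.conv = nn.Conv2d(ch, ch, kernel_size=(1, 3), padding=(0, 1))
        self.gate = nn.Sigmoid()

    def forward(self, x):                             # x: (B, C, H, W)
        strips = self.conv(self.pool(x))              # (B, C, 1, W) column summary
        return x * self.gate(strips)                  # broadcast gate over H

y = StripAttention(64)(torch.randn(2, 64, 32, 64))    # -> (2, 64, 32, 64)
```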
Coflow is a recently proposed networking abstraction to help improve the communication performance of data-parallel computing jobs. In multi-stage jobs, each job consists of multiple coflows and is represented by a directed acyclic graph (DAG). Efficiently scheduling coflows is critical for improving data-parallel computing performance in data centers. Compared with hand-tuned scheduling heuristics, the existing work DeepWeave [1] utilizes a reinforcement learning (RL) framework to automatically generate efficient coflow scheduling policies. It employs a graph neural network (GNN) to encode job information into a set of embedding vectors and feeds a flat embedding vector containing the whole job information to the policy network. However, this method has poor scalability, as it cannot cope with jobs represented by DAGs of arbitrary size and shape, which requires a large policy network to process high-dimensional embedding vectors that are difficult to train. In this paper, we first utilize a directed acyclic graph neural network (DAGNN) to process the input and propose a novel Pipelined-DAGNN that can effectively speed up the feature extraction process of the DAGNN. Next, instead of feeding a flat embedding of all coflows to the policy network, we feed it a sequence of embeddings of the schedulable coflows and output a priority sequence, so that the size of the policy network depends only on the dimension of the features rather than on the number of nodes in the job's DAG. To improve the accuracy of the priority scheduling policy, we incorporate a self-attention mechanism into the deep RL model to capture the interactions between different parts of the embedding sequence and make the output priority scores relevant. Based on this model, we develop a coflow scheduling algorithm for online multi-stage jobs.
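A minimal PyTorch sketch of the described policy head follows: self-attention over the embeddings of the currently schedulable coflows, then a per-coflow priority score; the embedding dimension and head count are assumptions.

```python
# Hedged sketch of the policy head described above: self-attention over the
# sequence of schedulable-coflow embeddings, then a per-coflow priority score.
# The network size depends only on the feature dimension, not the DAG size.
import torch
import torch.nn as nn

class CoflowPriorityNet(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, coflow_emb):                    # (B, n_schedulable, dim)
        ctx, _ = self.attn(coflow_emb, coflow_emb, coflow_emb)
        return self.score(ctx).squeeze(-1)            # (B, n_schedulable) priorities

priorities = CoflowPriorityNet()(torch.randn(1, 7, 64))
order = priorities.argsort(dim=-1, descending=True)   # schedule highest priority first
```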
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to obtain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns from a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and the style code. In order to integrate the reference speaking style into the generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
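One plausible reading of the style-aware adaptive feed-forward layer is that the style code softly mixes several candidate weight sets for the feed-forward projection; the PyTorch sketch below follows that reading, with the number of weight sets, dimensions, and softmax gating as illustrative assumptions rather than the exact published design.

```python
# Hedged sketch: a feed-forward layer whose weights are a style-conditioned
# soft mixture of K candidate weight sets. K, dims, and the gating are
# illustrative assumptions, not the published StyleTalk design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleAdaptiveFF(nn.Module):
    def __init__(self, dim=256, style_dim=128, k=8):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(k, dim, dim) * dim ** -0.5)
        self.bias = nn.Parameter(torch.zeros(k, dim))
        self.gate = nn.Linear(style_dim, k)           # style code -> mixing weights

    def forward(self, x, style):                      # x: (B, T, dim), style: (B, style_dim)
        mix = F.softmax(self.gate(style), dim=-1)     # (B, k)
        w = torch.einsum("bk,kio->bio", mix, self.weights)  # style-specific weights
        b = mix @ self.bias                           # (B, dim)
        return F.relu(torch.einsum("bti,bio->bto", x, w) + b.unsqueeze(1))

out = StyleAdaptiveFF()(torch.randn(2, 10, 256), torch.randn(2, 128))
```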