The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
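As a minimal sketch of the patch-based training strategy reported by 69% of respondents, the snippet below randomly crops fixed-size patches from a 3D volume that is too large to process at once; the patch size, array shapes, and uniform sampling scheme are illustrative assumptions, not any particular challenge solution.

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from a large 3D volume so that
    training fits into GPU memory (a sketch of the patch-based strategy;
    patch size and sampling scheme are illustrative assumptions)."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        start = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(st, st + p) for st, p in zip(start, patch_size))
        patches.append(volume[sl])
    return np.stack(patches)

# Example: a volume too large to process at once
vol = np.random.rand(512, 512, 256).astype(np.float32)
batch = sample_patches(vol, patch_size=(64, 64, 64), n_patches=4)
print(batch.shape)  # (4, 64, 64, 64)
```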
Video recognition in an open and dynamic world is quite challenging, as we need to handle different settings such as closed-set, long-tail, few-shot, and open-set. By leveraging semantic knowledge from noisy text descriptions crawled from the Internet, we focus on the general video recognition (GVR) problem of solving different recognition tasks within a unified framework. The core contribution of this paper is twofold. First, we build a comprehensive video recognition benchmark, Kinetics-GVR, including four sub-task datasets that cover the aforementioned settings. To facilitate research on GVR, we propose to utilize external textual knowledge from the Internet and provide multi-source text descriptions for all action classes. Second, inspired by the flexibility of language representation, we present a unified visual-linguistic framework (VLG) to solve the GVR problem with an effective two-stage training paradigm. Our VLG is first pre-trained on video and language datasets to learn a shared feature space, and then devises a flexible bi-modal attention head to integrate high-level semantic concepts under different settings. Extensive results show that our VLG achieves state-of-the-art performance under all four settings. This superior performance demonstrates the effectiveness and generalization ability of our proposed framework. We hope our work takes a step towards general video recognition and can serve as a baseline for future research. The code and models will be available at https://github.com/MCG-NJU/VLG.
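As a rough illustration of how a shared visual-linguistic feature space supports recognition across settings, the sketch below scores videos against class-name text embeddings by cosine similarity; the embedding dimensions and the simple matching head are assumptions for exposition and do not reproduce VLG's bi-modal attention head.

```python
import torch
import torch.nn.functional as F

def classify_by_text_matching(video_emb, text_embs):
    """Score videos against class-name text embeddings by cosine
    similarity in a shared feature space (the general idea behind
    visual-linguistic recognition heads; shapes are illustrative)."""
    v = F.normalize(video_emb, dim=-1)   # (B, D)
    t = F.normalize(text_embs, dim=-1)   # (C, D)
    return v @ t.T                       # (B, C) similarity logits

video_emb = torch.randn(2, 512)    # two videos in a 512-d shared space
text_embs = torch.randn(400, 512)  # e.g. 400 action-class descriptions
logits = classify_by_text_matching(video_emb, text_embs)
pred = logits.argmax(dim=1)        # predicted class per video
```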
In massive multiple-input multiple-output (MIMO) systems, the user equipment (UE) needs to feed the channel state information (CSI) back to the base station (BS) for subsequent beamforming. However, the large number of antennas in massive MIMO systems causes huge feedback overhead. Deep learning (DL) based methods can compress the CSI at the UE and recover it at the BS, which reduces the feedback cost significantly. However, the compressed CSI must be quantized into bit streams for transmission. In this paper, we propose an adaptor-assisted quantization strategy for bit-level DL-based CSI feedback. First, we design a network-aided adaptor and an advanced training scheme to adaptively improve the quantization and reconstruction accuracy. Moreover, for ease of practical deployment, we introduce expert knowledge of the data distribution and propose a pluggable and cost-free adaptor scheme. Experiments show that, compared with the state-of-the-art feedback quantization method, this adaptor-aided quantization strategy achieves better quantization accuracy and reconstruction performance with little or no additional cost. The open-source code is available at https://github.com/zhangxd18/QCRNet.
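A minimal sketch of the bit-level feedback step the paper builds on: the compressed CSI codeword is uniformly quantized into a bit stream at the UE and dequantized at the BS. This is plain uniform quantization for illustration, not the proposed adaptor-assisted scheme; the codeword length and bit width are assumptions.

```python
import numpy as np

def quantize_to_bits(codeword, n_bits=4):
    """Uniformly quantize a compressed CSI codeword (assumed scaled to
    [0, 1]) into an n_bits-per-entry bit stream for transmission."""
    levels = 2 ** n_bits
    idx = np.clip(np.floor(codeword * levels), 0, levels - 1).astype(int)
    return ((idx[:, None] >> np.arange(n_bits)[::-1]) & 1).ravel()

def dequantize(bits, n_bits=4):
    """Recover quantization-level midpoints from the bit stream at the BS."""
    levels = 2 ** n_bits
    idx = bits.reshape(-1, n_bits) @ (1 << np.arange(n_bits)[::-1])
    return (idx + 0.5) / levels

codeword = np.random.rand(32)        # 32-entry compressed CSI vector
bits = quantize_to_bits(codeword)    # 128 bits fed back to the BS
recovered = dequantize(bits)
print(np.abs(recovered - codeword).max())  # bounded by half a quantization step
```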
Real and fake news in various domains (e.g., politics, health, and entertainment) spreads through online social media every day, necessitating fake news detection for multiple domains. Among these, fake news in specific domains such as politics and health has more serious potential negative real-world impacts (e.g., an epidemic fueled by COVID-19 misinformation). Previous studies focus on multi-domain fake news detection, mining and modeling the correlations between domains equally. However, these multi-domain methods suffer from a seesaw problem: the performance in some domains often improves at the cost of hurting the performance in others, which can lead to unsatisfactory performance in specific domains. To address this issue, we propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND), which can improve the performance on specific target domains. To transfer coarse-grained domain-level knowledge, we train a general model on data from all domains from a meta-learning perspective. To transfer fine-grained instance-level knowledge and adapt the general model to the target domain, we train a language model on the target domain to evaluate the transferability of each data instance in the source domains and re-weight each instance's contribution. Offline experiments on two datasets demonstrate the effectiveness of DITFEND. Online experiments show that DITFEND brings further improvements over the base models in a real-world setting.
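A hypothetical simplification of the instance-level transfer idea: turn each source instance's score under a target-domain language model into a training weight, so target-like instances contribute more. The softmax weighting and temperature below are illustrative assumptions, not DITFEND's exact formulation.

```python
import math

def transfer_weights(source_scores, temperature=1.0):
    """Convert per-instance target-domain LM log-likelihoods into
    normalized transfer weights via a numerically stable softmax."""
    scaled = [s / temperature for s in source_scores]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Log-likelihoods of three source-domain news items under the target-domain LM:
weights = transfer_weights([-42.0, -35.5, -60.2])
print(weights)  # the second item looks most "target-like" and gets the largest weight
```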
Multi-document scientific summarization (MDSS) aims to generate coherent and concise summaries for clusters of topic-related scientific papers. This task requires a precise understanding of paper content and accurate modeling of cross-paper relationships. Knowledge graphs convey compact and interpretable structured information about documents, which makes them well suited for content modeling and relationship modeling. In this paper, we present KGSum, an MDSS model that centers on knowledge graphs during both encoding and decoding. Specifically, in the encoding process, two graph-based modules are proposed to incorporate knowledge graph information into paper encoding, while in the decoding process, we propose a two-stage decoder that first generates the knowledge graph of the summary in the form of descriptive sentences, and then generates the final summary. Empirical results show that the proposed architecture brings substantial improvements over baselines on the Multi-XScience dataset.
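To make the two-stage decoding concrete, the sketch below serializes knowledge-graph triples into descriptive sentences of the kind an intermediate decoding stage could emit before the final summary is generated; the template and example triples are hypothetical, not KGSum's actual format.

```python
def triples_to_sentences(triples):
    """Serialize (subject, relation, object) knowledge-graph triples into
    descriptive sentences, a simple stand-in for the intermediate output
    of a two-stage summary decoder."""
    return [f"{subj} {rel} {obj}." for subj, rel, obj in triples]

kg = [
    ("the proposed model", "builds on", "graph-based paper encoding"),
    ("the two-stage decoder", "generates", "the summary knowledge graph first"),
]
print(triples_to_sentences(kg))
```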
Structural re-parameterization (Rep) methods have achieved significant performance improvements on conventional convolutional networks. Most current Rep methods rely on prior knowledge to select the re-parameterization operations. However, the performance of the architecture is limited by the operation types and this prior knowledge. To break this limitation, in this work we design an improved re-parameterization search space that includes more types of re-parameterization operations; this search space can further improve the performance of convolutional networks. To explore the search space efficiently, we design an automatic re-parameterization enhancement strategy based on neural architecture search (NAS), which can discover excellent re-parameterization architectures. In addition, we visualize the output features of the architectures to analyze why the re-parameterization architectures take their form. We achieve better results on public datasets: under the same training conditions as ResNet, we improve the accuracy of ResNet-50 by 1.82% on ImageNet-1K.
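The identity that makes re-parameterization possible can be shown in a few lines: parallel branches that are linear in the input can be folded into a single convolution after training. The sketch below merges a 1x1 branch into a 3x3 convolution; real Rep search spaces, including the one proposed here, combine many more operation types.

```python
import torch
import torch.nn.functional as F

def merge_branches(w3x3, b3x3, w1x1, b1x1):
    """Fold a parallel 1x1-conv branch into a 3x3 conv, the core identity
    behind structural re-parameterization:
    conv(x, w3) + conv(x, w1) == conv(x, w3 + pad(w1))."""
    w1_padded = F.pad(w1x1, [1, 1, 1, 1])  # center the 1x1 kernel in a 3x3
    return w3x3 + w1_padded, b3x3 + b1x1

c = 8
x = torch.randn(1, c, 16, 16)
w3, b3 = torch.randn(c, c, 3, 3), torch.randn(c)
w1, b1 = torch.randn(c, c, 1, 1), torch.randn(c)

y_multi = F.conv2d(x, w3, b3, padding=1) + F.conv2d(x, w1, b1)
w_m, b_m = merge_branches(w3, b3, w1, b1)
y_single = F.conv2d(x, w_m, b_m, padding=1)
print(torch.allclose(y_multi, y_single, atol=1e-5))  # True: same function, one conv
```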
In this paper, we propose a framework named OCSampler to explore a compact yet effective video representation with one short clip for efficient video recognition. Recent works prefer to formulate frame sampling as a sequential decision task, selecting frames one by one according to their importance, while we present a new paradigm of learning instance-specific video condensation policies that select informative frames to represent the entire video in a single step. Our basic motivation is that efficient video recognition lies in processing the whole sequence at once rather than picking frames sequentially. Accordingly, these policies are derived from a light-weighted skim network together with a simple yet effective policy network within one step. We further extend the proposed method with a frame-number budget, enabling the framework to produce correct predictions with high confidence using as few frames as possible. Experiments on four benchmarks, i.e., ActivityNet, Mini-Kinetics, FCVID, and Mini-Sports1M, demonstrate the effectiveness of our method over previous approaches in terms of accuracy, theoretical computational expense, and actual inference speed. We also evaluate its generalization power across different classifiers, sampled frames, and search spaces. In particular, we achieve 76.9% mAP and 21.7 GFLOPs on ActivityNet with an impressive throughput of 123.9 videos/s on a single Titan Xp GPU.
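A hypothetical simplification of the one-step selection paradigm: given per-frame informativeness scores from a lightweight skim network, pick the top-k frames for the whole video at once rather than deciding frame by frame. The scoring and budget below are illustrative; the actual policy is learned and instance-conditioned.

```python
import torch

def select_frames_one_step(frame_scores, k):
    """Pick the k most informative frames in a single step from
    skim-network scores, instead of making sequential per-frame decisions."""
    idx = torch.topk(frame_scores, k, dim=1).indices  # (B, k)
    return torch.sort(idx, dim=1).values              # restore temporal order

scores = torch.rand(2, 64)   # 2 videos, 64 candidate frames each
frames = select_frames_one_step(scores, k=6)
print(frames)                # 6 temporally ordered frame indices per video
```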
Fake news spreads widely on social media across various domains, which leads to real-world threats in many aspects such as politics, disasters, and finance. Most existing approaches focus on single-domain fake news detection (SFND) and yield unsatisfactory performance when applied to multi-domain fake news detection. As an emerging field, multi-domain fake news detection (MFND) is attracting increasing attention. However, data distributions, such as word frequencies and propagation patterns, vary from domain to domain, i.e., there is domain shift. Facing the challenge of serious domain shift, existing fake news detection techniques perform poorly in multi-domain scenarios, so a specialized model for MFND is required. In this paper, we first design a benchmark fake news dataset with domain labels for MFND, namely Weibo21, which consists of 4,488 fake news items and 4,640 real news items from 9 different domains. We further propose a Multi-domain Fake News Detection model (MDFEND) that aggregates multiple representations extracted by a mixture of experts using a domain gate. Experiments show that MDFEND can significantly improve the performance of multi-domain fake news detection. Our dataset and code are available at https://github.com/kennqiang/mdfend-weibo21.
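A minimal sketch of the aggregation mechanism described above: expert representations are combined by a gate conditioned on a domain embedding. The layer sizes, number of experts, and gate parameterization are assumptions for illustration, not MDFEND's exact configuration.

```python
import torch
import torch.nn as nn

class DomainGatedMoE(nn.Module):
    """Aggregate expert representations with a gate conditioned on a
    domain embedding (illustrative shapes, not the paper's exact setup)."""
    def __init__(self, feat_dim=256, n_experts=5, n_domains=9):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
             for _ in range(n_experts)]
        )
        self.domain_emb = nn.Embedding(n_domains, feat_dim)
        self.gate = nn.Linear(feat_dim, n_experts)
        self.classifier = nn.Linear(feat_dim, 2)  # fake vs. real

    def forward(self, news_feat, domain_id):
        gate_w = torch.softmax(self.gate(self.domain_emb(domain_id)), dim=-1)  # (B, E)
        expert_out = torch.stack([e(news_feat) for e in self.experts], dim=1)  # (B, E, D)
        fused = (gate_w.unsqueeze(-1) * expert_out).sum(dim=1)                 # (B, D)
        return self.classifier(fused)

model = DomainGatedMoE()
logits = model(torch.randn(4, 256), torch.tensor([0, 3, 3, 8]))  # 4 news items
```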
To date, live-cell imaging at the nanometer scale remains challenging. Even though super-resolution microscopy methods enable visualization of subcellular structures below the optical resolution limit, the spatial resolution is still far from sufficient for the in vivo structural reconstruction of biomolecules (e.g., microtubule fibers with a thickness of 24 nm). In this study, we propose an A-Net network and show that the resolution of cytoskeleton images captured by a confocal microscope can be significantly improved by combining the A-Net deep learning network with a DWDC algorithm based on a degradation model. Using the DWDC algorithm to construct new datasets and exploiting the characteristics of the A-Net neural network (i.e., its relatively few layers), we successfully removed the noise and flocculent structures that originally interfered with the cellular structures in the raw images, and improved the spatial resolution by a factor of 10 using a relatively small dataset. We therefore conclude that the proposed algorithm, which combines the A-Net neural network with the DWDC method, is a suitable and universal approach for extracting the structural details of biomolecules, cells, and organs from low-resolution images.
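As an illustration of how a degradation model is typically used to construct training pairs for learned super-resolution, the sketch below synthesizes a low-resolution image from a high-resolution one by blurring and adding noise; the Gaussian blur and noise parameters are placeholder assumptions, not the DWDC degradation model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr_image, blur_sigma=2.0, noise_std=0.05, rng=None):
    """Simulate a low-resolution acquisition from a high-resolution image
    (blur + additive noise) to build paired training data; parameters are
    illustrative assumptions, not the DWDC model."""
    rng = rng or np.random.default_rng()
    lr = gaussian_filter(hr_image, sigma=blur_sigma)
    lr = lr + rng.normal(0.0, noise_std, size=lr.shape)
    return np.clip(lr, 0.0, 1.0)

hr = np.random.rand(256, 256)  # stand-in for a high-resolution frame
lr = degrade(hr)               # (input, target) = (lr, hr) training pair
```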
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and mapping tasks. However, traditional RRN models are not robust, as they are disturbed by object changes, and RRN models that explicitly account for object changes cannot robustly obtain the no-change set. This paper proposes an automatic, robust relative radiometric normalization method based on latent change noise modeling. It exploits the prior knowledge that, under relative radiometric normalization, no-change points exhibit small-scale noise while change points exhibit large-scale radiometric noise, and combines this with a stochastic expectation-maximization method to quickly and robustly extract the no-change set used to learn the relative radiometric normalization mapping function. This grounds our model theoretically in probability theory and mathematical deduction. Specifically, when we select histogram matching as the relative radiometric normalization learning scheme combined with a mixture-of-Gaussians noise model (HM-RRN-MoG), the HM-RRN-MoG model achieves the best performance. Our model is robust against clouds, fog, and changes. Our method also naturally yields a robust evaluation indicator for RRN, namely the root mean square error on the no-change set. We apply the HM-RRN-MoG model to a subsequent vegetation/water change detection task, where it reduces the radiometric contrast and the NDVI/NDWI differences on the no-change set and produces consistent and comparable results. We further utilize the no-change set in a building change detection task, effectively reducing pseudo-changes and improving precision.
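A minimal sketch of the histogram-matching component of HM-RRN-MoG: the source image's radiometry is mapped onto the reference via quantile (CDF) matching. This plain version fits all pixels, whereas the proposed method restricts the fit to the EM-extracted no-change set; the synthetic images are illustrative.

```python
import numpy as np

def histogram_match(source, reference):
    """Map source-image radiometry onto the reference by matching
    cumulative histograms (quantile mapping); the paper's method would
    fit this mapping on the no-change set only."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

src = np.random.rand(128, 128) * 0.6        # darker acquisition
ref = np.random.rand(128, 128) * 0.9 + 0.1  # brighter reference
normalized = histogram_match(src, ref)
```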