The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
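The k-fold cross-validation and score-averaging ensembling strategies the survey asks about can be sketched in a few lines. This is an illustrative outline, not any participant's actual pipeline; the helper names and the simple mean-of-scores ensembling are assumptions for the sketch.

```python
import statistics

def kfold_splits(n_samples, k=5):
    """Yield (train_indices, val_indices) pairs, one per fold."""
    # Build k near-equal contiguous folds over the sample indices.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # Each fold takes a turn as the validation set.
    for i, val in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, val

def ensemble_predict(model_outputs):
    """Average per-sample scores across models -- a minimal form of ensembling."""
    return [statistics.mean(scores) for scores in zip(*model_outputs)]
```

With identical models trained on different folds, the same `ensemble_predict` averaging applies; heterogeneous ensembles would simply mix different architectures into `model_outputs`.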
Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs. However, CCR suffers from two main transitive problems: the threshold effect and scene drift. In other words, the causal pairs to be spliced may have conflicting threshold boundaries or scenarios. To address these issues, we propose a novel Reliable Causal chain reasoning framework (ReCo), which introduces exogenous variables to represent the threshold and scene factors of each causal pair within the causal chain, and estimates the threshold and scene contradictions across exogenous variables via a structural causal recurrent neural network (SRNN). Experiments show that ReCo outperforms a series of strong baselines on both Chinese and English CCR datasets. Moreover, by injecting reliable causal chain knowledge distilled by ReCo, BERT can achieve better performance on four downstream causal-related tasks than BERT models enhanced by other kinds of knowledge.
Deep learning models have achieved excellent recognition results on large-scale video benchmarks. However, they perform poorly when applied to videos of rare scenes or objects, largely due to the bias of existing video datasets. We tackle this problem from two different perspectives: algorithm and dataset. From the algorithmic perspective, we propose Spatial-aware Multi-Aspect Debiasing (SMAD), which incorporates both explicit debiasing through multi-aspect adversarial training and implicit debiasing through a spatial actionness reweighting module, to learn less biased representations from the action side. To neutralize the intrinsic dataset bias, we propose OmniDebias, which selectively exploits web data for joint training and achieves higher performance with far less web data. To verify the effectiveness, we establish evaluation protocols and conduct extensive experiments on both re-distributed splits of existing datasets and a new evaluation dataset focusing on rare scenes. We also show that the debiased representations generalize better when transferred to other datasets and tasks.
Deep neural networks have recently succeeded in digital halftoning using vanilla convolutional layers with high parallelism. However, existing deep methods cannot generate halftones with satisfactory blue-noise properties and require complex training schemes. In this paper, we propose a halftoning method based on multi-agent deep reinforcement learning, called Halftoners, which learns a shared policy to generate high-quality halftone images. Specifically, we regard the decision on each binary pixel value as the action of a virtual agent, whose policy is trained by a low-variance policy gradient. Moreover, the blue-noise property is achieved through a novel anisotropy-suppressing loss function. Experiments show that our halftoning method produces high-quality halftones while remaining relatively fast.
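For context on the task itself, classical halftoning is commonly done by error diffusion rather than learning. The sketch below is Floyd-Steinberg dithering, a standard classical baseline and not the Halftoners method: each pixel of a grayscale image in [0, 1] is binarized, and the quantization error is diffused to unvisited neighbours.

```python
def floyd_steinberg(image):
    """Classical error-diffusion halftoning on a 2D grayscale image in [0, 1]."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]  # work on a copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # binarize the current pixel
            img[y][x] = new
            err = old - new
            # Push the quantization error to not-yet-visited neighbours.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```

The Halftoners approach described above instead learns the per-pixel binary decisions jointly, which is what allows it to optimize a blue-noise objective directly.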
Ultrasound (US) is widely used for its advantages of real-time imaging, radiation-free operation, and portability. In clinical practice, analysis and diagnosis often rely on US sequences rather than a single image to obtain dynamic anatomical information. This is challenging for novices, because practicing with adequate videos from patients is clinically infeasible. In this paper, we propose a novel framework to synthesize high-fidelity US videos. Specifically, the synthesized videos are generated by animating a source content image based on the motion of a given driving video. Our highlights are threefold. First, leveraging the advantages of self-supervised learning, our proposed system is trained in a weakly supervised manner for keypoint detection. These keypoints then provide vital information for handling complex, highly dynamic motions in US videos. Second, we decouple content and texture learning using dual decoders to effectively reduce the difficulty of model learning. Finally, we adopt an adversarial training strategy with a GAN loss to further sharpen the generated videos, narrowing the gap between real and synthesized videos. We validate our method on a large in-house pelvic dataset with high dynamic motion. Extensive evaluation metrics and a user study demonstrate the effectiveness of our proposed method.
Web search is an essential way for humans to obtain information, but understanding the contents of web pages remains a great challenge for machines. In this paper, we introduce the task of structural reading comprehension (SRC) on the web. Given a web page and a question about it, the task is to find the answer from the web page. This task requires a system to understand not only the semantics of the text but also the structure of the web page. Moreover, we propose WebSRC, a novel web-based structural reading comprehension dataset. WebSRC consists of 400K question-answer pairs collected from 6.4K web pages. Along with the QA pairs, our dataset also provides the corresponding HTML source code, screenshots, and metadata. Each question in WebSRC requires a certain structural understanding of the web page to answer, and the answer is either a text span on the web page or yes/no. We evaluate various baselines on our dataset to show the difficulty of our task. We also investigate the usefulness of structure information and visual features. Our dataset and baselines are publicly available at https://x-lance.github.io/websrc/.
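A system for this task has to track where in the page's tag structure each piece of text lives. A minimal stdlib sketch (not one of the WebSRC baselines) that pairs each visible text span with its tag path:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text spans together with the tag path they sit under."""
    def __init__(self):
        super().__init__()
        self.path = []   # stack of currently open tags, e.g. ['table', 'tr', 'td']
        self.spans = []  # (tag_path, text) pairs in document order

    def handle_starttag(self, tag, attrs):
        self.path.append(tag)

    def handle_endtag(self, tag):
        if tag in self.path:
            # Pop back to (and including) the matching open tag.
            while self.path and self.path.pop() != tag:
                pass

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.spans.append(("/".join(self.path), text))

def structured_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return parser.spans
```

Even this crude pairing shows why structure matters: two identical strings under different tag paths (say, a table header versus a cell) carry different answers to the same question.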
Wireless communication in the terahertz band (0.1--10 THz) is regarded as one of the key enabling technologies for future sixth-generation (6G) wireless systems, going beyond massive multiple-input multiple-output (massive MIMO) technology. However, the very high propagation attenuation and molecular absorption at THz frequencies often limit the signal transmission distance and coverage range. Motivated by recent breakthroughs on reconfigurable intelligent surfaces (RIS) for realizing smart radio propagation environments, we propose a novel hybrid beamforming scheme for multi-hop RIS-assisted communication networks to improve coverage at THz-band frequencies. In particular, multiple passive and controllable RISs are deployed to assist transmission between the base station (BS) and multiple single-antenna users. We investigate the joint design of the digital beamforming matrix at the BS and the analog beamforming matrices at the RISs, leveraging recent advances in deep reinforcement learning (DRL) to combat the propagation loss. To improve the convergence of the proposed DRL-based algorithm, two algorithms are then designed to initialize the digital and analog beamforming matrices using an alternating optimization technique. Simulation results show that our proposed scheme is able to improve the coverage range of THz communications by 50% compared with the benchmarks. Furthermore, it is also shown that our proposed DRL-based method is a state-of-the-art approach to the NP-hard beamforming problem, especially when the signals of the RIS-assisted THz communication network experience multiple hops.
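The steep propagation attenuation at THz frequencies can be illustrated with the textbook free-space path loss formula (this is the standard Friis-based expression, not the paper's channel model, and it ignores the molecular absorption the paper also accounts for):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3e8  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Moving two decades up in frequency (3 GHz -> 0.3 THz) at a fixed 10 m link
# adds 20 * log10(100) = 40 dB of free-space loss.
gap_db = fspl_db(10, 3e11) - fspl_db(10, 3e9)
```

This frequency-squared scaling of the loss is the motivation for placing RISs along the route: each hop shortens the effective free-space segment the signal must cross.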
The Transformer is widely used in natural language processing tasks. To train a Transformer, however, one usually needs a carefully designed learning rate warm-up stage, which is shown to be crucial to the final performance but slows down the optimization and requires more hyperparameter tuning. In this paper, we first study theoretically why the learning rate warm-up stage is essential and show that the location of layer normalization matters. Specifically, we prove with mean field theory that at initialization, for the originally designed Post-LN Transformer, which places the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Therefore, using a large learning rate on those gradients makes the training unstable. The warm-up stage is practically helpful for avoiding this problem. On the other hand, our theory also shows that if the layer normalization is put inside the residual blocks (recently proposed as the Pre-LN Transformer), the gradients are well-behaved at initialization. This motivates us to remove the warm-up stage for the training of Pre-LN Transformers. We show in our experiments that Pre-LN Transformers without the warm-up stage can reach results comparable to the baselines while requiring significantly less training time and hyperparameter tuning on a wide range of applications.
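The two layer-normalization placements can be sketched on a single vector. This toy uses plain Python lists rather than tensors, and `sublayer` stands in for an attention or feed-forward block; only the position of `layer_norm` relative to the residual sum differs.

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a vector to zero mean and (approximately) unit variance."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def post_ln_block(x, sublayer):
    """Post-LN (original design): normalize AFTER adding the residual."""
    return layer_norm([a + b for a, b in zip(x, sublayer(x))])

def pre_ln_block(x, sublayer):
    """Pre-LN: normalize the sublayer input; the residual path stays unnormalized."""
    return [a + b for a, b in zip(x, sublayer(layer_norm(x)))]
```

In Post-LN, every block's output passes through a normalization, which is what reshapes the gradients near the output layer; in Pre-LN, the identity path from input to output is never normalized, matching the well-behaved initialization gradients the paper proves.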
Cascade is a classic yet powerful architecture that has boosted performance on various tasks. However, how to introduce cascade to instance segmentation remains an open question. A simple combination of Cascade R-CNN and Mask R-CNN only brings limited gain. In exploring a more effective approach, we find that the key to a successful instance segmentation cascade is to fully leverage the reciprocal relationship between detection and segmentation. In this work, we propose a new framework, Hybrid Task Cascade (HTC), which differs in two important aspects: (1) instead of performing cascaded refinement on the two tasks separately, it interweaves them for joint multi-stage processing; (2) it adopts a fully convolutional branch to provide spatial context, which helps distinguish hard foreground from cluttered background. Overall, this framework can progressively learn more discriminative features while integrating complementary features at each stage. Without bells and whistles, a single HTC obtains 38.4% mask AP on the MSCOCO dataset, a 1.5% improvement over a strong Cascade Mask R-CNN baseline. Moreover, our overall system achieves 48.6 mask AP on the test-challenge split, ranking 1st in the COCO 2018 Challenge Object Detection Task. Code is available at: https://github.com/open-mmlab/mmdetection.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and suggest potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
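The finding that introducing multiple relations helps can be illustrated with a minimal multi-relational aggregation step (a sketch of the general idea, not one of MGTAB's evaluated models): each relation type contributes its own neighbour average, which is concatenated to the node's own features.

```python
def aggregate_multi_relational(features, relations):
    """For each node, append one neighbour-feature average per relation type.

    features:  list of per-node feature vectors (lists of floats)
    relations: list of adjacency dicts, one per relation type,
               mapping node index -> list of neighbour indices
    """
    n, dim = len(features), len(features[0])
    out = []
    for v in range(n):
        row = list(features[v])  # start from the node's own features
        for adj in relations:
            nbrs = adj.get(v, [])
            if nbrs:
                agg = [sum(features[u][d] for u in nbrs) / len(nbrs) for d in range(dim)]
            else:
                agg = [0.0] * dim  # no neighbours under this relation
            row.extend(agg)
        out.append(row)
    return out
```

Because each relation keeps its own slot in the output vector, a downstream classifier can weight, say, follower edges differently from retweet edges, which single-relation aggregation cannot do.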