The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while receiving prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, 32% of participants stated that they did not have enough time for it. 25% perceived the available infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
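As a minimal sketch of the most commonly reported strategy for data samples that are too large to process at once, the snippet below draws random fixed-size patches from a 3D volume for training. The shapes, patch size, and sampling loop are illustrative assumptions, not taken from any particular challenge submission.

```python
# Patch-based training sketch: sample a random sub-volume (and matching label crop)
# per iteration instead of feeding the full volume to the network.
import numpy as np

def sample_patch(volume, labels, patch_size=(64, 64, 64), rng=None):
    """Draw one random patch and the matching label crop from a 3D volume."""
    if rng is None:
        rng = np.random.default_rng()
    starts = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices], labels[slices]

# Example: a volume too large to fit in GPU memory at once.
volume = np.random.rand(512, 512, 256).astype(np.float32)
labels = (np.random.rand(512, 512, 256) > 0.5).astype(np.int64)
patch, patch_labels = sample_patch(volume, labels)
print(patch.shape)  # (64, 64, 64)
```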
Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data are often lacking, making automatic segmentation of fine mouse brain structures a very challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinct contrasts across brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from a single one in a structure-preserving manner, thereby improving segmentation performance by imputing missing modalities and fusing multi-modality information. Our results demonstrate that the translation performance of our method outperforms state-of-the-art methods. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with average Dice coefficients of 90.0% (T2w) and 87.9% (T1w), an improvement of around +10% over state-of-the-art algorithms. Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic use at https://github.com/yu02019.
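To make the disentanglement idea concrete, here is a minimal PyTorch sketch of the general scheme the abstract describes: one encoder for modality-invariant structure, one for modality-specific appearance, and a decoder that recombines them to translate between MR contrasts. Layer sizes, the 2D toy setting, and the fusion rule are assumptions for illustration; the actual MouseGAN++ architecture, losses, and training schedule are not reproduced here.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):          # modality-invariant structure
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):            # modality-specific appearance code
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):                 # recombine content with a target style
    def __init__(self, dim=8):
        super().__init__()
        self.style_proj = nn.Linear(dim, 16)
        self.net = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, content, style):
        s = self.style_proj(style)[:, :, None, None]
        return self.net(content + s)

t1w = torch.randn(2, 1, 64, 64)           # toy T1w batch
t2w = torch.randn(2, 1, 64, 64)           # toy T2w batch
content, style_t2 = ContentEncoder()(t1w), StyleEncoder()(t2w)
fake_t2w = Decoder()(content, style_t2)   # T1w structure rendered with T2w appearance
print(fake_t2w.shape)                     # torch.Size([2, 1, 64, 64])
```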
Adversarial attacks can easily fool object recognition systems based on deep neural networks (DNNs). Although many defense methods have been proposed in recent years, most of them can still be adaptively evaded. One reason for this weak adversarial robustness may be that DNNs are supervised only by category labels and lack the part-based inductive bias present in human recognition. Inspired by a well-known theory in cognitive psychology, recognition-by-components, we propose a novel object recognition model, ROCK (Recognizing Object by Components with human prior Knowledge). It first segments parts of objects from images, then scores the part segmentation results using predefined human prior knowledge, and finally outputs a prediction based on the scores. The first stage of ROCK corresponds to the process of decomposing objects into parts in human vision; the second corresponds to the decision process of the human brain. ROCK shows better robustness than classical recognition models across various attack settings. These results encourage researchers to rethink the rationality of currently widely used DNN-based object recognition models and to explore the potential of part-based models, once important but recently neglected, for improving robustness.
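A toy sketch of the three-stage pipeline described above: (1) predict a part segmentation, (2) score it against a table of prior part knowledge per class, (3) output the class with the best score. The part vocabulary, the set-overlap scoring rule, and the dummy segmenter are illustrative assumptions; ROCK's actual segmentation network and scoring function are not reproduced here.

```python
import numpy as np

# Hypothetical prior knowledge: which parts each class is expected to contain.
PRIOR_PARTS = {
    "bird":  {"beak", "wing", "tail"},
    "plane": {"wing", "tail", "engine"},
}

def segment_parts(image):
    """Stand-in for a part segmentation network; returns the set of detected parts."""
    return {"beak", "wing", "tail"}          # dummy output for illustration

def score_against_prior(detected, expected):
    """Overlap between detected parts and the parts the prior expects (IoU over sets)."""
    return len(detected & expected) / len(detected | expected)

def classify(image):
    detected = segment_parts(image)
    scores = {cls: score_against_prior(detected, parts)
              for cls, parts in PRIOR_PARTS.items()}
    return max(scores, key=scores.get), scores

label, scores = classify(np.zeros((224, 224, 3)))
print(label, scores)   # 'bird' scores highest because all of its expected parts were found
```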
Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack, and the attribute inference attack, where an attacker can infer useful information such as the membership, the reconstruction, or sensitive attributes of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a method, named PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier so that the purified confidence scores are indistinguishable between members and non-members in individual shape, statistical distribution, and prediction label. The experimental results show that PURIFIER defends against membership inference attacks effectively and efficiently, outperforming previous defense methods while incurring negligible utility loss. Moreover, our further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks. For example, in our experiments the inversion error is raised by more than a factor of four on the Facescrub530 classifier, and the attribute inference accuracy drops significantly when PURIFIER is deployed.
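Below is a minimal sketch of the idea described above: an autoencoder-style "purifier" that reshapes the classifier's confidence vector before it is released, with the goal of keeping the predicted label intact. The tiny MLP, its sizes, and the inference-only usage shown here are illustrative assumptions, not the PURIFIER training recipe.

```python
import torch
import torch.nn as nn

class ConfidencePurifier(nn.Module):
    """Encode and re-decode a confidence vector before releasing it to users."""
    def __init__(self, num_classes=10, hidden=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, num_classes)

    def forward(self, confidences):
        return torch.softmax(self.decoder(self.encoder(confidences)), dim=-1)

purifier = ConfidencePurifier()
raw = torch.softmax(torch.randn(5, 10), dim=-1)   # confidences from the target model
purified = purifier(raw)
# In the paper's setting the purifier is trained so that purified scores stay useful
# (same top-1 label) but no longer leak membership; these untrained weights will not.
print(raw.argmax(dim=-1), purified.argmax(dim=-1))
```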
This paper introduces a structure-deformable land-air robot that possesses both excellent ground driving and flying abilities, with a smooth switching mechanism between the two modes. A detailed coupled dynamics model of the proposed robot is established, covering the rotors, the chassis, and especially the deformable structures. Furthermore, taking fused locomotion and complex near-ground situations into consideration, a model-based controller is designed for landing and mode switching under various harsh conditions, in which the two fused motion modes cooperate. The entire system is implemented in ADAMS/Simulink simulation and in practice, and we conduct experiments under various complex scenarios. The results show that our robot can switch between land and air modes swiftly and smoothly, and that the designed controller effectively improves landing flexibility and reliability.
The past decade has witnessed a flourishing development of computational methods and dataset curation for AI-aided drug discovery (AIDD). However, real-world drug datasets frequently exhibit highly imbalanced distributions, which is largely overlooked by the current literature but may severely compromise the fairness and generalization of machine learning applications. Motivated by this observation, we introduce ImDrug, a comprehensive benchmark with an open-source Python library consisting of 4 imbalance settings, 11 AI-ready datasets, 54 learning tasks, and 16 baseline algorithms tailored for imbalanced learning. It provides an accessible and customizable testbed for problems and solutions spanning a broad range of the drug discovery pipeline, such as molecular modeling, drug-target interaction, and retrosynthesis. We conduct extensive empirical studies with new evaluation metrics, demonstrating that existing algorithms fall short of addressing medicinal and pharmaceutical challenges under data imbalance. We believe ImDrug opens up avenues for future research and development on real-world challenges at the intersection of AIDD and deep imbalanced learning.
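To illustrate the imbalanced-learning setting that ImDrug targets (not its API, which is not reproduced here), the snippet below computes the class imbalance ratio of a toy labeled drug dataset and derives inverse-frequency class weights, one of the standard baselines for such settings. The labels and the weighting rule are illustrative assumptions.

```python
from collections import Counter

labels = ["active"] * 20 + ["inactive"] * 980      # a heavily imbalanced toy assay
counts = Counter(labels)

imbalance_ratio = max(counts.values()) / min(counts.values())
print(f"imbalance ratio: {imbalance_ratio:.0f}:1")  # 49:1

# Inverse-frequency re-weighting: rare classes get proportionally larger weights.
total = sum(counts.values())
class_weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(class_weights)   # {'active': 25.0, 'inactive': ~0.51}
```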
Knowledge graphs (KGs) have become a prominent form of knowledge representation in recent years. Because they focus on nominal entities and their relations, conventional knowledge graphs are static and encyclopedic in nature. Building on this, event knowledge graphs (event KGs) model temporal and spatial dynamics through text processing to facilitate downstream applications such as question answering, recommendation, and intelligent search. On the other hand, existing KG research mostly focuses on text processing and static facts, ignoring the large amount of dynamic behavioral information contained in photos, videos, and pre-trained neural networks. Moreover, no effort has been made to incorporate behavioral intelligence information into knowledge graphs for deep reinforcement learning (DRL) and robot learning. In this paper, we propose a novel dynamic Knowledge and Skill Graph (KSG), and we develop a basic and specific KSG based on CN-DBpedia. The nodes are divided into entity nodes and attribute nodes: entity nodes contain agents, environments, and skills (DRL policies or policy representations), while attribute nodes contain entity descriptions, pre-trained networks, and offline datasets. The KSG can search skills of different agents in various environments and provide transferable information for acquiring new skills. To the best of our knowledge, this is the first study to investigate a dynamic KSG for skill retrieval and learning. Extensive experimental results on new skill learning show that the KSG improves the efficiency of learning new skills.
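A toy sketch of the node layout described above: entity nodes (agent, environment, skill) and attribute nodes (pre-trained network, offline dataset), linked so that skills can be retrieved per agent and environment. The node names, relation labels, and the use of networkx are illustrative assumptions, not the KSG schema.

```python
import networkx as nx

ksg = nx.DiGraph()

# Entity nodes
ksg.add_node("HalfCheetah", kind="environment")
ksg.add_node("agent_v1", kind="agent")
ksg.add_node("run_forward", kind="skill")        # a DRL policy

# Attribute nodes attached to the skill
ksg.add_node("run_forward/policy_net", kind="pretrained_network")
ksg.add_node("run_forward/offline_data", kind="offline_dataset")

ksg.add_edge("agent_v1", "run_forward", relation="has_skill")
ksg.add_edge("run_forward", "HalfCheetah", relation="applicable_in")
ksg.add_edge("run_forward", "run_forward/policy_net", relation="has_attribute")
ksg.add_edge("run_forward", "run_forward/offline_data", relation="has_attribute")

# Skill retrieval: which skills of agent_v1 are applicable in HalfCheetah?
skills = [s for s in ksg.successors("agent_v1")
          if ksg.nodes[s]["kind"] == "skill" and ksg.has_edge(s, "HalfCheetah")]
print(skills)   # ['run_forward']
```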
In most real-world recommendation scenarios, users exhibit multiple types of behavior (e.g., click, add-to-cart, purchase), which are beneficial for learning users' multifaceted preferences. Since the multiple behavior types explicitly exhibit dependencies, effectively modeling these complex behavior dependencies is crucial for multi-behavior prediction. State-of-the-art multi-behavior models take all historical interactions as input and learn behavior dependencies indiscriminately. However, different behaviors may reflect different aspects of user preference, which means some irrelevant interactions may act as noise when predicting the target behavior. To address these limitations, we introduce multi-interest learning to multi-behavior recommendation. More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests across different behaviors. CKML introduces two modules, Coarse-grained Interest Extraction (CIE) and Fine-grained Behavioral Correlation (FBC), which work jointly to capture fine-grained behavioral dependencies. CIE uses knowledge-aware information to extract the initial representation of each interest. FBC incorporates a dynamic routing scheme to further allocate each behavior among interests. In addition, we use a self-attention mechanism to correlate information from different behaviors at the interest level. A sketch of this interest-level design is given below. Empirical results on three real-world datasets verify the effectiveness and efficiency of our model in leveraging multi-behavior data. Further experiments demonstrate the effectiveness of each module and the robustness and superiority of the shared-and-specific modeling paradigm for multi-behavior data.
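The following minimal PyTorch sketch illustrates the multi-interest idea at a high level: each behavior's aggregated interaction embedding is softly assigned to a small set of interest slots, and the resulting interests are then correlated with self-attention. The dimensions, the soft-assignment rule, and the single attention layer are illustrative assumptions; CKML's CIE/FBC modules and dynamic routing are not reproduced here.

```python
import torch
import torch.nn as nn

num_behaviors, num_interests, dim = 3, 4, 32     # e.g., click / add-to-cart / purchase
behavior_emb = torch.randn(num_behaviors, dim)   # one aggregated embedding per behavior

# Soft assignment of behaviors to interest slots (a crude stand-in for routing).
interest_slots = nn.Parameter(torch.randn(num_interests, dim))
assignment = torch.softmax(behavior_emb @ interest_slots.t(), dim=-1)   # (3, 4)
interests = assignment.t() @ behavior_emb                               # (4, 32)

# Correlate the interests with self-attention at the interest level.
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
interests_corr, _ = attn(interests[None], interests[None], interests[None])
print(interests_corr.shape)   # torch.Size([1, 4, 32])
```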
Synthetic health data have the potential to mitigate privacy concerns when sharing data to support biomedical research and the development of innovative healthcare applications. Modern approaches for generating such data, based on machine learning and in particular generative adversarial network (GAN) methods, continue to evolve and show great potential. Nevertheless, there is a lack of a systematic assessment framework for benchmarking these methods and determining which are most suitable. In this work, we introduce a generalizable benchmarking framework to assess key characteristics of synthetic health data with respect to utility and privacy metrics. We apply the framework to evaluate synthetic data generation methods on electronic health record (EHR) data from two large academic medical centers. The results show that there is a utility-privacy tradeoff in sharing synthetic EHR data. They further show that no method is unambiguously best on all criteria in every use case, which makes it clear why synthetic data generation methods need to be assessed in context.
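As an illustration of the kind of utility and privacy metrics such a benchmarking framework might compute, the snippet below compares per-feature marginals between real and synthetic data (utility) and measures each synthetic record's distance to its closest real record (a common privacy proxy). The specific metrics, toy data shapes, and interpretations are assumptions, not the framework's actual criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.integers(0, 2, size=(200, 50)).astype(float)         # toy binary EHR features
synthetic = rng.integers(0, 2, size=(200, 50)).astype(float)

# Utility: how closely do per-feature prevalences match between real and synthetic data?
marginal_gap = np.abs(real.mean(axis=0) - synthetic.mean(axis=0)).mean()
print(f"mean marginal gap: {marginal_gap:.3f}")                  # lower is better

# Privacy proxy: for each synthetic record, the distance to its nearest real record.
dists = np.linalg.norm(synthetic[:, None, :] - real[None, :, :], axis=-1)
dcr = dists.min(axis=1)                                          # distance to closest record
print(f"median distance to closest real record: {np.median(dcr):.2f}")  # higher is safer
```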
The task of dense video captioning (DVC) aims to generate captions with timestamps for multiple events in a single video. Semantic information plays an important role in both the localization and the description components of DVC. We propose a semantic-assisted dense video captioning model based on an encoder-decoder framework. In the encoding stage, we design a concept detector to extract semantic information, which is then fused with multimodal visual features to fully represent the input video. In the decoding stage, we design a classification head, in parallel with the localization and captioning heads, to provide semantic supervision. Our method achieves significant improvements on the YouMakeup dataset under DVC evaluation metrics and achieves high performance in the Makeup Dense Video Captioning (MDVC) task of the PIC 4th Challenge.
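A minimal PyTorch sketch of the encoding-stage fusion described above: a concept detector predicts per-frame concept probabilities, which are concatenated with visual features and projected before decoding. The feature sizes, concept vocabulary size, and concatenation-based fusion are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

num_frames, visual_dim, num_concepts, hidden = 16, 512, 100, 256

visual_feats = torch.randn(1, num_frames, visual_dim)           # frame features from a video backbone

concept_detector = nn.Linear(visual_dim, num_concepts)          # per-frame concept logits
concept_probs = torch.sigmoid(concept_detector(visual_feats))   # multi-label concept scores

fuse = nn.Linear(visual_dim + num_concepts, hidden)             # fuse semantics with visual features
fused = torch.relu(fuse(torch.cat([visual_feats, concept_probs], dim=-1)))
print(fused.shape)   # torch.Size([1, 16, 256]) -- input to the localization/captioning decoder
```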