In recent years there has been renewed interest in our community in the shape analysis of 3D objects represented by surface meshes, their volumetric interiors, or surface point clouds. This interest has been stimulated in part by the increased availability of RGBD cameras and by applications of computer vision to autonomous driving, medical imaging, and robotics. In these settings, spectral coordinates have shown promise for shape representation, owing to their ability to combine local and global shape properties in a manner that is qualitatively invariant to isometric transformations. Surprisingly, however, such coordinates have thus far typically considered only local surface positional or derivative information. In this paper, we propose to enrich spectral coordinates by equipping them with medial (object width) information. The key idea is to couple surface points that share a medial ball via the weights of the adjacency matrix. We develop a spectral feature using this idea, together with algorithms to compute it. The incorporation of object width and medial coupling has direct benefits, as shown by our experiments on object classification, object segmentation, and surface point correspondence.
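To make the coupling idea concrete, here is a minimal sketch, assuming the medial-ball point pairs are already available (e.g. from a precomputed medial axis transform; the paper's own computation of these pairs is not reproduced). The function name `medial_spectral_coordinates` and all parameter choices are illustrative: the medial couplings simply enter as extra weights in the adjacency matrix before the spectral embedding is computed.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def medial_spectral_coordinates(points, medial_pairs, k=8, n_coords=6,
                                medial_weight=1.0):
    """points: (n, 3) surface point cloud.
    medial_pairs: (m, 2) integer array of point pairs sharing a medial ball,
    assumed precomputed."""
    # Local surface connectivity: symmetric kNN adjacency.
    A = kneighbors_graph(points, k, mode="connectivity")
    A = ((A + A.T) > 0).astype(float)

    # Medial coupling: extra weighted edges between medial-ball partners.
    i, j = medial_pairs[:, 0], medial_pairs[:, 1]
    M = csr_matrix((np.full(len(i), medial_weight), (i, j)), shape=A.shape)
    A = A + M + M.T

    # Spectral coordinates: low-frequency eigenvectors of the normalized
    # graph Laplacian (the trivial constant eigenvector is dropped).
    L = laplacian(csr_matrix(A), normed=True).toarray()
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:n_coords + 1]
```

Setting `medial_weight` to zero recovers plain surface-only spectral coordinates, which makes the effect of the medial coupling easy to isolate in experiments.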
Humans are excellent at perceiving illusory contours. We readily complete contours, shapes, scenes, and even irregular objects when presented with images containing broken fragments with a connected appearance. In vision science, this ability is largely explained by perceptual grouping: a foundational set of processes in human vision that describes how separated elements become grouped. In this paper, we revisit an algorithm called Stochastic Completion Fields (SCFs), which mechanizes a set of such processes -- good continuation, closure, and proximity -- through contour completion. This paper implements a modernized model of the SCF algorithm and uses it in an image editing framework, proposing new methods to complete fragmented contours. We show how the SCF algorithm plausibly mimics human perception. We use SCF-completed contours as guides for inpainting, and show that our guides improve the performance of state-of-the-art models. Furthermore, we show that the SCF helps find edges in high-noise environments. Overall, the algorithms we describe resemble an important mechanism in the human visual system and provide a novel framework from which modern computer vision models can benefit.
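A deliberately simplified illustration of the completion-field idea follows. The actual SCF propagates particles over position *and* orientation states; this toy stand-in uses isotropic 2D diffusion with decay, but preserves the core construction: the completion field is the pointwise product of a field emitted by sources and a field absorbed by sinks.

```python
# Toy stochastic-completion-field sketch (isotropic simplification).
import numpy as np
from scipy.ndimage import gaussian_filter

def completion_field(shape, sources, sinks, steps=200, sigma=1.0, decay=0.98):
    """Unnormalized probability that a decaying random walker from a
    source and one from a sink meet at each pixel."""
    def propagate(seeds):
        field = np.zeros(shape)
        walkers = np.zeros(shape)
        for (r, c) in seeds:
            walkers[r, c] = 1.0
        for _ in range(steps):
            walkers = decay * gaussian_filter(walkers, sigma)  # diffuse, decay
            field += walkers  # accumulate visit probability
        return field

    # Product of source and sink fields: high along likely completions.
    return propagate(sources) * propagate(sinks)

# Two fragment endpoints: the field is brightest along paths joining them.
scf = completion_field((64, 64), sources=[(32, 10)], sinks=[(32, 54)])
```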
Machine learning driven medical image segmentation has become the standard in medical image analysis. However, deep learning models are prone to overconfident predictions. This has led to a renewed focus on calibrated predictions in the medical imaging and broader machine learning communities. Calibrated predictions are estimates of label probabilities that correspond to the true expected value of the label conditioned on the confidence. Such calibrated predictions have utility in a range of medical imaging applications, including surgical planning under uncertainty and active learning systems. At the same time, it is often an accurate volume measurement that is of real practical importance for many medical applications. This work investigates the relationship between model calibration and volume estimation. We demonstrate, both mathematically and empirically, that if a predictor is calibrated per image, the correct volume can be obtained by taking the expectation of the per-pixel/voxel probability scores. Furthermore, we show that convex combinations of calibrated classifiers preserve volume estimation but do not preserve calibration. We therefore conclude that having a calibrated predictor is a sufficient, but not necessary, condition for obtaining an unbiased estimate of the volume. We validate our findings on a collection of 18 different (calibrated) training strategies, on the tasks of glioma volume estimation on BraTS 2018 and ischemic stroke lesion volume estimation on the ISLES 2018 dataset.
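The central identity is easy to demonstrate on toy numbers: for a calibrated predictor, summing per-voxel probability scores gives an unbiased volume, whereas thresholding generally does not. A sketch with simulated probabilities (not real segmentations; the skewed Beta distribution is chosen only to make the thresholding bias visible):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.beta(1, 4, size=100_000)   # calibrated foreground probabilities
y = rng.binomial(1, p)             # labels drawn with exactly those odds
voxel_volume = 1.0                 # assumed volume per voxel (e.g. in ml)

true_volume = y.sum() * voxel_volume
soft_volume = p.sum() * voxel_volume           # expectation of per-voxel scores
hard_volume = (p > 0.5).sum() * voxel_volume   # thresholded segmentation

print(true_volume, soft_volume, hard_volume)
# soft_volume matches true_volume in expectation (~20,000 here), while
# hard_volume (~6,250) is strongly biased for this skewed distribution.
```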
Labeled data is the foundation of most natural language processing tasks. However, labeling data is difficult, and there are often diverse valid beliefs about what the correct data labels should be. So far, dataset creators have acknowledged annotator subjectivity, but have not actively managed it during the annotation process. This has led to partly subjective datasets that fail to serve a clear downstream use. To address this issue, we propose two contrasting data annotation paradigms. The descriptive paradigm encourages annotator subjectivity, whereas the prescriptive paradigm discourages it. Descriptive annotation allows different beliefs to be surveyed and modeled, whereas prescriptive annotation enables the training of models that consistently apply one belief. We discuss the benefits and challenges of implementing both paradigms, and argue that dataset creators should explicitly aim for one or the other to facilitate the intended use of their dataset. Finally, we design an annotation experiment to illustrate the contrast between the two paradigms.
Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, for each new clinical application, the optimal radiomics method must be found manually from the wide range of available options through a heuristic trial-and-error process. In this study, we propose a framework to automatically optimize the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we use automated machine learning, combining random search with ensembling. We evaluate our method on twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer's disease (0.87); and 12) head and neck cancer (0.84). We show that our framework achieves performance competitive with human experts, outperforms a radiomics baseline, and performs similarly to or better than Bayesian optimization and more advanced ensembling approaches. Finally, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
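A minimal sketch of the random-search-plus-ensembling idea using scikit-learn components. The search space below is illustrative and far smaller than the paper's actual collection of workflow algorithms; `sample_pipeline` and `auto_radiomics` are hypothetical names, and the feature matrix is assumed to have at least 30 radiomics features.

```python
import random
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sample_pipeline():
    # Randomly sample one configuration of the modular workflow.
    scaler = random.choice([StandardScaler(), RobustScaler()])
    selector = SelectKBest(f_classif, k=random.randint(5, 30))
    clf = random.choice([
        LogisticRegression(C=10 ** random.uniform(-2, 2), max_iter=1000),
        RandomForestClassifier(n_estimators=random.randint(50, 300)),
        SVC(C=10 ** random.uniform(-2, 2), probability=True),
    ])
    return Pipeline([("scale", scaler), ("select", selector), ("clf", clf)])

def auto_radiomics(X, y, n_trials=50, top_k=5):
    # Random search over workflow configurations, scored by cross-validation.
    trials = [sample_pipeline() for _ in range(n_trials)]
    scores = [cross_val_score(p, X, y, cv=5, scoring="roc_auc").mean()
              for p in trials]
    # Ensemble the best-scoring workflows by soft voting.
    best = np.argsort(scores)[-top_k:]
    ensemble = VotingClassifier([(f"m{i}", trials[i]) for i in best],
                                voting="soft")
    return ensemble.fit(X, y)
```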
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
Image classification with small datasets has been an active research area in the recent past. However, as research in this area is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
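For concreteness, here is a sketch of the tuning protocol the benchmark advocates: random search over optimizer hyper-parameters of a plain cross-entropy baseline, selected on held-out validation data. The data, model, and search space below are toy stand-ins, not the paper's actual setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier  # trained with cross-entropy

X, y = make_classification(n_samples=500, n_features=64, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
best_score, best_params = -np.inf, None
for _ in range(30):  # random search over optimizer hyper-parameters
    params = dict(
        learning_rate_init=10.0 ** rng.uniform(-4, -1),
        alpha=10.0 ** rng.uniform(-6, -2),  # L2 weight decay
        hidden_layer_sizes=(int(rng.choice([64, 128, 256])),),
    )
    clf = MLPClassifier(max_iter=300, random_state=0, **params)
    clf.fit(X_train, y_train)
    score = clf.score(X_val, y_val)  # held-out validation accuracy
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```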
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It aims to adjust attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is not generally made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance and provide insights into its applicability on different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
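A toy version of this experimental setup: five common scikit-learn scalers plus a no-scaling baseline, evaluated with a scale-sensitive classifier on one small dataset. Whether these match the paper's exact five techniques is an assumption; the point is only to show how the comparison is run.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import (
    StandardScaler, MinMaxScaler, MaxAbsScaler, RobustScaler,
    QuantileTransformer)

X, y = load_wine(return_X_y=True)
scalers = {
    "none": None,
    "standard": StandardScaler(),
    "min-max": MinMaxScaler(),
    "max-abs": MaxAbsScaler(),
    "robust": RobustScaler(),
    "quantile": QuantileTransformer(n_quantiles=100),
}
for name, scaler in scalers.items():
    # kNN relies on distances, so it is highly sensitive to attribute scales.
    model = (KNeighborsClassifier() if scaler is None
             else make_pipeline(scaler, KNeighborsClassifier()))
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:>10}: {acc:.3f}")
```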
The devastation caused by the coronavirus pandemic makes it imperative to design automated techniques for fast and accurate detection. We propose a novel non-invasive tool, using deep learning and imaging, for delineating COVID-19 infection in lungs. The Ensembling Attention-based Multi-scaled Convolution network (EAMC), employing Leave-One-Patient-Out (LOPO) training, exhibits high sensitivity and precision in outlining infected regions along with assessment of severity. The Attention module combines contextual with local information, at multiple scales, for accurate segmentation. Ensemble learning integrates heterogeneity of decision through different base classifiers. The superiority of EAMC, even with severe class imbalance, is established through comparison with existing state-of-the-art learning models over four publicly-available COVID-19 datasets. The results are suggestive of the relevance of deep learning in providing assistive intelligence to medical practitioners, when they are overburdened with patients as in pandemics. Its clinical significance lies in its unprecedented scope in providing low-cost decision-making for patients lacking specialized healthcare at remote locations.
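The LOPO protocol itself is easy to make concrete: every patient's images are held out together, so no patient leaks between training and test sets. A sketch with scikit-learn's LeaveOneGroupOut and a stand-in classifier (the EAMC network itself is not reproduced; data and labels are toy values):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression  # stand-in for EAMC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))           # 120 images, toy features
y = rng.integers(0, 2, size=120)         # toy labels
patients = np.repeat(np.arange(10), 12)  # 10 patients, 12 images each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patients):
    # All images of one patient are held out per fold.
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print(np.mean(scores))
```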
Objective: Imbalances of the electrolyte concentration levels in the body can lead to catastrophic consequences, but accurate and accessible measurements could improve patient outcomes. While blood tests provide accurate measurements, they are invasive and the laboratory analysis can be slow or inaccessible. In contrast, an electrocardiogram (ECG) is a widely adopted tool which is quick and simple to acquire. However, the problem of estimating continuous electrolyte concentrations directly from ECGs is not well-studied. We therefore investigate if regression methods can be used for accurate ECG-based prediction of electrolyte concentrations. Methods: We explore the use of deep neural networks (DNNs) for this task. We analyze the regression performance across four electrolytes, utilizing a novel dataset containing over 290000 ECGs. For improved understanding, we also study the full spectrum from continuous predictions to binary classification of extreme concentration levels. To enhance clinical usefulness, we finally extend to a probabilistic regression approach and evaluate different uncertainty estimates. Results: We find that the performance varies significantly between different electrolytes, which is clinically justified in the interplay of electrolytes and their manifestation in the ECG. We also compare the regression accuracy with that of traditional machine learning models, demonstrating superior performance of DNNs. Conclusion: Discretization can lead to good classification performance, but does not help solve the original problem of predicting continuous concentration levels. While probabilistic regression demonstrates potential practical usefulness, the uncertainty estimates are not particularly well-calibrated. Significance: Our study is a first step towards accurate and reliable ECG-based prediction of electrolyte concentration levels.
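A sketch of the probabilistic-regression extension under one common formulation (the paper's exact parameterization is an assumption): the network outputs a per-sample mean and log-variance and is trained with the Gaussian negative log-likelihood. Toy tensors stand in for ECG waveforms and concentrations, and the architecture is purely illustrative.

```python
import torch
import torch.nn as nn

class ProbRegressor(nn.Module):
    def __init__(self, in_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mean_head = nn.Linear(128, 1)
        self.logvar_head = nn.Linear(128, 1)  # log-variance for stability

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # -log N(target | mean, exp(logvar)), up to an additive constant.
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

model = ProbRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 512)     # toy "ECG" batch
target = torch.randn(32, 1)  # toy electrolyte concentrations
for _ in range(100):
    mean, logvar = model(x)
    loss = gaussian_nll(mean, logvar, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
# exp(logvar) gives a per-sample uncertainty estimate alongside the mean,
# which is what the calibration evaluation in the paper assesses.
```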