We describe a general approach to smoothing and mapping with a class of discrete-continuous factor graphs commonly encountered in robotics applications. While there are openly available tools providing flexible and easy-to-use interfaces for specifying and solving optimization problems formulated as either discrete or continuous graphical models, no similarly general tools currently exist that provide the same functionality for hybrid discrete-continuous problems. We aim to address this gap. In particular, we provide a library, DC-SAM, that extends existing tools for optimization problems defined in terms of factor graphs to the setting of discrete-continuous models. A key contribution of our work is a novel solver for efficiently recovering approximate solutions to discrete-continuous optimization problems. The key insight of our approach is that while joint inference over continuous and discrete state spaces is often hard, many commonly encountered discrete-continuous problems can naturally be split into a "discrete part" and a "continuous part" that can each be solved readily. Exploiting this structure, we optimize the discrete and continuous variables in an alternating fashion. As a result, our proposed work enables straightforward representation of, and approximate inference in, discrete-continuous graphical models. We also provide a method to recover the uncertainty of the estimates of both the discrete and the continuous variables. We demonstrate the versatility of our approach through its application to three distinct robot perception tasks: point cloud registration, robust pose graph optimization, and object-based mapping and localization.
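The alternating scheme described above is easy to illustrate in isolation. Below is a minimal sketch on a toy robust line-fitting problem, with per-point inlier/outlier labels as the discrete variables; this is not the DC-SAM API, and all names are illustrative.

```python
# Alternating discrete/continuous optimization on a toy problem:
# fit y = a*x + b while labeling each point inlier/outlier.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, 50)
y[:10] += rng.uniform(5, 15, 10)              # corrupt some points (outliers)

# Initialize the continuous variables (a, b) with a plain least-squares fit.
A_all = np.stack([x, np.ones_like(x)], axis=1)
theta, *_ = np.linalg.lstsq(A_all, y, rcond=None)

for _ in range(10):
    # Discrete step: with theta fixed, label each point by thresholding its
    # residual (per-factor argmin between "inlier" and "outlier").
    residuals = y - (theta[0] * x + theta[1])
    inlier = residuals ** 2 < 3.0 ** 2
    # Continuous step: with the labels fixed, re-solve the least-squares
    # problem over the inlier factors only.
    A = np.stack([x[inlier], np.ones(inlier.sum())], axis=1)
    theta, *_ = np.linalg.lstsq(A, y[inlier], rcond=None)

print("estimated (a, b):", theta)
```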
In this work, we present the first initialization methods equipped with explicit performance guarantees adapted to the pose-graph simultaneous localization and mapping (SLAM) and rotation averaging (RA) problems. SLAM and rotation averaging are typically formalized as large-scale nonconvex point estimation problems, with many bad local minima that can entrap the smooth optimization methods typically applied to solve them; consequently, the performance of standard SLAM and RA algorithms crucially depends upon the quality of the estimates used to initialize this local search. While many initialization methods for SLAM and RA have appeared in the literature, these are typically obtained as purely heuristic approximations, making it difficult to determine whether (or under what circumstances) these techniques can be reliably deployed. In contrast, in this work we study the problem of initialization through the lens of spectral relaxation. Specifically, we derive a simple spectral relaxation of SLAM and RA whose form enables us to exploit classical linear-algebraic techniques (eigenvector perturbation bounds) to control the distance from our spectral estimate to both the (unknown) ground truth and the global minimizer of the estimation problem as a function of the measurement noise. Our results reveal the critical role that spectral graph-theoretic properties of the measurement network play in controlling estimation accuracy; moreover, as a by-product of our analysis, we obtain new bounds on the estimation error of the maximum likelihood estimator, which may be of independent interest. Finally, we show experimentally that our spectral estimator is very effective in practice, producing initializations of comparable or superior quality at lower computational cost compared to existing state-of-the-art techniques.
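To make the spectral idea concrete, the sketch below works a toy 2D rotation-averaging instance (an illustration, not the paper's estimator): rotations are unit complex numbers, the noisy relative measurements fill a Hermitian connection matrix, and its leading eigenvector recovers the rotations up to a global gauge.

```python
# Spectral estimate for toy SO(2) rotation averaging.
import numpy as np

rng = np.random.default_rng(1)
n = 30
angles = rng.uniform(-np.pi, np.pi, n)     # ground-truth rotations
z = np.exp(1j * angles)

H = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        noise = np.exp(1j * rng.normal(0, 0.05))
        H[i, j] = z[i] * np.conj(z[j]) * noise   # noisy relative rotation
        H[j, i] = np.conj(H[i, j])

# Spectral estimate: leading eigenvector of the Hermitian measurement matrix.
w, V = np.linalg.eigh(H)
est = V[:, -1] / np.abs(V[:, -1])          # project entries back to the circle

# Fix the global gauge freedom against the ground truth and report the error.
gauge = np.conj(est[0]) * z[0]
err = np.abs(np.angle(est * gauge * np.conj(z)))
print("max angular error (rad):", err.max())
```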
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Objective: Social Determinants of Health (SDOH) influence personal health outcomes and health systems interactions. Health systems capture SDOH information through structured data and unstructured clinical notes; however, clinical notes often contain a more comprehensive representation of several key SDOH. The objective of this work is to assess the SDOH information gain achievable by extracting structured semantic representations of SDOH from the clinical narrative and combining these extracted representations with available structured data. Materials and Methods: We developed a natural language processing (NLP) information extraction model for SDOH that utilizes a deep learning entity and relation extraction architecture. In an electronic health record (EHR) case study, we applied the SDOH extractor to a large existing clinical data set with over 200,000 patients and 400,000 notes and compared the extracted information with available structured data. Results: The SDOH extractor achieved 0.86 F1 on a withheld test set. In the EHR case study, we found that 19% of current tobacco users, 10% of drug users, and 32% of homeless patients had documentation of these risk factors only in the clinical narrative. Conclusions: Patients who are at risk for negative health outcomes due to SDOH may be better served if health systems are able to identify SDOH risk factors and associated social needs. Structured semantic representations of text-encoded SDOH information can augment existing structured data, and this more comprehensive SDOH representation can assist health systems in identifying and addressing social needs.
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
We present a new convolution layer for deep learning architectures which we call QuadConv -- an approximation to continuous convolution via quadrature. Our operator is developed explicitly for use on unstructured data, and accomplishes this by learning a continuous kernel that can be sampled at arbitrary locations. In the setting of neural compression, we show that a QuadConv-based autoencoder, resulting in a Quadrature Convolutional Neural Network (QCNN), can match the performance of standard discrete convolutions on structured uniform data, as in CNNs, and maintain this accuracy on unstructured data.
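As a rough numerical illustration of the quadrature idea (not the paper's learned-kernel implementation), the sketch below approximates a continuous convolution over unstructured 1D sample locations with a weighted sum; the fixed Gaussian here stands in for the learned continuous kernel.

```python
# Continuous convolution via quadrature on scattered 1D points:
# (f * g)(y) ~= sum_i w_i * g(y - x_i) * f(x_i), evaluable at arbitrary y.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))          # unstructured sample locations
f = np.sin(2 * np.pi * x)                    # feature values at those locations

# Simple quadrature weights for scattered 1D points (midpoint-style spacing).
w = np.gradient(x)

def kernel(d, width=0.05):
    return np.exp(-0.5 * (d / width) ** 2)   # stand-in for a learned kernel

y = np.linspace(0, 1, 50)                    # arbitrary query locations
out = (w * kernel(y[:, None] - x[None, :]) * f).sum(axis=1)
print(out[:5])
```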
Model calibration measures the agreement between a model's predicted probability estimates and the true likelihood of correctness. Proper model calibration is vital for high-risk applications. Unfortunately, modern deep neural networks are poorly calibrated, compromising their trustworthiness and reliability. Medical image segmentation particularly suffers from this due to the natural uncertainty of tissue boundaries, which is exacerbated by loss functions that favor overconfidence in the majority classes. We address these challenges with DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels. Our experiments demonstrate that our DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation. Our results show that our method consistently achieves better calibration, higher accuracy, and faster inference times than these methods, especially on rarer classes. This performance is attributed to our domain-aware regularization, which informs semantic model calibration. These findings show the importance of semantic ties between class labels in building confidence in deep learning models. The framework has the potential to improve the trustworthiness and reliability of general medical image segmentation models. The code for this paper is available at: https://github.com/lab-smile/domino.
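As one illustrative, deliberately simplified reading of such domain-aware regularization — not the DOMINO implementation — the sketch below softens one-hot targets with a class-similarity matrix, so that confusions between semantically close classes are penalized less; the similarity matrix and mixing weight are assumptions.

```python
# Similarity-smoothed cross-entropy as a toy domain-aware calibration loss.
import torch
import torch.nn.functional as F

def domain_aware_loss(logits, targets, S, alpha=0.1):
    """logits: (N, C); targets: (N,) int class ids; S: (C, C) label similarity."""
    n_classes = logits.shape[1]
    one_hot = F.one_hot(targets, n_classes).float()
    soft = S[targets] / S[targets].sum(dim=1, keepdim=True)  # smoothed targets
    mixed = (1 - alpha) * one_hot + alpha * soft
    return -(mixed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Toy usage: 3 classes where classes 0 and 1 are semantically close.
S = torch.tensor([[1.0, 0.8, 0.1],
                  [0.8, 1.0, 0.1],
                  [0.1, 0.1, 1.0]])
logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 1, 2, 0])
loss = domain_aware_loss(logits, targets, S)
loss.backward()
print(float(loss))
```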
For healthcare providers to deliver appropriate patient care, an accurate and detailed account of patient medications, including medication changes within the patient timeline, is essential. Changes to a patient's medication may be initiated by healthcare providers or by the patients themselves. Medication changes take many forms, including prescribed medication and related dosage modifications. These changes provide information about a patient's overall health and the rationale that led to the current care. Future care can then build on the resulting state of the patient. This work explores the automatic extraction of medication change information from free-text clinical notes. The Contextual Medication Event Dataset (CMED) is a corpus of clinical notes whose annotations characterize medication changes through multiple change-related attributes, including the type of change (start, stop, increase, etc.), the initiator of the change, temporality, change likelihood, and negation. Using CMED, we identify medication mentions in clinical text and propose three novel, high-performing BERT-based systems that resolve the annotated medication change characteristics. We demonstrate that our proposed architectures improve medication change classification performance over the initial work on CMED. We identify medication mentions with a high performance of 0.959 F1, and our proposed systems classify medication changes and their attributes at 0.827 F1.
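A minimal sketch of one plausible setup — not the paper's actual systems — for classifying a medication mention's change type: wrap the mention in marker tokens and classify the sequence. The model name, marker tokens, and label set below are illustrative assumptions; in practice the classifier would be fine-tuned on CMED.

```python
# Toy BERT-based change-type classifier with mention markers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["NoDisposition", "Start", "Stop", "Increase", "Decrease"]  # example set
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
tok.add_special_tokens({"additional_special_tokens": ["[MED]", "[/MED]"]})
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
model.resize_token_embeddings(len(tok))      # account for the new marker tokens

text = "Patient reports dizziness; [MED] lisinopril [/MED] increased to 20 mg daily."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[int(logits.argmax())])          # untrained here; fine-tune first
```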
Bayesian optimization provides an effective method for optimizing expensive black-box functions. It has recently been applied to problems in fluid dynamics. This paper studies and empirically compares common Bayesian optimization algorithms on a range of synthetic test functions. It investigates the choice of acquisition function and the number of training samples, exact computation of acquisition functions versus Monte Carlo based approaches, and single-point versus multi-point optimization. The test functions considered cover a wide variety of challenges and therefore serve as an ideal test bed for understanding the performance of Bayesian optimization and for identifying general situations in which Bayesian optimization performs well and poorly. This knowledge can be used in applications, including those in fluid dynamics, where the characteristics of the objective function are unknown. The results of this investigation show that the choices to be made matter little for relatively simple functions, while optimistic acquisition functions such as the Upper Confidence Bound should be preferred for more complicated objective functions. Furthermore, the results of the Monte Carlo approach are comparable to those of analytical acquisition functions. In cases where the objective function allows parallel evaluations, multi-point methods offer a faster alternative, though they may require more objective function evaluations.
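For concreteness, the Upper Confidence Bound acquisition mentioned above takes the optimistic form UCB(x) = mu(x) + kappa * sigma(x) under a Gaussian-process surrogate. Below is a minimal sketch of the resulting loop on a toy 1D objective; the kernel, kappa, and objective are illustrative choices, not the paper's experimental setup.

```python
# Bayesian optimization with a GP surrogate and UCB acquisition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return np.sin(3 * x) + 0.5 * x             # toy black-box function

X = np.array([[0.2], [1.0], [2.5]])            # initial samples
y = objective(X).ravel()
grid = np.linspace(0, 3, 300).reshape(-1, 1)   # candidate points

gp = GaussianProcessRegressor()
for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sigma                     # optimistic: mean + kappa * std
    x_next = grid[np.argmax(ucb)]              # maximize the acquisition
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print("best x, f(x):", X[np.argmax(y)], y.max())
```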
A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in-distribution and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 40 datasets in order to evaluate different aspects of reliability on both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions for the vision and language modalities, respectively. Plex greatly improves the state-of-the-art across reliability tasks and simplifies the traditional protocol, as it improves the out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
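As a small illustration of the selective-prediction task mentioned above (a toy under stated assumptions, not the Plex evaluation protocol), the sketch below abstains whenever the maximum softmax confidence falls below a threshold and reports accuracy on the retained examples versus coverage; the predictive distributions here are random stand-ins.

```python
# Selective prediction: abstain on low-confidence inputs, trade coverage for accuracy.
import numpy as np

rng = np.random.default_rng(3)
probs = rng.dirichlet(np.ones(10), size=1000)   # fake predictive distributions
labels = rng.integers(0, 10, size=1000)

conf = probs.max(axis=1)
preds = probs.argmax(axis=1)
for tau in (0.0, 0.3, 0.5):
    keep = conf >= tau                          # abstain below the threshold
    coverage = keep.mean()
    accuracy = (preds[keep] == labels[keep]).mean() if keep.any() else float("nan")
    print(f"tau={tau:.1f} coverage={coverage:.2f} selective-acc={accuracy:.2f}")
```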