Clinical investigations of anatomical structural changes over time could greatly benefit from population-level quantification of shape, i.e., spatiotemporal statistical shape modeling (SSM). Such a tool enables relating patient organ cycles or disease progression to a reference population. Constructing shape models requires establishing a quantitative shape representation (e.g., corresponding landmarks). Particle-based shape modeling (PSM) is a data-driven SSM approach that captures population-level shape variations by optimizing landmark placement. However, it assumes a cross-sectional study design and thus has limited statistical power to represent shape changes over time. Existing methods for modeling spatiotemporal or longitudinal shape changes require predefined shape atlases and pre-built shape models that are typically constructed cross-sectionally. This paper proposes a data-driven approach, inspired by the PSM method, to learn population-level spatiotemporal shape changes directly from shape data. We introduce a novel SSM optimization scheme that produces landmarks in correspondence both across the population (inter-subject) and across time series (intra-subject). We apply the proposed method to 4D cardiac data from atrial-fibrillation patients and demonstrate its efficacy in representing the dynamic change of the left atrium. Furthermore, we show that our method outperforms an image-based approach to spatiotemporal SSM with respect to fitting a generative time-series model, the Linear Dynamical System (LDS). An LDS fit using a spatiotemporal shape model optimized via our approach provides better generalization and specificity, indicating that it accurately captures the underlying temporal dependencies.
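As a rough illustration of the LDS fitting step described above, the sketch below estimates a linear transition matrix over PCA-reduced landmark trajectories. The synthetic data, array shapes, and latent dimension are all assumptions for illustration; this is not the paper's implementation.

```python
import numpy as np

# Minimal sketch: fit a Linear Dynamical System (LDS) x_{t+1} = A x_t
# to per-subject landmark trajectories, after reducing the landmarks to
# a low-dimensional state via PCA. All sizes are illustrative.

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_landmarks = 8, 25, 128
# Correspondence landmarks: (subjects, time, landmarks * 3 coordinates)
shapes = rng.normal(size=(n_subjects, n_timepoints, n_landmarks * 3))

# PCA of the flattened landmark vectors.
flat = shapes.reshape(-1, n_landmarks * 3)
mean = flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
k = 4                                   # latent state dimension (assumed)
states = ((flat - mean) @ vt[:k].T).reshape(n_subjects, n_timepoints, k)

# Least-squares estimate of the transition matrix A across all subjects.
x_t = states[:, :-1].reshape(-1, k)     # states at time t
x_next = states[:, 1:].reshape(-1, k)   # states at time t+1
A, *_ = np.linalg.lstsq(x_t, x_next, rcond=None)
A = A.T                                 # so that x_{t+1} ≈ A @ x_t

# One-step-ahead error as a crude check of how well linear dynamics fit.
pred = x_t @ A.T
print("one-step MSE:", np.mean((pred - x_next) ** 2))
```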
Statistical shape modeling (SSM) is a valuable and powerful tool for generating detailed representations of complex anatomy that enable quantitative analysis and the comparison of shapes and their variations. SSM applies mathematics, statistics, and computing to parse shapes into a quantitative representation (such as correspondence points or landmarks) that helps answer various questions about anatomical variation across a population. Complex anatomical structures have many diverse parts with varying interactions or intricate architecture. For example, the heart is a four-chambered anatomy with several shared boundaries between chambers. Coordinated and efficient contraction of the chambers of the heart is necessary to adequately perfuse end organs throughout the body. Subtle shape changes within these shared boundaries of the heart can indicate potential pathological changes that lead to uncoordinated contraction and poor end-organ perfusion. Early detection and robust quantification could provide insight into ideal treatment techniques and intervention timing. However, existing SSM approaches fall short of explicitly modeling the statistics of shared boundaries. This paper presents a general and flexible data-driven approach for building statistical shape models of multi-organ anatomies with shared boundaries that captures morphological and alignment changes of individual anatomies and their shared boundary surfaces throughout the population. We demonstrate the effectiveness of the proposed approach using a biventricular heart dataset by developing shape models that consistently parameterize the cardiac biventricular structure and the interventricular septum (shared boundary surface) across the population data.
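The correspondence-point representation this abstract builds on can be made concrete with a small sketch: stack each subject's landmarks into one vector and run PCA to obtain population-level modes of variation. The paper's shared-boundary handling is more involved; this only illustrates the underlying shape statistics, with assumed sizes and synthetic data.

```python
import numpy as np

# Standard correspondence-point shape statistics: PCA over flattened
# landmark vectors. Synthetic stand-in data; sizes are assumptions.

rng = np.random.default_rng(1)
n_subjects = 20
n_points = 256                      # correspondence points per subject
X = rng.normal(size=(n_subjects, n_points * 3))  # flattened (x, y, z)

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
eigvals = s ** 2 / (n_subjects - 1)  # variance captured by each mode

# A new shape sampled along the first mode at +2 standard deviations.
sample = mean_shape + 2.0 * np.sqrt(eigvals[0]) * Vt[0]
print("variance explained by mode 1:", eigvals[0] / eigvals.sum())
```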
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer-learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
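A hedged sketch of the conditioning idea: a frozen text embedding of each class name drives a per-class segmentation head. Real CLIP weights and the paper's exact head are not reproduced here; `text_emb` is a random placeholder standing in for CLIP embeddings of class-name prompts, and `ClipConditionedHead` is a simplified stand-in.

```python
import torch
import torch.nn as nn

n_classes, emb_dim, feat_ch = 31, 512, 48   # 25 organs + 6 tumor types
text_emb = torch.randn(n_classes, emb_dim)  # placeholder for CLIP embeddings

class ClipConditionedHead(nn.Module):
    """Maps a class embedding to 1x1-conv parameters over decoder features."""
    def __init__(self, emb_dim: int, feat_ch: int):
        super().__init__()
        # Predict a weight vector + bias for a per-class 1x1 convolution.
        self.controller = nn.Linear(emb_dim, feat_ch + 1)

    def forward(self, feats: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) decoder features; emb: (K, emb_dim)
        params = self.controller(emb)           # (K, C + 1)
        w, b = params[:, :-1], params[:, -1]    # per-class weights and bias
        # One binary logit map per class, so partially labeled datasets
        # can supervise each class independently.
        logits = torch.einsum("bcdhw,kc->bkdhw", feats, w)
        return logits + b.view(1, -1, 1, 1, 1)

head = ClipConditionedHead(emb_dim, feat_ch)
feats = torch.randn(2, feat_ch, 8, 32, 32)      # dummy CT decoder features
print(head(feats, text_emb).shape)              # torch.Size([2, 31, 8, 32, 32])
```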
In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around the ego-vehicle as it acts toward its destination. To this end, we introduce the representation of semantic regions, which are areas that ego-vehicles visit while taking an afforded action (e.g., a left turn at a 4-way intersection). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
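A minimal sketch of the prompt-tuning idea the abstract describes: soft prompt vectors are prepended to the embedded input and trained while the language model stays frozen. The tiny transformer below is a stand-in for PaLM/Flan-PaLM; all dimensions and the objective are placeholders.

```python
import torch
import torch.nn as nn

vocab, d_model, prompt_len = 1000, 64, 8
frozen_lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)
embed = nn.Embedding(vocab, d_model)
for p in list(frozen_lm.parameters()) + list(embed.parameters()):
    p.requires_grad = False                     # the backbone is frozen

soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)  # only the prompt trains

tokens = torch.randint(0, vocab, (4, 16))       # dummy batch of exemplars
x = embed(tokens)                               # (B, T, d_model)
x = torch.cat([soft_prompt.expand(4, -1, -1), x], dim=1)  # prepend prompt
out = frozen_lm(x)                              # (B, prompt_len + T, d_model)
loss = out.pow(2).mean()                        # placeholder objective
loss.backward()
optimizer.step()
print("trainable parameters:", soft_prompt.numel())
```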
This paper presents a simple and effective visual prompting method for adapting pre-trained models to downstream recognition tasks. Our method includes two key designs. First, rather than directly adding the prompt and the image together, we treat the prompt as an extra and independent learnable component. We show that the strategy of reconciling the prompt and the image matters, and find that warping the prompt around a properly shrunk image empirically works best. Second, we re-introduce two "old tricks" commonly used in building transferable adversarial examples, i.e., input diversity and gradient normalization, into visual prompting. These techniques improve optimization and enable the prompt to generalize better. We provide extensive experimental results to demonstrate the effectiveness of our method. Using a CLIP model, our prompting method sets a new record of 82.8% average accuracy across 12 popular classification datasets, substantially surpassing the prior art by +5.6%. It is worth noting that this prompting performance already outperforms linear probing by +2.1% and can even match full fine-tuning on certain datasets. In addition, our prompting method shows competitive performance across different data scales and against distribution shifts. The code is publicly available at https://github.com/UCSC-VLAA/EVP.
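The "warp the prompt around a shrunk image" design can be sketched directly: the learnable prompt is a full-resolution tensor, the image is resized smaller and pasted into its center, and the prompt occupies the border. The 224 px size matches the CLIP setting; the pad width is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

full, pad = 224, 16                         # pad=16 -> inner image is 192 px
prompt = torch.nn.Parameter(torch.zeros(3, full, full))  # learnable prompt

def apply_prompt(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) -> prompted input of size (B, 3, 224, 224)."""
    inner = full - 2 * pad
    small = F.interpolate(image, size=(inner, inner),
                          mode="bilinear", align_corners=False)
    out = prompt.expand(image.size(0), -1, -1, -1).clone()
    out[:, :, pad:pad + inner, pad:pad + inner] = small  # image in the center
    return out                                # gradients reach the border prompt

x = torch.rand(2, 3, 224, 224)
print(apply_prompt(x).shape)                # torch.Size([2, 3, 224, 224])
```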
Named Entity Recognition (NER) is an important and well-studied task in natural language processing. The classic CoNLL-2003 English dataset, published almost 20 years ago, is commonly used to train and evaluate named entity taggers. The age of this dataset raises the question of how well these models perform when applied to modern data. In this paper, we present CoNLL++, a new annotated test set that mimics the process used to create the original CoNLL-2003 test set as closely as possible, except with data collected from 2020. Using CoNLL++, we evaluate the generalization of 20+ different models to modern data. We observe that different models have very different generalization behavior. F1 scores of large transformer-based models which are pre-trained on recent data dropped much less than those of models using static word embeddings, and RoBERTa-based and T5 models achieve comparable F1 scores on both CoNLL-2003 and CoNLL++. Our experiments show that achieving good generalizability requires a combined effort of developing larger models and continuing pre-training with in-domain and recent data. These results suggest that standard evaluation methodology may have underestimated progress on named entity recognition over the past 20 years; in addition to improving performance on the original CoNLL-2003 dataset, we have also improved the ability of our models to generalize to modern data.
We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that violate relevant policies. Our approach extracts structured representations of check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. Using our baseline system, we show that human fact-checkers can identify 124 tweets per hour that violate Twitter's policies on COVID-19 misinformation. We will make our code, data, and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content.
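The pipeline shape can be sketched in a few lines: aggregate and rank check-worthy claims for human review, then use a stance classifier to surface tweets supporting a claim reviewers marked as misinformation. The `stance()` function below is a hypothetical keyword stub, not the paper's trained model; the example tweets and claims are invented.

```python
from collections import Counter

tweets = [
    "ivermectin cures covid, doctors confirm",
    "no evidence ivermectin treats covid",
    "ivermectin cures covid in days",
]
# Claims extracted from tweets, aggregated and ranked by frequency
# so human fact-checkers review the most widespread ones first.
claims = Counter(["ivermectin cures covid", "ivermectin cures covid",
                  "vitamin c prevents covid"])
ranked = [c for c, _ in claims.most_common()]

def stance(tweet: str, claim: str) -> str:
    # Hypothetical stub: a real system would use a trained stance model.
    return "against" if "no evidence" in tweet else "support"

flagged = ranked[0]                  # reviewer marks this claim misinformation
supporting = [t for t in tweets if stance(t, flagged) == "support"]
print(len(supporting), "tweets routed to policy review")
```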
In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our system can load high-level descriptions of chemistry experiments, perceive a dynamic workspace, and autonomously plan the required actions and motions to perform the given chemistry experiments with common tools found in the existing lab environment. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools. In this work, we demonstrate the utility of our framework on three pouring skills and two foundational chemical experiments for materials synthesis: solubility and recrystallization. More experiments and updated evaluations can be found at https://ac-rad.github.io/arc-icra2023.
Calculating an Air Quality Index (AQI) typically uses data streams from air quality sensors deployed at fixed locations, and the calculation is a real-time process. If one or more sensors are broken or offline, the real-time AQI value cannot be computed. Estimating AQI values for some point in the future is a predictive process that uses historical AQI values to train and build models. In this work we focus on gap filling in air quality data, where the task is to predict the AQI 1, 5, and 7 days into the future. The scenario assumes that one or more air, weather, and traffic sensors are offline, and we explore prediction accuracy under such conditions. The work is part of the MediaEval'2022 Urban Air: Urban Life and Air Pollution task, submitted by the DCU-Insight-AQ team, and uses multimodal and crossmodal data consisting of AQI, weather, and CCTV traffic images for air pollution prediction.
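A minimal sketch of the forecasting setup at one horizon: predict AQI from lagged history with a least-squares linear model. The synthetic series stands in for the task's multimodal AQI/weather/CCTV features, and the lag window is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400)
# Synthetic AQI series with a monthly cycle plus noise (illustrative only).
aqi = 60 + 15 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 3, t.size)

lags, horizon = 14, 7                       # 14 days of history, 7 days ahead
X = np.stack([aqi[i:i + lags] for i in range(t.size - lags - horizon)])
y = aqi[lags + horizon - 1: -1]             # AQI `horizon` days after each window

# Least-squares linear predictor over the lag features.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
print("MAE at 7-day horizon:", np.abs(pred - y).mean().round(2))
```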