In dense image segmentation tasks (e.g., semantic, panoptic), existing methods can hardly generalize well to unseen image domains, predefined classes, and variations in image resolution and quality. Motivated by these observations, we construct a large-scale entity segmentation dataset to explore fine-grained entity segmentation, with a strong focus on open-world and high-quality dense segmentation. The dataset contains images spanning diverse image domains and resolutions, along with high-quality mask annotations for training and testing. Given the high quality and high resolution of the dataset, we propose CropFormer for high-quality segmentation, which improves mask prediction using high-resolution image crops that provide more fine-grained image details than the full image. CropFormer is the first query-based Transformer architecture that can effectively ensemble mask predictions from multiple image crops, by learning queries that associate the same entities across the full image and its crops. With CropFormer, we achieve a significant AP gain of $1.9$ on the challenging fine-grained entity segmentation task. The dataset and code will be released at http://luqi.info/entityv2.github.io/.
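As a rough illustration of the crop-ensembling idea (not the authors' implementation), the sketch below assumes per-entity mask logits have already been predicted for the full image and for one high-resolution crop, and that each crop query has been associated with the same entity's full-image query, as CropFormer's learned queries are meant to do; it then pastes the crop logits back into full-image coordinates and averages the two predictions.

```python
import numpy as np

def ensemble_full_and_crop(full_logits, crop_logits, crop_box):
    """Fuse per-entity mask logits from the full image and one crop.

    full_logits: (Q, H, W) logits predicted on the (downsampled) full image,
                 already resized back to the original H x W resolution.
    crop_logits: (Q, h, w) logits predicted on a high-resolution crop, where
                 query q is assumed to correspond to the same entity as query q
                 in `full_logits` (the association CropFormer learns).
    crop_box:    (x0, y0, x1, y1) location of the crop in the full image.
    """
    x0, y0, x1, y1 = crop_box
    fused = full_logits.copy()
    # Average the two predictions inside the crop region; outside the crop,
    # only the full-image prediction is available.
    fused[:, y0:y1, x0:x1] = 0.5 * (full_logits[:, y0:y1, x0:x1] + crop_logits)
    return fused

# Toy usage: 3 entity queries, a 64x64 image, one 32x32 crop.
full = np.random.randn(3, 64, 64)
crop = np.random.randn(3, 32, 32)
masks = ensemble_full_and_crop(full, crop, (16, 16, 48, 48)) > 0.0
```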
We develop a text-to-image generation method driven by a combination of an implicit visual guidance loss and a generative objective, which incorporates additional retrieved images. Unlike most existing text-to-image generation methods that take only text as input, our method dynamically feeds cross-modal search results into a unified training stage, improving the quality, controllability, and diversity of the generated results. We propose a novel hypernetwork-modulated visual-text encoding scheme that predicts weight updates for the encoding layers, so that visual information (e.g., layout, content) is effectively transferred to the corresponding latent domain. Experimental results show that, guided by additional retrieved visual data, our model outperforms existing GAN-based models. On the COCO dataset, compared with state-of-the-art methods, we achieve a better score of $9.13$ and up to $3.5\times$ gains.
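A minimal sketch of the hypernetwork-modulation idea described above, under the assumption that pooled features of retrieved images predict an additive weight update for a single linear text-encoding layer; all layer sizes and names here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class HyperModulatedEncoder(nn.Module):
    """Text encoding layer whose weights are updated by a hypernetwork
    conditioned on features of retrieved images (illustrative sizes)."""

    def __init__(self, text_dim=256, latent_dim=256, visual_dim=512):
        super().__init__()
        self.base = nn.Linear(text_dim, latent_dim)
        # Hypernetwork: maps pooled retrieval features to a weight update.
        self.hyper = nn.Linear(visual_dim, latent_dim * text_dim)

    def forward(self, text_feat, retrieved_feat):
        # text_feat: (B, text_dim); retrieved_feat: (B, visual_dim)
        delta_w = self.hyper(retrieved_feat)                       # (B, latent*text)
        delta_w = delta_w.view(-1, self.base.out_features,
                               self.base.in_features)              # (B, latent, text)
        w = self.base.weight.unsqueeze(0) + delta_w                # modulated weights
        return torch.einsum('boi,bi->bo', w, text_feat) + self.base.bias

enc = HyperModulatedEncoder()
z = enc(torch.randn(4, 256), torch.randn(4, 512))   # (4, 256) latent codes
```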
Video scene graph generation (VidSGG) aims to parse video content into scene graphs, which requires modeling the spatio-temporal contextual information in the video. However, due to the long-tailed training data in datasets, the generalization performance of existing VidSGG models can suffer from the spatio-temporal conditional bias problem. In this work, we propose a novel Meta Video Scene Graph Generation (MVSGG) framework that addresses this bias problem from a meta-learning perspective. Specifically, to handle various types of spatio-temporal conditional biases, our framework first constructs a support set and a group of query sets, where the data distribution of each query set differs from that of the support set w.r.t. one type of conditional bias. Then, through a novel meta training and testing process that optimizes the model to obtain good testing performance on these query sets after training on the support set, our framework effectively guides the model to learn to generalize well against biases. Extensive experiments demonstrate the efficacy of our proposed framework.
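The support/query construction and meta-optimization described above could look roughly like the first-order sketch below; the model, loss function, and bias-specific query sets are placeholders and the first-order approximation is a simplification, not the MVSGG specifics.

```python
import copy
import torch

def meta_step(model, support_batch, query_batches, loss_fn, inner_lr=1e-3):
    """One first-order meta-training step: adapt a copy of the model on the
    support set, then require it to also do well on query sets whose
    distributions differ w.r.t. different conditional biases (illustrative)."""
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

    # Inner loop: train on the support set.
    inner_opt.zero_grad()
    loss_fn(adapted, support_batch).backward()
    inner_opt.step()
    inner_opt.zero_grad()

    # Outer objective: average loss over all bias-specific query sets.
    meta_loss = sum(loss_fn(adapted, qb) for qb in query_batches) / len(query_batches)
    meta_loss.backward()

    # First-order approximation: copy the adapted model's gradients back, so the
    # caller can step an optimizer on the original model's parameters.
    for p, p_adapted in zip(model.parameters(), adapted.parameters()):
        p.grad = None if p_adapted.grad is None else p_adapted.grad.clone()
    return meta_loss.item()
```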
To improve instance-level detection/segmentation performance, existing self-supervised and semi-supervised methods extract either very task-unrelated or very task-specific training signals from unlabeled data. We argue that these two approaches, sitting at the two extremes of the task-specificity spectrum, are suboptimal for task performance. Using too little task-specific training signal causes underfitting to the ground-truth labels of downstream tasks, while the opposite causes overfitting to the ground-truth labels. To this end, we propose a novel Class-Agnostic Semi-supervised Pretraining (CASP) framework that achieves a more favorable balance of task specificity when extracting training signals from unlabeled data. Compared to semi-supervised learning, CASP reduces the task specificity of the training signal by ignoring class information in pseudo labels and by having a separate pretraining stage that uses only task-unrelated unlabeled data. On the other hand, CASP preserves the right amount of task specificity by leveraging box/mask-level pseudo labels. As a result, our pretrained model can better avoid underfitting/overfitting to ground-truth labels when finetuned on downstream tasks. Using 3.6M unlabeled data, we achieve a significant performance gain of 4.7% on object detection. Our pretrained models also demonstrate excellent transferability to other detection and segmentation tasks/frameworks.
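To make the "ignore class information in pseudo labels" step concrete, here is a small hedged sketch: a teacher detector's predictions are filtered by score and their class labels are collapsed into a single foreground class before being used as box-level pseudo labels. The function name, threshold, and prediction schema are illustrative, not from the paper.

```python
def make_class_agnostic_pseudo_labels(teacher_predictions, score_thresh=0.7):
    """Turn per-class detections into class-agnostic box pseudo labels.

    teacher_predictions: list of dicts with 'box' (x0, y0, x1, y1),
    'score' (float) and 'class_id' (int) fields (illustrative schema).
    """
    pseudo_labels = []
    for det in teacher_predictions:
        if det['score'] < score_thresh:
            continue  # drop low-confidence predictions to reduce noise
        pseudo_labels.append({
            'box': det['box'],
            'class_id': 0,   # collapse all classes into one foreground class
        })
    return pseudo_labels

preds = [{'box': (10, 10, 50, 80), 'score': 0.92, 'class_id': 3},
         {'box': (5, 5, 20, 20), 'score': 0.41, 'class_id': 7}]
print(make_class_agnostic_pseudo_labels(preds))  # keeps only the first box
```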
Segmenting 4K or 6K ultra-high-resolution images requires extra computation considerations in image segmentation. Common strategies, such as downsampling, patch cropping, and cascade models, cannot properly address the trade-off between accuracy and computation cost. Inspired by how humans continuously distinguish objects from coarse to precise levels, we propose the Continuous Refinement Model (CRM) for the ultra-high-resolution segmentation task. CRM continuously aligns the feature map with the refinement target and aggregates features to reconstruct the details of these images. Moreover, our CRM shows significant generalization ability in filling the resolution gap between low-resolution training images and ultra-high-resolution testing ones. We present quantitative performance evaluations and visualizations to show that our proposed method is fast and effective for image segmentation refinement. The code will be released at https://github.com/dvlab-research/entity.
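One way to read "continuously aligns the feature map with the refinement target" is through implicit, coordinate-based refinement: sample features and the coarse mask at arbitrary continuous positions and decode a refined mask value per position, so the same model can be queried at any output resolution. The sketch below is such an interpretation under assumed layer sizes, not the released CRM code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousRefiner(nn.Module):
    """Query a coarse mask and a feature map at continuous coordinates and
    predict a refined mask logit per coordinate (illustrative design)."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1 + 2, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, feats, coarse_mask, coords):
        # feats: (B, C, H, W); coarse_mask: (B, 1, H, W); coords: (B, N, 2) in [-1, 1]
        grid = coords.unsqueeze(2)                               # (B, N, 1, 2)
        f = F.grid_sample(feats, grid, align_corners=False)      # (B, C, N, 1)
        m = F.grid_sample(coarse_mask, grid, align_corners=False)
        x = torch.cat([f, m], dim=1).squeeze(-1).permute(0, 2, 1)  # (B, N, C+1)
        x = torch.cat([x, coords], dim=-1)                       # append positions
        return self.mlp(x).squeeze(-1)                           # (B, N) refined logits

refiner = ContinuousRefiner()
logits = refiner(torch.randn(1, 64, 32, 32), torch.randn(1, 1, 32, 32),
                 torch.rand(1, 4096, 2) * 2 - 1)  # query a 64x64 grid's worth of points
```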
Open-vocabulary instance segmentation aims to segment novel classes without mask annotations. It is an important step toward reducing laborious human supervision. Most existing works first pretrain a model on captioned images covering many novel classes, and then finetune it on limited base classes with mask annotations. However, the high-level textual information learned from caption pretraining alone cannot effectively encode the details required for pixel-wise segmentation. To address this, we propose a cross-modal pseudo-labeling framework that generates training pseudo masks by aligning word semantics in captions with visual features of object masks in images. In this way, our framework can label novel classes in captions via their word semantics, automatically training a student model. To account for noise in the pseudo masks, we design a robust student model that selectively distills mask knowledge by estimating the mask noise level, thereby mitigating the adverse impact of noisy pseudo masks. Through extensive experiments, we demonstrate the effectiveness of our framework, significantly improving the mAP score by 4.5% on MS-COCO and by 5.1% on the large-scale Open Images and Conceptual Captions datasets, compared with the state-of-the-art.
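The word-to-mask alignment can be pictured as cosine-similarity matching between caption word embeddings and mask-region embeddings in a shared space. The sketch below assumes such embeddings are already available and simply assigns each novel-class word to its best-matching mask above a threshold; all names and the threshold are illustrative, not the paper's procedure.

```python
import torch
import torch.nn.functional as F

def assign_words_to_masks(word_embs, mask_embs, sim_thresh=0.3):
    """word_embs: (W, D) embeddings of novel-class words in a caption.
    mask_embs:  (M, D) visual embeddings of candidate object masks.
    Returns, for each word, the index of its best-matching mask, or -1."""
    sim = F.cosine_similarity(word_embs.unsqueeze(1),
                              mask_embs.unsqueeze(0), dim=-1)   # (W, M)
    best_sim, best_mask = sim.max(dim=1)
    return torch.where(best_sim > sim_thresh, best_mask,
                       torch.full_like(best_mask, -1))  # word i names mask [i]

words = F.normalize(torch.randn(3, 128), dim=-1)
masks = F.normalize(torch.randn(5, 128), dim=-1)
print(assign_words_to_masks(words, masks))
```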
We introduce a new image segmentation task called Entity Segmentation (ES), which aims to segment all visual entities (objects and stuff) in an image without predicting their semantic labels. By removing the need for class label prediction, models trained for this task can focus more on improving segmentation quality. It has many practical applications, such as image manipulation and editing, where the quality of segmentation masks is crucial but class labels are less important. We conduct the first study to investigate the feasibility of a convolutional center-based representation for segmenting things and stuff in a unified manner, and show that such a representation fits exceptionally well in the context of ES. More specifically, we propose a fully convolutional architecture with two novel modules specifically designed to exploit the class-agnostic and non-overlapping requirements of ES. Experiments show that models designed and trained for ES significantly outperform popular specialized panoptic segmentation models in terms of segmentation quality. Moreover, an ES model can be easily trained on a combination of multiple datasets without needing to resolve label conflicts when merging datasets, and models trained on one or more datasets can generalize well to other test datasets of unseen domains. The code has been released at https://github.com/dvlab-research/entity.
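To illustrate the non-overlapping requirement in isolation (this is not the paper's architecture), one simple post-processing view is to let every pixel belong to at most one entity, chosen as the highest-scoring predicted mask that covers it:

```python
import numpy as np

def resolve_overlaps(mask_logits, scores, thresh=0.0):
    """mask_logits: (N, H, W) per-entity mask logits; scores: (N,) confidences.
    Returns an (H, W) map where each pixel holds the id (1..N) of the
    highest-confidence entity whose mask covers it, or 0 for background."""
    _, h, w = mask_logits.shape
    entity_map = np.zeros((h, w), dtype=np.int32)
    for i in np.argsort(scores):            # ascending: best entity written last
        entity_map[mask_logits[i] > thresh] = i + 1
    return entity_map

seg = resolve_overlaps(np.random.randn(4, 64, 64), np.random.rand(4))
```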
While inferring common actor states (such as position or velocity) is an important and well-explored task of the perception system aboard a self-driving vehicle (SDV), it may not always provide sufficient information to the SDV. This is especially true in the case of active emergency vehicles (EVs), where light-based signals also need to be captured to provide a full context. We consider this problem and propose a sequential methodology for the detection of active EVs, using an off-the-shelf CNN model operating at a frame level and a downstream smoother that accounts for the temporal aspect of flashing EV lights. We also explore model improvements through data augmentation and training with additional hard samples.
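A hedged illustration of the "downstream smoother" idea: frame-level scores from the CNN are aggregated over a sliding window so that a flashing light, which is dark in many individual frames, still yields a stable sequence-level decision. The window size and decision rule here are arbitrary choices, not the paper's.

```python
import numpy as np

def smooth_active_ev_scores(frame_scores, window=15, on_thresh=0.5, min_on_frac=0.2):
    """frame_scores: per-frame probabilities that an active EV light is visible.
    A frame is declared 'active EV' if, within a centered window, at least
    `min_on_frac` of frames exceed `on_thresh` -- robust to the off-phase of
    a flashing light."""
    scores = np.asarray(frame_scores, dtype=float)
    half = window // 2
    decisions = np.zeros(len(scores), dtype=bool)
    for t in range(len(scores)):
        w = scores[max(0, t - half): t + half + 1]
        decisions[t] = (w > on_thresh).mean() >= min_on_frac
    return decisions

# Flashing pattern: the light is "on" only every third frame.
scores = np.tile([0.9, 0.1, 0.1], 10)
print(smooth_active_ev_scores(scores).all())  # True: the smoother bridges off frames
```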
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
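Instruction prompt tuning, as described, adapts the model with a small number of learned parameters while the LLM weights stay frozen. The sketch below shows a generic soft-prompt mechanism with a placeholder transformer, assuming the learned parameters are prompt embeddings prepended to the input; it is not Med-PaLM's code and the dimensions are made up.

```python
import torch
import torch.nn as nn

class SoftPromptedLM(nn.Module):
    """Prepend trainable soft-prompt embeddings to the token embeddings of a
    frozen language model (illustrative; `lm` is any embedding-to-output stack)."""

    def __init__(self, lm, embed_dim=768, num_prompt_tokens=20):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():
            p.requires_grad = False                      # the LLM stays frozen
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, token_embeddings):
        # token_embeddings: (B, T, D) embeddings of the instruction + question
        b = token_embeddings.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(b, -1, -1)
        return self.lm(torch.cat([prompt, token_embeddings], dim=1))

# Tiny smoke test with a stand-in "LM" that just returns its input embeddings.
model = SoftPromptedLM(nn.Identity(), embed_dim=768, num_prompt_tokens=20)
out = model(torch.randn(2, 10, 768))   # (2, 30, 768): 20 prompt tokens + 10 real tokens
```

Only `soft_prompt` receives gradients, which is what makes the adaptation parameter-efficient.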
A canonical algorithm for log-concave sampling is the Langevin Algorithm, aka the Langevin Diffusion run with some discretization stepsize $\eta > 0$. This discretization leads the Langevin Algorithm to have a stationary distribution $\pi_{\eta}$ which differs from the stationary distribution $\pi$ of the Langevin Diffusion, and it is an important challenge to understand whether the well-known properties of $\pi$ extend to $\pi_{\eta}$. In particular, while concentration properties such as isoperimetry and rapidly decaying tails are classically known for $\pi$, the analogous properties for $\pi_{\eta}$ are open questions with direct algorithmic implications. This note provides a first step in this direction by establishing concentration results for $\pi_{\eta}$ that mirror classical results for $\pi$. Specifically, we show that for any nontrivial stepsize $\eta > 0$, $\pi_{\eta}$ is sub-exponential (respectively, sub-Gaussian) when the potential is convex (respectively, strongly convex). Moreover, the concentration bounds we show are essentially tight. Key to our analysis is the use of a rotation-invariant moment generating function (aka Bessel function) to study the stationary dynamics of the Langevin Algorithm. This technique may be of independent interest because it enables directly analyzing the discrete-time stationary distribution $\pi_{\eta}$ without going through the continuous-time stationary distribution $\pi$ as an intermediary.
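For reference, the Langevin Algorithm referred to above is the Euler–Maruyama discretization of the Langevin Diffusion for a potential $f$ (so $\pi \propto e^{-f}$), and the concentration statements can be read as tail bounds on $\pi_\eta$. A standard way to write this (the exact norms and constants in the paper may differ) is:

$$
x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\, z_k, \qquad z_k \sim \mathcal{N}(0, I_d),
$$

and sub-exponential (respectively, sub-Gaussian) concentration of $\pi_\eta$ means that for some constants $C, c > 0$ and all $t \ge 0$,

$$
\pi_{\eta}\big(\|x - \mathbb{E}_{\pi_\eta}[x]\| \geq t\big) \leq C e^{-c t} \ \ \text{(convex } f\text{)}, \qquad
\pi_{\eta}\big(\|x - \mathbb{E}_{\pi_\eta}[x]\| \geq t\big) \leq C e^{-c t^2} \ \ \text{(strongly convex } f\text{)}.
$$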