There is no settled universal representation for 3D geometry; the many alternatives include point clouds, meshes, implicit functions, and voxels, to name a few. In this work, we present a new, compelling alternative: representing shapes as a sequence of cross-sectional closed loops. The loops across all planes form an organizational hierarchy, which we leverage for autoregressive shape synthesis and editing. Loops are a non-local description of the underlying shape, as simple loop manipulations (such as shifts) result in significant structural changes to the geometry. This is in contrast to manipulating local primitives such as a point in a point cloud or a triangle in a triangle mesh. We further demonstrate that loops are an intuitive and natural primitive for analyzing and editing shapes, both computationally and for users.
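As a concrete illustration, the sketch below (not from the paper) extracts cross-sectional closed loops from a mesh with the trimesh library and applies a simple non-local edit by shifting the loops on the upper planes; the plane axis, spacing, and stand-in shape are arbitrary choices.

```python
# A minimal sketch, assuming the trimesh API (mesh.section, Path3D.discrete).
import numpy as np
import trimesh

mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)  # stand-in shape
heights = np.linspace(-0.9, 0.9, 16)                           # sweep planes along z

loops_per_plane = []
for z in heights:
    section = mesh.section(plane_origin=[0, 0, z], plane_normal=[0, 0, 1])
    if section is None:                     # plane misses the shape entirely
        loops_per_plane.append([])
        continue
    # each entry of `discrete` is an (n, 3) polyline approximating one closed loop
    loops_per_plane.append([np.asarray(loop) for loop in section.discrete])

# A simple non-local edit: shifting the loops of the upper planes shears the shape.
for z, loops in zip(heights, loops_per_plane):
    if z > 0:
        for loop in loops:
            loop[:, 0] += 0.3 * z           # translate loops in x, proportional to height
```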
A diffusion model learns to predict a vector field of gradients. We propose to apply the chain rule to the learned gradients, back-propagating the score of a diffusion model through the Jacobian of a differentiable renderer, which we instantiate as a voxel radiance field. This setup aggregates 2D scores at multiple camera viewpoints into a 3D score, and repurposes a pretrained 2D model for 3D data generation. We identify a technical challenge of distribution mismatch that arises in this application, and propose a novel estimation mechanism to resolve it. We run our algorithm on several off-the-shelf diffusion image generative models, including the recently released Stable Diffusion trained on the large-scale LAION dataset.
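The core chain-rule step can be pictured with a short sketch. The code below uses assumed interfaces rather than the authors' implementation: `render` stands for any differentiable renderer (e.g., a voxel radiance field) and `score_fn` for the denoising score of a pretrained 2D diffusion model.

```python
# A minimal sketch of chaining 2D scores through a renderer Jacobian (assumed interfaces).
import torch

def sjc_step(voxel_params, render, score_fn, cameras, sigma):
    """voxel_params: learnable 3D representation with requires_grad=True.
    render(params, cam) -> image tensor; score_fn(image, sigma) -> 2D score tensor."""
    voxel_params.grad = None
    for cam in cameras:
        image = render(voxel_params, cam)          # differentiable rendering
        with torch.no_grad():
            score = score_fn(image, sigma)         # 2D score from the pretrained model
        # chain rule: accumulate J_render^T @ score into voxel_params.grad
        # (negated so that a standard gradient-descent step ascends the density)
        image.backward(gradient=-score)
    return voxel_params.grad                       # aggregated "3D score" over viewpoints
```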
We present Fast text2StyleGAN, a natural language interface that adapts pretrained GANs for text-guided human face synthesis. Leveraging recent advances in Contrastive Language-Image Pre-training (CLIP), no text data is required during training. Fast text2StyleGAN is formulated as a conditional variational autoencoder (CVAE), which provides additional control and diversity over the generated images at test time. Our model does not require re-training or fine-tuning of the GAN or CLIP when encountering new text prompts. In contrast to prior work, we do not rely on optimization at test time, making our method orders of magnitude faster than previous approaches. Empirically, on the FFHQ dataset, our method generates images faster and more accurately from natural language descriptions with varying levels of detail compared to prior work.
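A rough sketch of the conditional-VAE formulation is given below; the layer sizes and architecture are illustrative assumptions, not the paper's configuration. The CVAE maps a CLIP embedding to a StyleGAN latent code, so different latent samples yield diverse faces for the same prompt.

```python
# A minimal sketch of a CVAE conditioned on a CLIP embedding (assumed dimensions).
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, clip_dim=512, w_dim=512, z_dim=64):
        super().__init__()
        self.z_dim = z_dim
        self.encoder = nn.Sequential(nn.Linear(w_dim + clip_dim, 512), nn.ReLU(),
                                     nn.Linear(512, 2 * z_dim))       # -> (mu, logvar)
        self.decoder = nn.Sequential(nn.Linear(z_dim + clip_dim, 512), nn.ReLU(),
                                     nn.Linear(512, w_dim))           # -> StyleGAN W latent

    def forward(self, w, clip_emb):
        mu, logvar = self.encoder(torch.cat([w, clip_emb], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterization
        return self.decoder(torch.cat([z, clip_emb], -1)), mu, logvar

    def sample(self, clip_emb, n=4):
        # test time: different z samples give diverse latents for one prompt
        z = torch.randn(n, self.z_dim, device=clip_emb.device)        # clip_emb: (1, clip_dim)
        return self.decoder(torch.cat([z, clip_emb.expand(n, -1)], -1))
```

Because CLIP embeds images and text into a shared space, such a model can be trained on CLIP image embeddings alone and conditioned on CLIP text embeddings at test time, which is consistent with the abstract's claim that no text data is needed during training.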
Modern 3D computer vision leverages learning to augment geometric reasoning, mapping image data to classical structures such as cost volumes or epipolar constraints to improve matching. These architectures are specialized to a particular problem and therefore require significant task-specific tuning, often leading to poor domain generalization. Recently, generalist Transformer architectures have achieved impressive results on tasks such as optical flow and depth estimation by encoding geometric priors as inputs rather than enforcing them as constraints. In this paper, we extend this idea and propose to learn an implicit, multi-view consistent scene representation, introducing a series of 3D data augmentation techniques as a geometric inductive prior to increase view diversity. We also show that introducing view synthesis as an auxiliary task further improves depth estimation. Our Depth Field Networks (DeFiNe) achieve state-of-the-art results on stereo and video depth estimation without explicit geometric constraints, and improve zero-shot domain generalization by a wide margin.
Camera calibration is integral to robotics and computer vision algorithms that infer geometric properties of a scene from visual input streams. In practice, calibration is a laborious procedure requiring specialized data collection and careful tuning. The process must be repeated whenever the camera's parameters change, which can occur frequently for mobile robots and autonomous vehicles. In contrast, self-supervised depth and ego-motion estimation approaches can bypass explicit calibration by inferring per-frame projection models that optimize a view-synthesis objective. In this paper, we extend this approach to explicitly calibrate a wide range of cameras from raw videos in the wild. We propose a learning algorithm to regress per-sequence calibration parameters using an efficient general camera model. Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods. We validate our approach on a wide variety of camera geometries, including perspective, fisheye, and catadioptric. Finally, we show that our approach leads to improvements in the downstream task of depth estimation, achieving state-of-the-art results on the EuRoC dataset with greater computational efficiency than contemporary methods.
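The self-calibration idea of treating camera parameters as per-sequence learnable variables driven by a view-synthesis loss can be sketched as follows; the projection here is a pinhole stand-in for the paper's general camera model, and all names and values are illustrative.

```python
# A minimal sketch (not the paper's model) of learnable per-sequence camera parameters.
import torch
import torch.nn as nn

class LearnableCamera(nn.Module):
    def __init__(self, fx, fy, cx, cy):
        super().__init__()
        # per-sequence calibration parameters, all learnable; the parameterization
        # is a placeholder, not the general camera model used in the paper
        self.params = nn.Parameter(torch.tensor([fx, fy, cx, cy]))

    def project(self, points_3d):
        fx, fy, cx, cy = self.params
        x, y, z = points_3d.unbind(-1)
        return torch.stack([fx * x / z + cx, fy * y / z + cy], -1)  # pinhole stand-in

camera = LearnableCamera(500.0, 500.0, 320.0, 240.0)
optimizer = torch.optim.Adam(camera.parameters(), lr=1e-3)
# Training loop (pseudo): warp source frames into the target view with camera.project,
# compare against the target image, and backpropagate the photometric error into both
# the depth/pose networks and camera.params.
```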
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
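Instruction prompt tuning is closely related to standard soft-prompt tuning; the sketch below shows the generic mechanism of learnable prompt vectors prepended to a frozen model's input embeddings, not Med-PaLM's exact recipe, and all sizes are illustrative.

```python
# A minimal sketch of soft-prompt tuning (generic mechanism, assumed dimensions).
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens=100, d_model=4096):
        super().__init__()
        # the only trainable parameters; the LLM's weights stay frozen
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, token_embeddings):                   # (batch, seq, d_model)
        batch = token_embeddings.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # the attention mask must be extended by n_tokens to cover the prefix
        return torch.cat([prefix, token_embeddings], dim=1)
```

Only the prompt vectors receive gradients, so the approach is parameter-efficient and can be fit with a small set of curated exemplars, as described in the abstract.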
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
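Because the models and inference code are released (the open-source openai/whisper package), zero-shot transcription reduces to a few lines; the model size and file name below are arbitrary examples.

```python
# Zero-shot transcription with the released openai-whisper package.
import whisper

model = whisper.load_model("small")        # downloads the pretrained weights on first use
result = model.transcribe("lecture.mp3")   # no fine-tuning required
print(result["text"])
```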
Very large language models such as GPT-3 have shown impressive performance across a wide variety of tasks, including text summarization. In this paper, we show that this strong performance extends to opinion summarization. We explore several pipeline methods for applying GPT-3 to summarize a large collection of user reviews in a zero-shot fashion, notably approaches based on recursive summarization and selecting salient content to summarize through supervised clustering or extraction. On two datasets, an aspect-oriented summarization dataset of hotel reviews and a generic summarization dataset of Amazon and Yelp reviews, we show that the GPT-3 models achieve very strong performance in human evaluation. We argue that standard evaluation metrics do not reflect this, and evaluate against several new measures targeting faithfulness, factuality, and genericity to contrast these different methods.
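One of the pipeline variants, recursive summarization, can be sketched as follows; `llm` is a placeholder for any text-completion call (e.g., to GPT-3), and the prompt wording and chunk size are illustrative rather than the paper's.

```python
# A minimal sketch of recursive zero-shot summarization over a large review collection.
def summarize_reviews(reviews, llm, chunk_size=20):
    def summarize(texts):
        prompt = "Summarize the following reviews:\n\n" + "\n".join(texts) + "\n\nSummary:"
        return llm(prompt)

    # first pass: summarize fixed-size chunks of raw reviews
    summaries = [summarize(reviews[i:i + chunk_size])
                 for i in range(0, len(reviews), chunk_size)]
    # recurse: summarize the summaries until a single summary remains
    while len(summaries) > 1:
        summaries = [summarize(summaries[i:i + chunk_size])
                     for i in range(0, len(summaries), chunk_size)]
    return summaries[0]
```

The salient-content variants mentioned above differ only in the first pass, which selects or clusters reviews before a single summarization call.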
In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training. We present a novel neurosymbolic approach called Spice to solve this safe exploration problem. Compared to existing tools, Spice uses an online shielding layer based on symbolic weakest preconditions to achieve a more precise safety analysis without unduly impacting the training process. We evaluate the approach on a suite of continuous control benchmarks and show that it achieves performance comparable to existing safe learning techniques while incurring fewer safety violations. Additionally, we present theoretical results showing that, under reasonable assumptions, Spice converges to the optimal safe policy.
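The online-shielding idea can be sketched generically as below; the symbolic weakest-precondition analysis that decides safety is Spice's actual contribution and is only stubbed out here as `is_safe`, with all names being illustrative.

```python
# A minimal sketch of an online shield wrapped around a learned policy.
def shielded_action(state, policy_action, is_safe, backup_controller):
    """Return the policy's action if the symbolic analysis proves it safe in the
    current state; otherwise fall back to a verified backup action."""
    if is_safe(state, policy_action):    # e.g., checks a weakest-precondition formula
        return policy_action
    return backup_controller(state)      # provably safe fallback, applied during training too
```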
The recent success of zero- and few-shot prompting with models like GPT-3 has led to a paradigm shift in NLP research. In this paper, we study its impact on text summarization, focusing on the classic benchmark domain of news summarization. First, we investigate how zero-shot GPT-3 compares to fine-tuned models trained on large summarization datasets. We show that not only do humans overwhelmingly prefer GPT-3 summaries, but that these summaries also do not suffer from common dataset-specific issues such as poor factuality. Next, we study what this means for evaluation, particularly the role of gold-standard test sets. Our experiments show that both reference-based and reference-free automatic metrics, such as recently proposed QA- or entailment-based factuality approaches, cannot reliably evaluate zero-shot summaries. Finally, we discuss future research challenges beyond generic summarization, specifically keyword- and aspect-based summarization, showing how dominant fine-tuning approaches compare to zero-shot prompting. To support further research, we release: (a) a corpus of 10K generated summaries from fine-tuned and zero-shot models across 4 standard summarization benchmarks, and (b) 1K human preference judgments comparing different systems for generic and keyword-based summarization.