The emerging market for unmanned aerial vehicle (UAV) services is anticipated to reach USD 58.4 billion by 2026, spurring significant efforts to safely integrate routine UAV operations into the national airspace without degrading existing safety levels. The commercial use of UAVs would be enhanced by an ability to sense and avoid potential mid-air collision threats; however, research in this field is hindered by the lack of available datasets, as they are expensive and technically complex to capture. In this paper, we present a dataset for vision-based aircraft detection. The dataset consists of 15 image sequences containing 55,521 images of a fixed-wing aircraft approaching a stationary, grounded camera. Ground truth labels and a performance benchmark are also provided. To the best of our knowledge, this is the first public dataset for studying medium-sized, fixed-wing aircraft on a collision course. The full dataset and ground truth labels are available at https://qcr.github.io/dataset/aircraft-collision-course/
translated by Google Translate
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts, while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen on text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
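The object-masking idea is simple to sketch: rather than random masks, inpainting masks are proposed from object-detector boxes, so the model is trained to paint in whole objects described by the prompt. A minimal illustration in Python, with toy image sizes and box coordinates assumed for the example (not from the paper):

```python
import numpy as np

def object_mask(image_hw, boxes):
    """Build a binary inpainting mask from detector-proposed boxes.

    image_hw: (height, width) of the training image.
    boxes: list of (x0, y0, x1, y1) boxes from an object detector.
    Returns a 0/1 mask covering the detected objects, so the inpainting
    model learns to reconstruct complete objects rather than random patches.
    """
    h, w = image_hw
    mask = np.zeros((h, w), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

# Example: two hypothetical detections on a 64x64 training image
m = object_mask((64, 64), [(4, 4, 20, 20), (30, 40, 50, 60)])
print(int(m.sum()))  # number of masked (to-be-inpainted) pixels
```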
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants — what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world — also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing — leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first — and key — step towards such an ecology.
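The notion of self-evidencing — maximizing (Bayesian) model evidence through belief updating — can be illustrated with a toy discrete example; the likelihood matrix and observation stream below are assumptions for illustration only, not anything from the paper:

```python
import numpy as np

# A discrete agent updates its beliefs over two hidden states from a
# stream of observations; log model evidence accumulates as the beliefs
# come to explain (predict) the observations.

likelihood = np.array([[0.8, 0.2],   # P(obs=0 | state)
                       [0.2, 0.8]])  # P(obs=1 | state)

def update(prior, obs):
    """One step of Bayesian belief updating; returns the posterior and
    the log evidence log P(obs) under the current beliefs."""
    joint = likelihood[obs] * prior
    evidence = joint.sum()
    return joint / evidence, np.log(evidence)

belief = np.array([0.5, 0.5])
log_evidence = 0.0
for obs in [0, 0, 1, 0, 0]:          # mostly observation 0
    belief, le = update(belief, obs)
    log_evidence += le

print(belief)  # concentrated on the state that predicts obs 0
```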
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
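The defining feature of a decoder-only Transformer such as BLOOM is causal (autoregressive) attention: each token attends only to itself and earlier positions. A minimal NumPy sketch of that masking, independent of any actual BLOOM code:

```python
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular mask; 1 = attention allowed, 0 = blocked."""
    return np.tril(np.ones((seq_len, seq_len), dtype=np.int64))

def masked_softmax(scores, mask):
    """Softmax over allowed positions only."""
    scores = np.where(mask == 1, scores, -1e9)  # block disallowed positions
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

m = causal_mask(4)
attn = masked_softmax(np.zeros((4, 4)), m)  # uniform scores for illustration
print(attn[0])  # the first token can only attend to itself
```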
The use of needles to access sites within organs is fundamental to many interventional medical procedures both for diagnosis and treatment. Safe and accurate navigation of a needle through living tissue to an intra-tissue target is currently often challenging or infeasible due to the presence of anatomical obstacles in the tissue, high levels of uncertainty, and natural tissue motion (e.g., due to breathing). Medical robots capable of automating needle-based procedures in vivo have the potential to overcome these challenges and enable an enhanced level of patient care and safety. In this paper, we show the first medical robot that autonomously navigates a needle inside living tissue around anatomical obstacles to an intra-tissue target. Our system leverages an aiming device and a laser-patterned highly flexible steerable needle, a type of needle capable of maneuvering along curvilinear trajectories to avoid obstacles. The autonomous robot accounts for anatomical obstacles and uncertainty in living tissue/needle interaction with replanning and control and accounts for respiratory motion by defining safe insertion time windows during the breathing cycle. We apply the system to lung biopsy, which is critical in the diagnosis of lung cancer, the leading cause of cancer-related death in the United States. We demonstrate successful performance of our system in multiple in vivo porcine studies and also demonstrate that our approach leveraging autonomous needle steering outperforms a standard manual clinical technique for lung nodule access.
In this study, the radiomics approach is extended to optical fluorescence molecular imaging data for tissue classification, termed "optomics". Fluorescence molecular imaging is emerging for precise surgical guidance during head and neck squamous cell carcinoma (HNSCC) resection. However, tumor-to-normal tissue contrast is confounded by intrinsic physiological limitations of heterogeneous expression of the target molecule, epidermal growth factor receptor (EGFR). Optomics seeks to improve tumor identification by probing textural pattern differences in EGFR expression conveyed by fluorescence. A total of 1,472 standardized optomic features were extracted from fluorescence image samples. A supervised machine learning pipeline involving a support vector machine classifier was trained on the 25 top-ranked features selected by the minimum-redundancy maximum-relevance criterion. Model predictive performance was compared with a fluorescence-intensity thresholding approach by classifying image patches of resected tissue according to histologically confirmed malignancy status. The optomics approach provided consistent prediction accuracy across all test set samples, irrespective of dose, compared to the fluorescence-intensity thresholding approach (mean accuracy of 89% vs. 81%; P = 0.0072). The improved performance demonstrates that extending the radiomics approach to fluorescence molecular imaging data offers a promising image analysis technique for cancer detection in fluorescence-guided surgery.
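The described pipeline can be sketched with scikit-learn on synthetic data; `SelectKBest` stands in for the paper's minimum-redundancy maximum-relevance selection, and the data, labels, and separability below are illustrative assumptions only (only the feature counts, 1,472 extracted and 25 selected, follow the text):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1472))     # 200 image patches, 1,472 features
y = rng.integers(0, 2, size=200)     # malignant vs. normal labels
X[y == 1, :10] += 2.0                # make a few features informative

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=25),    # keep the 25 top-ranked features
    SVC(kernel="rbf"),               # SVM classifier on selected features
)
model.fit(X, y)
print(model.score(X, y))             # training accuracy on the toy data
```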
Learning-based control schemes have recently shown great efficacy in performing complex tasks. However, in order to deploy them on real systems, it is of vital importance to guarantee that the system remains safe during online training and execution. We therefore need safe online learning frameworks able to autonomously reason about whether the current information is sufficient to ensure safety or whether new measurements are required. In this paper, we present a framework consisting of two parts: first, an out-of-distribution detection mechanism that actively collects measurements when needed, to guarantee that at least one safe backup action is always available; and second, a Gaussian-process-based probabilistic safety-critical controller that ensures the system remains safe at all times with high probability. Our approach exploits model knowledge through the use of control barrier functions, and collects measurements from the stream of online data in an event-triggered fashion to guarantee recursive feasibility of the learned safety-critical controller. This, in turn, allows us to provide formal guarantees that, with high probability, the system remains within a safe set, even in a priori unexplored regions. Finally, we validate the proposed framework in numerical simulations of an adaptive cruise control system.
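The control-barrier-function idea the framework builds on can be shown on a one-dimensional toy system: a safety filter minimally modifies the nominal control so that the CBF condition holds, which renders the safe set forward invariant. Everything below (dynamics, barrier, constants) is an assumed toy model, not the paper's system:

```python
# Toy system: x_dot = u, with safe set h(x) = x >= 0.
# The CBF condition h_dot + alpha*h >= 0 becomes u + alpha*x >= 0.

def cbf_filter(x, u_nominal, alpha=1.0):
    """Return the control closest to u_nominal that satisfies u >= -alpha*h(x)."""
    h = x                      # barrier value: safe when h >= 0
    u_min = -alpha * h         # smallest control allowed by the CBF condition
    return max(u_nominal, u_min)

# Simulate: the nominal controller drives toward the unsafe boundary,
# but the filtered system never leaves the safe set.
x, dt = 2.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_nominal=-5.0)  # nominal control pushes x down
    x += dt * u
print(round(x, 3))  # decays toward 0 but stays nonnegative
```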
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs remains a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate which model architectural changes and pretraining paradigms most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that additional pretraining on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce Pegasus-X, an extension of the Pegasus model with additional long-input pretraining to handle inputs of up to 16K tokens. Pegasus-X achieves strong performance on long input summarization tasks, comparable with much larger models, while adding few additional parameters and not requiring model-parallel training.
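The block-local attention pattern with global encoder tokens can be visualized as an attention mask: global tokens attend to (and are attended by) every position, while all other tokens attend only within their local block. Block size and token counts below are toy assumptions:

```python
import numpy as np

def block_local_global_mask(seq_len, block, n_global):
    """1 where attention is allowed. The first n_global positions are
    global tokens; the remaining seq_len positions attend only within
    their own local block (plus the global tokens)."""
    n = n_global + seq_len
    mask = np.zeros((n, n), dtype=np.int64)
    mask[:n_global, :] = 1              # global tokens attend everywhere
    mask[:, :n_global] = 1              # every token attends to global tokens
    for start in range(n_global, n, block):
        end = min(start + block, n)
        mask[start:end, start:end] = 1  # local attention within each block
    return mask

m = block_local_global_mask(seq_len=8, block=4, n_global=2)
print(m.shape, int(m.sum()))  # far fewer allowed pairs than full attention
```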
Purpose: Recovering QSM in the presence of phase errors has been challenging; such errors can be caused by noise or by local susceptibility variations in cases of brain hemorrhage and calcification. We propose a Bayesian formulation for QSM in which a two-component Gaussian-mixture distribution is used to model the long-tailed noise (error) distribution, and design an approximate message passing (AMP) algorithm with automatic and adaptive parameter estimation. Theory: The wavelet coefficients of the susceptibility map follow a Laplace distribution. The measurement noise follows a two-component Gaussian-mixture distribution, where the second Gaussian component models the noise outliers. The distribution parameters are treated as unknown variables and recovered jointly with the susceptibility using AMP. Methods: The proposed AMP with parameter estimation (AMP-PE) is compared with the state-of-the-art nonlinear L1-QSM and MEDI approaches, which adopt L1-norm and L2-norm data-fidelity terms respectively. The three approaches are tested on the SIM2SNR1 data from QSM Challenge 2.0 and on in vivo data from both healthy and hemorrhage scans. Results: On the simulated SIM2SNR1 dataset, AMP-PE achieved the lowest NRMSE and SSIM, MEDI achieved the lowest HFEN, and each approach has its own strong suit with respect to the various local evaluation metrics. On the in vivo datasets, AMP-PE is better at preserving structural details and removing streaking artifacts than L1-QSM and MEDI. Conclusion: By leveraging a customized Gaussian-mixture noise model, AMP-PE achieves better performance on challenging QSM cases involving hemorrhage and calcification. It is equipped with built-in parameter estimation, which avoids the subjective bias of the usual visual fine-tuning step in in vivo reconstruction.
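The two-component Gaussian-mixture noise model can be illustrated by fitting it to synthetic residuals with a few EM iterations: most residuals come from a small-variance component, while a second, broad component absorbs outliers (e.g., phase errors near hemorrhage). This is only a sketch of the noise model under assumed parameters, not the paper's AMP algorithm:

```python
import numpy as np

def em_gaussian_mixture(r, iters=50):
    """Fit weights and variances of a zero-mean 2-component Gaussian
    mixture to residuals r with expectation-maximization."""
    w = np.array([0.5, 0.5])
    var = np.array([np.var(r) * 0.1, np.var(r) * 10.0])  # narrow / broad init
    for _ in range(iters):
        # E-step: responsibility of each component for each residual
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(-r[:, None] ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights and component variances
        w = resp.mean(axis=0)
        var = (resp * r[:, None] ** 2).sum(axis=0) / resp.sum(axis=0)
    return w, var

rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0, 0.1, 900),    # regular measurement noise
                    rng.normal(0, 2.0, 100)])   # outliers (long tail)
w, var = em_gaussian_mixture(r)
print(w.round(2), np.sqrt(var).round(2))  # recovered weights and std devs
```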