Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
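As a concrete entry point, the sketch below shows few-shot prompting with a released BLOOM checkpoint through the Hugging Face transformers interface; we use the small bigscience/bloom-560m variant so the example runs on modest hardware (the 176B model follows the same API but needs multi-GPU inference), and the prompt itself is purely illustrative.

```python
# Few-shot prompting sketch with a released BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# A few demonstrations followed by a new query, in the in-context style
# the abstract describes.
prompt = (
    "English: cat -> French: chat\n"
    "English: dog -> French: chien\n"
    "English: bird -> French:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```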
A few-shot dialogue state tracking (DST) model tracks user requests with reliable accuracy even with a small amount of data. In this paper, we introduce an ontology-free few-shot DST with a self-feeding belief state input. The self-feeding belief state input increases accuracy in multi-turn dialogues by summarizing the previous dialogue. In addition, we newly formulate a slot auxiliary task, which helps classify whether a slot is mentioned in the dialogue. Our model achieves the best few-shot scores on four domains of MultiWOZ 2.0.
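To make the self-feeding mechanism concrete, here is an illustrative sketch, not the authors' code, of how the belief state predicted at turn t-1 can be serialized and fed back as part of the turn-t input; the helper name, special tokens, and serialization format are all our assumptions.

```python
# Hypothetical illustration of a self-feeding belief state input.
def build_dst_input(prev_belief: dict, system_utt: str, user_utt: str) -> str:
    # Serialize the previous turn's predicted belief state as slot=value pairs;
    # this summary of the dialogue so far replaces the full history.
    belief_str = "; ".join(f"{slot}={val}" for slot, val in sorted(prev_belief.items()))
    return f"[belief] {belief_str} [system] {system_utt} [user] {user_utt}"

# Turn 2 of a MultiWOZ-style dialogue: the model sees its own turn-1 output.
prev_belief = {"hotel-area": "centre", "hotel-stars": "4"}
x = build_dst_input(prev_belief, "I found 3 hotels. Any price range?", "Cheap, please.")
# x is then fed to the DST model, which additionally solves the auxiliary task
# of classifying whether each slot (e.g. hotel-pricerange) is mentioned.
```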
The recent success of StyleGAN demonstrates that a pre-trained StyleGAN latent space is useful for realistic video generation. However, the generated motion in a video is often semantically meaningless because it is difficult to determine the direction and magnitude in the StyleGAN latent space. In this paper, we propose a framework to generate realistic videos by leveraging a multimodal (sound-image-text) embedding space. Since sound provides the temporal context of a scene, our framework learns to generate a video that is consistent with the sound. First, our sound inversion module maps the audio directly into the StyleGAN latent space. We then incorporate a CLIP-based multimodal embedding space to further provide audio-visual relationships. Finally, the proposed frame generator learns to find a trajectory in the latent space that is coherent with the corresponding sound, and generates a video in a hierarchical manner. We provide a new high-resolution landscape video dataset (audio-visual pairs) for the sound-guided video generation task. Experiments show that our model outperforms state-of-the-art methods in terms of video quality. We further show several applications, including image and video editing, to verify the effectiveness of our method.
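The sound inversion step can be pictured as an audio encoder that regresses a StyleGAN latent code directly from a spectrogram. The PyTorch sketch below is our simplified reading of that component only; the layer choices, mel-spectrogram input, and latent dimension are assumptions, not the paper's architecture.

```python
# Schematic sound-inversion module: audio features -> StyleGAN w-space latent.
import torch
import torch.nn as nn

class SoundInversion(nn.Module):
    def __init__(self, n_mels: int = 80, latent_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # pool over the time axis
            nn.Flatten(),
            nn.Linear(256, latent_dim),   # project to the latent space
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, time) -> (batch, latent_dim)
        return self.encoder(mel)

# Latents like these would be decoded by a pretrained StyleGAN generator,
# with a CLIP-based loss aligning the generated frames to the sound.
w = SoundInversion()(torch.randn(2, 80, 100))
```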
Advances in neuroimaging techniques have provided us with novel insights into how the human mind works. Functional magnetic resonance imaging (fMRI) is the most popular and widely used neuroimaging technique, and there is growing interest in fMRI-based markers of individual differences. However, its utility is often limited by its high cost and the difficulty of acquiring data from certain populations, including children and infants. A surrogate marker, or neural correlate, of fMRI markers would have important practical implications, but we have few independent predictors of fMRI markers. Here, using machine learning (ML) models and data augmentation, we predicted well-established fMRI markers of human cognition from multivariate patterns of functional near-infrared spectroscopy (fNIRS), a portable and relatively inexpensive optical neuroimaging technique. We recruited 50 human participants who performed two cognitive tasks (a stop-signal task and a probabilistic reversal learning task), while neural activation was measured with either fNIRS or fMRI at each of two visits. Using ML models and data augmentation, we could predict well-established fMRI markers of response inhibition or prediction-error signals from 48-channel fNIRS activation in the prefrontal cortex. These results suggest that fNIRS may offer a surrogate marker of fMRI activation, which would broaden our understanding of various populations, including infants.
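As a hedged sketch of the prediction setup (not the authors' exact pipeline), a regularized linear model can map 48-channel fNIRS activation patterns to a scalar fMRI marker, with noise-jitter augmentation of the training patterns; the random arrays below merely stand in for real recordings.

```python
# Toy fNIRS -> fMRI-marker regression with simple data augmentation.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 48))   # 50 participants x 48 fNIRS channels
y = X[:, :5].mean(axis=1) + 0.1 * rng.standard_normal(50)  # stand-in fMRI marker

# Hold out 10 participants, then augment the training set with jittered copies.
X_train, X_test, y_train, y_test = X[:40], X[40:], y[:40], y[40:]
X_aug = np.vstack([X_train, X_train + 0.05 * rng.standard_normal(X_train.shape)])
y_aug = np.concatenate([y_train, y_train])

model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_aug, y_aug)
print(model.score(X_test, y_test))  # R^2 on held-out participants
```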
We present NUReality, a virtual reality (VR) environment designed to test the efficacy of vehicle behaviors in communicating intent during interactions between autonomous vehicles (AVs) and pedestrians at urban intersections. In this project, we focus on expressive behaviors as a means for pedestrians to easily recognize the underlying intent of the AV's movements. VR is an ideal tool for testing these situations because it is immersive and places subjects into these potentially dangerous scenarios without risk. NUReality provides a novel and immersive virtual reality environment that includes numerous visual details (road and building textures, parked cars, swaying tree limbs) as well as auditory details (birds chirping, cars driving by in the distance). In this work, we present the NUReality environment, its 10 unique vehicle behavior scenarios, and the Unreal Engine and Autodesk Maya source files for each scenario. The files are publicly released as open source at www.nureality.org to support the academic community in studying critical AV-pedestrian interactions.
3D-aware image synthesis focuses on preserving spatial consistency while generating high-resolution images with fine details. Recently, the Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate generative NeRFs and show remarkable achievements, they cannot handle conditional and continuous feature manipulation during generation. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features into the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated on three image datasets: AFHQ, CelebA, and Cars. Our model shows strong 3D consistency with fine details and smooth interpolation under conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF achieves a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we report FIDs of generated 3D-aware images for each class of the datasets, since $\text{C}^{3}$G-NeRF can synthesize class-conditional images.
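One way to picture the class-continuous conditioning is as a class embedding that can be interpolated between classes and injected into the NeRF MLP (and, analogously, into the discriminator). The block below is our illustrative simplification; every layer size and name is an assumption rather than the paper's design.

```python
# Simplified class-continuous conditioning of a NeRF-style MLP.
import torch
import torch.nn as nn

class ConditionalNeRFBlock(nn.Module):
    def __init__(self, n_classes: int = 3, cond_dim: int = 64, feat_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, cond_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + cond_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 4),  # RGB + density for each sampled point
        )

    def forward(self, xyz, class_a, class_b, alpha):
        # Class-continuous condition: interpolate between two class embeddings.
        cond = (1 - alpha) * self.embed(class_a) + alpha * self.embed(class_b)
        cond = cond.expand(xyz.shape[0], -1)
        return self.mlp(torch.cat([xyz, cond], dim=-1))

block = ConditionalNeRFBlock()
# Halfway between class 0 and class 1 for 1024 sampled 3D points.
out = block(torch.randn(1024, 3), torch.tensor([0]), torch.tensor([1]), 0.5)
```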
Cellular automata (CA) captivate researchers due to the emergent, complex, individualized behavior that simple global rules of interaction enact. Recent advances in the field have combined CA with convolutional neural networks to achieve self-regenerating images. This new branch of CA is called neural cellular automata [1]. The goal of this project is to use the idea of neural cellular automata to grow prediction machines. We place many different convolutional neural networks in a grid. Each conv net cell outputs a prediction of what the next state will be and minimizes predictive error. Cells receive their neighbors' colors and fitnesses as input, where each cell's fitness score describes how accurate its predictions are. Cells can also move to explore their environment, with some stochasticity applied to movement.
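A toy version of this setup, under our own assumptions (a single shared 3x3 conv net standing in for the grid of per-cell predictors, with fitness defined as negative prediction error), might look like this:

```python
# Grid of predictive cells: each cell predicts its next colors from its
# 3x3 neighborhood (colors + a fitness channel) and minimizes the error.
import torch
import torch.nn as nn

GRID = 16
model = nn.Conv2d(4, 3, kernel_size=3, padding=1)  # in: RGB + fitness, out: RGB
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

state = torch.rand(1, 4, GRID, GRID)        # current colors + fitness per cell
next_colors = torch.rand(1, 3, GRID, GRID)  # colors at the next time step

pred = model(state)
loss = ((pred - next_colors) ** 2).mean()   # mean per-cell predictive error
opt.zero_grad(); loss.backward(); opt.step()

# Each cell's fitness is the negative of its own prediction error.
fitness = -((pred.detach() - next_colors) ** 2).mean(dim=1, keepdim=True)
```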
There is a dramatic shortage of skilled labor for modern vineyards. The Vinum project is developing a mobile robotic solution to autonomously navigate through vineyards for winter grapevine pruning, which requires an autonomous navigation stack for the pruning robot. The project uses the quadruped robot HyQReal. This paper introduces an architecture that lets a quadruped robot autonomously move through a vineyard by identifying and approaching grapevines for pruning. The high-level control is a state machine that switches between searching for destination positions, autonomously navigating toward those locations, and stopping for the robot to complete a task. The destination points are determined by identifying grapevine trunks using instance segmentation with a Mask Region-Based Convolutional Neural Network (Mask-RCNN). The detections are passed through a filter to remove redundant and noisy detections. The combination of these features forms the basis of the proposed architecture.
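The high-level control can be summarized as a three-state machine. The sketch below infers the states and transitions from the description above; it is not the project's code, and the predicate names are placeholders.

```python
# Schematic state machine for the search/navigate/prune cycle.
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()    # scan for grapevine trunks with Mask-RCNN
    NAVIGATE = auto()  # move toward a filtered trunk detection
    PRUNE = auto()     # stop and let the pruning task execute

def step(state: State, trunk_found: bool, at_target: bool, task_done: bool) -> State:
    if state is State.SEARCH and trunk_found:
        return State.NAVIGATE
    if state is State.NAVIGATE and at_target:
        return State.PRUNE
    if state is State.PRUNE and task_done:
        return State.SEARCH  # move on to the next vine
    return state
```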
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
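In pseudocode terms, the greedy policy scores every unqueried feature with a learned value network and acquires the argmax until a budget is exhausted; with oracle access, the scores would equal each feature's conditional mutual information with the label. In the sketch below, value_net and predictor are hypothetical stand-ins for the amortized networks.

```python
# Greedy dynamic feature selection with an amortized value network.
import torch

def select_features(x, value_net, predictor, budget):
    mask = torch.zeros_like(x)  # 1 = feature has been queried
    for _ in range(budget):
        scores = value_net(x * mask, mask)   # estimated CMI per feature
        scores = scores.masked_fill(mask.bool(), float("-inf"))  # no re-queries
        j = scores.argmax()
        mask[j] = 1.0                        # query feature j, revealing x[j]
    return predictor(x * mask, mask)         # predict from the selected subset
```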
In this paper, we learn a diffusion model to generate 3D data on a scene scale. Specifically, our model crafts a 3D scene consisting of multiple objects, whereas recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., a categorical distribution, to assign multiple objects to semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deployment. To the best of our knowledge, our work is the first to apply discrete and latent diffusion to 3D categorical data on a scene scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution with our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate plausible scenes but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion
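For intuition, the discrete forward process over semantic voxel labels can be written as a per-voxel chance of resampling the class uniformly, in the style of D3PM-like multinomial diffusion; the snippet below is a generic sketch of that corruption step, not the repository's implementation.

```python
# Uniform-resampling forward step of a discrete (categorical) diffusion.
import torch

def q_sample(labels: torch.Tensor, beta_t: float, num_classes: int) -> torch.Tensor:
    # labels: (N,) integer semantic classes of scene voxels
    resample = torch.rand(labels.shape) < beta_t
    random_labels = torch.randint(0, num_classes, labels.shape)
    return torch.where(resample, random_labels, labels)

x0 = torch.randint(0, 20, (64 * 64 * 8,))      # toy voxel grid with 20 classes
xt = q_sample(x0, beta_t=0.1, num_classes=20)  # noisier labels at step t
```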