We introduce Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless than prompted language model baselines. We use reinforcement learning from human feedback to train our models, with new additions to help human raters judge the agent's behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown lets us collect more targeted human judgements of agent behaviour and enables more efficient rule-conditional reward models. Second, our agent provides evidence from sources supporting its factual claims when we collect preference judgements over model statements. For factual questions, the evidence Sparrow provides supports its claims 78% of the time. Sparrow is preferred over baselines more often, while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
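The rule-conditional reward idea can be sketched as follows. This is a minimal illustration with toy stand-in models, not Sparrow's implementation: a scalar preference reward is combined with per-rule violation probabilities, so each rule contributes its own penalty and can be judged separately.

```python
# Hypothetical sketch of a rule-conditional reward signal. The preference
# model and per-rule violation models below are toy stand-ins; in practice
# both would be learned from human judgements.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class RuleConditionalReward:
    preference_model: Callable[[str], float]        # higher = more preferred
    rule_models: Dict[str, Callable[[str], float]]  # rule name -> P(violation)
    penalty: float = 1.0                            # weight on rule violations

    def score(self, response: str) -> float:
        r = self.preference_model(response)
        for rule, violation_prob in self.rule_models.items():
            r -= self.penalty * violation_prob(response)
        return r

# Toy stand-ins: preference grows with length, one rule flags an insult.
reward = RuleConditionalReward(
    preference_model=lambda text: len(text.split()) / 10.0,
    rule_models={"no_insults": lambda text: 1.0 if "idiot" in text else 0.0},
)
print(reward.score("a helpful polite answer"))  # 0.4
print(reward.score("you idiot"))                # 0.2 - 1.0 = -0.8
```

Keeping one head (or model) per rule is what allows the targeted per-rule human judgements described above to supervise each penalty term directly.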
A major challenge in few-shot action recognition is the scarcity of training video data. To address this, current methods in the field focus mainly on designing algorithms at the feature level, paying little attention to how the input video data is processed. Moreover, existing frame sampling strategies may omit critical action information along both the temporal and spatial dimensions, further harming video utilization efficiency. In this paper, we propose a novel video frame sampler for few-shot action recognition to tackle this issue, in which task-specific spatial-temporal frame sampling is achieved via a temporal selector (TS) and a spatial amplifier (SA). Specifically, our sampler first scans the whole video at a small computational cost to obtain a global perception of the video frames. The TS then selects the top frames that contribute most significantly to the task. The SA emphasizes the discriminative information of each frame by amplifying critical regions under the guidance of saliency maps. We further adopt task-adaptive learning to dynamically adjust the sampling strategy according to the episodic task at hand. Both the TS and SA can be optimized end-to-end, enabling seamless integration of the proposed sampler with most few-shot action recognition methods. Extensive experiments show significant performance improvements on various benchmarks, including long-term videos.
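The two sampling stages can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the temporal selector keeps the top-k frames by a scalar importance score, and the spatial amplifier crops each kept frame around the peak of a saliency map (the real SA uses a differentiable zoom; function and parameter names here are our own).

```python
# Minimal sketch of temporal selection (TS) and spatial amplification (SA).
import numpy as np

def temporal_select(frames: np.ndarray, scores: np.ndarray, k: int) -> np.ndarray:
    """Keep the k frames with the highest scores, preserving temporal order."""
    top = np.sort(np.argsort(scores)[-k:])
    return frames[top]

def spatial_amplify(frame: np.ndarray, saliency: np.ndarray, crop: int) -> np.ndarray:
    """Crop a window centered on the saliency peak, clipped to the frame."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = frame.shape[:2]
    y0 = int(np.clip(y - crop // 2, 0, h - crop))
    x0 = int(np.clip(x - crop // 2, 0, w - crop))
    return frame[y0:y0 + crop, x0:x0 + crop]

video = np.random.rand(16, 64, 64, 3)            # 16 frames of 64x64 RGB
scores = np.random.rand(16)                      # per-frame importance
kept = temporal_select(video, scores, k=8)       # shape (8, 64, 64, 3)
sal = np.random.rand(64, 64)                     # saliency map for a frame
patch = spatial_amplify(kept[0], sal, crop=32)   # shape (32, 32, 3)
```

In the actual method both stages would be differentiable so that the sampler can be trained end-to-end with the recognition backbone.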
Modern video object segmentation (VOS) algorithms achieve remarkably high performance with sequential frame-by-frame processing, yet the currently prevalent pipeline still exhibits some obvious shortcomings, such as accumulated errors, unknown robustness, and a lack of proper interpretation tools. In this paper, we cast the semi-supervised video object segmentation problem into a cyclic workflow and address the above deficiencies through the inherent cyclic property of semi-supervised VOS systems. First, incorporating a cyclic mechanism into the standard sequential flow produces more consistent pixel-wise representations. Relying on an accurate reference mask in the starting frame, we show that the error propagation problem can be mitigated. Next, a simple gradient correction module, which naturally extends the offline cyclic pipeline to an online manner, can highlight the high-frequency, detailed parts of the results to further improve segmentation quality while keeping the computational cost feasible. Meanwhile, this correction protects the network from severe performance degradation caused by interference signals. Finally, we develop the cycle effective receptive field (cycle-ERF) based on the gradient correction process to provide a new perspective for analyzing object-specific regions of interest. Comprehensive comparisons and detailed analyses on the challenging DAVIS16, DAVIS17, and YouTube-VOS benchmarks show that the cyclic mechanism helps improve segmentation quality, enhances the robustness of VOS systems, and further provides qualitative comparisons and interpretations of different VOS algorithms. The code for this project can be found at https://github.com/lyxok1/stm-trings
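The cyclic-consistency intuition can be sketched as follows; this is our illustration of the idea, not the paper's module. Propagate the reference mask forward through the sequence, then propagate the last prediction back to the first frame: disagreement with the reference mask exposes accumulated error. The `propagate` function below is a placeholder for a real mask-propagation model.

```python
# Sketch of a forward-backward cycle check for sequential VOS.
import numpy as np

def propagate(mask: np.ndarray, src_frame: np.ndarray, dst_frame: np.ndarray) -> np.ndarray:
    # Placeholder: a real VOS model would warp the mask using both frames.
    return mask

def cycle_inconsistency(frames, ref_mask, step=propagate) -> float:
    """Mean absolute disagreement between the reference mask and the mask
    obtained after a full forward-then-backward propagation cycle."""
    mask = ref_mask
    for a, b in zip(frames[:-1], frames[1:]):                 # forward pass
        mask = step(mask, a, b)
    rev = frames[::-1]
    for a, b in zip(rev[:-1], rev[1:]):                       # backward pass
        mask = step(mask, a, b)
    return float(np.abs(mask.astype(float) - ref_mask.astype(float)).mean())

frames = [np.zeros((4, 4)) for _ in range(5)]
ref = np.ones((4, 4))
print(cycle_inconsistency(frames, ref))  # 0.0 for the identity placeholder
```

With a real propagation model, a nonzero cycle inconsistency is exactly the signal the gradient correction module could exploit online.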
Few-shot action recognition aims to recognize novel action classes (the query) using only a few samples (the support). Most current methods follow a metric-learning paradigm that learns to compare the similarity between videos. Recently, it has been observed that directly measuring this similarity is not ideal, since different action instances may exhibit distinct temporal distributions, leading to severe misalignment between query and support videos. In this paper, we tackle this problem from two distinct aspects: action duration misalignment and action evolution misalignment. We address them sequentially with a Two-stage Action Alignment Network (TA2N). The first stage locates the action by learning a temporal affine transformation, which warps the action duration of each video feature while dismissing action-irrelevant features (e.g., background). The second stage coordinates the query feature to match the spatio-temporal action evolution of the support by performing temporal rearrangement and spatial offset prediction. Extensive experiments on benchmark datasets demonstrate the potential of the proposed method to achieve state-of-the-art performance in few-shot action recognition.
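A temporal affine transformation of the kind the first stage learns can be sketched as resampling a feature sequence along time with t' = a·t + b and linear interpolation. This is a hedged illustration under our own parameter names, not TA2N's exact layer.

```python
# Sketch of a temporal affine warp: sample the feature at source time
# a*t + b for each output step t, with linear interpolation and clamping.
import numpy as np

def temporal_affine_warp(features: np.ndarray, a: float, b: float) -> np.ndarray:
    """features: (T, C). Returns the warped sequence, same shape."""
    T = features.shape[0]
    src = np.clip(a * np.arange(T) + b, 0, T - 1)   # source time per output step
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (src - lo)[:, None]                          # interpolation weight
    return (1 - w) * features[lo] + w * features[hi]

feats = np.arange(8, dtype=float)[:, None]           # (8, 1) ramp feature
warped = temporal_affine_warp(feats, a=0.5, b=2.0)   # compress and shift in time
print(warped.ravel())  # [2.  2.5 3.  3.5 4.  4.5 5.  5.5]
```

In the network, a and b would be predicted per video by a small localization head, so that actions of different durations are warped onto a common temporal extent before comparison.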
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
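Posterior collapse as described above has a simple diagnostic for the common diagonal-Gaussian encoder (this check is our illustration, not the paper's method): for q(z|x) = N(mu, sigma^2) and prior N(0, I), the per-dimension KL divergence is 0.5·(mu^2 + sigma^2 - 1 - log sigma^2), and a latent dimension whose KL is ~0 across the dataset has collapsed to the prior.

```python
# Detect collapsed latent dimensions from encoder outputs (mu, log_var).
import numpy as np

def kl_to_standard_normal(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    """Per-dimension KL(q || N(0, I)) for a diagonal Gaussian q. Shapes: (N, D)."""
    return 0.5 * (mu ** 2 + np.exp(log_var) - 1.0 - log_var)

def collapsed_dims(mu: np.ndarray, log_var: np.ndarray, tol: float = 1e-2) -> np.ndarray:
    """Indices of dimensions whose mean KL over the dataset is below tol."""
    return np.where(kl_to_standard_normal(mu, log_var).mean(axis=0) < tol)[0]

rng = np.random.default_rng(0)
mu = np.zeros((100, 3)); log_var = np.zeros((100, 3))
mu[:, 0] = rng.standard_normal(100)   # dimension 0 carries information
log_var[:, 0] = -1.0
print(collapsed_dims(mu, log_var))    # dims 1 and 2 match the prior exactly
```

Under the paper's result, dimensions flagged by such a check correspond to latent variables that are non-identifiable in the generative model, which is why tricks that only change the inference procedure do not fix them.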
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
New architecture GPUs like A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
In this paper we derive a PAC-Bayesian-Like error bound for a class of stochastic dynamical systems with inputs, namely, for linear time-invariant stochastic state-space models (stochastic LTI systems for short). This class of systems is widely used in control engineering and econometrics, in particular, they represent a special case of recurrent neural networks. In this paper we 1) formalize the learning problem for stochastic LTI systems with inputs, 2) derive a PAC-Bayesian-Like error bound for such systems, 3) discuss various consequences of this error bound.
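For concreteness, a discrete-time stochastic LTI state-space model with inputs can be written in innovation form (a standard textbook formulation; the paper's exact notation may differ):

```latex
\begin{aligned}
x_{t+1} &= A x_t + B u_t + K e_t, \\
y_t     &= C x_t + e_t,
\end{aligned}
```

where $x_t$ is the hidden state, $u_t$ the input, $y_t$ the output, and $e_t$ an i.i.d. zero-mean Gaussian innovation process; stability of the filter requires $A$ and $A - KC$ to be Schur matrices. A PAC-Bayesian-like bound for this class then relates, in the usual PAC-Bayesian way, the empirical prediction error of models drawn from a posterior over the system matrices to their expected generalization error.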
We demonstrate how efficient autonomous drone swarms can be in detecting and tracking occluded targets in densely forested areas, such as lost people during search and rescue missions. Exploration and optimization of local viewing conditions, such as occlusion density and target view obliqueness, provide much faster and much more reliable results than previous, blind sampling strategies that are based on pre-defined waypoints. An adapted real-time particle swarm optimization and a new objective function are presented that are able to deal with dynamic and highly random through-foliage conditions. Synthetic aperture sensing is our fundamental sampling principle, and drone swarms are employed to approximate the optical signals of extremely wide and adaptable airborne lenses.
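The optimizer adapted here is particle swarm optimization; a generic textbook PSO loop is sketched below only to illustrate the class of algorithm, since the paper's real-time, through-foliage variant and its new objective function are considerably more involved.

```python
# Textbook particle swarm optimization on a generic objective.
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    vel = np.zeros((n_particles, dim))                # particle velocities
    best_pos = pos.copy()                             # per-particle best
    best_val = np.array([objective(p) for p in pos])
    g = best_pos[best_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g = best_pos[best_val.argmin()].copy()
    return g, float(best_val.min())

best, val = pso(lambda p: float(np.sum(p ** 2)), dim=3)  # sphere function
print(val)  # close to 0
```

In the swarm setting described above, each drone would play the role of a particle, and the objective would score local viewing conditions such as occlusion density and target view obliqueness rather than a synthetic test function.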