Because universal non-verbal forms of natural communication allow people to communicate effectively with one another, gesture recognition technology has been developing steadily over the past few decades. Many different strategies have been proposed in the gesture-recognition literature in an attempt to build an effective system that conveys this non-verbal, natural communication to computers using physical sensors and computer vision. Highly accurate real-time systems, however, have only recently begun to dominate the field, with each system adopting its own mix of methods because of earlier limitations such as availability, cost, speed, and accuracy. This work proposes a computer-vision-based human-computer interaction tool: a gesture recognition application that acts as a natural user interface. Virtual glove markers are created on the user's hands and used as input to a deep learning model that recognizes gestures in real time. The results obtained indicate that the proposed system would be effective in real-time applications, including social interaction through remote presence and rehabilitation.
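A minimal sketch of how such a pipeline could be wired together, assuming MediaPipe hand landmarks stand in for the virtual glove markers and a small PyTorch classifier stands in for the (unspecified) deep learning model; the gesture labels and network shape below are illustrative choices, not details from the paper.

```python
import cv2
import mediapipe as mp
import torch
import torch.nn as nn

GESTURES = ["open_palm", "fist", "thumbs_up"]  # hypothetical label set

# Small classifier standing in for the paper's (unspecified) deep learning model;
# it maps 21 hand landmarks x (x, y, z) to a gesture class.
classifier = nn.Sequential(
    nn.Linear(21 * 3, 64), nn.ReLU(),
    nn.Linear(64, len(GESTURES)),
)
classifier.eval()  # assume weights were trained offline on labeled gestures

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # The 21 normalized landmarks act as the "virtual glove markers".
        lm = result.multi_hand_landmarks[0].landmark
        feats = torch.tensor([[p.x, p.y, p.z] for p in lm]).flatten()
        with torch.no_grad():
            pred = classifier(feats).argmax().item()
        print(GESTURES[pred])
```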
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
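As a rough illustration of the lowest-effort end of this pipeline, the sketch below asks an LM to write yes/no questions probing a target behavior and then uses a second LM pass as a relevance filter. The `generate` callable is a placeholder for whatever text-generation API is available, and both prompt strings are invented for illustration rather than taken from the paper.

```python
from typing import Callable, Dict, List

def build_yes_no_dataset(generate: Callable[[str], str],
                         behavior: str,
                         n_examples: int = 100) -> List[Dict[str, str]]:
    """Ask an LM to write yes/no questions probing `behavior`, then keep only
    the ones a second LM pass judges relevant. `generate` is a placeholder
    for any text-completion API."""
    examples: List[Dict[str, str]] = []
    while len(examples) < n_examples:
        question = generate(
            "Write a yes/no question that tests whether an AI assistant "
            f"exhibits the following behavior: {behavior}\nQuestion:"
        ).strip()
        # LM-based filtering: discard questions the model itself finds
        # irrelevant or ambiguous.
        verdict = generate(
            "Is the following a clear, relevant yes/no question for testing "
            f"'{behavior}'? Answer Yes or No.\nQuestion: {question}\nAnswer:"
        ).strip().lower()
        if verdict.startswith("yes"):
            examples.append({"question": question,
                             "answer_matching_behavior": "Yes"})
    return examples
```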
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
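A minimal sketch of the supervised (critique-and-revision) phase described above, assuming a generic `generate` text-completion callable; the prompt wording and the one-revision-per-principle loop are simplifications for illustration, not the paper's actual templates.

```python
from typing import Callable, List

def critique_revision_pass(generate: Callable[[str], str],
                           prompt: str,
                           principles: List[str]) -> str:
    """Sample an initial response, then critique and revise it once per
    principle. The revised responses would later be used to finetune the
    original model in the supervised phase."""
    response = generate(f"Human: {prompt}\n\nAssistant:")
    for principle in principles:
        critique = generate(
            f"Response: {response}\n"
            f"Critique the response according to this principle: {principle}\n"
            "Critique:"
        )
        response = generate(
            f"Response: {response}\nCritique: {critique}\n"
            "Rewrite the response so that it addresses the critique while "
            "still engaging with the original request.\nRevision:"
        )
    return response
```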
Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on how to turn it into one that can be productively studied empirically. We first present an experimental design centered on choosing tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.
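A small sketch of how the three conditions in such an experiment might be scored on multiple-choice items (model alone, unaided humans, and humans assisted by the model); the data layout, with answers keyed by question id, is an assumption for illustration.

```python
from typing import Dict

def accuracy(answers: Dict[str, str], gold: Dict[str, str]) -> float:
    """Fraction of questions answered correctly; keys are question ids."""
    correct = sum(answers.get(qid) == label for qid, label in gold.items())
    return correct / len(gold)

def summarize_conditions(gold: Dict[str, str],
                         model_alone: Dict[str, str],
                         human_unaided: Dict[str, str],
                         human_with_model: Dict[str, str]) -> Dict[str, float]:
    """Compare the three conditions on a multiple-choice task such as MMLU."""
    return {
        "model_alone": accuracy(model_alone, gold),
        "human_unaided": accuracy(human_unaided, gold),
        "human_with_model_assistance": accuracy(human_with_model, gold),
    }
```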
Generating robust and reliable correspondences across images is a fundamental task for a wide range of applications. To capture context at both global and local granularity, we propose ASpanFormer, a Transformer-based detector-free matcher built on a hierarchical attention structure, which adopts a novel attention operation capable of adjusting the attention span in an adaptive manner. To achieve this, a flow map is first regressed in each cross-attention stage to locate the center of the search region. Next, a sampling grid is generated around that center whose size, rather than being fixed by an empirical configuration, is computed adaptively from a pixel uncertainty estimated together with the flow map. Finally, attention is computed across the two images within the derived regions, referred to as the attention span. In this way, we not only maintain long-range dependencies but also obtain fine-grained attention among highly relevant pixels, compensating for the essential locality and piece-wise smoothness of the matching task. State-of-the-art accuracy on a wide range of evaluation benchmarks validates the strong matching capability of our method.
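The sketch below is one simplified reading of this adaptive-span idea in PyTorch, not the official ASpanFormer implementation: a small head regresses a flow offset and a pixel uncertainty per query, a sampling grid centered on the predicted location is scaled by that uncertainty, and attention is computed only over the keys and values sampled inside that span. The layer shapes, the tanh/softplus squashing, and the fixed K x K grid are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSpanCrossAttention(nn.Module):
    """Toy adaptive-span cross-attention: a flow head predicts, per query
    pixel, the search-region center in the other image plus an uncertainty
    that scales the size of the sampling grid used for attention."""

    def __init__(self, dim: int, grid_size: int = 5):
        super().__init__()
        self.grid_size = grid_size
        self.flow_head = nn.Conv2d(dim, 3, 1)   # (dx, dy, raw_sigma)
        self.q_proj = nn.Conv2d(dim, dim, 1)
        self.k_proj = nn.Conv2d(dim, dim, 1)
        self.v_proj = nn.Conv2d(dim, dim, 1)

    def forward(self, feat_q: torch.Tensor, feat_kv: torch.Tensor) -> torch.Tensor:
        B, C, H, W = feat_q.shape
        K = self.grid_size
        dev = feat_q.device

        # Flow map: offset to the search-region center, plus a per-pixel
        # uncertainty that becomes the attention-span radius.
        flow = self.flow_head(feat_q)                       # (B, 3, H, W)
        offset, raw_sigma = flow[:, :2], flow[:, 2:]
        radius = F.softplus(raw_sigma)                      # span radius (normalized coords)

        # Query-pixel coordinates in grid_sample's [-1, 1] (x, y) convention.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H, device=dev),
                                torch.linspace(-1, 1, W, device=dev),
                                indexing="ij")
        base = torch.stack([xs, ys], dim=0).unsqueeze(0)             # (1, 2, H, W)
        center = (base + torch.tanh(offset)).permute(0, 2, 3, 1)     # (B, H, W, 2)
        radius = radius.permute(0, 2, 3, 1)                          # (B, H, W, 1)

        # K x K local grid whose extent is scaled by the predicted uncertainty.
        local = torch.linspace(-1.0, 1.0, K, device=dev)
        dy, dx = torch.meshgrid(local, local, indexing="ij")
        local_grid = torch.stack([dx, dy], dim=-1).view(1, K * K, 1, 1, 2)

        locs = center.unsqueeze(1) + radius.unsqueeze(1) * local_grid  # (B, K*K, H, W, 2)
        locs = locs.reshape(B, K * K * H, W, 2).clamp(-1, 1)

        # Sample keys/values only inside the derived region (the attention span).
        k = F.grid_sample(self.k_proj(feat_kv), locs, align_corners=True)
        v = F.grid_sample(self.v_proj(feat_kv), locs, align_corners=True)
        k = k.view(B, C, K * K, H, W)
        v = v.view(B, C, K * K, H, W)

        q = self.q_proj(feat_q)                                        # (B, C, H, W)
        attn = torch.einsum("bchw,bcnhw->bnhw", q, k) / C ** 0.5
        attn = attn.softmax(dim=1)
        return torch.einsum("bnhw,bcnhw->bchw", attn, v)
```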
Neural implicit functions have recently shown promising results for surface reconstruction from multiple views. However, current methods still suffer from excessive complexity and poor robustness when reconstructing unbounded or complex scenes. In this paper, we introduce RegSDF, which shows that proper point cloud supervision and geometric regularization are sufficient to produce high-quality and robust reconstruction results. Specifically, RegSDF takes an additional oriented point cloud as input and optimizes a signed distance field and a surface light field within a differentiable rendering framework. We also introduce two critical regularizations. The first is a Hessian regularization that smoothly diffuses the signed distance values over the entire distance field given noisy and incomplete inputs. The second is a minimal surface regularization that compacts and infers missing geometry. Extensive experiments are conducted on the DTU, BlendedMVS, and Tanks and Temples datasets. Compared with recent neural surface reconstruction methods, RegSDF is able to reconstruct surfaces even for open scenes with complex topology and unstructured camera trajectories.
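The abstract does not give the exact loss formulas, so the sketch below shows illustrative stand-ins computed with PyTorch autograd for an MLP signed distance field: a Hessian penalty that damps second derivatives so the distance values diffuse smoothly, and a simple minimal-surface-style term that discourages spurious zero crossings at random sample points. Both forms are assumptions for illustration, not RegSDF's actual regularizers.

```python
import torch
import torch.nn as nn

def gradient(y: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """d(y)/d(x), keeping the graph so second derivatives can be taken."""
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)[0]

def regularization_losses(sdf_net: nn.Module, x: torch.Tensor, alpha: float = 100.0):
    """Illustrative stand-ins for the two regularizers, evaluated at sample
    points `x` of shape (N, 3) with requires_grad=True."""
    f = sdf_net(x)                                   # (N, 1) signed distances
    grad_f = gradient(f, x)                          # (N, 3)

    # Hessian penalty: damp second derivatives so the signed distance values
    # diffuse smoothly across the field despite noisy, incomplete input.
    hessian = torch.stack([gradient(grad_f[:, i], x) for i in range(3)], dim=1)
    loss_hessian = hessian.abs().mean()              # (N, 3, 3) -> scalar

    # Minimal-surface-style penalty: discourage spurious zero crossings at
    # random points, which keeps the reconstructed surface compact.
    loss_min_surface = torch.exp(-alpha * f.abs()).mean()
    return loss_hessian, loss_min_surface

# Usage with a hypothetical MLP standing in for the SDF network:
sdf_net = nn.Sequential(nn.Linear(3, 128), nn.Softplus(beta=100),
                        nn.Linear(128, 1))
points = torch.rand(1024, 3, requires_grad=True)
l_hess, l_min = regularization_losses(sdf_net, points)
```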