Attribute value extraction refers to the task of identifying values of an attribute of interest from product information. Product attribute values are essential in many e-commerce scenarios, such as customer service robots, product ranking, retrieval, and recommendations. In the real world, the attribute values of a product are often incomplete and change over time, which greatly hinders practical applications. In this paper, we introduce MAVE, a new dataset to better facilitate research on product attribute value extraction. MAVE is composed of a curated set of 2.2 million products from Amazon pages, with 3 million attribute-value annotations across 1257 unique categories. MAVE has four main and unique advantages: First, MAVE is the largest product attribute value extraction dataset by the number of attribute-value examples. Second, MAVE includes multi-source representations of each product, which capture full product information with high attribute coverage. Third, MAVE covers a more diverse set of attributes and values relative to previous datasets. Finally, MAVE provides a very challenging zero-shot test set, as we empirically illustrate in the experiments. We further propose a novel approach that effectively extracts attribute values from the multi-source product information. We conduct extensive experiments with several baselines and show that MAVE is an effective dataset for the attribute value extraction task; the zero-shot attribute extraction setting also remains very challenging. The data is available at {\it \url{https://github.com/google-research-datasets/mave}}.
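For concreteness, here is a minimal sketch of how one might iterate over multi-source attribute annotations in a MAVE-style JSONL file. The field names (`paragraphs`, `attributes`, `evidences`, `pid`, and so on) are assumptions made for illustration only and should be checked against the repository's documentation before use.

```python
import json

def iter_positive_examples(path):
    """Yield (category, attribute, value, source) tuples from a MAVE-style
    JSONL file. Field names here are assumed for illustration, not the
    dataset's documented schema."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            product = json.loads(line)
            # Hypothetical layout: each product carries several text sources
            # (title, description, bullets, ...) plus attribute annotations
            # whose evidences point back into those sources by paragraph id.
            sources = {i: p for i, p in enumerate(product.get("paragraphs", []))}
            for attr in product.get("attributes", []):
                for ev in attr.get("evidences", []):
                    paragraph = sources.get(ev.get("pid"), {})
                    yield (
                        product.get("category"),
                        attr.get("key"),
                        ev.get("value"),
                        paragraph.get("source"),
                    )

# Example use (path is a placeholder):
# for row in iter_positive_examples("mave_positives.jsonl"):
#     print(row)
```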
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
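A rough sketch of the generate-then-filter idea follows, assuming a hypothetical `complete(prompt) -> str` wrapper around some LM API; the prompt templates and the LM-based relevance filter are illustrative stand-ins, not the paper's actual pipeline or crowdworker review step.

```python
# Sketch of LM-written evaluation generation with a cheap LM-based filter.
from typing import Callable, List

def generate_yes_no_eval(
    complete: Callable[[str], str],
    behavior: str,
    n_questions: int = 100,
) -> List[dict]:
    """Ask an LM to write yes/no questions testing `behavior`, then use a
    second LM pass as a crude relevance filter."""
    examples = []
    for _ in range(n_questions):
        question = complete(
            f"Write a yes/no question that tests whether an AI assistant "
            f"exhibits the following behavior: {behavior}\nQuestion:"
        ).strip()
        verdict = complete(
            f"Is the following question a clear, relevant test of the "
            f"behavior '{behavior}'? Answer Yes or No.\n"
            f"Question: {question}\nAnswer:"
        ).strip()
        if verdict.lower().startswith("yes"):
            examples.append({"question": question, "behavior": behavior})
    return examples
```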
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
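A minimal sketch of the supervised critique-and-revision phase described above, assuming a hypothetical `complete(prompt) -> str` LM wrapper and a short list of constitutional principles; the prompt templates are illustrative, not the authors' exact ones.

```python
from typing import Callable, List, Tuple

def critique_and_revise(
    complete: Callable[[str], str],
    prompt: str,
    principles: List[str],
) -> Tuple[str, str]:
    """Return (initial_response, revised_response) for one training prompt."""
    initial = complete(f"Human: {prompt}\n\nAssistant:")
    response = initial
    for principle in principles:
        critique = complete(
            f"Response: {response}\n"
            f"Critique the response according to this principle: {principle}\n"
            f"Critique:"
        )
        response = complete(
            f"Response: {response}\nCritique: {critique}\n"
            f"Rewrite the response so that it addresses the critique.\n"
            f"Revision:"
        )
    # The revised responses are what the original model is finetuned on.
    return initial, response
```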
Whether based on models, training data or a combination, classifiers place (possibly complex) input data into one of a relatively small number of output categories. In this paper, we study the structure of the boundary--those points for which a neighbor is classified differently--in the context of an input space that is a graph, so that there is a concept of neighboring inputs. The scientific setting is a model-based naive Bayes classifier for DNA reads produced by Next Generation Sequencers. We show that the boundary is both large and complicated in structure. We create a new measure of uncertainty, called Neighbor Similarity, that compares the result for a point to the distribution of results for its neighbors. This measure not only tracks two inherent uncertainty measures for the Bayes classifier, but also can be implemented, at a computational cost, for classifiers without inherent measures of uncertainty.
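Under one plausible reading of that description, a neighbor-similarity style score compares the label assigned to a point with the distribution of labels assigned to its graph neighbors; the sketch below is illustrative, not the paper's exact definition.

```python
from collections import Counter
from typing import Callable, Hashable, Iterable

def neighbor_similarity(
    classify: Callable[[Hashable], str],
    point: Hashable,
    neighbors: Iterable[Hashable],
) -> float:
    """Fraction of graph neighbors that receive the same class as `point`.
    Values near 1 suggest the point sits well inside its class region;
    values near 0 suggest it sits on or near the decision boundary."""
    own_label = classify(point)
    labels = Counter(classify(n) for n in neighbors)
    total = sum(labels.values())
    return labels[own_label] / total if total else 1.0
```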
Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on how to turn it into one that can be productively studied empirically. We first present an experimental design centered on choosing tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.
"Induction heads" are attention heads that implement a simple algorithm to complete token sequences such as [A][B] ... [A] -> [B]. In this work, we present preliminary and indirect evidence for the hypothesis that induction heads might constitute the mechanism behind the majority of all "in-context learning" in large transformer models (i.e., the decrease in loss at increasing token indices). We find that induction heads develop at precisely the same point as a sudden, sharp increase in in-context learning ability, visible as a bump in the training loss. We present six complementary lines of evidence arguing that induction heads may be the mechanistic source of general in-context learning in transformer models of any size. For small attention-only models, we present strong causal evidence; for larger models with MLPs, we present correlational evidence.
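The completion rule itself is easy to state in code: look back for the previous occurrence of the current token and predict the token that followed it. This toy illustration shows the rule the heads are described as implementing, not how a transformer computes it.

```python
from typing import List, Optional, TypeVar

T = TypeVar("T")

def induction_prediction(tokens: List[T]) -> Optional[T]:
    """Predict the next token by prefix matching and copying:
    [A][B] ... [A] -> [B]."""
    if not tokens:
        return None
    current = tokens[-1]
    # Scan earlier positions from right to left for a previous occurrence.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy the token that followed it last time
    return None

print(induction_prediction(["the", "cat", "sat", "on", "the"]))  # -> "cat"
```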
Neural networks often pack many unrelated concepts into a single neuron - a puzzling phenomenon known as "polysemanticity" that makes interpretability much more challenging. This paper provides a toy model in which polysemanticity can be fully understood, arising as a result of models storing additional sparse features in "superposition". We demonstrate the existence of a phase change, a surprising connection to the geometry of uniform polytopes, and evidence of a link to adversarial examples. We also discuss potential implications for mechanistic interpretability.
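A minimal sketch of the kind of toy setup described above: many sparse features compressed into fewer hidden dimensions and reconstructed through a ReLU. The shapes, initialization, and loss here follow my reading of the setup and are not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, m_hidden, sparsity = 20, 5, 0.05

W = rng.normal(size=(m_hidden, n_features)) * 0.1   # shared encoder/decoder weights
b = np.zeros(n_features)

def reconstruct(x: np.ndarray) -> np.ndarray:
    """x' = ReLU(W^T W x + b): features share the m < n hidden dimensions,
    i.e. they are stored in superposition in h = W x."""
    h = W @ x
    return np.maximum(W.T @ h + b, 0.0)

# Sparse inputs: each feature is present only with low probability.
x = (rng.random(n_features) < sparsity) * rng.random(n_features)
loss = np.mean((reconstruct(x) - x) ** 2)
print(loss)
```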
Statistical decision problems lie at the heart of statistical machine learning. The simplest problems are binary and multiclass classification and class probability estimation. Central to their definition is the choice of a loss function, which is the means by which the quality of a solution is evaluated. In this paper, we systematically develop the theory of loss functions for such problems from a novel perspective whose basic ingredients are convex sets with a particular structure. The loss function is defined as the subgradient of the support function of the convex set, and is consequently automatically proper (calibrated for probability estimation). This perspective offers three novel opportunities. First, it enables the development of a fundamental relationship between losses and (anti)-norms that appears not to have been noticed before. Second, it enables a calculus of losses, induced by the calculus of convex sets, which allows interpolation between different losses and is therefore a potentially useful design tool for tailoring losses to particular problems; in doing so, we build upon, and considerably extend, existing results on M-sums of convex sets. Third, the perspective leads to a natural theory of "polar" (or "inverse") loss functions, derived from the polar dual of the convex set defining the loss, which form natural universal substitution functions for Vovk's aggregating algorithm.
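To make the properness claim concrete, here is a sketch in my own notation (sign and gain/loss conventions may well differ from the paper's): if the loss at a distribution is taken to be a subgradient of the support function of a closed convex set, then the induced expected value is automatically extremized at the true distribution.

```latex
% Support function of a closed convex set C \subset \mathbb{R}^n:
\sigma_C(p) := \sup_{x \in C} \langle p, x \rangle .
% Take \ell(q) \in \partial\sigma_C(q), so that \ell(q) \in C and
% \langle q, \ell(q) \rangle = \sigma_C(q). Then for any p in the simplex,
\langle p, \ell(q) \rangle
  \;\le\; \sup_{x \in C} \langle p, x \rangle
  \;=\; \sigma_C(p)
  \;=\; \langle p, \ell(p) \rangle ,
% i.e. q \mapsto \langle p, \ell(q) \rangle is maximized at q = p,
% which is properness up to the gain/loss sign convention.
```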
We introduce two new classes of measures of information for statistical experiments which generalise and subsume $\phi$-divergences, integral probability metrics, $\mathfrak{N}$-distances (MMD), and $(f,\Gamma)$-divergences between two or more distributions. This enables us to derive a simple geometric relationship between measures of information and the Bayes risk of a statistical decision problem, thus extending the variational $\phi$-divergence representation to multiple distributions in an entirely symmetric manner. The new families of divergences are closed under the action of Markov operators, which yields an information processing equality that is a refinement and generalisation of the classical data processing inequality. This equality gives insight into the significance of the choice of the hypothesis class in classical risk minimisation.
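For reference, the classical two-distribution objects that these new measures subsume read as follows (standard definitions in my notation, not the paper's new measures):

```latex
% phi-divergence (phi convex, phi(1) = 0) and its variational form:
D_\phi(P \,\|\, Q) \;=\; \int \phi\!\left(\tfrac{dP}{dQ}\right) dQ
  \;=\; \sup_{g} \; \mathbb{E}_P[g(X)] - \mathbb{E}_Q[\phi^*(g(X))] ,
% integral probability metric over a function class F
% (F = unit ball of an RKHS gives the MMD):
\gamma_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
  \bigl|\, \mathbb{E}_P[f(X)] - \mathbb{E}_Q[f(X)] \,\bigr| .
```

In the binary case, the gap between the prior Bayes risk and the Bayes risk of the experiment (DeGroot's statistical information) can classically be written as a $\phi$-divergence for a suitable $\phi$; the abstract's claim is that this bridge extends symmetrically to more than two distributions.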
We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self-evaluation on open-ended sampling tasks by asking models to first propose answers, and then to evaluate the probability "P(True)" that their answers are correct. We find encouraging performance, calibration, and scaling for P(True) on a diverse array of tasks. Performance at self-evaluation further improves when we allow models to consider many of their own samples before predicting the validity of one specific possibility. Next, we investigate whether models can be trained to predict "P(IK)", the probability that "I know" the answer to a question, without reference to any particular proposed answer. Models perform well at predicting P(IK) and partially generalize across tasks, though they struggle with calibration of P(IK) on new tasks. The predicted P(IK) probabilities also increase appropriately in the presence of relevant source materials in the context, and in the presence of hints towards the solution of mathematical word problems. We hope these observations lay the groundwork for training more honest models, and for investigating how honesty generalizes to cases where models are trained on objectives other than the imitation of human writing.
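A sketch of the P(True) recipe described above, assuming a hypothetical `complete(prompt) -> str` sampler and a hypothetical `token_probability(prompt, token) -> float` that returns the model's probability of emitting `token` next; neither is a real library call, and the prompt wording is illustrative rather than the paper's exact template.

```python
from typing import Callable

def p_true(
    complete: Callable[[str], str],
    token_probability: Callable[[str, str], float],
    question: str,
) -> float:
    """Sample an answer, then ask the model whether that answer is correct
    and read off the probability it assigns to 'True'."""
    proposed = complete(f"Question: {question}\nAnswer:").strip()
    eval_prompt = (
        f"Question: {question}\n"
        f"Proposed Answer: {proposed}\n"
        f"Is the proposed answer:\n (A) True\n (B) False\n"
        f"The proposed answer is:"
    )
    return token_probability(eval_prompt, " True")
```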