Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language, without the use of model logits. When given a question, the model generates both an answer and a level of confidence (e.g. "90% confidence" or "high confidence"). These levels map to probabilities that are well calibrated. The model also remains moderately calibrated under distribution shift, and is sensitive to uncertainty in its own answers, rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words ("verbalized probability") to uncertainty extracted from model logits. Both kinds of uncertainty are capable of generalizing calibration under distribution shift. We also provide evidence that GPT-3's ability to generalize calibration depends on pre-trained latent representations that correlate with epistemic uncertainty over its answers.
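The verbalized confidences above lend themselves to a standard calibration check. Below is a minimal sketch, not the authors' code: the phrase-to-probability mapping and the binned expected calibration error (ECE) are our assumptions about how such statements might be scored.

```python
import numpy as np

# Hypothetical mapping from confidence phrases to probabilities.
PHRASE_TO_PROB = {"low confidence": 0.3, "medium confidence": 0.6,
                  "high confidence": 0.9}

def verbalized_prob(confidence: str) -> float:
    """Parse '90% confidence' or a phrase such as 'high confidence'."""
    confidence = confidence.strip().lower()
    if confidence.endswith("% confidence"):
        return float(confidence.split("%")[0]) / 100.0
    return PHRASE_TO_PROB[confidence]

def expected_calibration_error(probs, correct, n_bins=10):
    """Average gap between stated confidence and empirical accuracy, per bin."""
    probs = np.asarray(probs)
    correct = np.asarray(correct, dtype=float)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - correct[mask].mean())
    return ece

# Example: the model said "90% confidence" three times; two answers were right.
probs = [verbalized_prob(s) for s in ["90% confidence"] * 3]
print(expected_calibration_error(probs, [1, 1, 0]))
```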
We fine-tune GPT-3 to answer long-form questions using a text-based web-browsing environment, which allows the model to search and navigate the web. By setting up the task so that it can be performed by humans, we are able to train models on the task using imitation learning, and then optimize answer quality with human feedback. To make human evaluation of factual accuracy easier, models must collect references while browsing in support of their answers. We train and evaluate our models on ELI5, a dataset of questions asked by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior cloning, and then performing rejection sampling against a reward model trained to predict human preferences. This model's answers are preferred by humans 56% of the time to those of our human demonstrators, and 69% of the time to the highest-voted answer from Reddit.
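The rejection-sampling step described above amounts to best-of-n selection against the reward model. A minimal sketch, with hypothetical `policy.generate` and `reward_model.score` interfaces standing in for components the abstract does not name:

```python
def best_of_n(question, policy, reward_model, n=64):
    """Sample n candidate answers; return the one the reward model scores highest."""
    candidates = [policy.generate(question) for _ in range(n)]
    return max(candidates, key=lambda answer: reward_model.score(question, answer))
```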
State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. We find that even the largest transformer models fail to achieve high test performance, despite the conceptual simplicity of this problem distribution. To increase performance, we propose training verifiers to judge the correctness of model completions. At test time, we generate many candidate solutions and select the one ranked highest by the verifier. We demonstrate that verification significantly improves performance on GSM8K, and we provide strong empirical evidence that verification scales more effectively with increased data than a finetuning baseline.
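Test-time verification follows a sample-and-rank pattern. A minimal sketch under assumed interfaces (`generator.sample` and `verifier.p_correct` are placeholders, not names from the paper):

```python
def solve_with_verifier(problem, generator, verifier, n_samples=100):
    """Generate many candidate solutions; keep the one the verifier ranks highest."""
    solutions = [generator.sample(problem) for _ in range(n_samples)]
    return max(solutions, key=lambda sol: verifier.p_correct(problem, sol))
```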
We say an algorithm is batch size-invariant if changes to the batch size can largely be compensated for by changes to other hyperparameters. Stochastic gradient descent is well known to have this property at small batch sizes, via the learning rate. However, some policy optimization algorithms (such as PPO) do not have this property, because of how they control the size of policy updates. In this work, we show how to make these algorithms batch size-invariant. Our key insight is to decouple the proximal policy (used for controlling policy updates) from the behavior policy (used for off-policy corrections). Our experiments help explain why these algorithms work, and additionally show how they can make more efficient use of stale data.
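A sketch of the decoupled clipped objective as we read it from the abstract (tensor names and shapes are assumptions): the behavior policy enters only through an off-policy correction weight, while clipping is applied to the ratio against the proximal policy.

```python
import torch

def decoupled_ppo_loss(logp, logp_prox, logp_behav, adv, eps=0.2):
    """logp / logp_prox / logp_behav: log-probs of the taken actions under the
    current, proximal, and behavior policies; adv: advantage estimates."""
    offpolicy_weight = torch.exp(logp_prox - logp_behav).detach()  # correction
    ratio = torch.exp(logp - logp_prox)      # clipped relative to proximal policy
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * adv
    return -(offpolicy_weight * torch.min(unclipped, clipped)).mean()
```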
We introduce Procgen Benchmark, a suite of 16 procedurally generated game-like environments designed to benchmark both sample efficiency and generalization in reinforcement learning. We believe that the community will benefit from increased access to high quality training environments, and we provide detailed experimental protocols for using this benchmark. We empirically demonstrate that diverse environment distributions are essential to adequately train and evaluate RL agents, thereby motivating the extensive use of procedural content generation. We then use this benchmark to investigate the effects of scaling model size, finding that larger models significantly improve both sample efficiency and generalization.
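Procgen environments register with Gym, so (under the classic Gym step API) they can be created and stepped like any other environment. The keyword arguments below follow the package's documented options for controlling the level distribution:

```python
import gym

# Disjoint train/test level sets are obtained by varying num_levels/start_level.
env = gym.make("procgen:procgen-coinrun-v0", num_levels=200, start_level=0,
               distribution_mode="easy")
obs = env.reset()
for _ in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
```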
Biological systems and processes are networks of complex nonlinear regulatory interactions between nucleic acids, proteins, and metabolites. A natural way in which to represent these interaction networks is through the use of a graph. In this formulation, each node represents a nucleic acid, protein, or metabolite and edges represent intermolecular interactions (inhibition, regulation, promotion, coexpression, etc.). In this work, a novel algorithm for the discovery of latent graph structures given experimental data is presented.
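As a minimal illustration of this formulation (the specific molecules are examples, not data from the paper), such a network can be encoded as a directed graph whose edge attributes record the interaction type:

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("TP53", kind="protein")
G.add_node("MDM2", kind="protein")
G.add_node("miR-34a", kind="nucleic_acid")
G.add_edge("MDM2", "TP53", interaction="inhibition")    # MDM2 inhibits TP53
G.add_edge("TP53", "miR-34a", interaction="promotion")  # TP53 promotes miR-34a
print(list(G.edges(data=True)))
```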
The Government of Kerala had increased the frequency of supply of free food kits owing to the pandemic; however, the contents of these kits were static and did not reflect the personal preferences of the consumers. This paper conducts a comparative analysis of various clustering techniques on a scaled-down version of a real-world dataset obtained through a conjoint analysis-based survey. Clustering carried out by centroid-based methods such as k-means is analyzed, the results are plotted alongside those of SVD, and a conclusion is reached as to which of the two is better. Once the clusters have been formed, commodities are decided upon for each cluster. Clustering is further refined by reassignment based on a specific cluster-loss threshold. The most efficacious clustering technique for designing a food kit tailored to the needs of individuals is thus obtained. A sketch of the clustering-plus-reassignment step with scikit-learn follows.
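In this sketch, the synthetic data, cluster count, and loss cutoff are assumptions, not values from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 6))          # stand-in for survey responses
X = StandardScaler().fit_transform(responses)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# "Cluster loss" per point: distance to the assigned centroid.
losses = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# Flag points whose loss exceeds a threshold for reassignment (cutoff assumed).
threshold = np.percentile(losses, 90)
labels = np.where(losses > threshold, -1, km.labels_)
```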
Over the past decade, neural networks have been successful at making predictions from biological sequences, especially in the context of regulatory genomics. As in other fields of deep learning, tools have been devised to extract features such as sequence motifs that can explain the predictions made by a trained network. Here we intend to go beyond explainable machine learning and introduce SEISM, a selective inference procedure to test the association between these extracted features and the predicted phenotype. In particular, we discuss how training a one-layer convolutional network is formally equivalent to selecting motifs maximizing some association score. We adapt existing sampling-based selective inference procedures by quantizing this selection over an infinite set to a large but finite grid. Finally, we show that sampling under a specific choice of parameters is sufficient to characterize the composite null hypothesis typically used for selective inference, a result that goes well beyond our particular framework. We illustrate the behavior of our method in terms of calibration, power, and speed, and discuss its power/speed trade-off with a simpler data-split strategy. SEISM paves the way to an easier analysis of neural networks used in regulatory genomics, and to more powerful methods for genome-wide association studies (GWAS).
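To make the stated equivalence concrete, the sketch below scores a candidate motif the way a one-layer convolutional network would: scan it over one-hot-encoded sequences, take the maximal activation per sequence, and use the squared correlation with the phenotype as an association score. This illustrates only the selection step, not the SEISM inference procedure, and the score choice is our assumption:

```python
import numpy as np

def one_hot(seq, alphabet="ACGT"):
    return np.eye(len(alphabet))[[alphabet.index(c) for c in seq]]

def association_score(motif, seqs, phenotype):
    """motif: (k, 4) weight matrix; feature = max convolution activation."""
    k = motif.shape[0]
    feats = []
    for s in seqs:
        x = one_hot(s)
        acts = [np.sum(x[i:i + k] * motif) for i in range(len(s) - k + 1)]
        feats.append(max(acts))
    return np.corrcoef(np.asarray(feats), phenotype)[0, 1] ** 2

rng = np.random.default_rng(0)
motif = rng.normal(size=(5, 4))
seqs = ["ACGTACGTACGT", "TTGCATTGCATT", "GGGTACCCGTAA"]
print(association_score(motif, seqs, np.array([1.2, 0.3, 0.7])))
```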
Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality. Project page: https://wnhsu.github.io/ReVISE.
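A schematic of the two-stage resynthesis pipeline described above (function names are placeholders, not the released API):

```python
def revise(video_frames, noisy_audio, p_avsr, p_tts):
    units = p_avsr(video_frames, noisy_audio)  # pseudo AVSR -> discrete speech units
    return p_tts(units)                        # unit-to-speech resynthesis
```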