Recent work on large language models (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
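The ranking step described above can be illustrated in miniature. The sketch below is illustrative only and not the paper's implementation: the per-participant reward values are made up, and the two welfare functions (mean and min) are just common examples of the aggregation functions the abstract mentions.

```python
import numpy as np

# Hypothetical predicted rewards: rows are candidate consensus statements,
# columns are group members (all values are invented for illustration).
rewards = np.array([
    [0.9, 0.2, 0.8, 0.7],   # candidate A: high average, one unhappy member
    [0.6, 0.6, 0.5, 0.6],   # candidate B: moderate but even approval
    [0.8, 0.4, 0.7, 0.5],   # candidate C
])

def rank(rewards, welfare):
    """Order candidates from best to worst under a social welfare function."""
    scores = welfare(rewards, axis=1)      # one scalar score per candidate
    return list(np.argsort(-scores))       # indices sorted by descending score

utilitarian = rank(rewards, np.mean)  # maximise average approval -> [0, 2, 1]
egalitarian = rank(rewards, np.min)   # maximise the worst-off member -> [1, 2, 0]
```

Note how the choice of welfare function changes the winner: the mean favours candidate A despite one dissenting member, while the min favours the evenly approved candidate B.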
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
As underwater vehicle manipulator systems (UVMSs) have become increasingly small over the past few years, it has become increasingly important to consider the coupling forces between the manipulator and the vehicle when planning and controlling the system. However, typical methods for handling these forces require an exact hydrodynamic model of the vehicle and low-level torque control on the manipulator, both of which are uncommon in the field. As a result, many UVMS control methods are kinematics-based and cannot inherently account for these effects. Our work bridges the gap between kinematic control and dynamics by training a recurrent neural network on simulated UVMS data to predict the future pitch of the vehicle from the system's previous states. Kinematic planners and controllers can use this metric to incorporate dynamic knowledge without a computationally expensive model, improving their ability to perform underwater manipulation tasks.
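A minimal sketch of the kind of sequence model described here, assuming an Elman-style recurrent net with untrained random weights (the state dimension, hidden size, and weight scales are all invented for illustration; the paper's network and training data are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, hidden_dim = 8, 16   # hypothetical sizes, not from the paper

# Randomly initialised (untrained) weights, purely to show the structure.
Wx = rng.standard_normal((hidden_dim, state_dim)) * 0.1
Wh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
Wo = rng.standard_normal((1, hidden_dim)) * 0.1

def predict_pitch(states):
    """Run a simple RNN over a (T, state_dim) history of system states
    and emit a scalar prediction of future vehicle pitch."""
    h = np.zeros(hidden_dim)
    for s in states:
        h = np.tanh(Wx @ s + Wh @ h)   # recurrent state update
    return float(Wo @ h)               # linear readout to a scalar

pitch = predict_pitch(rng.standard_normal((50, state_dim)))
```

In practice the weights would be learned by regressing predicted pitch against simulated ground truth; the point of the sketch is only the shape of the computation: a state history in, a single dynamic-awareness scalar out, cheap enough to call inside a kinematic planner.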
Recently, sign language researchers have turned to sign-interpreted TV broadcasts, comprising (i) a video of continuous signing and (ii) subtitles corresponding to the audio content, as a readily available and large-scale source of training data. One key challenge in the usability of such data is the lack of sign annotations. Previous work exploiting such weakly-aligned data only found sparse correspondences between keywords in the subtitles and individual signs. In this work, we propose a simple, scalable framework to vastly increase the density of automatic annotations. Our contributions are as follows: (1) we significantly improve previous annotation methods by making use of synonyms and subtitle-signing alignment; (2) we show the value of pseudo-labelling from a sign recognition model as a means of sign spotting; (3) we propose a novel approach for increasing annotations of known and unknown classes based on in-domain exemplars; (4) on the BOBSL BSL sign language corpus, we increase the number of confident automatic annotations from 670K to 5M. We make these annotations publicly available to support the sign language research community.
In this paper, we derive a new method to determine shared features of datasets by employing joint non-negative matrix factorization and analyzing the resulting factorizations. Our method uses the joint factorization of two dataset matrices into non-negative matrices, $X_1 = AS_1$, $X_2 = AS_2$, to derive a similarity measure that quantifies how well the shared basis of $X_1, X_2$ approximates each dataset. We also propose a distance metric between datasets based on this method and the learned factorizations. Our method is able to successfully identify differences between image and text datasets. Potential applications include classification, detecting plagiarism or other manipulation, and learning relationships between datasets.
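One simple way to realise a joint factorization with a shared basis is to factorise the column-wise concatenation $[X_1 \mid X_2] \approx A\,[S_1 \mid S_2]$. The sketch below does this with standard multiplicative NMF updates on toy random data; it illustrates the structure of the abstract's setup ($X_1 = AS_1$, $X_2 = AS_2$ with a common $A$) but is not the paper's algorithm, and the per-dataset relative reconstruction error is only one plausible similarity measure in its spirit.

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.random((20, 15))          # toy dataset 1 (20 features x 15 samples)
X2 = rng.random((20, 10))          # toy dataset 2 (same feature space)
X = np.hstack([X1, X2])            # joint problem: [X1|X2] ~= A [S1|S2]

k = 4                              # number of shared basis vectors
A = rng.random((20, k)) + 1e-3
S = rng.random((k, X.shape[1])) + 1e-3
for _ in range(300):               # standard multiplicative updates (Lee & Seung)
    S *= (A.T @ X) / (A.T @ A @ S + 1e-9)
    A *= (X @ S.T) / (A @ S @ S.T + 1e-9)

S1, S2 = S[:, :15], S[:, 15:]      # split coefficients back per dataset

# How well does the shared basis A explain each dataset?
err1 = np.linalg.norm(X1 - A @ S1) / np.linalg.norm(X1)
err2 = np.linalg.norm(X2 - A @ S2) / np.linalg.norm(X2)
```

A large gap between `err1` and `err2` would indicate that the shared basis fits one dataset much better than the other, which is the intuition behind using the factorization to compare datasets.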
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression. However, the most commonly used autoregressive models, Transformers, are prohibitively expensive to scale to the number of inputs and layers needed to capture this long-range structure. We develop Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to over a hundred thousand tokens, enabling practical long-context density estimation without the need for hand-crafted sparsity patterns or memory mechanisms. When trained on images or music, Perceiver AR generates outputs with clear long-term coherence and structure. Our architecture also obtains state-of-the-art likelihoods on long-sequence benchmarks, including 64 x 64 ImageNet images and PG-19 books.
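The core cost-saving idea, cross-attending from a small latent array to a long input, can be sketched in a few lines. This is a simplified single-head version with no learned projections and no causal mask (both of which Perceiver AR has); the sequence length, latent count, and width below are arbitrary illustration values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(latents, inputs):
    """Single-head cross-attention: a few latents query a long input.
    Cost is O(n_latent * n_input), not O(n_input^2) as in self-attention."""
    d = latents.shape[-1]
    scores = latents @ inputs.T / np.sqrt(d)   # (n_latent, n_input)
    return softmax(scores) @ inputs            # (n_latent, d)

rng = np.random.default_rng(0)
inputs = rng.standard_normal((4096, 64))   # long input sequence
latents = rng.standard_normal((128, 64))   # much smaller latent array
out = cross_attend(latents, inputs)        # -> shape (128, 64)
```

After this single cross-attention step, all subsequent self-attention layers operate only on the 128 latents, which is what makes very long inputs affordable.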
Recent work has shown the potential benefit of selective prediction systems that can learn to defer to a human when the AI's predictions are unreliable, particularly for improving the reliability of AI systems in high-stakes applications like healthcare. However, most prior work assumes that human behavior remains unchanged when humans solve a prediction task as part of a human-AI team rather than on their own. We show that this is not the case by performing experiments to quantify human-AI interaction in the context of selective prediction. In particular, we study the impact of communicating different types of information to humans about when and why the AI system defers. Using real-world conservation data and a selective prediction system whose expected accuracy improves upon that of the human or AI system working individually, we show that this messaging has a significant impact on the accuracy of human judgments. Our results examine two components of the messaging strategy: 1) whether humans are informed about the prediction of the AI system, and 2) whether they are informed about the selective prediction system's decision to defer. By manipulating these messaging components, we show that it is possible to significantly boost human performance by informing the human of the decision to defer, but not revealing the AI's prediction. We therefore show that it is vital to consider how the decision to defer is communicated to a human when designing selective prediction systems, and that the composite accuracy of a human-AI team must be carefully evaluated using a human-in-the-loop framework.
Social media is commonly used by the public during election campaigns to express opinions on different issues. Among the various social media channels, Twitter provides an efficient platform for researchers and politicians to explore public opinion on a wide range of topics such as the economy and foreign policy. The current literature mainly focuses on analyzing the content of tweets without considering the gender of users. This study collects and analyzes a large number of tweets, using computational, human-coding, and statistical analyses to identify topics in more than 300,000 tweets posted during the 2020 U.S. presidential election. Our findings are based on a wide range of topics, such as taxes, climate change, and COVID-19. Among these topics, there is a significant difference between female and male users for more than 70% of the topics.
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.