Gaussian processes (GPs) are Bayesian non-parametric models popular in a variety of applications due to their accuracy and native uncertainty quantification (UQ). Tuning GP hyperparameters is critical to ensuring the validity of prediction accuracy and uncertainty; uniquely estimating multiple hyperparameters in, e.g., the Matern kernel can also be a significant challenge. Moreover, training GPs on large-scale datasets is a highly active area of research: conventional maximum likelihood hyperparameter training requires quadratic memory to form the covariance matrix and has cubic training complexity. To address the scalable hyperparameter tuning problem, we present a novel algorithm that estimates the smoothness and length-scale parameters of the Matern kernel in order to improve the robustness of the resulting prediction uncertainties. Using a novel loss function, similar to those in conformal prediction algorithms, within the computational framework provided by the hyperparameter estimation algorithm MuyGPs, we demonstrate improved uncertainty quantification in numerical experiments while maintaining a high degree of scalability.
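The nearest-neighbors, leave-one-out flavor of hyperparameter selection described above can be sketched as follows. This is a minimal illustration of the idea only, not the MuyGPs implementation: the fixed-smoothness Matern 3/2 kernel, the squared-error loss, the candidate grid, and the toy 1-D data are all assumptions made for the sketch.

```python
import numpy as np

def matern32(d, length_scale):
    # Matern kernel with fixed smoothness nu = 3/2 (closed form).
    r = np.sqrt(3.0) * d / length_scale
    return (1.0 + r) * np.exp(-r)

def loo_nn_loss(X, y, length_scale, k=10, noise=1e-4):
    """Leave-one-out squared error, predicting each point from
    only its k nearest neighbors (never the full covariance)."""
    n = len(X)
    loss = 0.0
    for i in range(n):
        d_all = np.abs(X - X[i])            # 1-D inputs for simplicity
        nn = np.argsort(d_all)[1:k + 1]     # skip the held-out point itself
        Knn = matern32(np.abs(X[nn, None] - X[None, nn]), length_scale)
        kvec = matern32(d_all[nn], length_scale)
        alpha = np.linalg.solve(Knn + noise * np.eye(k), y[nn])
        loss += (y[i] - kvec @ alpha) ** 2  # GP conditional-mean residual
    return loss / n

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, 200))
y = np.sin(X) + 0.05 * rng.standard_normal(200)

grid = [0.1, 0.5, 1.0, 2.0]
best = min(grid, key=lambda ell: loo_nn_loss(X, y, ell))
print("selected length scale:", best)
```

Because each leave-one-out solve touches only a k x k matrix, the cost grows linearly in the number of observations rather than cubically.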
Temporal variations in apparent magnitude, called light curves, are observational statistics of interest captured by telescopes over long periods of time. Light curves afford the exploration of Space Domain Awareness (SDA) objectives such as object identification or pose estimation as latent variable inference problems. Ground-based observations from commercial off-the-shelf (COTS) cameras remain inexpensive compared to higher-precision instruments; however, limited sensor availability combined with noisier observations can produce gappy time-series data that are difficult to model. These external factors confound the automated exploitation of light curves, which makes light curve prediction and extrapolation a crucial problem for applications. Traditionally, image or time-series completion problems have been approached with diffusion-based or exemplar-based methods. More recently, deep neural networks (DNNs) have become the tool of choice due to their empirical success at learning complex nonlinear embeddings. However, DNNs often require large amounts of training data, which are not necessarily available when examining the unique features of a single satellite's light curve. In this paper, we present a novel approach to predicting missing and future data points of light curves using Gaussian processes (GPs). GPs are non-linear probabilistic models that infer posterior distributions over functions and naturally quantify uncertainty. However, the cubic scaling of GP inference and training is a major barrier to their adoption in applications. In particular, a single light curve can feature hundreds of thousands of observations, which is well beyond the practical realization limits of a conventional GP on a single machine. Consequently, we employ MuyGPs, a scalable framework for hyperparameter estimation of GP models that uses nearest-neighbor sparsification and leave-one-out cross-validation. MuyGPs ...
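A minimal sketch of why local, nearest-neighbor GP prediction sidesteps the cubic scaling for gappy time series: each missing point is predicted from only its k nearest observed neighbors, so the full covariance matrix is never formed. The Matern 5/2 kernel, the length scale, and the toy periodic "light curve" below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def matern52(d, ell):
    # Matern kernel with fixed smoothness nu = 5/2 (closed form).
    r = np.sqrt(5.0) * d / ell
    return (1.0 + r + r * r / 3.0) * np.exp(-r)

def local_gp_predict(t_obs, y_obs, t_new, ell=2.0, k=20, noise=1e-4):
    """Predict each new point from its k nearest observed neighbors only."""
    preds = np.empty(len(t_new))
    for j, t in enumerate(t_new):
        nn = np.argsort(np.abs(t_obs - t))[:k]
        K = matern52(np.abs(t_obs[nn, None] - t_obs[None, nn]), ell)
        kv = matern52(np.abs(t_obs[nn] - t), ell)
        preds[j] = kv @ np.linalg.solve(K + noise * np.eye(k), y_obs[nn])
    return preds

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 400)
flux = np.sin(2 * np.pi * t / 5.0)      # toy periodic brightness signal
keep = (t < 8) | (t > 12)               # simulate a gap in sensor coverage
y_obs = flux[keep] + 0.02 * rng.standard_normal(keep.sum())
t_gap = t[~keep]
y_hat = local_gp_predict(t[keep], y_obs, t_gap)
print("mean absolute gap error:", np.mean(np.abs(y_hat - flux[~keep])))
```

Each prediction solves only a k x k system, so a curve with hundreds of thousands of observations remains tractable on a single machine.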
Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-noted happiness defined as only the absence of "negative" emotions. We argue that we must carefully consider whether and how to respond to users' emotions.
Statistical risk assessments inform consequential decisions such as pretrial release in criminal justice, and loan approvals in consumer finance. Such risk assessments make counterfactual predictions, predicting the likelihood of an outcome under a proposed decision (e.g., what would happen if we approved this loan?). A central challenge, however, is that there may have been unmeasured confounders that jointly affected past decisions and outcomes in the historical data. This paper proposes a tractable mean outcome sensitivity model that bounds the extent to which unmeasured confounders could affect outcomes on average. The mean outcome sensitivity model partially identifies the conditional likelihood of the outcome under the proposed decision, popular predictive performance metrics (e.g., accuracy, calibration, TPR, FPR), and commonly-used predictive disparities. We derive their sharp identified sets, and we then solve three tasks that are essential to deploying statistical risk assessments in high-stakes settings. First, we propose a doubly-robust learning procedure for the bounds on the conditional likelihood of the outcome under the proposed decision. Second, we translate our estimated bounds on the conditional likelihood of the outcome under the proposed decision into a robust, plug-in decision-making policy. Third, we develop doubly-robust estimators of the bounds on the predictive performance of an existing risk assessment.
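One simple way to see how a sensitivity parameter on mean outcomes yields an identified interval rather than a point estimate. The decomposition and bounded-deviation condition below are a simplified illustration of the general idea, not the paper's exact mean outcome sensitivity model; the numbers are hypothetical.

```python
def mean_outcome_bounds(mu_obs, p_decision, M):
    """Bounds on the counterfactual mean E[Y(d) | X] when the mean outcome
    in the unobserved decision arm may deviate from the observed arm's mean
    by at most M (the sensitivity parameter).

    Decomposition (law of total expectation):
      E[Y(d)|X] = mu_obs * p + E[Y(d)|X, D != d] * (1 - p),
    and the unknown second term lies in [mu_obs - M, mu_obs + M]
    under the assumed bounded-deviation condition.
    """
    lo = mu_obs - M * (1.0 - p_decision)
    hi = mu_obs + M * (1.0 - p_decision)
    return lo, hi

# Hypothetical loan example: observed repayment rate 0.8 among approved
# applicants, 60% of similar applicants approved, deviation at most 0.3.
lo, hi = mean_outcome_bounds(0.8, 0.6, 0.3)
print(f"identified interval: [{lo:.2f}, {hi:.2f}]")
```

The interval's width shrinks as more historical decisions match the proposed one (p_decision close to 1) or as the analyst assumes weaker confounding (smaller M), which is the qualitative behavior the bounds in the abstract exploit.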
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
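The supervised (critique-and-revision) phase described above can be sketched as follows. All function names and the toy stand-in model are hypothetical, introduced only so the control flow runs end to end; a real setup would call an actual language model at each step.

```python
def generate(model, prompt):
    # Hypothetical interface: a real implementation would sample from an LM.
    return model(prompt)

def constitutional_sl_step(model, prompt, principles):
    """One supervised-phase example: respond, then self-critique and
    revise against each constitutional principle in turn."""
    response = generate(model, prompt)
    for principle in principles:
        critique = generate(
            model,
            f"Critique this response under the principle '{principle}':\n{response}")
        response = generate(
            model,
            f"Revise the response to address this critique:\n{critique}\n{response}")
    # The (prompt, revised response) pair is used to finetune the original model.
    return prompt, response

# Toy stand-in "model" so the sketch is runnable.
toy_model = lambda prompt: f"[reply to: {prompt[:30]}...]"
pair = constitutional_sl_step(toy_model, "How do I pick a lock?", ["avoid harm"])
print(pair[1])
```

The RL phase then compares pairs of samples from the finetuned model, trains a preference model on those AI-generated comparisons, and uses it as the reward signal (RLAIF).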
As machine learning (ML) systems get adopted in more critical areas, it has become increasingly crucial to address the bias that could occur in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble. We report on lessons learned that can help practitioners better select fairness algorithms for their models.
Power dynamics in human-human communication can impact rapport-building and learning gains, but little is known about how power impacts human-agent communication. In this paper, we examine dominance behavior in utterances between middle-school students and a teachable robot as they work through math problems, as coded by Rogers and Farace's Relational Communication Control Coding Scheme (RCCCS). We hypothesize that relatively dominant students will show increased learning gains, as will students with greater dominance agreement with the robot. We also hypothesize that gender could be an indicator of difference in dominance behavior. We present a preliminary analysis of dominance characteristics in some of the transactions between robot and student. Ultimately, we hope to determine if manipulating the dominance behavior of a learning robot could support learning.
We introduce an unsupervised learning approach that combines the truncated singular value decomposition with convex clustering to estimate within-cluster directions of maximum variance/covariance (in the variables) while simultaneously hierarchically clustering (on observations). In contrast to previous work on joint clustering and embedding, our approach has a straightforward formulation, is readily scalable via distributed optimization, and admits a direct interpretation as hierarchically clustered principal component analysis (PCA) or hierarchically clustered canonical correlation analysis (CCA). Through numerical experiments and real-world examples relevant to precision medicine, we show that our approach outperforms traditional and contemporary clustering methods on underdetermined problems ($p \gg N$ with tens of observations) and scales to large datasets (e.g., $N=100,000$; $p=1,000$) while yielding interpretable dendrograms of hierarchical per-cluster principal components or canonical variates.
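For contrast with the joint formulation above, the "hierarchically clustered PCA" interpretation can be illustrated with a naive two-stage version: hierarchically cluster the observations first, then run a truncated SVD within each cluster. This is not the authors' convex joint method, only a sketch of the output it interprets; the synthetic data and cluster count are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def per_cluster_pcs(X, n_clusters=2, n_components=1):
    """Hierarchically cluster rows, then compute each cluster's top
    principal directions via a truncated SVD of the centered block."""
    Z = linkage(X, method="ward")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    components = {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        Xc = Xc - Xc.mean(axis=0)          # center within the cluster
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        components[c] = Vt[:n_components]  # per-cluster principal directions
    return labels, components

rng = np.random.default_rng(2)
# Two well-separated groups whose dominant variance lies along different axes.
A = rng.standard_normal((20, 5)) @ np.diag([3, 1, 1, 1, 1])
B = rng.standard_normal((20, 5)) @ np.diag([1, 1, 1, 1, 3]) + 10.0
labels, comps = per_cluster_pcs(np.vstack([A, B]))
print({c: v.shape for c, v in comps.items()})
```

The joint approach in the abstract optimizes clustering and embedding together (and scales via distributed optimization), but its result can be read exactly like this output: a dendrogram of clusters, each with its own principal components or canonical variates.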
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.