Consider the relationship between a regulator (the principal) and a pharmaceutical company (the agent). The pharmaceutical company wishes to sell a product to make a profit, and the regulator (e.g., the FDA) wishes to ensure that only efficacious drugs are released to the public. The efficacy of the drug is not known to the FDA, so the pharmaceutical company must run a costly trial to prove efficacy to the FDA. Critically, the statistical protocol used to establish efficacy affects the behavior of a strategic, self-interested pharmaceutical company; a lower standard of statistical evidence incentivizes the pharmaceutical company to run more trials for drugs that are less likely to be effective, since the drug may pass the trial by chance, resulting in large profits. The interaction between the statistical protocol and the incentives of the pharmaceutical company is crucial to understanding this system and designing protocols with high social utility. In this work, we discuss how the principal and agent can enter into a contract with payoffs based on statistical evidence. When there is stronger evidence for the quality of the product, the principal allows the agent to make a larger profit. We show how to design contracts that are robust to an agent's strategic actions, and derive the optimal contract in the presence of strategic behavior.
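To make the incentive problem concrete, the following minimal sketch computes a strategic agent's expected profit under a threshold contract that pays out only when a one-sided trial rejects the null hypothesis of no effect. All quantities (payoff, trial cost, trial size) are illustrative assumptions rather than values from the work; the point is only that a looser evidence threshold can make trials on ineffective drugs profitable.

```python
# Hypothetical numbers illustrating the incentive problem: an agent runs a
# trial only when its expected profit is positive, so a looser evidence
# threshold alpha invites trials for drugs unlikely to be effective.
from scipy.stats import norm

def pass_probability(effect, n, alpha):
    """P(a one-sided z-test at level alpha rejects 'no effect') for a
    trial of n patients with unit-variance outcomes."""
    z_crit = norm.ppf(1 - alpha)
    return 1 - norm.cdf(z_crit - effect * n ** 0.5)

def agent_expected_profit(effect, n, alpha, payoff=100.0, trial_cost=10.0):
    """Expected utility of an agent paid `payoff` only if the trial passes;
    all monetary units here are hypothetical."""
    return pass_probability(effect, n, alpha) * payoff - trial_cost

for alpha in (0.05, 0.20):
    # An ineffective drug (effect = 0) passes with probability exactly alpha,
    # so a lenient evidence threshold can make running the trial profitable.
    print(alpha, agent_expected_profit(effect=0.0, n=100, alpha=alpha))
```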
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
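One standard way to realize such an order-statistic bound is sketched below, assuming i.i.d. losses; this is a minimal illustration, not necessarily the paper's exact construction. The k-th smallest observed loss is a valid (1 - delta)-confidence upper bound on the beta-quantile whenever a Binomial(n, beta) variable falls below k with probability at least 1 - delta.

```python
# A distribution-free upper confidence bound on a loss quantile from order
# statistics: pick the smallest k with P(Binom(n, beta) <= k - 1) >= 1 - delta,
# then the k-th smallest loss upper-bounds the beta-quantile with
# probability >= 1 - delta under an i.i.d. assumption.
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(losses, beta=0.9, delta=0.05):
    """Return a (1 - delta)-confidence upper bound on the beta-quantile."""
    n = len(losses)
    sorted_losses = np.sort(losses)
    ks = np.arange(1, n + 1)
    valid = binom.cdf(ks - 1, n, beta) >= 1 - delta
    if not valid.any():
        return np.inf  # too few samples for this (beta, delta) pair
    k = ks[valid][0]
    return sorted_losses[k - 1]

rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)             # stand-in observed losses
print(quantile_upper_bound(losses, 0.9, 0.05))  # upper bound on the 90th pct.
```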
A common sales strategy involves having account executives (AEs) proactively reach out and engage with potential customers. However, not all contact attempts have a positive effect: some attempts do not change the customer's decision, while others may even interfere with the desired outcome. In this work, we propose using causal inference to estimate the effect of contacting each potential customer and to design the contact policy accordingly. We demonstrate this approach on worthy.com, an online jewelry marketplace. We study Worthy's business process to identify the relevant decisions and outcomes, and formalize assumptions about how they are made. Using causal tools, we select a decision point where improving AE contact activity appeared promising. We then formulate a personalized policy that recommends contacting only those customers for whom contact is beneficial. Finally, we validated the results in an A/B test over a 3-month period, yielding a 22% increase in the item delivery rate of the target population (p-value = 0.026). The policy is now in continuous use.
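A minimal sketch of this kind of personalized contact policy, framed as uplift estimation with a simple T-learner, follows; the estimator, features, and data below are illustrative stand-ins, since the abstract does not specify the exact causal machinery used.

```python
# Illustrative T-learner uplift sketch: fit one outcome model per arm
# (contacted vs. not contacted) and recommend contact only where the
# estimated effect on delivery is positive. All data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_uplift_models(X, contacted, delivered):
    """Fit one outcome model per treatment arm."""
    m1 = GradientBoostingClassifier().fit(X[contacted == 1], delivered[contacted == 1])
    m0 = GradientBoostingClassifier().fit(X[contacted == 0], delivered[contacted == 0])
    return m0, m1

def contact_policy(m0, m1, X_new):
    """Contact only customers with positive estimated uplift."""
    uplift = m1.predict_proba(X_new)[:, 1] - m0.predict_proba(X_new)[:, 1]
    return uplift > 0

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # synthetic customer features
contacted = rng.integers(0, 2, 500)            # synthetic treatment assignment
delivered = (rng.random(500) < 0.3 + 0.1 * contacted * (X[:, 0] > 0)).astype(int)
m0, m1 = fit_uplift_models(X, contacted, delivered)
print(contact_policy(m0, m1, X[:5]))
```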
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
Photoswitchable molecules display two or more isomeric forms that can be accessed using light. Separating the electronic absorption bands of these isomers is key to selectively addressing a specific isomer and achieving high photostationary states, while red-shifting the absorption bands overall can limit material damage caused by UV exposure and increase penetration depth in photopharmacology applications. However, engineering these properties into a system through synthetic design remains a challenge. Here, we present a data-driven discovery pipeline for molecular photoswitches underpinned by dataset curation and multitask learning with Gaussian processes. In predicting electronic transition wavelengths, we demonstrate that a multioutput Gaussian process (MOGP) trained using labels from four photoswitch transition wavelengths yields the strongest predictive performance relative to single-task models, and operationally outperforms time-dependent density functional theory (TD-DFT) in terms of wall-clock time for prediction. We experimentally validate our proposed approach by screening a library of commercially available photoswitchable molecules. Through this screen, we identify several motifs that display separated electronic absorption bands of their isomers, exhibit red-shifted absorption, and are suited to information transfer and photopharmacology applications. Our curated dataset, code, and all models are available at https://github.com/ryan-rhys/the-photoswitch-dataset
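The core modeling idea can be sketched from scratch: an intrinsic-coregionalization-style multioutput GP multiplies a shared input kernel k(x, x') by a task-covariance matrix B, so the four transition-wavelength tasks share statistical strength. The featurization, hyperparameters, and data below are illustrative placeholders, not the released models.

```python
# From-scratch MOGP (ICM-style) sketch: K[(x,t),(x',t')] = B[t,t'] * k(x,x'),
# so wavelength labels on one transition inform predictions for the others.
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def icm_kernel(X1, t1, X2, t2, B, lengthscale=1.0):
    """Task-covariance B scaled elementwise by the shared input kernel."""
    return B[np.ix_(t1, t2)] * rbf(X1, X2, lengthscale)

def mogp_predict(X, t, y, X_star, t_star, B, noise=1e-2):
    """Standard GP posterior mean under the ICM kernel."""
    K = icm_kernel(X, t, X, t, B) + noise * np.eye(len(y))
    K_star = icm_kernel(X_star, t_star, X, t, B)
    return K_star @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))        # stand-in molecular fingerprints
t = rng.integers(0, 4, size=40)     # which of 4 transition-wavelength tasks
y = rng.normal(size=40)             # centered wavelength labels (synthetic)
A = rng.normal(size=(4, 2))
B = A @ A.T + 0.1 * np.eye(4)       # low-rank-plus-diagonal task covariance
print(mogp_predict(X, t, y, X[:3], t[:3], B))
```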
One of the distinguishing characteristics of modern deep learning systems is that they typically employ neural network architectures with enormous numbers of parameters, often in the millions. While this paradigm has inspired significant research on the properties of large networks, comparatively little work has addressed the fact that these networks are often used to model large, complex datasets, which may themselves comprise millions or even billions of constraints. In this work, we focus on this high-dimensional regime in which both the dataset size and the number of features tend to infinity. We analyze the performance of random feature regression with features $F = f(WX + B)$ for a random weight matrix $W$ and random bias vector $B$, obtaining exact formulae for the asymptotic training and test errors on data generated by a linear teacher model. The role of the bias can be understood as parameterizing a distribution over activation functions, and our analysis directly generalizes to such distributions, even those that cannot be expressed with a traditional additive bias. Intriguingly, we find that a mixture of nonlinearities can improve both the training and test errors over the best single nonlinearity, suggesting that mixtures of nonlinearities may be useful for approximate kernel methods or neural network architecture design.
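A small numerical companion to this setup (illustrative, not the paper's exact asymptotic analysis): ridge regression on random features $f(WX + B)$, comparing a single nonlinearity against a mixture obtained by applying different activations to different blocks of features.

```python
# Random feature ridge regression with a linear teacher, allowing a mixture
# of nonlinearities across feature blocks. All sizes are illustrative.
import numpy as np

def random_features(X, W, b, acts):
    """Apply one activation per block of rows of W (a mixture if len > 1)."""
    Z = W @ X + b[:, None]
    blocks = np.array_split(np.arange(W.shape[0]), len(acts))
    for rows, act in zip(blocks, acts):
        Z[rows] = act(Z[rows])
    return Z

def ridge_test_error(X, y, Xt, yt, acts, n_feat=512, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_feat, X.shape[0])) / np.sqrt(X.shape[0])
    b = rng.normal(size=n_feat)
    F = random_features(X, W, b, acts)
    Ft = random_features(Xt, W, b, acts)
    beta = np.linalg.solve(F @ F.T + lam * np.eye(n_feat), F @ y)
    return np.mean((beta @ Ft - yt) ** 2)

rng = np.random.default_rng(1)
d, n = 64, 256
X, Xt = rng.normal(size=(d, n)), rng.normal(size=(d, n))
w_star = rng.normal(size=d) / np.sqrt(d)        # linear teacher model
y, yt = w_star @ X, w_star @ Xt
relu = lambda z: np.maximum(z, 0.0)
print("relu only :", ridge_test_error(X, y, Xt, yt, [relu]))
print("relu+tanh :", ridge_test_error(X, y, Xt, yt, [relu, np.tanh]))
```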
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
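A minimal sketch of the parametric perspective: choose a variational family $q(\theta) = N(\mu, \sigma^2)$ and ascend a Monte Carlo estimate of the ELBO with reparameterized gradients. The model below (Gaussian prior and likelihood) is chosen so the exact posterior is available as a check; it is an illustration, not the tutorial's own example.

```python
# Parametric VI via reparameterized ELBO gradients on a conjugate model:
# theta ~ N(0, 1), y_i ~ N(theta, 1), so the exact posterior is known.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=50)

def elbo_grad(mu, log_sigma, n_samples=64):
    """Reparameterization-gradient estimate of d(ELBO)/d(mu, log_sigma)."""
    eps = rng.normal(size=n_samples)
    sigma = np.exp(log_sigma)
    theta = mu + sigma * eps                      # theta ~ q via reparam.
    # d/dtheta log p(y, theta) = sum_i (y_i - theta) - theta
    dlogp = (y.sum() - len(y) * theta) - theta
    # Entropy of q adds +1 to the log_sigma gradient; mu does not appear in it.
    return dlogp.mean(), (dlogp * sigma * eps).mean() + 1.0

mu, log_sigma, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    g_mu, g_ls = elbo_grad(mu, log_sigma)
    mu, log_sigma = mu + lr * g_mu, log_sigma + lr * g_ls

# Exact posterior for comparison: N(sum(y) / (n + 1), 1 / (n + 1)).
print(mu, np.exp(log_sigma), y.sum() / 51, (1 / 51) ** 0.5)
```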
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which draws on the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our framework can easily be extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30 shots. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be made available.
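A schematic sketch of the first ingredient above, under assumed shapes: support masks turn support feature maps into per-class centers via masked average pooling, and the centers re-weight the query feature map by cosine similarity. The weighting rule below is a simplification for illustration, not the exact RefT module.

```python
# Mask-based dynamic class centers (masked average pooling) re-weighting a
# query feature map; shapes and the weighting rule are illustrative.
import torch
import torch.nn.functional as F

def class_centers(support_feats, support_masks):
    """support_feats: (K, C, H, W); support_masks: (K, 1, H, W) in {0, 1}."""
    masked = support_feats * support_masks
    return masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1e-6)

def reweight_query(query_feats, centers):
    """query_feats: (B, C, H, W); centers: (K, C). Boost each location by its
    cosine similarity to the nearest class center."""
    q = F.normalize(query_feats, dim=1)
    c = F.normalize(centers, dim=1)
    sim = torch.einsum("bchw,kc->bkhw", q, c)        # (B, K, H, W)
    weight = sim.max(dim=1, keepdim=True).values     # (B, 1, H, W)
    return query_feats * (1.0 + weight.relu())

feats = torch.randn(2, 256, 32, 32)                  # query feature map
sup = torch.randn(5, 256, 32, 32)                    # 5-shot support features
masks = (torch.rand(5, 1, 32, 32) > 0.5).float()     # binary support masks
print(reweight_query(feats, class_centers(sup, masks)).shape)
```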
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
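A minimal sketch of image-level photometric alignment as per-channel statistics matching: shift a target image's channel means and standard deviations toward global source-domain statistics. The paper's global photometric alignment module may differ; this closed-form version only illustrates the image-level idea.

```python
# Per-channel mean/std matching as a simple stand-in for image-level
# photometric alignment between domains. All data here are synthetic.
import numpy as np

def photometric_align(target_img, src_mean, src_std, eps=1e-6):
    """target_img: (H, W, 3) float array; src_mean/src_std: (3,) per channel."""
    t_mean = target_img.mean(axis=(0, 1))
    t_std = target_img.std(axis=(0, 1)) + eps
    return (target_img - t_mean) / t_std * src_std + src_mean

rng = np.random.default_rng(0)
src_batch = rng.uniform(0, 1, size=(16, 64, 64, 3))   # source-domain images
src_mean = src_batch.mean((0, 1, 2))
src_std = src_batch.std((0, 1, 2))
aligned = photometric_align(rng.uniform(0, 1, (64, 64, 3)), src_mean, src_std)
print(aligned.mean((0, 1)), src_mean)                 # channel means now match
```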