Rates of missing data often depend on record-keeping policies and thus may change across times and locations, even when the underlying features are comparatively stable. In this paper, we introduce the problem of Domain Adaptation under Missingness Shift (DAMS). Here, (labeled) source data and (unlabeled) target data would be exchangeable but for different missing data mechanisms. We show that when missing data indicators are available, DAMS can reduce to covariate shift. Focusing on the setting where missing data indicators are absent, we establish the following theoretical results for underreporting completely at random: (i) covariate shift is violated (adaptation is required); (ii) the optimal source predictor can perform worse on the target domain than a constant one; (iii) the optimal target predictor can be identified, even when the missingness rates themselves are not; and (iv) for linear models, a simple analytic adjustment yields consistent estimates of the optimal target parameters. In experiments on synthetic and semi-synthetic data, we demonstrate the promise of our methods when assumptions hold. Finally, we discuss a rich family of future extensions.
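The underreporting-completely-at-random setting above can be reproduced in a few lines. The sketch below simulates different missingness rates in source and target, fits a naive linear model on the source, and applies one plausible moment-matching adjustment that needs only the ratio of observation rates (estimable from feature means) rather than the rates themselves; it illustrates the kind of analytic correction described above and is not necessarily the paper's exact estimator.

```python
# Toy simulation of Domain Adaptation under Missingness Shift (DAMS).
# Assumptions: underreporting completely at random, no-intercept linear model,
# features with nonzero means so the ratio of observation rates is identifiable.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50_000, 5
beta = rng.normal(size=d)
mu = np.full(d, 2.0)                       # nonzero means make the rate ratio identifiable

def draw(n, miss_rate):
    x = rng.normal(loc=mu, scale=1.0, size=(n, d))
    y = x @ beta + 0.1 * rng.normal(size=n)
    mask = rng.random((n, d)) > miss_rate  # 1 = observed, 0 = underreported
    return x * mask, y                      # missing entries recorded as 0

xs, ys = draw(n, miss_rate=0.1)            # labeled source
xt, yt = draw(n, miss_rate=0.6)            # target (yt used only for evaluation)

# Naive: fit on the source and apply to the target (covariate shift is violated).
beta_src = np.linalg.lstsq(xs, ys, rcond=None)[0]

# Adjustment: the ratio of observation rates is identified from feature means,
# even though the individual rates are not.
r = xt.mean(axis=0) / xs.mean(axis=0)      # (1 - m_target) / (1 - m_source), per feature
A_s = xs.T @ xs / n                         # source second moments
b_s = xs.T @ ys / n
A_t = np.outer(r, r) * A_s                  # off-diagonal entries scale by r_i * r_j
np.fill_diagonal(A_t, r * np.diag(A_s))     # diagonal entries scale linearly in r
b_t = r * b_s
beta_adj = np.linalg.solve(A_t, b_t)        # target-optimal coefficients (estimated)

mse = lambda b: np.mean((xt @ b - yt) ** 2)
print("constant predictor :", np.mean((ys.mean() - yt) ** 2))
print("source-optimal fit :", mse(beta_src))
print("adjusted predictor :", mse(beta_adj))
```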
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where the label distribution can change arbitrarily and a new class may arrive during deployment, but the class-conditional distributions p(x|y) are domain-invariant. OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning. The learner's goals here are two-fold: (a) estimate the target label distribution, including the novel class; and (b) learn a target classifier. First, we establish necessary conditions for identifying these quantities. Second, motivated by advances in label shift and PU learning, we propose practical methods for both tasks that leverage black-box predictors. Unlike typical Open Set Domain Adaptation (OSDA) problems, which tend to be ill-posed and amenable only to heuristics, OSLS offers a well-posed problem amenable to principled machinery. Experiments across numerous semi-synthetic benchmarks on vision, language, and medical datasets demonstrate that our methods consistently outperform OSDA baselines, achieving 10--25% improvements in target-domain accuracy. Finally, we analyze the proposed methods, establishing finite-sample convergence to the true label marginal and convergence to the optimal classifier for linear models in a Gaussian setup. Code is available at https://github.com/acmi-lab/open-set-label-shift.
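To make the OSLS setting concrete, the toy simulation below keeps p(x|y) fixed across domains, shifts the label marginal arbitrarily, and lets one class appear only in the target; the Gaussian class-conditionals, sample sizes, and logistic-regression black box are all illustrative assumptions.

```python
# Toy instance of Open Set Label Shift (OSLS): shared class-conditionals,
# arbitrarily shifted label marginal, and a class seen only at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
means = {0: [-2.0], 1: [0.0], 2: [2.5]}          # class 2 is the novel class

def sample(n, label_marginal):
    y = rng.choice(len(label_marginal), size=n, p=label_marginal)
    x = np.array([rng.normal(means[c], 1.0) for c in y])
    return x, y

x_src, y_src = sample(5000, [0.7, 0.3, 0.0])      # novel class absent from source
x_tgt, y_tgt = sample(5000, [0.2, 0.5, 0.3])      # arbitrary shift plus a novel class

# A black-box source classifier only knows classes {0, 1}; every novel-class
# target point is forced into a known class, so both the target label
# distribution and the classifier itself need correction.
clf = LogisticRegression().fit(x_src, y_src)
pred = clf.predict(x_tgt)
print("target accuracy of source classifier:", (pred == y_tgt).mean())
print("novel-class points mislabeled as known classes:", int((y_tgt == 2).sum()))
```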
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions that may cause performance drops. In this work, we investigate methods for predicting accuracy on the target domain using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold. ATC outperforms previous methods across several model architectures, types of distribution shift (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (WILDS, ImageNet, BREEDS, CIFAR, and MNIST). In our experiments, ATC estimates target performance 2--4x more accurately than prior methods. We also explore the theoretical foundations of the problem, proving that, in general, identifying the accuracy is just as hard as identifying the optimal predictor, and thus the efficacy of any method rests upon (perhaps unstated) assumptions about the nature of the shift. Finally, analyzing our method on some toy distributions, we provide insights concerning when it works.
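Since the abstract spells out the estimator's form, a short sketch is easy to give: pick a confidence threshold on labeled source data and report the fraction of unlabeled target examples whose confidence exceeds it. The specific threshold-selection rule below, which matches the source accuracy to the source fraction above the threshold, is one natural choice and is stated here as an assumption.

```python
# Sketch of an ATC-style estimate of target accuracy from unlabeled data.
import numpy as np

def atc_estimate(src_probs, src_labels, tgt_probs):
    """src_probs/tgt_probs: (n, k) softmax outputs; src_labels: (n,) integer labels."""
    src_conf = src_probs.max(axis=1)                       # model confidence
    src_acc = (src_probs.argmax(axis=1) == src_labels).mean()
    # Choose t so that the fraction of source confidences above t equals src_acc.
    t = np.quantile(src_conf, 1.0 - src_acc)
    return (tgt_probs.max(axis=1) > t).mean()              # predicted target accuracy

# Usage with any probabilistic classifier's outputs:
# est_acc = atc_estimate(val_probs, val_labels, unlabeled_target_probs)
```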
This paper studies the statistical properties of principal components regression with Laplacian eigenmaps (PCR-LE), a method for nonparametric regression based on Laplacian eigenmaps (LE). PCR-LE works by projecting the vector of observed responses ${\bf y} = (y_1,\ldots,y_n)$ onto a subspace spanned by certain eigenvectors of a neighborhood graph Laplacian. We show that PCR-LE achieves minimax rates of convergence for random design regression over Sobolev spaces. Under sufficient smoothness conditions on the design density $p$, PCR-LE achieves the optimal rates for both estimation (where the known optimal rate in squared $L^2$ norm is $n^{-2s/(2s+d)}$) and goodness-of-fit testing ($n^{-4s/(4s+d)}$). We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the case where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds showing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s+m)}$) and testing ($n^{-4s/(4s+m)}$) rates of convergence. Interestingly, these rates are almost always much faster than the known rates of convergence of graph Laplacian eigenvectors; in other words, for this problem, regression with estimated eigenvectors appears to be statistically easier than estimating the eigenvectors themselves. We support these theoretical results with empirical evidence.
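A minimal sketch of the PCR-LE procedure described above: build a neighborhood graph, take the eigenvectors of its Laplacian with the smallest eigenvalues, and project the response vector onto their span. The kNN construction, unweighted edges, and the number of eigenvectors K are illustrative choices.

```python
# Sketch of principal components regression with Laplacian eigenmaps (PCR-LE).
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

def pcr_le(X, y, k_neighbors=10, K=20):
    n = len(X)
    # Symmetric kNN adjacency matrix (dense, fine for a small sketch).
    _, idx = cKDTree(X).query(X, k=k_neighbors + 1)
    W = np.zeros((n, n))
    for i, nbrs in enumerate(idx):
        W[i, nbrs[1:]] = 1.0                        # skip the point itself
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W                  # unnormalized graph Laplacian
    # Eigenvectors with the K smallest eigenvalues (smoothest functions on the graph).
    _, V = eigh(L, subset_by_index=[0, K - 1])
    return V @ (V.T @ y)                            # projection of y onto span(V)

# Fitted values at the design points:
# y_hat = pcr_le(X, y)
```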
Given only positive examples and unlabeled examples (drawn from both the positive and negative classes), we might hope to estimate an accurate positive-versus-negative classifier. Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE) -- determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning -- given such an estimate, learning the desired positive-versus-negative classifier. Unfortunately, classical methods for both problems break down in high-dimensional settings. Meanwhile, recently proposed heuristics lack theoretical coherence and their efficacy depends on hyperparameter tuning. In this paper, we propose two simple techniques: Best Bin Estimation (BBE), for MPE, and Conditional Value Ignoring Risk (CVIR), a simple objective for PU-learning. Both methods dominate previous approaches, and for BBE we establish formal guarantees that hold whenever we can train a model to cleanly separate out a small subset of positive examples. Our final algorithm (TED)$^n$, which alternates between the two procedures, significantly improves both our mixture proportion estimator and our classifier.
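The sketch below illustrates the mixture proportion estimation idea in its simplest form: compare the upper tails of classifier scores on held-out positives and on unlabeled data. It is a rough stand-in for BBE, which additionally uses confidence bounds to select the best bin; treat it as illustrative only.

```python
# Simplified tail-ratio estimate of the fraction of positives in unlabeled data.
import numpy as np

def estimate_mixture_proportion(pos_scores, unl_scores, min_tail=50):
    """pos_scores/unl_scores: classifier scores on held-out positives and unlabeled data."""
    ratios = []
    for t in np.quantile(pos_scores, np.linspace(0.5, 0.99, 50)):
        n_pos_tail = (pos_scores >= t).sum()
        if n_pos_tail < min_tail:
            break                                   # tail too small to trust
        # If negatives rarely score above t, this ratio approaches the true proportion.
        ratios.append((unl_scores >= t).mean() / (pos_scores >= t).mean())
    return float(min(ratios)) if ratios else 1.0

# pos_scores / unl_scores would come from any classifier trained to separate
# labeled positives from unlabeled data, e.g. clf.predict_proba(...)[:, 1].
```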
To assess generalization, machine learning scientists typically either (i) bound the generalization gap and then (after training) plug in the empirical risk to obtain a bound on the true risk; or (ii) validate empirically on holdout data. However, (i) typically yields vacuous guarantees for overparameterized models, and (ii) shrinks the training set while its guarantee erodes with each reuse of the holdout set. In this paper, we introduce a method that leverages unlabeled data to produce generalization bounds. After augmenting our (labeled) training set with randomly labeled fresh examples, we train in the standard way. Whenever the classifier achieves low error on the clean data and high error on the noisy data, our bound provides a tight upper bound on the true risk. We prove that our bound is valid for 0-1 empirical risk minimization and for linear classifiers trained by gradient descent. Our approach is especially useful in conjunction with deep learning due to the early learning phenomenon, whereby networks fit true labels before noisy ones, though it requires one intuitive assumption. Empirically, on canonical computer vision and NLP tasks, our bound provides non-vacuous generalization guarantees that closely track actual performance. This work offers practitioners a way to certify the generalization of deep networks even when unseen labeled data is unavailable, and provides theoretical insight into the relationship between random label noise and generalization.
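A sketch of the training pipeline described above, with a placeholder classifier and dataset: mix randomly labeled fresh examples into the labeled training set, train as usual, and record the error on the clean and randomly labeled portions separately; the published bound is then computed from these two quantities.

```python
# Sketch of training with randomly labeled fresh examples mixed in.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_with_random_labels(X_clean, y_clean, X_unlabeled, num_classes, rng):
    y_rand = rng.integers(num_classes, size=len(X_unlabeled))   # assign random labels
    X = np.vstack([X_clean, X_unlabeled])
    y = np.concatenate([y_clean, y_rand])
    clf = LogisticRegression(max_iter=1000).fit(X, y)           # standard training
    err_clean = (clf.predict(X_clean) != y_clean).mean()
    err_rand = (clf.predict(X_unlabeled) != y_rand).mean()
    # Low err_clean together with high err_rand (near chance) is the regime in
    # which the resulting upper bound on the true risk is tight (see the paper
    # for the exact bound computed from these two error rates).
    return clf, err_clean, err_rand
```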
This work builds on the models and concepts presented in Part 1 to learn approximate dictionary representations of Koopman operators from data. Part 1 presented a methodology for arguing the subspace invariance of a Koopman dictionary and demonstrated it on the state-inclusive logistic lifting (SILL) basis, an affine basis augmented with conjunctive logistic functions. The SILL dictionary's nonlinear functions are homogeneous, which is the norm in data-driven dictionary learning of Koopman operators. In this paper, we find that structured mixing of heterogeneous dictionary functions, drawn from different classes of nonlinear functions, achieves the same accuracy and dimensional scaling as the deep-learning-based deepDMD algorithm. We show this specifically by building a heterogeneous dictionary composed of SILL functions and conjunctive radial basis functions (RBFs). This mixed dictionary matches the accuracy and dimensional scaling of deepDMD with an order-of-magnitude reduction in parameters, while maintaining geometric interpretability. These results strengthen the viability of dictionary-based Koopman models for solving high-dimensional nonlinear learning problems.
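A sketch of what such a heterogeneous lifting might look like, under illustrative choices of centers and widths: SILL-style conjunctive logistic functions stacked with conjunctive Gaussian RBFs and the state itself; the Koopman matrix would then be fit by least squares exactly as in standard EDMD.

```python
# Sketch of a mixed (heterogeneous) Koopman dictionary: state + conjunctive
# logistic functions + conjunctive Gaussian RBFs. Centers/widths are placeholders.
import numpy as np

def conj_logistic(X, center, steepness=5.0):
    # Product of per-dimension logistic curves ("conjunctive" lifting).
    return np.prod(1.0 / (1.0 + np.exp(-steepness * (X - center))), axis=1)

def conj_rbf(X, center, width=1.0):
    # Gaussian radial basis function, i.e. a product of per-dimension Gaussians.
    return np.exp(-np.sum((X - center) ** 2, axis=1) / (2.0 * width ** 2))

def mixed_lift(X, logistic_centers, rbf_centers):
    cols = [np.ones(len(X))] + list(X.T)                  # affine, state-inclusive part
    cols += [conj_logistic(X, c) for c in logistic_centers]
    cols += [conj_rbf(X, c) for c in rbf_centers]
    return np.stack(cols, axis=1)

# Psi = mixed_lift(X, logistic_centers, rbf_centers); fit K by least squares so
# that mixed_lift(X_next, ...) ~= Psi @ K, as in EDMD.
```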
Koopman operators model nonlinear dynamics as a linear dynamical system acting on a nonlinear function of the state. This nonstandard state is often called a Koopman observable and is usually approximated numerically by a superposition of functions drawn from a dictionary. In a widely used algorithm, Extended Dynamic Mode Decomposition (EDMD), the dictionary functions are drawn from a fixed class of functions. Recently, deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD). The learned representation both (1) accurately models and (2) scales well with the dimension of the original nonlinear system. In this paper we analyze the dictionaries learned by deepDMD and explore the theoretical basis for their strong performance. We study State-Inclusive Logistic Lifting (SILL) dictionary functions as approximations to Koopman observables. Error analysis of these dictionary functions shows that they satisfy a property of subspace approximation, which we define as uniform finite approximate closure. Our results provide a hypothesis to explain the success of deep neural networks in learning numerical approximations to Koopman operators. Part 2 of this paper will extend this explanation by demonstrating the subspace invariance of heterogeneous dictionaries and presenting a head-to-head numerical comparison of deepDMD and low-parameter heterogeneous dictionary learning.
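For readers unfamiliar with EDMD, the sketch below shows the basic recipe with a SILL-like dictionary: lift snapshot pairs through a fixed set of conjunctive logistic functions and solve a least-squares problem for the Koopman approximation. The particular centers and steepness are assumptions made for illustration.

```python
# Minimal EDMD sketch with a SILL-style (state-inclusive logistic) dictionary.
import numpy as np

def conj_logistic(X, center, steepness=5.0):
    # Conjunctive (product-form) logistic function over all state dimensions.
    return np.prod(1.0 / (1.0 + np.exp(-steepness * (X - center))), axis=1, keepdims=True)

def lift(X, centers):
    """State-inclusive dictionary: [1, x, conjunctive logistic features]."""
    ones = np.ones((len(X), 1))
    logi = np.hstack([conj_logistic(X, c) for c in centers])
    return np.hstack([ones, X, logi])

def edmd(X, X_next, centers):
    Psi, Psi_next = lift(X, centers), lift(X_next, centers)
    # Least-squares Koopman approximation: each row satisfies psi(x_next) ~= psi(x) @ K.
    K, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)
    return K

# X, X_next are (n_snapshots, n_states) arrays of consecutive states from a
# trajectory; centers is a list of per-state center vectors, e.g. grid points.
```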
Cloud computing holds the promise of reduced costs through economies of scale. To realize this promise, cloud computing vendors typically solve sequential resource allocation problems, in which customer workloads are packed onto shared hardware. Virtual machines (VMs) form the foundation of modern cloud computing, as they help logically abstract user compute from shared physical infrastructure. Traditionally, VM packing problems are solved by predicting demand and then running a Model Predictive Control (MPC) optimization over a future horizon. We introduce an approximate formulation of an industrial VM packing problem as an MILP with soft constraints parameterized by the predictions. Recently, predict-and-optimize (PnO) was proposed for end-to-end training of prediction models by back-propagating the cost of decisions through the optimization problem. However, PnO cannot scale to the large prediction horizons prevalent in cloud computing. To tackle this issue, we propose the Predict-and-Critic (PnC) framework, which outperforms PnO with just a two-step horizon by leveraging reinforcement learning. PnC jointly trains a prediction model and a terminal Q function that approximates the cost-to-go over a long horizon, by back-propagating the cost of decisions through the optimization problem \emph{and from the future}. The terminal Q function allows us to solve a much smaller two-step horizon optimization problem than the multi-step horizon necessary in PnO. We evaluate PnO and PnC on two datasets, three workloads, and with disturbances not modeled in the optimization problem. We find that PnC significantly improves decision quality over PnO, even when the optimization problem is not a perfect representation of reality. We also find that hardening the soft constraints of the MILP and back-propagating through the constraints improves decision quality for both PnO and PnC.
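The decision structure of PnC can be illustrated with a toy lookahead rule: optimize over two steps of predicted demand and summarize the tail with a terminal Q function. Everything in the sketch below (the discrete action grid, cost and transition placeholders, q_terminal) is hypothetical scaffolding to show the shape of the computation, not the MILP formulation used in the paper.

```python
# Toy two-step lookahead with a learned terminal value, in the spirit of PnC.
import numpy as np
from itertools import product

def two_step_decision(state, predicted_demand, step_cost, transition, q_terminal, actions):
    """Pick the first action of the best two-step plan under a terminal value estimate."""
    best_action, best_cost = None, np.inf
    for a0, a1 in product(actions, repeat=2):          # brute force over a small action grid
        s1 = transition(state, a0, predicted_demand[0])
        s2 = transition(s1, a1, predicted_demand[1])
        cost = (step_cost(state, a0, predicted_demand[0])
                + step_cost(s1, a1, predicted_demand[1])
                + q_terminal(s2))                       # learned cost-to-go estimate
        if cost < best_cost:
            best_action, best_cost = a0, cost
    return best_action

# In PnC, both the demand predictor and q_terminal are trained jointly by
# back-propagating realized decision costs through the optimization and from the future.
```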
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
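A quick usage sketch for the released checkpoints, assuming they are hosted on the Hugging Face Hub under the bigscience organization; the 560M-parameter variant is used so the example runs on modest hardware.

```python
# Load a released BLOOM checkpoint and generate text (hub ids assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"       # small variant; the full model would be bigscience/bloom
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("BLOOM is a 176B-parameter open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```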