The link with exponential families has allowed $k$-means clustering to be generalized to a wide variety of data-generating distributions in exponential families, with clustering distortions drawn from the Bregman divergences. Extending the framework beyond exponential families is important to lift roadblocks such as the lack of robustness of some population minimizers, which is carved into their axiomatization. Current generalisations of exponential families, like $q$-exponential families or, more broadly, deformed exponential families, fail to achieve this goal. In this paper, we provide a new attempt at obtaining the complete framework, grounded in a new generalisation of exponential families that we introduce: tempered exponential measures (TEMs). TEMs keep the maximum-entropy axiomatization framework of $q$-exponential families but, instead of normalizing the measure, normalize a dual called a co-distribution. Numerous interesting properties arise for clustering, such as improved and controllable robustness of the population minimizers, which nevertheless keep a simple analytic form.
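For context, a minimal recap of the classical exponential-family/Bregman link the abstract builds on (the standard Banerjee et al.-style formulation, written in our own notation $\phi$, $\psi$, $F$; the TEM construction itself is not reproduced here). An exponential family with natural parameter $\theta$, sufficient statistic $\phi$ and log-normalizer $\psi$ can be rewritten through the Bregman divergence $D_F$ generated by the convex conjugate $F = \psi^*$:
$$
p_\theta(x) \;=\; h(x)\,\exp\!\big(\langle \theta,\phi(x)\rangle-\psi(\theta)\big)
\;=\; \underbrace{h(x)\,e^{F(\phi(x))}}_{\text{independent of }\theta}\,\exp\!\big(-D_F(\phi(x)\,\|\,\mu)\big),
\qquad \mu=\nabla\psi(\theta),
$$
$$
D_F(u\,\|\,v) \;=\; F(u)-F(v)-\langle u-v,\nabla F(v)\rangle,
\qquad
\arg\min_{c}\;\mathbb{E}\big[D_F(X\,\|\,c)\big] \;=\; \mathbb{E}[X].
$$
The last identity is the population minimizer in question: it is the plain mean for every Bregman divergence, and its lack of robustness is what motivates moving above exponential families.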
In this paper, we improve generative adversarial networks by incorporating a manifold-learning step into the discriminator. We consider locality-constrained linear and subspace-based manifolds, as well as locality-constrained non-linear manifolds. In our design, the manifold-learning and coding steps are intertwined with layers of the discriminator, with the goal of attracting intermediate feature representations onto the manifolds. We adaptively balance the discrepancy between the feature representations and their manifold views, which amounts to a trade-off between denoising on the manifold and refining the manifold. We conclude that locality-constrained non-linear manifolds have the upper hand over linear manifolds because of their non-uniform density and smoothness. We show substantial improvements over a range of recent state-of-the-art baselines.
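A minimal sketch of the kind of locality-constrained subspace projection described above, assuming the input is a batch of intermediate discriminator activations; the neighbourhood size `k`, subspace dimension `dim`, blend weight `lam`, and all function names are illustrative choices, not the paper's implementation:

```python
import numpy as np

def project_onto_local_subspace(features, k=8, dim=4, lam=0.5):
    """Attract each feature vector toward a locality-constrained linear subspace.

    features : (n, d) array of intermediate discriminator activations.
    k        : number of nearest neighbours defining the local neighbourhood.
    dim      : dimensionality of the local subspace fitted by SVD.
    lam      : blend weight between the raw feature and its manifold view
               (a stand-in for the adaptive balancing described in the abstract).
    """
    n, d = features.shape
    refined = np.empty_like(features)
    # Pairwise distances to find each point's neighbourhood.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    for i in range(n):
        idx = np.argsort(dists[i])[:k]           # local neighbourhood (includes i itself)
        local = features[idx]
        mean = local.mean(axis=0)
        # Fit a low-dimensional linear subspace to the neighbourhood.
        _, _, vt = np.linalg.svd(local - mean, full_matrices=False)
        basis = vt[:dim]                          # (dim, d) orthonormal rows
        # Manifold view: projection of the feature onto the local subspace.
        manifold_view = mean + (features[i] - mean) @ basis.T @ basis
        # Blend raw feature and manifold view (denoising vs. refining trade-off).
        refined[i] = (1 - lam) * features[i] + lam * manifold_view
    return refined

# Example: refine a batch of 64 random 128-dimensional activations.
feats = np.random.randn(64, 128).astype(np.float32)
refined = project_onto_local_subspace(feats)
```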
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
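As an illustration of the setting just described, a minimal federated-averaging-style sketch in which each client trains locally and only model parameters, never raw data, reach the server; the linear model, client data, and hyperparameters are toy placeholders, and real FL systems add client sampling, secure aggregation, compression, and more:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on linear regression."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)        # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, d=10):
    """Server orchestration: broadcast, local training, weighted averaging."""
    w_global = np.zeros(d)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                     # in practice, a sampled subset of clients
            updates.append(local_update(w_global, X, y))
            sizes.append(len(y))
        # Aggregate: average the returned parameters, weighted by local data size.
        w_global = np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))
    return w_global

# Toy example: three clients with differently shifted local data distributions.
rng = np.random.default_rng(0)
w_true = rng.normal(size=10)
clients = []
for n in (50, 200, 80):
    X = rng.normal(size=(n, 10)) + rng.normal(scale=0.5, size=10)
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))
w_hat = federated_averaging(clients)
```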
In many ways, the study of machine learning problems is difficult to separate from the study of the loss functions being used. One avenue of enquiry is to view these loss functions as scoring rules through their proper composite representation, in which predictions are mapped to probability distributions that are then scored by a proper scoring rule. However, recent work to date has mostly focused on analysing the (typically finite-dimensional) conditional risk problem on the output space, whose minimization drives down the larger total risk. We generalize many of these results to the infinite-dimensional setting and, in doing so, are able to exploit the family resemblance between density estimation and conditional density estimation to provide a simple characterization of the canonical link.
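For concreteness, the proper-composite decomposition referred to above can be displayed as follows (our notation; the paper's canonical-link characterization itself is not reproduced). A loss $\ell$ over predictions $v$ is proper composite when it factors through a proper loss $\lambda$ and an invertible link $\psi$:
$$
\ell(y, v) \;=\; \lambda\big(y, \psi^{-1}(v)\big),
\qquad
\mathbb{E}_{Y\sim p}\big[\lambda(Y, p)\big] \;\le\; \mathbb{E}_{Y\sim p}\big[\lambda(Y, q)\big] \ \ \text{for all distributions } q,
$$
the second condition being propriety of $\lambda$. A familiar finite-dimensional instance is the logistic loss, where $\lambda$ is the log loss and $\psi^{-1}$ is the sigmoid mapping a real-valued score to a class probability; the infinite-dimensional setting studied here concerns the analogue of $\psi$ when the prediction is itself a density.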
We introduce a family of pairwise stochastic gradient estimators for gradients of expectations that are related to the log-derivative trick but involve pairwise interactions between samples. The simplest example of our new estimators, dubbed the fundamental trick estimator, is shown to arise either (a) by introducing and approximating an integral representation based on the fundamental theorem of calculus, or (b) by applying the reparameterisation trick to an implicit parameterisation under infinitesimal perturbation of the parameters. From the former perspective we generalise to a reproducing kernel Hilbert space representation, which gives rise to a locality parameter in the pairwise interactions mentioned above, yielding our representer trick estimator. The resulting estimators are unbiased and are shown to provide a component of useful information that is independent of the log-derivative estimator. We provide further novel theoretical analysis that characterises the variance of the new techniques. Promising analytical and numerical examples confirm the theory and intuitions behind the new estimators.
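As background for the two baselines the abstract contrasts with (standard identities, stated in our notation; the pairwise estimators themselves are not reproduced here), the log-derivative (score-function) estimator and the reparameterisation trick both rewrite the gradient of an expectation:
$$
\nabla_\theta\, \mathbb{E}_{x \sim p_\theta}\!\big[f(x)\big]
\;=\; \mathbb{E}_{x \sim p_\theta}\!\big[f(x)\, \nabla_\theta \log p_\theta(x)\big]
\qquad\text{(log-derivative trick)},
$$
$$
\nabla_\theta\, \mathbb{E}_{x \sim p_\theta}\!\big[f(x)\big]
\;=\; \mathbb{E}_{\varepsilon \sim q}\!\big[\nabla_\theta f\big(g_\theta(\varepsilon)\big)\big]
\qquad\text{(reparameterisation trick, } x = g_\theta(\varepsilon)\text{)}.
$$
Both are estimated by Monte Carlo over independent samples; the estimators in this abstract instead couple pairs of samples.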
We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large-scale dataset of clothing images, employing a diversity of architectures (stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers), demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
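A minimal sketch of the two matrix-based corrections described above (backward: multiply the per-class loss vector by the inverse of the noise-transition matrix; forward: push the predictions through the transition matrix before applying the loss), assuming a known row-stochastic matrix `T`. This illustrates the idea with plain cross-entropy on probabilities, not the authors' code:

```python
import numpy as np

def backward_corrected_loss(probs, noisy_labels, T):
    """Backward correction: per-class loss vector multiplied by inv(T).

    probs        : (n, c) predicted class probabilities.
    noisy_labels : (n,) observed (possibly corrupted) labels.
    T            : (c, c) row-stochastic matrix, T[i, j] = P(noisy j | clean i).
    """
    T_inv = np.linalg.inv(T)
    per_class_loss = -np.log(np.clip(probs, 1e-12, 1.0))      # (n, c) cross-entropy per class
    corrected = per_class_loss @ T_inv.T                       # each row becomes inv(T) @ loss
    return corrected[np.arange(len(noisy_labels)), noisy_labels].mean()

def forward_corrected_loss(probs, noisy_labels, T):
    """Forward correction: predictions mapped through T, then standard cross-entropy."""
    noisy_probs = probs @ T                                    # P(noisy label | x)
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(np.clip(picked, 1e-12, 1.0)).mean()

# Toy example: 3 classes with symmetric 20% label noise (T[i, i] = 0.8).
c = 3
T = np.full((c, c), 0.1) + 0.7 * np.eye(c)
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(c), size=16)
noisy = rng.integers(0, c, size=16)
print(backward_corrected_loss(probs, noisy, T), forward_corrected_loss(probs, noisy, T))
```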
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
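One standard, distribution-free way to obtain such a quantile bound from order statistics is sketched below, stated as an illustration under an i.i.d. assumption on a held-out loss sample (the framework in the abstract is more general): with probability at least $1-\delta$, the $k$-th smallest observed loss upper-bounds the $\alpha$-quantile whenever $k$ is chosen so that the Binomial tail $P(\mathrm{Bin}(n,\alpha) \ge k)$ is at most $\delta$.

```python
import numpy as np
from scipy.stats import binom

def loss_quantile_upper_bound(losses, alpha=0.9, delta=0.05):
    """Distribution-free upper confidence bound on the alpha-quantile of the loss.

    Assumes `losses` are i.i.d. held-out loss values. Returns L_(k), the smallest
    order statistic with P(Bin(n, alpha) >= k) <= delta, which upper-bounds the
    alpha-quantile of the loss distribution with probability at least 1 - delta.
    """
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    for k in range(1, n + 1):
        # P(Bin(n, alpha) >= k) = 1 - P(Bin(n, alpha) <= k - 1)
        if 1.0 - binom.cdf(k - 1, n, alpha) <= delta:
            return losses[k - 1]           # k-th smallest loss (1-indexed)
    return np.inf                           # sample too small for this (alpha, delta)

# Example: bound the 90th-percentile loss of a predictor from 1000 held-out losses.
rng = np.random.default_rng(0)
held_out_losses = rng.exponential(scale=1.0, size=1000)
print(loss_quantile_upper_bound(held_out_losses, alpha=0.9, delta=0.05))
```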
The widespread use of mobile devices, the sensitivity of the information they contain, and the shortcomings of current mobile user authentication methods call for novel, secure, and unobtrusive solutions to verify users' identity. In this article, we propose TypeFormer, a novel Transformer architecture that models free-text keystroke dynamics performed on mobile devices for the purpose of user authentication. The proposed model consists of Temporal and Channel Modules enclosing two Long Short-Term Memory (LSTM) recurrent layers, Gaussian Range Encoding (GRE), a multi-head Self-Attention mechanism, and a Block-Recurrent structure. Experimenting on one of the largest public databases to date, the Aalto mobile keystroke database, TypeFormer outperforms current state-of-the-art systems, achieving Equal Error Rate (EER) values of 3.25% using only 5 enrolment sessions of 50 keystrokes each. In this way, we contribute to reducing the traditional performance gap of the challenging mobile free-text scenario with respect to its desktop and fixed-text counterparts. Additionally, we analyse the behaviour of the model under different experimental configurations, such as the length of the keystroke sequences and the number of enrolment sessions, showing margin for improvement with more enrolment data. Finally, a cross-database evaluation is carried out, demonstrating the robustness of the features extracted by TypeFormer in comparison with existing approaches.
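A highly simplified sketch of this kind of keystroke-dynamics verifier, for illustration only: an LSTM front-end followed by multi-head self-attention produces a user embedding that is compared against enrolment sessions by cosine similarity. It deliberately omits TypeFormer's Gaussian Range Encoding, the separate Temporal/Channel modules, and the Block-Recurrent structure, and all dimensions and feature counts are placeholders:

```python
import torch
import torch.nn as nn

class KeystrokeEncoder(nn.Module):
    """Toy keystroke-dynamics encoder: LSTM front-end + self-attention + pooling."""

    def __init__(self, n_features=5, d_model=64, n_heads=4, embed_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, d_model, num_layers=2, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, embed_dim)

    def forward(self, x):
        # x: (batch, seq_len, n_features) timing/positional features per keystroke.
        h, _ = self.lstm(x)                     # (batch, seq_len, d_model)
        a, _ = self.attn(h, h, h)               # self-attention over the keystroke sequence
        return self.proj(a.mean(dim=1))         # (batch, embed_dim) user embedding

# Verification by embedding distance: enrol with a few sessions, score a probe session.
model = KeystrokeEncoder()
enrol = torch.randn(5, 50, 5)                   # 5 enrolment sessions of 50 keystrokes each
probe = torch.randn(1, 50, 5)
with torch.no_grad():
    ref = model(enrol).mean(dim=0, keepdim=True)
    score = torch.nn.functional.cosine_similarity(model(probe), ref)
print(score.item())                             # higher score -> more likely the same user
```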
Analogical proportions compare pairs of items (a, b) and (c, d) in terms of their differences and similarities. They play a key role in the formalization of analogical inference. The paper first discusses how to improve analogical inference in terms of accuracy and in terms of computational cost. Then it indicates the potential of analogical proportions for explanation. Finally, it highlights the close relationship between analogical proportions and multi-valued dependencies, which reveals an unsuspected aspect of the former.
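For Boolean items, the usual componentwise encoding of analogical proportions and the inference step they support can be sketched as follows (this is the standard Boolean definition from the literature, not the paper's refinements):

```python
def proportion(a, b, c, d):
    """Boolean analogical proportion a : b :: c : d.

    Holds when a differs from b exactly as c differs from d, i.e.
    (a and not b) <-> (c and not d)  and  (not a and b) <-> (not c and d).
    """
    return ((a and not b) == (c and not d)) and ((not a and b) == (not c and d))

def solve(a, b, c):
    """Analogical inference: return d such that a : b :: c : d holds, else None."""
    for d in (False, True):
        if proportion(a, b, c, d):
            return d
    return None                       # equation not solvable, e.g. a=0, b=1, c=1

def infer(x_a, x_b, x_c):
    """Componentwise inference on Boolean vectors; None if any component fails."""
    result = []
    for a, b, c in zip(x_a, x_b, x_c):
        d = solve(a, b, c)
        if d is None:
            return None
        result.append(d)
    return result

# Example: the inferred item applies the change from x_a to x_b onto x_c.
x_a = [True,  False, True,  False]
x_b = [True,  True,  True,  False]
x_c = [False, False, True,  True]
print(infer(x_a, x_b, x_c))           # [False, True, True, True]
```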