In real-world phenomena that involve mutual influence or causal effects between interconnected units, equilibrium states are typically represented by cycles in graphical models. An expressive class of graphical models, \textit{relational causal models}, can represent and reason about dynamic systems that exhibit such cycles or feedback loops. Existing cyclic causal discovery algorithms for learning causal models from observational data assume that data instances are independent and identically distributed, which makes them unsuitable for relational causal models. At the same time, causal discovery algorithms for relational causal models assume acyclicity. In this work, we examine the necessary and sufficient conditions under which a constraint-based relational causal discovery algorithm is sound and complete for \textit{cyclic relational causal models}. We introduce \textit{relational acyclification}, an operation specifically designed for relational models that enables reasoning about the identifiability of cyclic relational causal models. We show that under the assumptions of relational acyclification and $\sigma$-faithfulness, the relational causal discovery algorithm RCD (Maier et al., 2013) is sound and complete for cyclic models. We present experimental results to support our claim.
Independence testing plays a central role in statistical and causal inference from observational data. Standard independence tests assume that the data samples are independent and identically distributed (i.i.d.), but this assumption is violated in many real-world datasets and applications centered on relational systems. This work studies the problem of estimating independence in relational systems by defining sufficient representations for the sets of observations that influence individual instances. Specifically, we define marginal and conditional independence tests for relational data by viewing kernel mean embeddings as flexible aggregation functions for relational variables. We propose a consistent, nonparametric, scalable kernel test to operationalize relational independence testing for non-i.i.d. observational data under a set of structural assumptions. We empirically evaluate our proposed method on a variety of synthetic and semi-synthetic networks and demonstrate its effectiveness compared with state-of-the-art kernel-based independence tests.
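The relational test described above builds on standard kernel independence testing. As a minimal illustrative sketch of the i.i.d. building block (not the paper's relational method, which additionally aggregates over relational neighborhoods via kernel mean embeddings), the classic HSIC statistic with an RBF kernel can be computed as follows; the bandwidth `sigma=1.0` is an assumed default:

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) Gram matrix.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between 1-D samples x and y.

    Large values indicate dependence; values near zero indicate
    (approximate) independence under the i.i.d. assumption.
    """
    n = len(x)
    K = rbf_gram(x, sigma)
    L = rbf_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

On strongly dependent pairs (e.g. `y = x + noise`) this statistic is markedly larger than on independent pairs, which is the signal a permutation or asymptotic test would threshold.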
Estimating how a treatment affects units individually, known as heterogeneous treatment effect (HTE) estimation, is an essential component of decision-making and policy implementation. The accumulation of large amounts of data in many domains, such as healthcare and e-commerce, has led to increased interest in developing data-driven algorithms for estimating heterogeneous effects from observational and experimental data. However, these methods often make strong assumptions about the observed features and ignore the underlying causal model structure, which can produce biased HTE estimates. At the same time, accounting for the causal structure of real-world data is rarely trivial, since the causal mechanisms that generate the data are typically unknown. To address this problem, we develop a feature selection method that considers each feature's value for HTE estimation and learns the relevant parts of the causal structure from data. We provide strong empirical evidence that our method improves existing data-driven HTE estimation methods under arbitrary underlying causal structures. Our results on synthetic, semi-synthetic, and real-world datasets show that our feature selection algorithm leads to lower HTE estimation error.
Robust machine learning is an increasingly important topic focused on developing models that are resilient to various forms of imperfect data. Due to the pervasiveness of recommender systems in online technologies, researchers have carried out several robustness studies focusing on data sparsity and profile injection attacks. Instead, we propose a more holistic view of robustness for recommender systems that spans multiple dimensions - robustness with respect to sub-populations, transformations, distributional disparity, attacks, and data sparsity. While several libraries allow users to compare different recommender system models, no software library supports comprehensive robustness evaluation of recommender system models under different scenarios. As our main contribution, we present a robustness evaluation toolkit, Robustness Gym for RecSys (RGRecSys - https://www.github.com/salesforce/rgrecsys), which allows us to quickly and uniformly evaluate the robustness of recommender system models.
Objective: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. Method: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation; and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we perform a detailed quantitative and qualitative analysis with the help of specialists. Conclusion: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. Significance: This study shows the potential of using semi-supervised GAN-based classification to improve bladder tissue classification when annotations are limited in multi-domain data.
The receptive field (RF), which determines the region of a time series to be ``seen'' and used, is critical to improving performance for time series classification (TSC). However, the variation of signal scales across and within time series data makes it challenging to decide on proper RF sizes for TSC. In this paper, we propose a dynamic sparse network (DSN) with sparse connections for TSC, which can learn to cover various RFs without cumbersome hyper-parameter tuning. The kernels in each sparse layer are sparse and can be explored under constraint regions by dynamic sparse training, which makes it possible to reduce the resource cost. The experimental results show that the proposed DSN model can achieve state-of-the-art performance on both univariate and multivariate TSC datasets with less than 50\% of the computational cost of recent baseline methods, opening the path towards more accurate resource-aware methods for time series analyses. Our code is publicly available at: https://github.com/QiaoXiao7282/DSN.
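Dynamic sparse training of the kind DSN builds on typically alternates between pruning low-magnitude connections and regrowing the same number elsewhere, keeping overall sparsity constant. The following is a generic, hypothetical sketch of one such update step (DSN additionally constrains kernels to explore within regions, which is omitted here):

```python
import numpy as np

def prune_and_regrow(weights, mask, frac=0.2, rng=None):
    """One dynamic-sparse-training update on a flat weight vector:
    drop the smallest-magnitude active weights and regrow the same
    number at random inactive slots, preserving total sparsity."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = np.flatnonzero(mask)
    k = int(frac * active.size)
    if k == 0:
        return mask
    # Prune the k active connections with smallest |weight|.
    drop = active[np.argsort(np.abs(weights[active]))[:k]]
    mask = mask.copy()
    mask[drop] = False
    # Regrow k connections at random currently-inactive positions.
    inactive = np.flatnonzero(~mask)
    grow = rng.choice(inactive, size=k, replace=False)
    mask[grow] = True
    return mask
```

Because the number of active connections never changes, the compute budget stays fixed while the connectivity pattern (and thus the effective receptive field) adapts during training.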
While the problem of hallucinations in neural machine translation has long been recognized, progress on alleviating it has so far been limited. Indeed, it recently turned out that without artificially encouraging models to hallucinate, previously existing methods fall short, and even the standard sequence log-probability is more informative. This means that characteristics internal to the model can give much more information than we expect, and before using external models and measures, we first need to ask: how far can we go if we use nothing but the translation model itself? We propose a method that evaluates the percentage of the source contribution to a generated translation. Intuitively, hallucinations are translations "detached" from the source, hence they can be identified by low source contribution. This method improves detection accuracy for the most severe hallucinations by a factor of 2 and is able to alleviate hallucinations at test time on par with the previous best approach that relies on external models. Next, if we move away from internal model characteristics and allow external tools, we show that using sentence similarity from cross-lingual embeddings further improves these results.
We pose video object segmentation as spectral graph clustering in space and time, with one graph node for each pixel and edges forming local space-time neighborhoods. We claim that the strongest cluster in this video graph represents the salient object. We start by introducing a novel and efficient method based on 3D filtering for approximating the spectral solution, as the principal eigenvector of the graph's adjacency matrix, without explicitly building the matrix. This key property allows us to have a fast parallel implementation on GPU, orders of magnitude faster than classical approaches for computing the eigenvector. Our motivation for a spectral space-time clustering approach, unique in video semantic segmentation literature, is that such clustering is dedicated to preserving object consistency over time, which we evaluate using our novel segmentation consistency measure. Further on, we show how to efficiently learn the solution over multiple input feature channels. Finally, we extend the formulation of our approach beyond the segmentation task, into the realm of object tracking. In extensive experiments we show significant improvements over top methods, as well as over powerful ensembles that combine them, achieving state-of-the-art on multiple benchmarks, both for tracking and segmentation.
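The core computation above - the principal eigenvector of the graph's adjacency matrix without explicitly building the matrix - can be sketched with matrix-free power iteration. In the paper the matrix-vector product is approximated by 3D filtering over local space-time neighborhoods; here a caller-supplied `matvec` callback stands in for that step:

```python
import numpy as np

def power_iteration(matvec, n, iters=100, seed=0):
    """Principal eigenvector of an implicitly defined matrix.

    `matvec` applies the (non-negative, e.g. adjacency) matrix to a
    vector without the matrix ever being materialized, so memory stays
    O(n) even for graphs with one node per pixel.
    """
    rng = np.random.default_rng(seed)
    v = np.abs(rng.normal(size=n))  # positive start vector
    for _ in range(iters):
        v = matvec(v)
        v /= np.linalg.norm(v)  # renormalize each step
    return v
```

For a video graph with one node per pixel, `matvec` would be a local 3D filtering pass over the space-time volume, which is what makes the GPU-parallel implementation orders of magnitude faster than explicit eigendecomposition.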
Metric Elicitation (ME) is a framework for eliciting classification metrics that better align with implicit user preferences based on the task and context. The existing ME strategy is based on the assumption that users can most easily provide preference feedback over classifier statistics such as confusion matrices. This work examines ME in practice by providing the first implementation of the ME strategy. Specifically, we create a web-based ME interface and conduct a user study that elicits users' preferred metrics in a binary classification setting. We discuss the study findings and present guidelines for future research in this direction.
Learning-based image compression has improved to a level where it can outperform traditional image codecs such as HEVC and VVC in terms of coding performance. In addition to good compression performance, device interoperability is essential for a compression codec to be deployed, i.e., encoding and decoding on different CPUs or GPUs should be error-free and incur negligible performance reduction. In this paper, we present a method to solve the device interoperability problem of a state-of-the-art image compression network. We apply quantization to the entropy networks that output entropy parameters. We suggest a simple method which can ensure cross-platform encoding and decoding, and can be implemented quickly with a minor performance deviation of 0.3% BD-rate from the floating-point model results.
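The interoperability idea can be illustrated with a hedged sketch: snapping the entropy parameters to a fixed integer grid, so that tiny floating-point discrepancies between devices map to identical integers before they reach the arithmetic coder. The step size and clipping range below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def quantize_params(params, step=1 / 64, lo=-10.0, hi=10.0):
    """Snap entropy parameters to a fixed-point grid.

    Integers are bit-exact across CPUs and GPUs, so encoder and
    decoder agree on the probability model even if their float
    arithmetic differs in the last few bits.
    """
    q = np.round(params / step)
    return np.clip(q, lo / step, hi / step).astype(np.int32)

def dequantize(q, step=1 / 64):
    """Recover the (grid-aligned) parameter values from integers."""
    return q.astype(np.float64) * step
```

A perturbation far smaller than half the step size quantizes to the same integers, which is exactly the cross-platform determinism property; the coarseness of `step` is what trades a small BD-rate loss for that guarantee.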