Despite a surge of recent advances in promoting machine learning (ML) fairness, the existing mainstream approaches mostly require retraining or finetuning the entire weights of the neural network to meet the fairness criteria. However, this is often infeasible in practice for large-scale trained models due to high computational and storage costs, low data efficiency, and model privacy concerns. In this paper, we propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram keeps the neural model fixed and instead appends to the input a set of perturbations, called the fairness trigger, which is tuned towards the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why and under what conditions fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output predictions of fixed ML models by providing false demographic information that hinders the model from exploiting the correct demographic information in its predictions. Extensive experiments on both NLP and CV datasets demonstrate that, under two widely used fairness criteria, our method achieves better fairness improvements than retraining-based methods with far less training cost and data dependency.
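For concreteness, one plausible instantiation of such a min-max formulation (an illustrative sketch, not necessarily the paper's exact objective) pits the fairness trigger against an adversary that tries to recover the demographic attribute from the frozen model's output:

\min_{\delta}\ \max_{g}\ \mathbb{E}_{(x,y,z)}\Big[\mathcal{L}_{\text{task}}\big(f_{\theta}(x \oplus \delta),\, y\big) \;-\; \lambda\, \mathcal{L}_{\text{adv}}\big(g(f_{\theta}(x \oplus \delta)),\, z\big)\Big]

Here f_θ is the fixed model, x ⊕ δ denotes appending the trigger δ to the input, g is an adversary predicting the demographic attribute z from the model output, and λ trades off task utility against how much demographic information the best adversary can still extract; the trigger is the only quantity optimized on the defender's side.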
As one of the most successful AI-powered applications, recommender systems aim to help people make appropriate decisions in an effective and efficient way by providing personalized suggestions in many aspects of our lives, especially for various human-oriented online services such as e-commerce platforms and social media sites. Over the past few decades, the rapid development of recommender systems has greatly benefited human beings by creating economic value, saving time and effort, and promoting social good. However, recent studies have found that data-driven recommender systems can pose serious threats to users and society, such as spreading fake news to manipulate public opinion on social media sites, amplifying unfairness toward under-represented groups or individuals in job matching services, or inferring private information from recommendation results. Therefore, the trustworthiness of these systems has been attracting attention from multiple perspectives, with the aim of mitigating the negative impacts caused by recommender systems and enhancing the public's trust in recommender system techniques. In this survey, we provide a comprehensive overview of trustworthy recommender systems (TREC), with a specific focus on the six most important aspects: safety and robustness, non-discrimination and fairness, explainability, privacy, environmental well-being, and accountability and auditability. For each aspect, we summarize recent related technologies and discuss potential research directions to help achieve trustworthy recommender systems in the future.
Social recommendation leverages social relations to enhance representation learning for recommendation. Most social recommendation models unify user representations across user-item interactions (collaborative domain) and social relations (social domain). However, such an approach may fail to model users' heterogeneous behavior patterns in the two domains, impairing the expressiveness of user representations. In this work, to address this limitation, we propose DCREC, a novel disentangled contrastive learning framework for social recommendation. More specifically, we propose to learn disentangled user representations from the item and social domains. Moreover, disentangled contrastive learning is designed to perform knowledge transfer between the disentangled user representations for social recommendation. Comprehensive experiments on various real-world datasets demonstrate the superiority of our proposed model.
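As a rough illustration of cross-domain knowledge transfer between disentangled user representations, the sketch below uses a generic symmetric InfoNCE objective that treats the same user's item-domain and social-domain representations as a positive pair. This is an assumption for illustration only; DCREC's exact loss and encoders may differ.

```python
import torch
import torch.nn.functional as F

def cross_domain_infonce(z_item, z_social, tau=0.2):
    """z_item, z_social: (batch, dim) disentangled user representations from the two domains."""
    z_item = F.normalize(z_item, dim=-1)
    z_social = F.normalize(z_social, dim=-1)
    logits = z_item @ z_social.t() / tau        # cosine similarities between all user pairs
    labels = torch.arange(z_item.size(0))       # the same user in the other domain is the positive
    # symmetric loss: align item-domain views to social-domain views and vice versa
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

loss = cross_domain_infonce(torch.randn(8, 64), torch.randn(8, 64))
```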
Recent studies have shown that deep neural network-based recommender systems are vulnerable to adversarial attacks, where attackers inject carefully crafted fake user profiles (i.e., sets of items that fake users have interacted with) into a target recommender system to achieve malicious goals, such as promoting or demoting a set of target items. Due to security and privacy concerns, it is more practical to perform adversarial attacks under the black-box setting, where the attacker cannot easily access the architecture/parameters and training data of the target system. However, generating high-quality fake user profiles under the black-box setting, with limited knowledge of the target system, is challenging. To address this challenge, in this work we introduce a novel strategy that leverages items' attribute information (i.e., the item knowledge graph), which is publicly accessible and provides rich auxiliary knowledge to enhance the generation of fake user profiles. More specifically, we propose a knowledge-graph-enhanced black-box attack framework (KGATTACK) that effectively learns attack policies through deep reinforcement learning, in which the knowledge graph is seamlessly integrated into a hierarchical policy network to generate fake user profiles for performing adversarial black-box attacks. Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attack framework under the black-box setting.
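To make the idea concrete, here is a heavily simplified, hypothetical sketch of knowledge-graph-guided profile generation with a learned policy. All names (ProfilePolicy, kg_neighbors, the toy graph) are illustrative assumptions, not the paper's actual components; in a full attack, the collected log-probabilities would feed a policy-gradient (REINFORCE-style) update whose reward is a black-box signal such as the target item's exposure after the profile is injected.

```python
import torch
import torch.nn as nn

class ProfilePolicy(nn.Module):
    """Scores candidate items given the items already placed in the fake profile."""
    def __init__(self, num_items, dim=32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, profile, candidates):
        state = self.item_emb(torch.tensor(profile)).mean(dim=0)   # summary of the current profile
        cand = self.item_emb(torch.tensor(candidates))
        logits = self.scorer(cand + state).squeeze(-1)             # score each candidate against the state
        return torch.distributions.Categorical(logits=logits)

def generate_profile(policy, kg_neighbors, target_item, length):
    """Grow a fake profile by repeatedly sampling knowledge-graph neighbors of chosen items."""
    profile, log_probs = [target_item], []
    for _ in range(length):
        # Candidate pool: items linked in the knowledge graph to anything already in the profile.
        candidates = sorted({n for i in profile for n in kg_neighbors.get(i, [])} - set(profile))
        if not candidates:
            break
        dist = policy(profile, candidates)
        idx = dist.sample()
        log_probs.append(dist.log_prob(idx))
        profile.append(candidates[idx.item()])
    return profile, log_probs

policy = ProfilePolicy(num_items=100)
kg = {0: [5, 7], 5: [0, 9], 7: [0, 12], 9: [5], 12: [7]}   # toy item knowledge graph (adjacency lists)
fake_profile, log_probs = generate_profile(policy, kg, target_item=0, length=4)
```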
In recent years, Graph Neural Networks (GNNs), which can naturally integrate node information and topological structure, have been demonstrated to be powerful in learning on graph data. These advantages of GNNs provide great potential to advance social recommendation since data in social recommender systems can be represented as user-user social graph and user-item graph; and learning latent factors of users and items is the key. However, building social recommender systems based on GNNs faces challenges. For example, the user-item graph encodes both interactions and their associated opinions; social relations have heterogeneous strengths; users are involved in two graphs (e.g., the user-user social graph and the user-item graph). To address the three aforementioned challenges simultaneously, in this paper, we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph and propose the framework GraphRec, which coherently models two graphs and heterogeneous strengths. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework GraphRec.
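The following minimal sketch (not the authors' implementation; attention weights and the item-modeling branch are omitted, and all module names are illustrative) shows how a user's latent factor can jointly aggregate (item, opinion) pairs from the user-item graph and neighbor embeddings from the user-user social graph:

```python
import torch
import torch.nn as nn

class UserModel(nn.Module):
    def __init__(self, num_users, num_items, num_ratings, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.rating_emb = nn.Embedding(num_ratings, dim)   # opinion (rating) embedding
        self.item_space = nn.Linear(2 * dim, dim)          # fuse item + opinion per interaction
        self.fuse = nn.Linear(2 * dim, dim)                # fuse the two spaces

    def forward(self, items, ratings, friends):
        # Item-space aggregation: each interaction is an (item, opinion) pair.
        inter = torch.cat([self.item_emb(items), self.rating_emb(ratings)], dim=-1)
        h_item = torch.relu(self.item_space(inter)).mean(dim=0)
        # Social-space aggregation: average the social neighbors' embeddings.
        h_social = self.user_emb(friends).mean(dim=0)
        # Final user latent factor combines both spaces.
        return torch.relu(self.fuse(torch.cat([h_item, h_social], dim=-1)))

# usage sketch: a user who rated items 3 and 7 (ratings 5 and 2) and has friends 1 and 4
model = UserModel(num_users=10, num_items=20, num_ratings=6)
h_u = model(torch.tensor([3, 7]), torch.tensor([5, 2]), torch.tensor([1, 4]))
```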
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes. To promote the recovery of the objects on water in the image, two loss functions are exploited for the network where a prior map is designed to invert the dark channel and the min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free image, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive experiments on qualitative and quantitative comparisons demonstrate that the proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
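As a rough sketch of the prior-map idea (with an assumed patch size and a naive sliding-window minimum; not the paper's code), the dark channel is inverted and min-max normalized so that bright sky regions are suppressed and darker on-water objects are emphasized:

```python
import numpy as np

def dark_channel(img, patch=15):
    """img: HxWx3 float array in [0, 1]. Dark channel = local minimum over channels and a patch."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    h, w = dc.shape
    out = np.empty_like(dc)
    for i in range(h):               # naive sliding-window minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def prior_map(img, patch=15, eps=1e-8):
    inv = 1.0 - dark_channel(img, patch)                          # invert the dark channel
    return (inv - inv.min()) / (inv.max() - inv.min() + eps)      # min-max normalization

pm = prior_map(np.random.rand(64, 64, 3))
```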
Automatic image colorization is a particularly challenging problem. Due to the high ill-posedness of the problem and multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but highly depend on hand-crafted dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders, for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former manages to restore the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders work together to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method succeeds in producing semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works both quantitatively and qualitatively. Codes will be made publicly available.
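One simple way such a colorfulness loss could be instantiated (an assumption based on the Hasler-Susstrunk colorfulness metric, not necessarily the paper's exact formulation) is to penalize the negative colorfulness of the generated image, so that training pushes color richness up:

```python
import torch

def colorfulness_loss(rgb):
    """rgb: (B, 3, H, W) tensor in [0, 1]. Returns a scalar loss (lower = more colorful)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    rg = r - g                         # red-green opponent channel
    yb = 0.5 * (r + g) - b             # yellow-blue opponent channel
    std = torch.sqrt(rg.var(dim=(1, 2)) + yb.var(dim=(1, 2)) + 1e-8)
    mean = torch.sqrt(rg.mean(dim=(1, 2)) ** 2 + yb.mean(dim=(1, 2)) ** 2 + 1e-8)
    colorfulness = std + 0.3 * mean    # Hasler-Susstrunk colorfulness score per image
    return -colorfulness.mean()

loss = colorfulness_loss(torch.rand(2, 3, 64, 64))
```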
In this work, we propose a novel image reconstruction framework that directly learns a neural implicit representation in k-space for ECG-triggered non-Cartesian Cardiac Magnetic Resonance Imaging (CMR). While existing methods bin acquired data from neighboring time points to reconstruct one phase of the cardiac motion, our framework allows for a continuous, binning-free, and subject-specific k-space representation. We assign a unique coordinate that consists of time, coil index, and frequency domain location to each sampled k-space point. We then learn the subject-specific mapping from these unique coordinates to k-space intensities using a multi-layer perceptron with frequency domain regularization. During inference, we obtain a complete k-space for Cartesian coordinates and an arbitrary temporal resolution. A simple inverse Fourier transform recovers the image, eliminating the need for density compensation and costly non-uniform Fourier transforms for non-Cartesian data. This novel imaging framework was tested on 42 radially sampled datasets from 6 subjects. The proposed method outperforms other techniques qualitatively and quantitatively using data from four and one heartbeat(s) and 30 cardiac phases. Our results for one heartbeat reconstruction of 50 cardiac phases show improved artifact removal and spatio-temporal resolution, leveraging the potential for real-time CMR.
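A minimal sketch of this idea (illustrative only; the frequency-domain regularization, coil combination, and training loop are omitted) maps each (time, coil, kx, ky) coordinate to a complex k-space value with an MLP, queries a Cartesian grid after fitting, and applies a simple inverse FFT:

```python
import torch
import torch.nn as nn

class KSpaceMLP(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),              # real and imaginary part
        )

    def forward(self, coords):                 # coords: (N, 4) = (time, coil, kx, ky)
        out = self.net(coords)
        return torch.complex(out[..., 0], out[..., 1])

model = KSpaceMLP()
# After fitting on sampled (coordinate, intensity) pairs, query a full Cartesian grid
# at an arbitrary time point and apply the inverse FFT to recover the image.
t, coil, n = 0.3, 0.0, 64
kx, ky = torch.meshgrid(torch.linspace(-0.5, 0.5, n), torch.linspace(-0.5, 0.5, n), indexing="ij")
grid = torch.stack([torch.full_like(kx, t), torch.full_like(kx, coil), kx, ky], dim=-1)
kspace = model(grid.reshape(-1, 4)).reshape(n, n)
image = torch.fft.ifftshift(torch.fft.ifft2(torch.fft.fftshift(kspace)))
```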
Persuasion modeling is a key building block for conversational agents. Existing works in this direction are limited to analyzing textual dialogue corpus. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance level annotations of persuasion strategy, and game level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset, code, and models can be found at https://persuasion-deductiongame.socialai-data.org.
A key barrier to using reinforcement learning (RL) in many real-world applications is the requirement of a large number of system interactions to learn a good control policy. Off-policy and offline RL methods have been proposed to reduce the number of interactions with the physical environment by learning control policies from historical data. However, their performance suffers from the lack of exploration and the distributional shifts in trajectories once controllers are updated. Moreover, most RL methods require that all states are directly observed, which is difficult to attain in many settings. To overcome these challenges, we propose a trajectory generation algorithm, which adaptively generates new trajectories as if the system were being operated and explored under the updated control policies. Motivated by the fundamental lemma for linear systems, assuming sufficient excitation, we generate trajectories from linear combinations of historical trajectories. For linear feedback control, we prove that the algorithm generates trajectories with the exact distribution as if they were sampled from the real system using the updated control policy. In particular, the algorithm extends to systems where the states are not directly observed. Experiments show that the proposed method significantly reduces the number of sampled data needed for RL algorithms.
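The following numpy sketch illustrates the underlying principle for linear systems. It is a generic data-driven simulation construction in the spirit of the fundamental lemma, not the paper's exact algorithm: historical inputs and states are stacked into block-Hankel matrices, and a trajectory for an input sequence chosen by the updated policy is obtained as a linear combination of their columns (valid when the historical input is persistently exciting of sufficient order).

```python
import numpy as np

def block_hankel(signal, depth):
    """signal: (T, d). Returns a (depth*d, T-depth+1) block-Hankel matrix of trajectory segments."""
    T, _ = signal.shape
    return np.column_stack([signal[i:i + depth].reshape(-1) for i in range(T - depth + 1)])

def generate_trajectory(u_hist, x_hist, u_des):
    """Generate a state trajectory consistent with the historical system under a new input sequence."""
    L = u_des.shape[0]
    Hu = block_hankel(u_hist, L)                 # historical input segments
    Hx = block_hankel(x_hist, L)                 # time-aligned historical state segments
    # Solve Hu @ alpha = vec(u_des); exact when the historical input is persistently exciting.
    alpha, *_ = np.linalg.lstsq(Hu, u_des.reshape(-1), rcond=None)
    return (Hx @ alpha).reshape(L, -1)           # generated state trajectory

# usage: historical data from a stable linear system x_{k+1} = A x_k + B u_k
rng = np.random.default_rng(0)
A, B = np.array([[0.9, 0.2], [0.0, 0.8]]), np.array([[0.0], [1.0]])
u_hist = rng.standard_normal((200, 1))
x_hist = np.zeros((200, 2))
for k in range(199):
    x_hist[k + 1] = A @ x_hist[k] + B @ u_hist[k]
x_gen = generate_trajectory(u_hist, x_hist, rng.standard_normal((10, 1)))
```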