As post hoc explanation methods are increasingly leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all population subgroups, including minority groups. For instance, explanations associated with instances belonging to one gender subgroup (e.g., female) should not be less accurate than those associated with other genders. However, there is little to no research assessing whether such group-based disparities exist in the quality of explanations output by state-of-the-art explanation methods. In this work, we address this gap by initiating the study of identifying group-based disparities in explanation quality. To this end, we first outline the key properties that constitute explanation quality and for which disparities are particularly problematic. We then leverage these properties to propose a novel evaluation framework that can quantitatively measure disparities in the quality of explanations produced by state-of-the-art methods. Using this framework, we carry out a rigorous empirical analysis to understand whether group-based disparities in explanation quality arise. Our results indicate that such disparities are more likely to occur when the models being explained are complex and highly non-linear. We also observe that certain post hoc explanation methods (e.g., Integrated Gradients, SHAP) are more likely to exhibit such disparities. To the best of our knowledge, this work is the first to highlight and study the problem of disparities in explanation quality. In doing so, it sheds light on previously unexplored ways in which explanation methods may introduce unfairness into real-world decision making.
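The abstract does not specify the evaluation framework, but the core idea of measuring group-based disparities in explanation quality can be sketched. The following is a minimal illustration, assuming a linear model, a hypothetical perturbation-based fidelity metric, and a binary group attribute; it is not the authors' actual framework.

```python
import numpy as np

def fidelity(model_w, X, top_k=2):
    # Explanation = feature attribution |w_j * x_j|; fidelity proxy = how much
    # the prediction changes when the top-k attributed features are zeroed out.
    scores = X @ model_w
    attr = np.abs(X * model_w)                   # per-instance attributions
    top = np.argsort(-attr, axis=1)[:, :top_k]   # top-k features per instance
    X_masked = X.copy()
    for i, idx in enumerate(top):
        X_masked[i, idx] = 0.0
    drop = np.abs(scores - X_masked @ model_w)
    return drop / (np.abs(scores) + 1e-8)        # normalized prediction drop

def quality_disparity(model_w, X, group):
    # Disparity = gap in mean explanation fidelity between the two subgroups.
    f = fidelity(model_w, X)
    return abs(f[group == 0].mean() - f[group == 1].mean())
```

A larger disparity value would indicate that explanations are systematically less faithful for one subgroup than the other.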
Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), allow us to model the human brain as a brain network, or connectome. Capturing the structural information and hierarchical patterns of brain networks is essential for understanding brain function and disease states. Recently, the promising network representation power of graph neural networks (GNNs) has motivated many GNN-based methods for brain network analysis. Specifically, these methods apply feature aggregation and global pooling to convert brain network instances into meaningful low-dimensional representations for downstream brain network analysis tasks. However, existing GNN-based methods often neglect that brain networks of different subjects may require different numbers of aggregation iterations, and instead learn all brain networks with a GNN of a fixed number of layers. How to fully release the potential of GNNs for brain network analysis therefore remains non-trivial. To address this issue, we propose a novel brain network representation framework, BN-GNN, which searches for the optimal GNN architecture for each brain network. Concretely, BN-GNN employs deep reinforcement learning (DRL) to train a meta-policy that automatically determines the optimal number of feature aggregations (reflected in the number of GNN layers) required for a given brain network. Extensive experiments on eight real-world brain network datasets demonstrate that our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
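The idea of per-graph aggregation depth can be sketched minimally: `aggregate` runs a chosen number of mean-aggregation rounds followed by global pooling, and `choose_depth` greedily picks a depth from a hypothetical learned value function standing in for the DRL meta-policy. This is an illustration of the mechanism, not BN-GNN itself.

```python
import numpy as np

def aggregate(A, H, n_layers):
    # n_layers rounds of mean neighborhood aggregation (a simple GNN layer),
    # followed by global mean pooling into a graph-level embedding.
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    for _ in range(n_layers):
        H = (A @ H) / deg
    return H.mean(axis=0)

def choose_depth(value_fn, state, depths=(1, 2, 3, 4)):
    # Greedy action from a (hypothetical) learned per-depth value function:
    # pick the aggregation depth with the highest estimated value.
    return depths[int(np.argmax(value_fn(state)))]
```

In the full method, the value function would be trained with DRL so that easy brain networks receive few aggregation rounds and harder ones receive more.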
We study a fair machine learning (ML) setting where an "upstream" model developer is tasked with producing a fair ML model that will be used by several similar but distinct "downstream" users. This setting introduces new challenges that are unaddressed by many existing fairness interventions, echoing existing criticisms that current methods are not broadly applicable across the diverse needs of real-world fair ML use cases. To this end, we address the upstream/downstream setting by adopting a distributional view of fair classification. Specifically, we introduce a new fairness definition, distributional parity, which measures disparities in the distribution of outcomes across protected groups, and propose a post-processing method that minimizes this measure using optimal transport techniques. We demonstrate that our approach is able to create fairer outcomes for all downstream users, across a variety of fairness definitions, and works on unlabeled data at inference time. We validate this claim through experimental comparisons with several similar methods on four benchmark tasks. Ultimately, we argue that fairer classification outcomes can be produced through the development of interventions specific to this setting.
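In one dimension, optimal transport toward a common target distribution reduces to quantile matching. The sketch below (a hypothetical `quantile_repair` helper, not the paper's method) illustrates how each protected group's score distribution could be moved toward the pooled distribution as a post-processing step, with a strength parameter for partial repair.

```python
import numpy as np

def quantile_repair(scores, group, t=1.0):
    # Move each group's score distribution toward the pooled quantile function.
    # t=1.0 gives full repair (near-equal distributions); t=0.0 is a no-op.
    out = scores.astype(float).copy()
    pooled = np.sort(scores)
    for g in np.unique(group):
        m = group == g
        ranks = np.argsort(np.argsort(scores[m]))   # within-group ranks
        q = (ranks + 0.5) / m.sum()                 # rank -> quantile level
        target = np.quantile(pooled, q)             # pooled quantile values
        out[m] = (1 - t) * scores[m] + t * target   # geodesic interpolation
    return out
```

Because the transport acts only on the score distributions, no labels are required at inference time, matching the unlabeled-data property claimed in the abstract.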
Typically, fair machine learning research focuses on a single decision-maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision-makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does the strategic behavior of decision subjects in partial-compliance settings affect allocation outcomes? If k% of employers were to voluntarily adopt a fairness-promoting intervention, should we expect k% progress (in aggregate) toward the benefits of universal adoption, or will the dynamics of partial compliance wash out the hoped-for benefits? How might adopting a global (versus local) perspective affect the conclusions of an auditor? In this paper, we propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of interaction effects and incentive effects on outcomes and auditing metrics. Our key findings are that, at equilibrium: (1) partial compliance (k% of employers) can result in far less than proportional (k%) progress toward the full-compliance outcome; (2) the gap is more severe when fair employers match global (versus local) statistics; (3) the choice of local versus global statistics can paint dramatically different pictures of the performance of fairness-conscious versus non-compliant employers; and (4) partial compliance with local parity measures can induce extreme segregation.
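A toy version of such a simulation (hypothetical parameters and hiring rules, not the paper's model) can illustrate the local-parity mechanics: a fraction `k_frac` of employers hires equal numbers from each group within its own applicant pool, while the rest hire purely by score.

```python
import numpy as np

def simulate(k_frac, n_emp=100, pool=50, seed=0):
    # Each employer sees `pool` applicants and hires 10 of them. Group 1's
    # observed scores are shifted upward, creating a disparity to correct.
    rng = np.random.default_rng(seed)
    hired = np.zeros(2)
    apps = np.zeros(2)
    for e in range(n_emp):
        grp = rng.integers(0, 2, pool)           # applicant group labels
        score = rng.normal(grp * 0.5, 1.0)       # group 1 scores higher
        if e < int(k_frac * n_emp):
            # Parity-compliant employer: best 5 from each group (local parity).
            pick = []
            for g in (0, 1):
                idx = np.where(grp == g)[0]
                pick.extend(idx[np.argsort(-score[idx])[:5]])
            pick = np.array(pick)
        else:
            pick = np.argsort(-score)[:10]       # non-compliant: top 10 overall
        for g in (0, 1):
            hired[g] += (grp[pick] == g).sum()
            apps[g] += (grp == g).sum()
    return hired[0] / apps[0], hired[1] / apps[1]  # per-group hiring rates
```

Sweeping `k_frac` from 0 to 1 in such a model lets one ask whether the hiring-rate gap shrinks proportionally with compliance, which is the paper's central question.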
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy. Then, we systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images), network architectures and training schemes. Through this study, we obtain two insights: 1) We find that the input representation plays a crucial role in robustness: under specific corruptions, different representations perform quite differently. 2) Although state-of-the-art methods on LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications. We expect that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
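Two of the simpler corruption families named above, measurement noise and density loss from cross-device discrepancy, can be sketched as point-cloud transforms. These are illustrative stand-ins, not the SemanticKITTI-C corruption definitions.

```python
import numpy as np

def corrupt_jitter(points, sigma=0.02, seed=0):
    # Gaussian measurement noise on xyz coordinates, shape (N, 3) -> (N, 3).
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, points.shape)

def corrupt_drop(points, keep_ratio=0.7, seed=0):
    # Randomly drop points, mimicking beam loss or a sparser sensor.
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = rng.choice(len(points), n_keep, replace=False)
    return points[idx]
```

A benchmark of this kind evaluates a frozen segmentation model on such corrupted clouds and compares per-corruption mIoU against the clean baseline.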
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to generate a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to balance the trade-off between content details and style features: stylizing an image with sufficient style patterns may damage the content details, sometimes to the point where objects in the image can no longer be clearly distinguished. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
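An edge loss of the kind described can be sketched as the L1 distance between edge maps of the content and stylized images; the Sobel-based implementation below is an illustrative assumption, not STT's actual loss.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def sobel_mag(img):
    # Gradient magnitude of a grayscale image (valid convolution, no padding).
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * SOBEL_X).sum()
            gy[i, j] = (patch * SOBEL_X.T).sum()   # transpose = vertical kernel
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_loss(content, stylized):
    # L1 distance between edge maps: penalizes blurring of content structure.
    return np.abs(sobel_mag(content) - sobel_mag(stylized)).mean()
```

During training such a term would be added to the usual content and style losses, pushing the generator to preserve object boundaries even under heavy stylization.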
Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew in Benin and characterized the expansion of cashew plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density versus low-density cashew plantations by automatic feature extraction and optimized clustering. Results show that the STCA model has an overall accuracy of 80% and the CASTC model achieved an overall accuracy of 77.9%. We found that the cashew area in Benin has doubled from 2015 to 2021 with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas has increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in the heterogeneous smallholder landscape.
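The attention-over-time idea in STCA, weighting discriminative time steps within a growing season, can be sketched as a single softmax attention pool over per-time-step features. The scoring vector here is a hypothetical stand-in for the learned attention parameters.

```python
import numpy as np

def temporal_attention(seq_feats, query_w):
    # seq_feats: (T, d) features, one row per time step in the season.
    # query_w: (d,) scoring vector (learned in the real model).
    scores = seq_feats @ query_w
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time
    return weights @ seq_feats, weights            # attention-pooled feature
```

Time steps whose features align with the query receive the most weight, which is how such a model can "fully capture texture information from discriminative time steps."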
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its relationship to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in the future.
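The ICL setup itself, predictions conditioned only on a context of a few labeled demonstrations, can be sketched as simple prompt construction. The task, labels, and format below are hypothetical examples, not prescriptions from the survey.

```python
def build_icl_prompt(demos, query, instruction="Classify the sentiment."):
    # Concatenate a task instruction, k labeled demonstrations, and the query;
    # the LLM is expected to complete the final "Label:" from context alone,
    # with no parameter updates.
    lines = [instruction]
    for text, label in demos:
        lines.append(f"Text: {text}\nLabel: {label}")
    lines.append(f"Text: {query}\nLabel:")
    return "\n\n".join(lines)
```

The prompting strategies the survey discusses (demonstration selection, ordering, formatting) all amount to different ways of constructing this context.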
Gaze estimation is fundamental to many visual tasks. Yet the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel head-eye redirection parametric model based on Neural Radiance Fields, which allows dense gaze data generation with view consistency and accurate gaze directions. Moreover, our head-eye redirection parametric model decouples the face and eyes for separate neural rendering, so it can separately control face attributes such as identity, illumination, and eye gaze direction. Diverse 3D-aware gaze datasets can thus be obtained by manipulating the latent codes belonging to different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method in domain generalization and domain adaptation for gaze estimation tasks.
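The latent-code manipulation described can be sketched abstractly: with a disentangled latent, redirecting gaze amounts to replacing only the gaze sub-vector while leaving identity and illumination dimensions untouched. The dimension layout below is entirely hypothetical.

```python
import numpy as np

def redirect_gaze(latent, gaze_code, gaze_slice=slice(96, 128)):
    # Replace only the gaze sub-vector of a (hypothetically) disentangled
    # 128-d latent code; the remaining dimensions (identity, illumination,
    # etc.) are left unchanged, so the rendered face identity is preserved.
    out = latent.copy()
    out[gaze_slice] = gaze_code
    return out
```

Sampling many `gaze_code` values for a fixed identity latent is how such a model could densely generate gaze-labeled training data.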
Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress in generalization through synthetic-forgery data augmentation. In this work, we explore another path for improving generalization. Our goal is to reduce the features that are easy to learn in the training phase, so as to reduce the risk of overfitting to specific forgery types. Specifically, in our method, a teacher network takes face images as input and generates an attention map over the deep features via a diverse multi-head attention ViT. The attention map is used to guide a student network to focus on low-attended features by suppressing the highly-attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method achieves promising performance on unseen forgeries and highly compressed data.
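The two feature-level operations described, suppressing highly-attended features and mixing real and fake features, can be sketched as follows; the shapes and ratios are illustrative, not the paper's settings.

```python
import numpy as np

def suppress_top_features(feats, attn, drop_ratio=0.3):
    # Zero out the most-attended feature channels so the student network is
    # forced to learn from the low-attended (harder, less shortcut-prone) ones.
    k = int(len(feats) * drop_ratio)
    top = np.argsort(-attn)[:k]       # indices of the highest-attention channels
    out = feats.copy()
    out[top] = 0.0
    return out

def feature_mixup(f_real, f_fake, lam=0.5):
    # Synthesize a forgery in feature space as a convex combination of a
    # real and a fake deep feature vector.
    return lam * f_real + (1 - lam) * f_fake
```

Together these discourage the detector from latching onto a few easy, forgery-specific cues, which is the stated route to better generalization.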