E-commerce queries are often short and ambiguous, so query understanding typically relies on query rewriting to disambiguate user input queries. When using e-commerce search tools, users tend to enter multiple searches before making a purchase, which we call the context. These history searches contain contextual insights about users' true shopping intents, so modeling such contextual information is crucial for a better query rewriting model. However, existing query rewriting models ignore users' history behaviors and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap by taking the search context into account. Specifically, our model builds a session graph from the history search queries and the words they contain. We then employ a graph attention mechanism that models cross-query relations and computes contextual information of the session. The model subsequently computes a session representation by combining the contextual information with the instant search query through an aggregation network, and the session representation is decoded to generate the rewritten query. Empirically, we demonstrate the superiority of our method over state-of-the-art approaches under various metrics. On in-house data from an online shopping platform, by incorporating contextual information, our model achieves an 11.6% improvement under the MRR (Mean Reciprocal Rank) metric and a 20.1% improvement under the HIT@16 metric (a hit rate metric), compared with the best baseline method (a Transformer-based model).
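As a rough illustration of the kind of architecture the abstract describes (a session graph over history queries, graph attention for cross-relations, and an aggregation with the instant query), the following PyTorch sketch runs token-level graph attention over history-query tokens and fuses the result with the instant query embedding. All module names, dimensions, and the simplified adjacency construction are our own assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): graph attention over session tokens,
# then fusion with the instant query, standing in for the described model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SessionGraphAttention(nn.Module):
    """Single-head graph attention over token nodes of a session graph."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, node_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_emb: (N, d) token-node embeddings, adj: (N, N) 0/1 adjacency.
        h = self.proj(node_emb)                                   # (N, d)
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.attn(pair).squeeze(-1)                      # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)
        return F.elu(alpha @ h)                                   # contextual node states

class ContextAwareRewriter(nn.Module):
    """Fuse session context with the instant query before a decoder (omitted here)."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.gat = SessionGraphAttention(dim)
        self.aggregate = nn.Linear(2 * dim, dim)

    def forward(self, history_tokens, adj, instant_tokens):
        ctx = self.gat(self.embed(history_tokens), adj).mean(dim=0)   # session context
        q = self.embed(instant_tokens).mean(dim=0)                    # instant query
        return self.aggregate(torch.cat([ctx, q], dim=-1))            # session representation

# Toy usage: 5 history token nodes, a chain-shaped session graph, 3-token instant query.
model = ContextAwareRewriter(vocab_size=1000)
adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
session_repr = model(torch.randint(0, 1000, (5,)), adj, torch.randint(0, 1000, (3,)))
print(session_repr.shape)  # torch.Size([128])
```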
Recommender systems usually learn user interests from various user behaviors, including clicks and post-click behaviors (e.g., like and favorite). However, these behaviors inevitably exhibit popularity bias, leading to some unfairness issues: 1) for items of similar quality, more popular ones receive more exposure; 2) even worse, popular items of lower quality may receive more exposure than less popular items of higher quality. Existing work on mitigating popularity bias removes the bias blindly and usually ignores the effect of item quality. We argue that the relations among different user behaviors (e.g., the conversion rate) actually reflect item quality. Therefore, to handle the unfairness issues, we propose to mitigate popularity bias by considering multiple user behaviors. In this work, we study the causal relations behind the interaction generation procedure in multi-behavior recommendation. Specifically, we find that: 1) item popularity is a confounder between the exposed items and users' click interactions, which causes the first unfairness; 2) some hidden confounders (e.g., the reputation of item producers) affect both item popularity and quality, which causes the second unfairness. To alleviate these confounding issues, we propose a causal framework to estimate the causal effect, which leverages backdoor adjustment to block the backdoor paths caused by the confounders. At the inference stage, we remove the negative effect of popularity and leverage the beneficial effect of quality for recommendation. Experiments on two real-world datasets validate the effectiveness of the proposed framework, which enhances fairness without sacrificing recommendation accuracy.
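To make the inference-stage idea concrete, here is a deliberately simplified sketch of removing an estimated popularity effect from ranking scores while keeping a quality signal derived from a cross-behavior ratio (e.g., post-click conversion rate). The additive score decomposition and the weights below are illustrative assumptions, not the paper's actual causal estimator.

```python
# Illustrative sketch only: discount a popularity effect and reward a quality
# proxy derived from the relation between two behaviors (conversion rate).
import numpy as np

def adjusted_scores(click_scores, item_popularity, clicks, conversions,
                    pop_weight=0.5, quality_weight=0.5, eps=1e-6):
    """click_scores: model scores; popularity/clicks/conversions: per-item counts."""
    # Quality proxy from the relation between two behaviors.
    quality = conversions / (clicks + eps)
    # Normalize popularity so the discount is scale-free.
    pop = np.log1p(item_popularity)
    pop = (pop - pop.mean()) / (pop.std() + eps)
    # Remove the (assumed) additive popularity effect, keep the quality effect.
    return click_scores - pop_weight * pop + quality_weight * quality

# Toy usage: three candidate items.
scores = adjusted_scores(
    click_scores=np.array([0.9, 0.8, 0.7]),
    item_popularity=np.array([10_000, 500, 50]),
    clicks=np.array([10_000, 500, 50]),
    conversions=np.array([200, 60, 20]),
)
print(np.argsort(-scores))  # ranking after the adjustment
```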
Videos are created to express emotion, exchange information, and share experiences. Video synthesis has intrigued researchers for a long time. Despite the rapid progress driven by advances in visual synthesis, most existing studies focus on improving frame quality and the transitions between frames, while little progress has been made in generating longer videos. In this paper, we present a method based on 3D-VQGAN and Transformers to generate videos with thousands of frames. Our evaluation shows that our model, trained on 16-frame video clips from standard benchmarks such as the UCF-101, Sky Time-lapse, and Taichi-HD datasets, can generate diverse, coherent, and high-quality long videos. We also showcase conditional extensions of our approach for generating meaningful long videos by incorporating temporal information with text and audio. Videos and code can be found at https://songweige.github.io/projects/tats/index.html.
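The long-video recipe outlined above (a 3D-VQGAN tokenizer plus an autoregressive Transformer, trained on short clips but rolled out far beyond the training length) can be pictured as a sliding-window sampling loop like the one below. The window size, the `transformer` interface, and the dummy model are placeholders assumed for illustration; they do not mirror the released TATS code.

```python
# Sketch of sliding-window autoregressive rollout over discrete video tokens.
# `transformer(context) -> logits over the next token` is an assumed interface.
import torch

@torch.no_grad()
def rollout_tokens(transformer, prime: torch.Tensor, total_len: int,
                   window: int = 1024, temperature: float = 1.0) -> torch.Tensor:
    """Generate a long token sequence from a short primer, reusing a fixed context window."""
    seq = prime.clone()                          # (t0,) discrete codebook indices
    while seq.numel() < total_len:
        context = seq[-window:]                  # keep only the most recent window
        logits = transformer(context.unsqueeze(0))[0, -1]   # next-token logits
        probs = torch.softmax(logits / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1)
        seq = torch.cat([seq, nxt])
    return seq                                   # later decoded to frames by the 3D-VQGAN decoder

# Toy usage with a dummy "transformer" returning random logits over a 512-entry codebook.
dummy = lambda x: torch.randn(1, x.size(1), 512)
tokens = rollout_tokens(dummy, prime=torch.randint(0, 512, (16,)), total_len=64)
print(tokens.shape)  # torch.Size([64])
```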
Large deformations of organs, caused by diverse shapes and nonlinear shape changes, pose a significant challenge for medical image registration. Traditional registration methods need to iteratively optimize an objective function via a specific deformation model together with meticulous parameter tuning, but they have limited capability for images with large deformations. Although deep-learning-based methods can learn the complex mapping from input images to their respective deformation fields, they are regression-based and prone to getting stuck in local minima, particularly when large deformations are involved. To this end, we present Stochastic Planner-Actor-Critic (SPAC), a novel reinforcement learning framework that performs step-wise registration. The key notion is warping the moving image successively at each time step so that it finally aligns with the fixed image. Considering that handling high-dimensional continuous action and state spaces is challenging in the traditional reinforcement learning (RL) framework, we introduce a new concept, the 'plan', into the standard actor-critic model; the plan is low-dimensional and facilitates the actor in generating tractable high-dimensional actions. The entire framework is trained in an unsupervised manner and runs end-to-end. We evaluate our method on several 2D and 3D medical image datasets, some of which contain large deformations. Our empirical results highlight that our work achieves consistent, significant gains and outperforms state-of-the-art methods.
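The step-wise registration loop described above can be pictured as repeatedly predicting a small displacement and warping the moving image until it aligns with the fixed image. The sketch below assumes a `policy` that maps (current warped image, fixed image) to a dense displacement field; the composition-by-addition of displacements and the grid_sample-based warping are our simplifications, not the SPAC implementation.

```python
# Simplified sketch of step-wise 2D registration by iterative warping.
import torch
import torch.nn.functional as F

def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp image (B,1,H,W) by a displacement field flow (B,2,H,W) given in pixels."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().unsqueeze(0).expand(b, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)                 # add displacement
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1          # normalize to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(image, grid, align_corners=True)

def stepwise_register(policy, moving, fixed, steps: int = 8):
    """Accumulate small displacement fields over several steps (composition simplified to addition)."""
    total_flow = torch.zeros(moving.size(0), 2, *moving.shape[2:])
    warped = moving
    for _ in range(steps):
        delta = policy(warped, fixed)          # small per-step displacement field (B,2,H,W)
        total_flow = total_flow + delta
        warped = warp(moving, total_flow)      # always warp the original moving image
    return warped, total_flow

# Toy usage with a dummy policy that predicts tiny random displacements.
policy = lambda m, f: 0.5 * torch.randn(m.size(0), 2, *m.shape[2:])
moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
warped, flow = stepwise_register(policy, moving, fixed)
print(warped.shape, flow.shape)
```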
Training a model-free deep reinforcement learning model to solve image-to-image translation is difficult because it involves high-dimensional continuous state and action spaces. In this paper, we draw inspiration from the recent success of the maximum entropy reinforcement learning framework designed for challenging continuous control problems to develop stochastic policies over high-dimensional continuous spaces, covering image representation, generation, and control. Central to this approach is the Stochastic Actor-Executor-Critic (SAEC), an off-policy actor-critic model with an additional executor to generate realistic images. Specifically, the actor focuses on the high-level representation and control policy via stochastic latent actions, and explicitly directs the executor to generate low-level actions that manipulate the state. Experiments on several image-to-image translation tasks have demonstrated the effectiveness and robustness of the proposed SAEC when facing high-dimensional continuous space problems.
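A bare-bones rendering of the actor/executor split described above: the actor outputs a stochastic low-dimensional latent action, and the executor decodes it (together with the current state feature) into an image-shaped low-level action. The network sizes, the Gaussian parameterization, and the omission of the critic are all our assumptions for brevity, not the SAEC architecture itself.

```python
# Minimal sketch of an actor emitting stochastic latent actions and an
# executor turning them into image-shaped outputs (critic omitted).
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim=256, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.log_std = nn.Linear(256, latent_dim)

    def forward(self, state_feat):
        h = self.net(state_feat)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        return mu + torch.randn_like(mu) * log_std.exp()   # reparameterized latent action

class Executor(nn.Module):
    """Decode (state feature, latent action) into a 32x32 image-shaped action."""
    def __init__(self, state_dim=256, latent_dim=32):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 32 * 32), nn.Tanh())

    def forward(self, state_feat, z):
        img = self.decode(torch.cat([state_feat, z], dim=-1))
        return img.view(-1, 1, 32, 32)

# Toy usage.
actor, executor = Actor(), Executor()
state = torch.randn(4, 256)              # assume an encoder produced these features
image_action = executor(state, actor(state))
print(image_action.shape)                # torch.Size([4, 1, 32, 32])
```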
Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar, which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.
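The anchoring idea is easiest to see in how the input sequence is assembled: caption word tokens, then detected object tags, then region features, all fed to one transformer with self-attention. Below is a schematic sketch of that triple construction; the vocabulary size, feature dimensions, and segment ids are assumptions for illustration, not the exact Oscar pre-processing.

```python
# Schematic sketch: concatenate caption tokens, object-tag tokens, and region
# features into one sequence for a single transformer encoder.
import torch
import torch.nn as nn

class OscarStyleInput(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, region_dim=2054):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)
        self.region_proj = nn.Linear(region_dim, hidden)   # project detector features
        self.segment_emb = nn.Embedding(3, hidden)         # 0: words, 1: tags, 2: regions

    def forward(self, caption_ids, tag_ids, region_feats):
        words = self.token_emb(caption_ids) + self.segment_emb.weight[0]
        tags = self.token_emb(tag_ids) + self.segment_emb.weight[1]       # the anchor points
        regions = self.region_proj(region_feats) + self.segment_emb.weight[2]
        return torch.cat([words, tags, regions], dim=0)    # one sequence for self-attention

# Toy usage: a 7-token caption, 3 detected object tags, 3 region features.
builder = OscarStyleInput()
seq = builder(torch.randint(0, 30522, (7,)),
              torch.randint(0, 30522, (3,)),
              torch.randn(3, 2054))
print(seq.shape)  # torch.Size([13, 768])
```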
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For a SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves recent advancements of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
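As a rough numerical illustration of a reweighted stochastic query: to estimate the gradient of a Gaussian-convolved function at a point $x$ near a reference point $\bar{x}$, one can draw samples from the Gaussian centered at the reference and reweight each stochastic gradient by the ratio of Gaussian densities. The sketch below follows that generic importance-weighting pattern; the precise estimator, variance bounds, and constants in the paper differ and are not reproduced here.

```python
# Importance-weighted sketch of estimating the gradient of f convolved with a
# Gaussian N(0, rho^2 I) at x, using samples drawn around a reference point xbar.
import numpy as np

def reweighted_grad_estimate(grad_f, x, xbar, rho, n_samples=1000, rng=None):
    """grad_f: (stochastic) gradient oracle for f; returns an estimate of grad (f * N(0, rho^2 I))(x)."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    est = np.zeros(d)
    for _ in range(n_samples):
        z = xbar + rho * rng.standard_normal(d)          # sample around the reference point
        # Ratio of Gaussian densities N(x, rho^2 I)(z) / N(xbar, rho^2 I)(z).
        log_w = (np.sum((z - xbar) ** 2) - np.sum((z - x) ** 2)) / (2 * rho ** 2)
        est += np.exp(log_w) * grad_f(z)
    return est / n_samples

# Toy check on f(z) = 0.5 * ||z||^2, whose Gaussian smoothing has gradient exactly equal to the input point.
grad_f = lambda z: z
x, xbar = np.array([0.3, -0.2]), np.array([0.25, -0.15])
print(reweighted_grad_estimate(grad_f, x, xbar, rho=0.5))   # approximately [0.3, -0.2]
```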
New architecture GPUs like A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
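One common way to run a workload on a single MIG slice, which a benchmark study like this relies on, is to point CUDA_VISIBLE_DEVICES at a specific MIG device UUID and time the workload as usual. The sketch below does this for a toy PyTorch inference loop; the UUID string is a placeholder and this is a generic timing loop, not MIGPerf's benchmarking code.

```python
# Generic sketch: pin a toy inference benchmark to one MIG instance via its UUID.
# Set the environment variable before importing torch so the process sees only that slice.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder UUID

import time
import torch
import torchvision.models as models

def benchmark_inference(batch_size=32, iters=50, warmup=10):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = models.resnet50(weights=None).eval().to(device)
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(warmup):                      # warm-up iterations are not timed
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    elapsed = time.time() - start
    print(f"throughput: {batch_size * iters / elapsed:.1f} images/s on {device}")

if __name__ == "__main__":
    benchmark_inference()
```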
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
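To convey the shape of the fine-tuning step without tying it to the paper's exact algorithm, the sketch below quickly adapts a perturbation generator to a new benign example using query feedback treated as a differentiable scalar loss (e.g., coming from a surrogate model). The generator architecture, loss form, and the stand-in surrogate are simplified assumptions.

```python
# Simplified sketch: quickly fine-tune a perturbation generator on a new benign
# example using scalar feedback from a query/surrogate loss function.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Map a benign image to an L_inf-bounded perturbation."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.eps * self.net(x)     # bounded perturbation

def finetune_on_new_example(generator, x, query_loss_fn, steps=5, lr=1e-3):
    """query_loss_fn(adv) returns a differentiable surrogate of the query feedback."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        adv = (x + generator(x)).clamp(0, 1)
        loss = query_loss_fn(adv)         # lower loss = closer to a successful attack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + generator(x)).clamp(0, 1).detach()

# Toy usage with a stand-in feedback function (a frozen random "surrogate" classifier).
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
feedback = lambda adv: surrogate(adv)[:, 0].mean()   # pretend class-0 score should be suppressed
adv = finetune_on_new_example(PerturbationGenerator(), x, feedback)
print((adv - x).abs().max().item())   # stays within the eps budget
```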
Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew in Benin and characterized the expansion of cashew plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density versus low-density cashew plantations by automatic feature extraction and optimized clustering. Results show that the STCA model has an overall accuracy of 80% and the CASTC model achieved an overall accuracy of 77.9%. We found that the cashew area in Benin has doubled from 2015 to 2021 with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas has increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in the heterogeneous smallholder landscape.
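The "attention over discriminative time steps" idea behind STCA can be sketched as attention pooling over per-time-step features of a satellite image time series before classification. The tiny backbone, feature sizes, and the two output classes below are assumptions for illustration rather than the published model.

```python
# Sketch of attention pooling over per-time-step features of an image time series.
import torch
import torch.nn as nn

class TemporalAttentionClassifier(nn.Module):
    def __init__(self, in_channels=4, feat_dim=64, n_classes=2):
        super().__init__()
        # Per-time-step feature extractor (a tiny CNN stands in for the real backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.attn = nn.Linear(feat_dim, 1)        # scores which time steps matter
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        # x: (B, T, C, H, W) image time series over a growing season.
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1)).view(b, t, -1)     # (B, T, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=1)          # (B, T, 1)
        pooled = (weights * feats).sum(dim=1)                     # attend to discriminative steps
        return self.classifier(pooled)

# Toy usage: 2 patches, 12 monthly composites, 4 spectral bands, 8x8 pixels each.
model = TemporalAttentionClassifier()
logits = model(torch.randn(2, 12, 4, 8, 8))
print(logits.shape)  # torch.Size([2, 2])
```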