As one of the most popular machine learning models today, graph neural networks (GNNs) have recently attracted intense interest, and so has their explainability. Users are increasingly interested in better understanding GNN models and their outcomes. Unfortunately, today's evaluation frameworks for GNN explainability often rely on synthetic datasets, leading to conclusions of limited scope due to a lack of complexity in the problem instances. As GNN models are deployed into more mission-critical applications, we are in dire need of a common evaluation protocol for GNN explainability methods. In this paper, we propose, to the best of our knowledge, the first systematic evaluation framework for GNN explainability, considering explainability on three different "user needs": explanation focus, mask nature, and mask transformation. We propose a unique metric that combines the fidelity measures and classifies explanations based on their quality of being sufficient or necessary. We scope ourselves to node classification tasks and compare the most representative techniques in the field of input-level explainability for GNNs. For the widely used synthetic benchmarks, surprisingly, shallow techniques such as Personalized PageRank have the best performance for a minimal computation time. But when the graph structure is more complex and nodes have meaningful features, gradient-based methods, in particular Saliency, perform best according to our evaluation criteria. However, none dominates the others on all evaluation dimensions, and there is always a trade-off. We further apply our evaluation protocol in a case study on eBay graphs to reflect a production environment.
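For readers unfamiliar with fidelity-style scores, the sketch below illustrates one common way to quantify whether an explanation mask is sufficient or necessary for a node-level prediction: compare the model's confidence on the full graph with its confidence when only the explanation edges are kept (sufficiency) and when they are removed (necessity). The function names, the hard edge masking, and the probability-difference form are illustrative assumptions, not necessarily the exact metric proposed in the paper.

```python
# Minimal sketch: `model` is assumed to be any node classifier that takes
# (x, edge_index) and returns per-node logits of shape [num_nodes, num_classes].
import torch

def class_prob(model, x, edge_index, node, cls=None):
    """Probability that `model` assigns to class `cls` (default: its argmax) for `node`."""
    with torch.no_grad():
        probs = model(x, edge_index).softmax(dim=-1)[node]
    if cls is None:
        cls = int(probs.argmax())
    return float(probs[cls]), cls

def fidelity_scores(model, x, edge_index, node, edge_mask):
    """`edge_mask`: boolean tensor over edges; True marks edges kept by the explanation."""
    p_full, cls = class_prob(model, x, edge_index, node)
    # Sufficiency: keep only the explanation edges and re-predict.
    p_keep, _ = class_prob(model, x, edge_index[:, edge_mask], node, cls)
    # Necessity: remove the explanation edges and re-predict.
    p_drop, _ = class_prob(model, x, edge_index[:, ~edge_mask], node, cls)
    fid_minus = p_full - p_keep   # near 0 -> the explanation alone is sufficient
    fid_plus = p_full - p_drop    # large  -> the explanation is necessary
    return fid_plus, fid_minus
```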
Detecting fraudulent transactions is an essential component of controlling risk in e-commerce marketplaces. In addition to the rule-based and machine learning filters already deployed in production, we want to use graph neural networks (GNNs) for efficient real-time inference, which are very useful for capturing multi-hop risk propagation in a transaction graph. However, two challenges arise when implementing GNNs in production. First, future information in a dynamic graph should not be considered in message passing when predicting the past. Second, the latency of graph queries and GNN model inference is usually up to hundreds of milliseconds, which is costly for some critical online services. To tackle these challenges, we propose a Batch and Real-time Inception GrapH Topology (BRIGHT) framework for end-to-end GNN learning that allows efficient online real-time inference. The BRIGHT framework consists of a graph transformation module (Two-Stage Directed Graph) and a corresponding GNN architecture (Lambda Neural Network). The Two-Stage Directed Graph guarantees that the information passed through neighbors comes only from historical payment transactions; it consists of two subgraphs representing the historical relationships and the real-time links, respectively. The Lambda Neural Network decouples inference into two stages: batch inference of entity embeddings and real-time inference for transaction prediction. Our experiments show that BRIGHT outperforms the baseline models by >2% on average w.r.t. precision. Furthermore, BRIGHT is computationally efficient for real-time fraud detection. Regarding end-to-end performance (including neighbor query and inference), BRIGHT can reduce the P99 latency by >75%. For the inference stage, our speedup is on average 7.8x compared with traditional GNNs.
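To make the batch/real-time split concrete, here is a minimal sketch of how a Lambda-style serving path could look: a heavy GNN refreshes entity embeddings offline, while the online path only performs embedding lookups plus a small classifier, so no graph query or message passing happens at request time. All class and function names (EntityEmbeddingStore, RealTimeHead, score_transaction) are hypothetical stand-ins, not the BRIGHT implementation.

```python
# Illustrative sketch of a batch/real-time inference split (not the BRIGHT code).
import torch
import torch.nn as nn

class EntityEmbeddingStore:
    """Offline phase: cache entity embeddings produced by a GNN over historical edges."""
    def __init__(self):
        self.cache = {}

    def batch_refresh(self, entity_ids, gnn, graph):
        # The heavy multi-hop GNN runs here, e.g. on a nightly or hourly schedule.
        embeddings = gnn(graph)                      # assumed shape: [num_entities, dim]
        for eid in entity_ids:
            self.cache[eid] = embeddings[eid].detach()

    def lookup(self, eid, dim=64):
        # Unseen entities fall back to a zero vector in this toy example.
        return self.cache.get(eid, torch.zeros(dim))

class RealTimeHead(nn.Module):
    """Online phase: a lightweight head over cached neighbor embeddings + raw features."""
    def __init__(self, dim=64, n_feats=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim + n_feats, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, buyer_emb, seller_emb, txn_feats):
        return self.mlp(torch.cat([buyer_emb, seller_emb, txn_feats], dim=-1))

def score_transaction(store, head, buyer_id, seller_id, txn_feats):
    # No graph query or message passing at serving time: O(1) lookups + one MLP pass.
    logits = head(store.lookup(buyer_id), store.lookup(seller_id), txn_feats)
    return logits.softmax(dim=-1)[1].item()          # fraud probability
```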
Unmanned aerial vehicle (UAV) tracking is of great significance for a wide range of applications, such as delivery and agriculture. Previous benchmarks in this field mainly focused on small-scale tracking problems while ignoring the type of data modality, the diversity of target categories and scenarios, and the number of evaluation protocols involved, greatly hiding the massive power of deep UAV tracking. In this work, we propose WebUAV-3M, the largest public UAV tracking benchmark to date, to facilitate the development and evaluation of deep UAV trackers. WebUAV-3M contains over 3.3 million frames across 4,500 videos and offers 223 highly diverse target categories. Each video is densely annotated via an efficient and scalable semi-automatic target annotation (SATA) pipeline. Importantly, to utilize the complementary strengths of language and audio, we enrich WebUAV-3M by providing natural language specifications and audio descriptions. We believe that such additions will greatly boost future research in exploring language features and audio cues for multi-modal UAV tracking. In addition, a scenario constraint (UTUSC) evaluation protocol and seven challenging scenario sub-test sets are constructed, to enable the community to develop, adapt, and evaluate various types of advanced trackers. We provide extensive evaluations and detailed analyses of 43 representative trackers, and envision future research directions for deep UAV tracking and beyond. The dataset, toolkits, and baseline results are available at \url{https://github.com/983632847/WebUAV-3M}.
The rapid progress of photorealistic synthesis techniques has reached a critical point where the boundary between real and manipulated images starts to blur. Recently, ForgeryNet, a mega-scale deep face forgery dataset consisting of 2.9 million images and 221,247 videos, has been released. It is by far the largest in terms of data scale, manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations), and annotations (6.3 million classification labels, 2.9 million manipulated-area annotations, and 221,247 temporal forgery segment labels). This paper reports the methods and results of the ForgeryNet - Face Forgery Analysis Challenge 2021, which employs the ForgeryNet benchmark. Model evaluation is performed offline on the private test set. A total of 186 participants joined the competition, and 11 teams made valid submissions. We analyze the top-ranked solutions and present some discussion of future work directions.
At online retail platforms, it is crucial to proactively detect the risks of transactions to improve customer experience and minimize financial loss. In this work, we present xFraud, an explainable fraud transaction prediction framework which is mainly composed of a detector and an explainer. The xFraud detector can effectively and efficiently predict the legitimacy of incoming transactions. Specifically, it leverages a heterogeneous graph neural network to learn expressive representations from the informative heterogeneously typed entities in the transaction logs. The explainer in xFraud can generate meaningful and human-understandable explanations from graphs to facilitate further processes in the business unit. In our experiments with xFraud on real transaction networks with up to 1.1 billion nodes and 3.7 billion edges, xFraud is able to outperform various baseline models in many evaluation metrics while remaining scalable in distributed settings. In addition, we show through both quantitative and qualitative evaluations that the xFraud explainer can generate reasonable explanations that significantly assist business analysis.
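As an illustration of what a heterogeneous transaction graph looks like, the toy snippet below links transaction nodes to the entity nodes they share (a card and a shipping address here); the log fields, node/edge type names, and the use of PyTorch Geometric's HeteroData as a container are assumptions for illustration, not xFraud's actual schema.

```python
# A toy heterogeneous transaction graph (field names invented); HeteroData is used
# here only as a convenient container for typed nodes and typed edges.
import torch
from torch_geometric.data import HeteroData

logs = [  # hypothetical, heavily simplified transaction log rows
    {"txn": 0, "card": "c1", "addr": "a1", "amount": 120.0},
    {"txn": 1, "card": "c1", "addr": "a2", "amount": 18.5},
    {"txn": 2, "card": "c2", "addr": "a2", "amount": 640.0},
]

def index(values):
    """Map raw entity keys (card ids, addresses, ...) to contiguous node indices."""
    return {v: i for i, v in enumerate(sorted(set(values)))}

cards = index(r["card"] for r in logs)
addrs = index(r["addr"] for r in logs)

data = HeteroData()
data["transaction"].x = torch.tensor([[r["amount"]] for r in logs])   # transaction features
data["card"].num_nodes = len(cards)
data["address"].num_nodes = len(addrs)

# One edge type per shared entity: transactions that reuse a card or address become
# multi-hop neighbors of each other, which is what the heterogeneous GNN exploits.
data["transaction", "pays_with", "card"].edge_index = torch.tensor(
    [[r["txn"] for r in logs], [cards[r["card"]] for r in logs]])
data["transaction", "ships_to", "address"].edge_index = torch.tensor(
    [[r["txn"] for r in logs], [addrs[r["addr"]] for r in logs]])
```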
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in the text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field and optimize it with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between the images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, such a paradigm is computationally expensive. Humans have the amazing ability to learn new visual concepts from just one single exemplar. We hereby study a new T2V generation problem, One-Shot Video Generation, where only a single text-video pair is presented for training an open-domain T2V generator. Intuitively, we propose to adapt the T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with the verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via an efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video is capable of producing temporally coherent videos over various applications such as change of subject or background, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
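As described, the sparse-causal pattern lets each frame's queries attend only to keys and values from the first frame and the immediately preceding frame. The module below is one plausible PyTorch reading of that idea, not the authors' released implementation; the head count, projection layout, and per-frame loop are assumptions made for clarity.

```python
# A plausible sketch of sparse-causal attention over per-frame token features
# of shape [frames, tokens, dim]; an interpretation, not the released code.
import torch
import torch.nn as nn

class SparseCausalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.heads = heads

    def forward(self, x):                       # x: [F, N, D] frame tokens
        frames, tokens, dim = x.shape
        outputs = []
        for i in range(frames):
            q = self.to_q(x[i])                 # queries come from the current frame
            # Keys/values come only from the first frame and the previous frame,
            # keeping attention sparse and "causal" while anchoring content.
            ref = torch.cat([x[0], x[max(i - 1, 0)]], dim=0)              # [2N, D]
            k, v = self.to_k(ref), self.to_v(ref)
            q = q.view(tokens, self.heads, -1).transpose(0, 1)            # [H, N, d]
            k = k.view(-1, self.heads, q.shape[-1]).transpose(0, 1)       # [H, 2N, d]
            v = v.view(-1, self.heads, q.shape[-1]).transpose(0, 1)       # [H, 2N, d]
            scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5         # [H, N, 2N]
            out = scores.softmax(dim=-1) @ v                              # [H, N, d]
            outputs.append(self.proj(out.transpose(0, 1).reshape(tokens, dim)))
        return torch.stack(outputs)             # [F, N, D]
```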
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The recurrent structure is a prevalent framework for the task of video super-resolution, which models the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to restored frames. In this circumstance, our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on the observations, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first adopt various cheap filters to produce a hidden state pool. For example, Gaussian blur filters are for smoothing artifacts while sharpening filters are for enhancing details. To aggregate a new hidden state that contains fewer artifacts from the hidden state pool, we devise a Selective Cross Attention (SCA) module, in which the attention between input features and each hidden state is calculated. Equipped with HSA, our proposed method, namely FastRealVSR, is able to achieve 2x speedup while obtaining better performance than Real-BasicVSR. Codes will be available at https://github.com/TencentARC/FastRealVSR
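To illustrate the mechanism, the sketch below builds a small hidden-state pool with cheap depthwise filters (identity, box blur, sharpen) and uses a pixel-wise cross-attention between the input features and each candidate state to aggregate a cleaner hidden state. The specific filters, attention form, and layer shapes are assumptions for illustration, not the FastRealVSR code.

```python
# Rough sketch of the hidden-state-pool idea (assumed details, not FastRealVSR itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_pool(hidden):                                    # hidden: [B, C, H, W]
    c = hidden.shape[1]
    blur_k = torch.full((c, 1, 3, 3), 1 / 9, device=hidden.device)
    sharp_k = torch.tensor([[0., -1., 0.], [-1., 5., -1.], [0., -1., 0.]],
                           device=hidden.device).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    blur = F.conv2d(hidden, blur_k, padding=1, groups=c)    # smooths artifacts
    sharp = F.conv2d(hidden, sharp_k, padding=1, groups=c)  # enhances details
    return torch.stack([hidden, blur, sharp], dim=1)      # [B, K=3, C, H, W]

class SelectiveCrossAttention(nn.Module):
    """Pixel-wise attention between input features and each candidate hidden state."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)

    def forward(self, feat, pool):                        # feat: [B,C,H,W], pool: [B,K,C,H,W]
        b, k, c, h, w = pool.shape
        q = self.q(feat).unsqueeze(1)                                   # [B,1,C,H,W]
        keys = self.k(pool.flatten(0, 1)).view(b, k, c, h, w)           # [B,K,C,H,W]
        logits = (q * keys).sum(dim=2, keepdim=True) / c ** 0.5         # [B,K,1,H,W]
        weights = logits.softmax(dim=1)                   # how much of each candidate to keep
        return (weights * pool).sum(dim=1)                # aggregated, cleaner hidden state
```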