Reconstruction-based methods with memory modules for visual anomaly detection try to narrow the reconstruction error for normal samples while enlarging it for anomalous ones. Unfortunately, existing memory modules are not fully suited to the anomaly detection task, and the reconstruction error of anomalous samples can remain small. To this end, this work proposes a novel unsupervised visual anomaly detection method that jointly learns effective normal features and eliminates unfavorable reconstruction errors. Specifically, a novel Partition Memory Bank (PMB) module is proposed to effectively learn and store detailed features of normal samples while preserving their semantic integrity. It develops a new partition mechanism and a unique query-generation method to preserve contextual information and thereby improve the learning capability of the memory module. The proposed PMB and skip connections are explored alternately to make the reconstruction of anomalous samples worse. To obtain more precise anomaly localization results and address the problem of accumulated reconstruction error, a novel histogram error estimation module is proposed to adaptively eliminate unfavorable errors through the histogram of the difference image. It improves anomaly localization performance without extra cost. To evaluate the effectiveness of the proposed method for anomaly detection and localization, extensive experiments are conducted on three widely used anomaly detection datasets. The encouraging performance of the proposed method compared with recent memory-module-based methods demonstrates its superiority.
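The histogram-based error elimination step above can be pictured with a minimal numpy sketch (the function name, thresholding rule, and all parameters below are illustrative assumptions, not the paper's actual implementation): the difference image between input and reconstruction is histogrammed, and the small errors that dominate normal regions are adaptively suppressed.

```python
import numpy as np

def histogram_error_suppression(original, reconstruction, bins=64, keep_ratio=0.05):
    """Adaptively zero out small reconstruction errors using the
    histogram of the difference image (illustrative sketch only)."""
    diff = np.abs(original - reconstruction)
    hist, edges = np.histogram(diff, bins=bins)
    # Treat the smallest errors covering most pixels as 'normal' noise:
    # find the bin edge below which roughly (1 - keep_ratio) of pixels fall.
    cdf = np.cumsum(hist) / diff.size
    cut_bin = np.searchsorted(cdf, 1.0 - keep_ratio)
    threshold = edges[min(cut_bin, bins - 1)]
    return np.where(diff >= threshold, diff, 0.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
recon = img + rng.normal(0, 0.01, (64, 64))    # small errors everywhere
recon[20:30, 20:30] -= 0.5                     # one large 'anomalous' error
anomaly_map = histogram_error_suppression(img, recon)
```

Because the threshold is derived from each difference image's own histogram, no per-dataset tuning is required, which matches the "without extra cost" claim in spirit.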
Unsupervised anomaly detection is a challenging task in industrial applications, because collecting a sufficient number of anomalous samples is impractical. In this paper, a novel Self-supervised Guided Segmentation Framework (SGSF) is proposed by jointly exploring an effective generation method for forged anomalous samples and normal-sample features as guidance information for segmentation-based anomaly detection. Specifically, to ensure that the generated forged anomalous samples benefit model training, a Saliency Augmentation Module (SAM) is proposed. SAM introduces a saliency map to generate a saliency Perlin noise map and devises an adaptive segmentation strategy to produce irregular masks in salient regions. The masks are then used to generate forged anomalous samples as negative samples for training. Unfortunately, the distribution gap between forged and real anomalous samples makes it difficult for models trained on forged samples to localize real anomalies effectively. To this end, a Self-supervised Guided Network (SGN) is proposed. It exploits a self-supervised module to extract noise-free features that contain normal semantic information as prior knowledge for the segmentation module. With this knowledge of normal patterns, the segmentation module segments the regions that differ from the guidance features. To evaluate the effectiveness of SGSF for anomaly detection, extensive experiments are conducted on three anomaly detection datasets. The experimental results show that SGSF achieves state-of-the-art anomaly detection results.
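A toy sketch of SAM's mask-generation idea (using block-upsampled random noise as a crude stand-in for Perlin noise; all names and thresholds here are assumptions): the noise map is weighted by the saliency map and binarized, so irregular mask regions appear only in salient areas, and the mask then stamps anomalous content into a normal image to forge a negative training sample.

```python
import numpy as np

def smooth_noise(shape, scale=8, rng=None):
    """Cheap stand-in for Perlin noise: block-upsampled low-res randoms.
    Assumes shape is divisible by scale."""
    rng = rng or np.random.default_rng()
    low = rng.random((shape[0] // scale, shape[1] // scale))
    return np.kron(low, np.ones((scale, scale)))  # nearest-neighbor upsample

def saliency_noise_mask(saliency, noise_thresh=0.5, rng=None):
    """Binary irregular mask restricted to salient regions (sketch)."""
    noise = smooth_noise(saliency.shape, rng=rng)
    # Weight the noise by saliency so mask regions concentrate on
    # salient pixels, then binarize into an irregular binary mask.
    return ((noise * saliency) > noise_thresh).astype(np.uint8)

rng = np.random.default_rng(1)
saliency = np.zeros((64, 64))
saliency[16:48, 16:48] = 1.0            # toy saliency map: one salient square
mask = saliency_noise_mask(saliency, rng=rng)
# Forge an anomalous sample: anomalous texture where mask == 1.
forged = np.where(mask == 1, rng.random((64, 64)), 0.5)
```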
Motivation, emotion, and action are interrelated essential factors in human activities. While motivation and emotion have long been considered central to exploring how people take action in human activities, little research has supported analyzing the relationship between human mental states and actions. We present the first study that investigates the viability of modeling motivation, emotion, and action in language-based human activities, named COMMA (Cognitive Framework of Human Activities). Guided by COMMA, we define three natural language processing tasks (emotion understanding, motivation understanding, and conditional action generation), and build a challenging dataset, Hail, by automatically extracting samples from Story Commonsense. Experimental results on NLP applications prove the effectiveness of modeling these relationships. Furthermore, compared with existing methods, models inspired by COMMA can better reveal the essential relationships among motivation, emotion, and action.
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is impractically laborious. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computed tomography (CT) images. However, some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make computational modules prone to discontinuous and false-negative predictions. Attention mechanisms have shown the capability to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, given by a fuzzy attention layer, should be a promising solution. This paper presents an efficient airway segmentation method comprising a novel fuzzy attention neural network and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and a learnable Gaussian membership function. Different from existing attention mechanisms, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features across different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency of the proposed method has been proven through testing on open datasets, including the EXACT'09 and LIDC datasets, as well as our in-house COVID-19 and fibrotic lung disease datasets.
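The core of the fuzzy attention layer, a learnable Gaussian membership function applied per channel over the voxels of a feature map, can be sketched as follows (a toy numpy version; in the paper's setting the mean and sigma would be learnable parameters trained jointly with the network):

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Fuzzy membership degree of each activation in [0, 1]."""
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def fuzzy_attention(features, mu, sigma):
    """Re-weight a (C, D, H, W) feature map by channel-wise fuzzy
    membership degrees (illustrative sketch, not the paper's exact layer)."""
    # Broadcast the per-channel parameters over the spatial dimensions,
    # giving each channel its own membership function.
    weights = gaussian_membership(features,
                                  mu[:, None, None, None],
                                  sigma[:, None, None, None])
    return features * weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8, 8, 8))   # C=4 channels, 8^3 voxels each
mu = np.zeros(4)                        # learnable in a real network
sigma = np.ones(4)
out = fuzzy_attention(feats, mu, sigma)
```

Because each channel gets its own (mu, sigma), heterogeneous channels are re-weighted independently, which is the property the abstract attributes to channel-specific fuzzy attention.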
The persuasion strategy recognition task requires a system to identify the adopted strategy of the persuader according to the conversation. However, previous methods mainly focus on contextual information, and little is known about incorporating psychological feedback, i.e., the emotion of the persuadee, to predict the strategy. In this paper, we propose a Cross-channel Feedback memOry Network (CFO-Net) to leverage emotional feedback to iteratively measure the potential benefits of strategies and incorporate them into context-aware dialogue information. Specifically, CFO-Net designs a feedback memory module, comprising a strategy pool and a feedback pool, to obtain emotion-aware strategy representations. The strategy pool aims to store historical strategies, and the feedback pool is used to obtain updated strategy weights based on emotional feedback information. Furthermore, a cross-channel fusion predictor is developed to enable mutual interaction between the emotion-aware strategy representation and the context-aware dialogue information for strategy recognition. Experimental results on \textsc{PersuasionForGood} confirm that the proposed CFO-Net effectively improves the M-F1 performance from 61.74 to 65.41.
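A highly simplified picture of the feedback memory module (purely illustrative: the pools in CFO-Net are learned neural representations, not the explicit dictionary-style update shown here) is a strategy pool whose per-strategy weights are nudged by the persuadee's emotional feedback:

```python
import numpy as np

class FeedbackMemory:
    """Toy strategy/feedback pools: each stored strategy keeps a weight
    that is updated by emotional feedback (sketch only)."""
    def __init__(self, num_strategies, dim, rng=None):
        rng = rng or np.random.default_rng()
        self.strategy_pool = rng.normal(size=(num_strategies, dim))
        self.weights = np.ones(num_strategies) / num_strategies

    def update(self, strategy_id, emotion_score):
        # Positive feedback raises a strategy's weight, negative lowers it;
        # renormalize so the weights stay a distribution.
        self.weights[strategy_id] *= np.exp(emotion_score)
        self.weights /= self.weights.sum()

    def representation(self):
        # Emotion-aware strategy representation: weighted pool average.
        return self.weights @ self.strategy_pool

mem = FeedbackMemory(num_strategies=4, dim=8, rng=np.random.default_rng(0))
before = float(mem.weights[1])
mem.update(strategy_id=1, emotion_score=0.5)   # positive emotional feedback
rep = mem.representation()
```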
Emotional support conversation aims at reducing the emotional distress of the help-seeker, which is a new and challenging task. It requires the system to explore the cause of the help-seeker's emotional distress and understand their psychological intention in order to provide supportive responses. However, existing methods mainly focus on sequential contextual information, ignoring the hierarchical relationships with the global cause and local psychological intention behind conversations, which leads to a weak ability of emotional support. In this paper, we propose a Global-to-Local Hierarchical Graph Network (GLHG) to capture multi-source information (global cause, local intentions, and dialog history) and model the hierarchical relationships between them, which consists of a multi-source encoder, a hierarchical graph reasoner, and a global-guide decoder. Furthermore, a novel training objective is designed to monitor the semantic information of the global cause. Experimental results on the emotional support conversation dataset, ESConv, confirm that the proposed GLHG achieves state-of-the-art performance on both automatic and human evaluations. The code will be released here \footnote{\small{~https://github.com/pengwei-iie/GLHG}}.
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to generate a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features: stylizing an image with sufficient style patterns may damage the content details, sometimes to the point that objects in the image can no longer be clearly distinguished. For this reason, we present STT, a new transformer-based method for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
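The abstract does not specify how the edge loss is computed; one plausible sketch compares Sobel edge maps of the content image and the stylized output, penalizing the stylization when edges are lost (the filter choice and all names below are assumptions, not STT's actual loss):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Valid-mode 2D convolution (naive loop, for illustration only)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(img):
    gx, gy = conv2d(img, SOBEL_X), conv2d(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_loss(content, stylized):
    """Mean squared difference between the two edge maps."""
    return float(np.mean((edge_map(content) - edge_map(stylized)) ** 2))

content = np.zeros((16, 16))
content[:, 8:] = 1.0                                   # a sharp vertical edge
loss_same = edge_loss(content, content.copy())          # edges preserved
loss_blur = edge_loss(content, np.full((16, 16), 0.5))  # edges destroyed
```

Minimizing such a term pushes the stylized output to keep the content image's edges, which is one way an edge loss could avoid blurred results under heavy stylization.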
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware Proxy (AAP) and Identity-Aware Proxy (IAP) embedding modules to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and illustrate that MSTAT achieves state-of-the-art accuracies on various standard benchmarks.
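Temporal patch shuffling can be illustrated with a short sketch (an assumed formulation, not necessarily MSTAT's exact one: each spatial patch is shuffled independently along the time axis, perturbing temporal order while preserving per-patch content):

```python
import numpy as np

def temporal_patch_shuffle(clip, patch=4, rng=None):
    """Shuffle each spatial patch independently along the time axis.
    clip: (T, H, W, C) video tensor. Sketch of the augmentation idea."""
    rng = rng or np.random.default_rng()
    t, h, w, _ = clip.shape
    out = clip.copy()
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            order = rng.permutation(t)   # an independent order per patch
            out[:, i:i + patch, j:j + patch] = \
                clip[order, i:i + patch, j:j + patch]
    return out

rng = np.random.default_rng(0)
clip = rng.random((8, 16, 16, 3))        # T=8 frames of 16x16 RGB
shuffled = temporal_patch_shuffle(clip, rng=rng)
```

The augmentation keeps every frame's appearance intact within each patch while breaking temporal alignment, which is presumably what forces the model to rely on robust identity cues rather than frame order.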
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
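The soft labels on boundaries can be pictured as Gaussian bumps centered on the annotated start and end frames, so that frames adjacent to a lost boundary still receive supervision (an illustrative guess at the formulation, not SSRN's actual definition):

```python
import numpy as np

def boundary_soft_labels(num_frames, start, end, sigma=1.0):
    """Gaussian soft labels around the annotated start/end frame indices.
    Purely illustrative: neighbors of a boundary get partial credit."""
    idx = np.arange(num_frames)
    start_labels = np.exp(-((idx - start) ** 2) / (2.0 * sigma ** 2))
    end_labels = np.exp(-((idx - end) ** 2) / (2.0 * sigma ** 2))
    return start_labels, end_labels

# 16 sampled frames; ground-truth segment spans frames 4..11.
start_lab, end_lab = boundary_soft_labels(16, start=4, end=11)
```

Under such labels, if downsampling drops the exact boundary frame, the adjacent frames still carry high (but not maximal) target probability instead of being treated as hard negatives.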
New-architecture GPUs such as the A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released at https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.