With the proliferation of user-generated online videos, multimodal sentiment analysis (MSA) has recently attracted increasing attention. Despite significant progress, two major challenges remain for robust MSA: 1) inefficiency when modeling cross-modal interactions in unaligned multimodal data; and 2) vulnerability to random modality feature missing, which typically occurs in realistic settings. In this paper, we propose a generic and unified framework to address them, the Efficient Multimodal Transformer with Dual-Level Feature Restoration (EMT-DLFR). Specifically, EMT employs utterance-level representations from each modality as the global multimodal context to interact with local unimodal features and mutually promote each other. It not only avoids the quadratic scaling cost of previous local-local cross-modal interaction methods but also leads to better performance. To improve model robustness, on the one hand, DLFR performs low-level feature reconstruction to implicitly encourage the model to learn semantic information from incomplete data. On the other hand, it innovatively regards complete and incomplete data as two different views of one sample and utilizes Siamese representation learning to explicitly attract their high-level representations. Comprehensive experiments on three popular datasets show that our method achieves superior performance in both complete and incomplete modality settings.
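To make the dual-level idea concrete, the minimal sketch below shows the two loss terms described above: a low-level reconstruction loss applied only at the dropped feature positions, and a SimSiam-style stop-gradient cosine loss that attracts the utterance-level representations of the complete and incomplete views. The class name, the SmoothL1/cosine choices, and the masking scheme are my own assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualLevelRestorationLoss(nn.Module):
    """Hypothetical sketch of a dual-level feature restoration objective."""

    def __init__(self, dim=128):
        super().__init__()
        # Predictor head used in the Siamese (high-level) branch.
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def siamese_loss(self, z_complete, z_incomplete):
        # Attract the two views with a stop-gradient cosine objective
        # (SimSiam-style; the paper's exact formulation may differ).
        p = self.predictor(z_incomplete)
        return -F.cosine_similarity(p, z_complete.detach(), dim=-1).mean()

    def reconstruction_loss(self, recon, original, missing_mask):
        # Low-level restoration: penalize only time steps whose features were dropped.
        diff = F.smooth_l1_loss(recon, original, reduction="none").mean(dim=-1)
        return (diff * missing_mask).sum() / missing_mask.sum().clamp(min=1)

if __name__ == "__main__":
    loss_fn = DualLevelRestorationLoss(dim=128)
    z_complete, z_incomplete = torch.randn(8, 128), torch.randn(8, 128)
    recon, original = torch.randn(8, 50, 64), torch.randn(8, 50, 64)
    mask = (torch.rand(8, 50) < 0.3).float()   # 1 = this time step's features were dropped
    total = loss_fn.siamese_loss(z_complete, z_incomplete) \
          + loss_fn.reconstruction_loss(recon, original, mask)
    print(total.item())
```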
Humans express feelings or emotions through different channels. Take language as an example: it conveys different sentiments under different visual-acoustic contexts. To precisely understand human intentions and reduce misunderstandings caused by ambiguity and sarcasm, we should consider multimodal signals, including textual, visual, and acoustic signals. The crucial challenge is how to fuse the features of the different modalities for sentiment analysis. To effectively fuse the information carried by the different modalities and better predict sentiment, we design a novel multi-attention-based fusion network, inspired by the observation that the interactions between any two pairs of modalities are different and do not contribute equally to the final sentiment prediction. By assigning reasonable attention to the acoustic-visual, acoustic-textual, and visual-textual features and exploiting residual structures, we attend to the important features. We conduct extensive experiments on four public multimodal datasets, one in Chinese and three in English. The results show that our approach outperforms existing methods and can explain the contributions of bimodal interactions among multiple modalities.
Learning modality-fused representations and processing unaligned multimodal sequences are meaningful and challenging in multimodal emotion recognition. Existing approaches use directional pairwise attention or a message hub to fuse the language, visual, and audio modalities. However, these approaches introduce information redundancy when fusing features and are inefficient because they do not consider the complementarity of the modalities. In this paper, we propose an efficient neural network for learning modality-fused representations with a CB-Transformer (LMR-CBT) for multimodal emotion recognition from unaligned multimodal sequences. Specifically, we first perform feature extraction for the three modalities to obtain the local structure of the sequences. Then, we design a novel transformer with cross-modal blocks (CB-Transformer) that enables complementary learning of different modalities, mainly divided into local temporal learning, cross-modal feature fusion, and global self-attention representation. In addition, we concatenate the fused features with the original features to classify the emotions of the sequences. Finally, we conduct word-aligned and unaligned experiments on three challenging datasets: IEMOCAP, CMU-MOSI, and CMU-MOSEI. The experimental results show the superiority and efficiency of our proposed method in both settings. Compared with mainstream methods, our approach reaches the state of the art with a minimal number of parameters.
Human language is often multimodal, which comprehends a mixture of natural language, facial gestures, and acoustic behaviors. However, two major challenges in modeling such multimodal human language time-series data exist: 1) inherent data non-alignment due to variable sampling rates for the sequences from each modality; and 2) long-range dependencies between elements across modalities. In this paper, we introduce the Multimodal Transformer (MulT) to generically address the above issues in an end-to-end manner without explicitly aligning the data. At the heart of our model is the directional pairwise crossmodal attention, which attends to interactions between multimodal sequences across distinct time steps and latently adapts streams from one modality to another. Comprehensive experiments on both aligned and non-aligned multimodal time-series show that our model outperforms state-of-the-art methods by a large margin. In addition, empirical analysis suggests that correlated crossmodal signals are able to be captured by the proposed crossmodal attention mechanism in MulT.
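A minimal sketch of the directional pairwise crossmodal attention described above: the target modality provides the queries while the source modality provides the keys and values, so the two streams need not be aligned or of equal length. This is an illustrative PyTorch reimplementation under assumed dimensions, not the authors' released code.

```python
import torch
import torch.nn as nn

class CrossmodalAttention(nn.Module):
    """Directional crossmodal attention (target <- source), illustrative only."""

    def __init__(self, dim=40, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)

    def forward(self, target, source):
        # target: (B, T_t, dim), source: (B, T_s, dim); T_t != T_s is fine.
        q = self.norm_t(target)
        kv = self.norm_s(source)
        out, _ = self.attn(q, kv, kv)       # queries from target, keys/values from source
        return target + out                 # residual connection onto the target stream

if __name__ == "__main__":
    text = torch.randn(2, 50, 40)           # language stream
    audio = torch.randn(2, 375, 40)          # acoustic stream at a higher sampling rate
    layer = CrossmodalAttention()
    print(layer(text, audio).shape)          # text enriched by audio: (2, 50, 40)
```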
Recent works on multimodal emotion recognition have turned to end-to-end models, which can extract task-specific features supervised by the target task, compared with two-phase pipelines. However, previous methods only model the feature interactions between the textual and either the acoustic or the visual modality, ignoring the feature interactions between the acoustic and visual modalities. In this paper, we propose the Multimodal End-to-end Transformer (ME2ET), which can effectively model the tri-modal feature interactions among the textual, acoustic, and visual modalities at both the low level and the high level. At the low level, we propose a progressive tri-modal attention, which can model the tri-modal feature interactions by adopting a two-pass strategy and can further leverage such interactions to significantly reduce the computation and memory complexity by reducing the input token length. At the high level, we introduce a tri-modal feature fusion layer to explicitly aggregate the semantic representations of the three modalities. The experimental results on the CMU-MOSEI and IEMOCAP datasets show that ME2ET achieves state-of-the-art performance. Further in-depth analysis demonstrates the effectiveness, efficiency, and interpretability of the proposed progressive tri-modal attention, which helps our model achieve better performance while significantly reducing computation and memory cost. Our code will be publicly available.
Multimodal sentiment analysis in videos is a key task in many real-world applications, which usually requires integrating multimodal streams including visual, verbal, and acoustic behaviors. To improve the robustness of multimodal fusion, some existing methods let different modalities communicate with each other and model cross-modal interactions via transformers. However, these methods only use single-scale representations during interaction and fail to exploit multi-scale representations that contain different levels of semantic information. As a result, for unaligned multimodal data, the representations learned by transformers could be biased. In this paper, we propose the Multi-scale Cooperative Multimodal Transformer (MCMulT) architecture for multimodal sentiment analysis. Overall, the multi-scale mechanism is able to exploit the different levels of semantic information of each modality for fine-grained cross-modal interactions. Meanwhile, each modality learns its feature hierarchy by integrating cross-modal interactions with multiple levels of features of its source modality. In this way, each pair of modalities progressively builds feature hierarchies in a cooperative manner. Empirical results show that our MCMulT model not only outperforms existing approaches on unaligned multimodal sequences but also has strong performance on aligned multimodal sequences.
Fusion technique is a key research topic in multimodal sentiment analysis. Recent attention-based fusion methods demonstrate advances over simple operation-based fusion. However, these fusion works adopt single-scale, i.e., token-level or utterance-level, unimodal representations. Such single-scale fusion is suboptimal because different modalities should be aligned at different granularities. This paper proposes a fusion model named ScaleVLAD, which gathers multi-scale representations from text, video, and audio with shared Vectors of Locally Aggregated Descriptors to improve unaligned multimodal sentiment analysis. These shared vectors can be regarded as shared topics to align different modalities. In addition, we propose a self-supervised shifted clustering loss to keep the fused features differentiated among samples. The backbone consists of three transformer encoders corresponding to the three modalities, and the aggregated features produced by the fusion module are fed into a transformer plus a fully connected layer to complete the task prediction. Experiments on three popular sentiment analysis benchmarks, IEMOCAP, MOSI, and MOSEI, demonstrate significant gains over baselines.
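The shared-vector aggregation can be pictured as a NetVLAD-style layer whose cluster centers ("shared topics") are reused across modalities, as in the hedged sketch below. The cluster count, normalization choices, and the omission of the multi-scale and shifted-clustering parts are simplifications I have assumed for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedVLAD(nn.Module):
    """VLAD-style aggregation with cluster centers shared across modalities (sketch)."""

    def __init__(self, dim=128, num_clusters=8):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_clusters, dim) * 0.02)
        self.assign = nn.Linear(dim, num_clusters)

    def forward(self, tokens):                       # tokens: (B, T, dim)
        a = F.softmax(self.assign(tokens), dim=-1)   # soft assignment (B, T, K)
        residuals = tokens.unsqueeze(2) - self.centers           # (B, T, K, dim)
        vlad = (a.unsqueeze(-1) * residuals).sum(dim=1)          # (B, K, dim)
        vlad = F.normalize(vlad, dim=-1)             # intra-normalization
        return F.normalize(vlad.flatten(1), dim=-1)  # (B, K*dim) fixed-size descriptor

if __name__ == "__main__":
    vlad = SharedVLAD()
    text, audio = torch.randn(4, 30, 128), torch.randn(4, 200, 128)
    # The same shared centers aggregate both modalities at their own granularity.
    print(vlad(text).shape, vlad(audio).shape)       # (4, 1024) (4, 1024)
```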
Modality representation learning is an important problem for multimodal sentiment analysis (MSA), since highly distinguishable representations can contribute to improving the analysis effect. Previous works on MSA have usually focused on multimodal fusion strategies, while the in-depth study of modality representation learning has received less attention. Recently, contrastive learning has been confirmed to be effective at endowing the learned representation with stronger discriminative ability. Inspired by this, we explore approaches for improving modality representations with contrastive learning in this study. To this end, we devise a three-stage framework with multi-view contrastive learning to refine representations for the specific objectives. At the first stage, for the improvement of unimodal representations, we employ supervised contrastive learning to pull samples within the same class together while the other samples are pushed apart. At the second stage, a self-supervised contrastive learning is designed to improve the distilled unimodal representations after cross-modal interaction. At last, we again leverage supervised contrastive learning to enhance the fused multimodal representation. After all the contrastive training stages, we perform the classification task based on the frozen representations. We conduct experiments on three open datasets, and the results show the advantages of our model.
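As an illustration of the first-stage objective, the following is a generic supervised contrastive (SupCon-style) loss on L2-normalized features, where samples sharing a label act as positives for each other. The temperature and the single-view batch setup are assumptions; the paper's multi-view, three-stage pipeline adds further structure on top of this.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Generic SupCon-style loss: same-label samples are pulled together."""
    z = F.normalize(features, dim=-1)                       # (B, d)
    sim = z @ z.t() / temperature                           # (B, B) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))         # ignore self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-probability over each anchor's positives.
    return -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count).mean()

if __name__ == "__main__":
    feats = torch.randn(16, 64)                 # e.g. unimodal utterance embeddings
    labels = torch.randint(0, 3, (16,))         # sentiment classes
    print(supervised_contrastive_loss(feats, labels).item())
```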
Humans are sophisticated at reading interlocutors' emotions from multimodal signals, such as speech contents, voice tones and facial expressions. However, machines might struggle to understand various emotions due to the difficulty of effectively decoding emotions from the complex interactions between multimodal signals. In this paper, we propose a multimodal emotion analysis framework, InterMulti, to capture complex multimodal interactions from different views and identify emotions from multimodal signals. Our proposed framework decomposes signals of different modalities into three kinds of multimodal interaction representations, including a modality-full interaction representation, a modality-shared interaction representation, and three modality-specific interaction representations. Additionally, to balance the contribution of different modalities and learn a more informative latent interaction representation, we develop a novel Text-dominated Hierarchical High-order Fusion (THHF) module. The THHF module reasonably integrates the above three kinds of representations into a comprehensive multimodal interaction representation. Extensive experimental results on widely used datasets, i.e., MOSEI, MOSI and IEMOCAP, demonstrate that our method outperforms the state of the art.
In vision and linguistics, the main input modalities are facial expressions, speech patterns, and the words uttered. The issue with analyzing any one mode of expression (Visual, Verbal or Vocal) is that a lot of contextual information can get lost. This requires researchers to inspect multiple modalities to gain a thorough understanding of the cross-modal dependencies and the temporal context of the situation when analyzing the expression. This work attempts to preserve the long-range dependencies within and across different modalities, which would otherwise be bottlenecked by the use of recurrent networks, and adds the concept of delta-attention to focus on local differences per modality to capture the idiosyncrasy of different people. We explore a cross-attention fusion technique to get the global view of the emotion expressed through these delta-self-attended modalities, in order to fuse all the local nuances and the global context together. The addition of attention is relatively new to the multimodal fusion field and is still being scrutinized with respect to the stage at which the attention mechanism should be applied; this work achieves competitive accuracy for overall and per-class classification, close to the current state of the art with almost half the number of parameters.
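The abstract does not spell out the delta-attention formulation; the hedged sketch below shows one plausible reading, where self-attention is applied to first-order temporal differences of a modality's features so that local changes, rather than absolute values, drive the attention.

```python
import torch
import torch.nn as nn

class DeltaSelfAttention(nn.Module):
    """Speculative sketch of 'delta' self-attention for a single modality."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                  # x: (B, T, dim)
        delta = x[:, 1:] - x[:, :-1]                       # local frame-to-frame differences
        delta = torch.cat([torch.zeros_like(x[:, :1]), delta], dim=1)
        d = self.norm(delta)
        attended, _ = self.attn(d, d, d)                   # attend over the differences
        return x + attended                                # residual on the raw stream

if __name__ == "__main__":
    frames = torch.randn(2, 40, 64)                        # e.g. one modality's features
    print(DeltaSelfAttention()(frames).shape)              # (2, 40, 64)
```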
Multimodal classification is a core task in human-centric machine learning. We observe that information is highly complementary across modalities, and thus unimodal information can be drastically sparsified prior to multimodal fusion. To this end, we present Sparse Fusion Transformers (SFT), a novel multimodal fusion method for transformers that performs comparably to existing state-of-the-art methods while having a greatly reduced memory footprint and computation cost. Key to our idea is a sparse-pooling block that reduces unimodal token sets prior to cross-modality modeling. Evaluations are conducted on multiple multimodal benchmark datasets covering a wide range of classification tasks. State-of-the-art performance is obtained on multiple benchmarks under similar experimental conditions, while reporting a six-fold reduction in computational cost and memory requirements. Extensive ablation studies showcase the benefits of combining sparsification and multimodal learning over naive approaches. This paves the way for enabling multimodal learning on low-resource devices.
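One way to picture the sparse-pooling idea is a learned token-scoring step that keeps only the top-k unimodal tokens before the fusion stage ever sees them, shrinking the quadratic attention cost of fusion. The scoring-and-gating scheme below is my own assumption for illustration, not the paper's exact block.

```python
import torch
import torch.nn as nn

class SparsePoolingBlock(nn.Module):
    """Illustrative token-reduction block (an assumed form of 'sparse pooling')."""

    def __init__(self, dim=256, keep=8):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.keep = keep

    def forward(self, tokens):                               # (B, T, dim)
        weights = self.score(tokens).squeeze(-1)             # learned importance (B, T)
        topk_vals, topk_idx = weights.topk(self.keep, dim=1)
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        kept = tokens.gather(1, idx)                         # (B, keep, dim)
        # Gate by the (differentiable) scores so the scorer receives gradient.
        return kept * torch.sigmoid(topk_vals).unsqueeze(-1)

if __name__ == "__main__":
    audio_tokens = torch.randn(4, 128, 256)
    video_tokens = torch.randn(4, 196, 256)
    pool = SparsePoolingBlock(keep=8)
    fused_input = torch.cat([pool(audio_tokens), pool(video_tokens)], dim=1)
    print(fused_input.shape)                                 # (4, 16, 256) passed to fusion
```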
Learning effective joint embedding for cross-modal data has always been a focus in the field of multimodal machine learning. We argue that during multimodal fusion, the generated multimodal embedding may be redundant, and the discriminative unimodal information may be ignored, which often interferes with accurate prediction and leads to a higher risk of overfitting. Moreover, unimodal representations also contain noisy information that negatively influences the learning of cross-modal dynamics. To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations. Specifically, inheriting from the general information bottleneck (IB), MIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target and simultaneously constraining the mutual information between the representation and the input data. Different from general IB, our MIB regularizes both the multimodal and unimodal representations, which is a comprehensive and flexible framework that is compatible with any fusion methods. We develop three MIB variants, namely, early-fusion MIB, late-fusion MIB, and complete MIB, to focus on different perspectives of information constraints. Experimental results suggest that the proposed method reaches state-of-the-art performance on the tasks of multimodal sentiment analysis and multimodal emotion recognition across three widely used datasets. The code is available at https://github.com/TmacMai/Multimodal-Information-Bottleneck.
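For reference, the base recipe that a multimodal information bottleneck builds on can be sketched as a standard deep variational IB head: a KL term against a unit-Gaussian prior constrains how much information is kept about the input, while a prediction term keeps the representation sufficient for the label. The early-/late-/complete-fusion variants in the paper regularize different representations; the sketch below is only the generic building block, with hyperparameters assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalBottleneck(nn.Module):
    """Generic deep VIB head applied to a fused (or unimodal) embedding (sketch)."""

    def __init__(self, in_dim=256, z_dim=64, num_classes=3):
        super().__init__()
        self.to_stats = nn.Linear(in_dim, 2 * z_dim)
        self.classifier = nn.Linear(z_dim, num_classes)

    def forward(self, h, y, beta=1e-3):
        mu, logvar = self.to_stats(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        # KL(q(z|x) || N(0, I)) upper-bounds the information kept about the input.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1).mean()
        # The prediction term keeps z sufficient for the target.
        task = F.cross_entropy(self.classifier(z), y)
        return task + beta * kl

if __name__ == "__main__":
    head = VariationalBottleneck()
    fused = torch.randn(16, 256)          # e.g. output of any fusion method
    labels = torch.randint(0, 3, (16,))
    print(head(fused, labels).item())
```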
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance, at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
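A minimal sketch, under my own assumptions about layer sizes and the sharing rule, of one fusion layer built around a few bottleneck tokens: each modality self-attends over its own tokens concatenated with the shared bottleneck tokens, and the per-modality bottleneck outputs are averaged, so cross-modal information can only pass through that narrow channel.

```python
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """One fusion layer with a small set of shared bottleneck tokens (illustrative)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.audio_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.video_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, audio, video, bottleneck):
        # audio: (B, Ta, d), video: (B, Tv, d), bottleneck: (B, Nb, d)
        nb = bottleneck.size(1)
        a = self.audio_layer(torch.cat([audio, bottleneck], dim=1))
        v = self.video_layer(torch.cat([video, bottleneck], dim=1))
        new_bottleneck = (a[:, -nb:] + v[:, -nb:]) / 2        # condense and share
        return a[:, :-nb], v[:, :-nb], new_bottleneck

if __name__ == "__main__":
    layer = BottleneckFusionLayer()
    audio, video = torch.randn(2, 100, 128), torch.randn(2, 196, 128)
    bottleneck = torch.randn(2, 4, 128)                        # only 4 shared latents
    a, v, b = layer(audio, video, bottleneck)
    print(a.shape, v.shape, b.shape)
```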
Humans are skilled in reading the interlocutor's emotion from multimodal signals, including spoken words, simultaneous speech, and facial expressions. It is still a challenge to effectively decode emotions from the complex interactions of multimodal signals. In this paper, we design three kinds of multimodal latent representations to refine the emotion analysis process and capture complex multimodal interactions from different views, including an intact three-modal integrating representation, a modality-shared representation, and three modality-individual representations. Then, a modality-semantic hierarchical fusion is proposed to reasonably incorporate these representations into a comprehensive interaction representation. The experimental results demonstrate that our EffMulti outperforms the state-of-the-art methods. The compelling performance benefits from its well-designed framework with ease of implementation, lower computing complexity, and fewer trainable parameters.
Active speaker detection and speech enhancement have become two increasingly attractive topics in audio-visual scenarios. According to their respective characteristics, independently designed architectures have been widely used for each of the two tasks. This may lead to the representations learned by the model being task-specific, and inevitably results in a lack of generalization ability of the features based on multimodal modeling. Recent studies have shown that establishing cross-modal relationships between the auditory and visual streams is a promising solution to the challenge of audio-visual multi-task learning. Therefore, motivated by bridging the multimodal associations in audio-visual tasks, a unified framework is proposed in this study to achieve target speaker detection and speech enhancement by jointly learning an audio-visual model.
Multimodal sentiment analysis has a wide range of applications due to the information complementarity in multimodal interactions. Previous works focus more on investigating efficient joint representations, but they rarely consider the insufficiency of unimodal feature extraction and the data redundancy in multimodal fusion. In this paper, a video-based cross-modal auxiliary network (VCAN) is proposed, which consists of an audio feature map module and a cross-modal selection module. The first module is designed to substantially increase the feature diversity of audio feature extraction, aiming to improve classification accuracy by providing more comprehensive acoustic representations. To empower the model to handle redundant visual features, the second module efficiently filters redundant visual frames when integrating audio-visual data. In addition, a classifier group consisting of several image classification networks is introduced to predict sentiment polarities and emotion categories. Extensive experimental results on the RAVDESS, CMU-MOSI, and CMU-MOSEI benchmarks show that VCAN significantly outperforms state-of-the-art methods in improving the classification accuracy of multimodal sentiment analysis.
In multimodal sentiment analysis (MSA), the performance of a model highly depends on the quality of the synthesized embeddings. These embeddings are generated from an upstream process called multimodal fusion, which aims to extract and combine the input unimodal raw data to produce a richer multimodal representation. Previous works either back-propagate the task loss or manipulate the geometric properties of the feature spaces to produce favorable fusion results, which neglects preserving the critical task-related information that flows from the inputs to the fusion result. In this work, we propose a framework named MultiModal InfoMax (MMIM), which hierarchically maximizes the mutual information (MI) between pairs of unimodal inputs (inter-modality) and between the multimodal fusion result and the unimodal inputs, in order to maintain task-related information through multimodal fusion. The framework is jointly trained with the main task (MSA) to improve the performance of the downstream MSA task. To address the intractability of MI bounds, we further formulate a set of computationally simple parametric and non-parametric methods to approximate their true values. Experimental results on two widely used datasets demonstrate the efficacy of our approach. The implementation of this work is publicly available at https://github.com/declare-lab/multimodal-infomax.
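The hierarchical MI maximization can be illustrated with a single InfoNCE-style contrastive lower bound between two representations (for example, the fusion result and one projected unimodal input): matched pairs in a batch act as positives and all other pairings as negatives. MMIM combines several such terms and also uses other estimators; the function below, with its temperature and projection, is only an assumed illustration of the core idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def infonce_lower_bound(x, y, temperature=0.1):
    """InfoNCE-style lower bound on the mutual information between two views."""
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    logits = x @ y.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(len(x), device=x.device)       # diagonal entries are positives
    # Symmetric cross-entropy; lower loss corresponds to a higher MI lower bound.
    return -0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    fusion = torch.randn(32, 128)                # fused multimodal representation
    text = torch.randn(32, 128)                  # one unimodal representation
    proj = nn.Linear(128, 128)                   # projection head (assumed)
    bound = infonce_lower_bound(fusion, proj(text))
    print(bound.item())                          # maximize this term during training
```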
Multimodal sentiment analysis is an important research task that predicts the sentiment score based on data of different modalities from a given opinion video. Many previous studies have proven the importance of utilizing the shared and unique information across different modalities. However, high-order combined signals from multimodal data also help to extract satisfactory representations. In this paper, we propose CMGA, a cross-modality gated attention fusion model for MSA that tends to make adequate interactions across different modality pairs. CMGA also adds a forget gate to filter the noisy and redundant signals introduced in the interaction procedure. We experiment on two benchmark datasets for MSA, MOSI and MOSEI, illustrating the performance of CMGA over several baseline models. We also conduct ablation studies to demonstrate the function of the different components inside CMGA.
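A hedged sketch of what cross-modality gated attention with a forget gate could look like: one modality attends to another, and a sigmoid gate computed from the query stream and the attended output decides how much of the interaction signal survives. The exact gating function in CMGA is not specified in the abstract, so this form is an assumption.

```python
import torch
import torch.nn as nn

class GatedCrossModalAttention(nn.Module):
    """Cross-modality attention followed by a forget gate (illustrative sketch)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.forget = nn.Linear(2 * dim, dim)

    def forward(self, query_mod, key_mod):                 # (B, Tq, d), (B, Tk, d)
        interaction, _ = self.attn(query_mod, key_mod, key_mod)
        # Forget gate filters noisy/redundant parts of the interaction signal.
        gate = torch.sigmoid(self.forget(torch.cat([query_mod, interaction], dim=-1)))
        return query_mod + gate * interaction              # gated residual fusion

if __name__ == "__main__":
    text, audio = torch.randn(2, 30, 128), torch.randn(2, 120, 128)
    layer = GatedCrossModalAttention()
    print(layer(text, audio).shape)                        # (2, 30, 128)
```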
Multimodal sentiment analysis and depression estimation are two important research topics that aim to predict human mental states using multimodal data. Previous studies have focused on developing effective fusion strategies to exchange and integrate mind-related information across different modalities. Some MLP-based techniques have recently achieved considerable success in a variety of computer vision tasks. Inspired by this, we explore multimodal approaches from a feature-mixing perspective in this study. To this end, we introduce CubeMLP, a multimodal feature processing framework based entirely on MLPs. CubeMLP consists of three independent MLP units, each of which has two affine transformations. CubeMLP accepts all relevant modality features as input and mixes them across three axes. After extracting the characteristics with CubeMLP, the mixed multimodal features are flattened for task prediction. Our experiments are conducted on the sentiment analysis datasets CMU-MOSI and CMU-MOSEI, and on the depression estimation dataset AVEC2019. The results show that CubeMLP can achieve state-of-the-art performance at a much lower computational cost.
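The three-axis mixing can be sketched directly: stack the (padded) modality sequences into a (batch, modality, sequence, channel) tensor and apply three MLP units, each with two affine transformations, along the sequence, modality, and channel axes in turn. Axis ordering, residual connections, and the absence of normalization here are my assumptions for illustration, not the paper's exact block.

```python
import torch
import torch.nn as nn

class AxisMLP(nn.Module):
    """One MLP unit with two affine transformations, applied along a chosen axis."""

    def __init__(self, size, hidden):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(size, hidden), nn.Linear(hidden, size)
        self.act = nn.GELU()

    def forward(self, x, axis):
        x = x.transpose(axis, -1)                 # bring the target axis to the end
        x = self.fc2(self.act(self.fc1(x)))
        return x.transpose(axis, -1)

class CubeMixerBlock(nn.Module):
    """CubeMLP-style block mixing a (B, modality, sequence, channel) tensor (sketch)."""

    def __init__(self, num_modalities=3, seq_len=50, dim=64, hidden=128):
        super().__init__()
        self.seq_mlp = AxisMLP(seq_len, hidden)
        self.mod_mlp = AxisMLP(num_modalities, hidden)
        self.chan_mlp = AxisMLP(dim, hidden)

    def forward(self, x):                          # x: (B, M, T, D)
        x = x + self.seq_mlp(x, axis=2)            # mix across time steps
        x = x + self.mod_mlp(x, axis=1)            # mix across modalities
        x = x + self.chan_mlp(x, axis=3)           # mix across channels
        return x

if __name__ == "__main__":
    feats = torch.randn(4, 3, 50, 64)              # text/audio/video stacked on axis 1
    block = CubeMixerBlock()
    print(block(feats).shape)                      # (4, 3, 50, 64), flattened afterwards for prediction
```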
This paper studies the multimedia problem of temporal sentence grounding (TSG), which aims to accurately determine the specific video segment in an untrimmed video according to a given sentence query. Traditional TSG methods mainly follow top-down or bottom-up frameworks and are not end-to-end; they heavily rely on time-consuming post-processing to refine the grounding results. Recently, some transformer-based approaches have been proposed to efficiently model the fine-grained semantic alignment between the video and the query. Although these methods achieve significant performance to some extent, they equally treat the frames of the video and the words of the query as transformer inputs for correlation, failing to capture their different levels of granularity and distinct semantics. To address this issue, in this paper we propose a novel Hierarchical Local-Global Transformer (HLGT) to leverage this hierarchy information and model the interactions between different levels of granularity and between different modalities for learning more fine-grained multimodal representations. Specifically, we first split the video and the query into individual clips and phrases to learn their local context (adjacent dependencies) and global correlations (long-range dependencies) via a temporal transformer. Then, a global-local transformer is introduced to learn the interactions between the local-level and global-level semantics for better multimodal reasoning. Besides, we develop a new cross-modal cycle-consistency loss to enforce interaction between the two modalities and encourage semantic alignment between them. Finally, we design a brand-new cross-modal parallel transformer decoder to integrate the encoded visual and textual features for final grounding. Extensive experiments on three challenging datasets show that our proposed HLGT achieves new state-of-the-art performance.
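The cross-modal cycle-consistency idea can be sketched as a round trip: each video clip softly attends to the query phrases, the resulting phrase mixture attends back to the clips, and the round trip is penalized for drifting from where it started. The formulation below (soft attention in both directions plus an MSE penalty) is an assumed illustration; HLGT's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def cross_modal_cycle_loss(video, query, temperature=0.1):
    """Round-trip cycle-consistency penalty between clip and phrase features (sketch)."""
    v = F.normalize(video, dim=-1)                         # (B, Nv, d) clip features
    q = F.normalize(query, dim=-1)                         # (B, Nq, d) phrase features
    v2q = F.softmax(v @ q.transpose(1, 2) / temperature, dim=-1)   # clips -> phrases
    q_mix = v2q @ query                                    # attended phrase mixture per clip
    q2v = F.softmax(F.normalize(q_mix, dim=-1) @ v.transpose(1, 2) / temperature, dim=-1)
    v_back = q2v @ video                                   # back to the video side
    return F.mse_loss(v_back, video)                       # the round trip should return home

if __name__ == "__main__":
    clips = torch.randn(2, 16, 256)                        # clip-level video features
    phrases = torch.randn(2, 5, 256)                       # phrase-level query features
    print(cross_modal_cycle_loss(clips, phrases).item())
```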