In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and show that MSTAT achieves state-of-the-art accuracy on various standard benchmarks.
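To make the temporal patch shuffling idea concrete, the snippet below is a minimal sketch of one plausible way to implement it; the shuffling granularity, probability, and tensor layout are illustrative assumptions rather than the exact recipe used in MSTAT.

```python
import torch

def temporal_patch_shuffle(patch_tokens: torch.Tensor, shuffle_prob: float = 0.5) -> torch.Tensor:
    """Randomly permute the temporal order of a subset of spatial patch positions.

    patch_tokens: (batch, frames, num_patches, dim) patch embeddings of a video clip.
    Shuffling the frame order at some patch positions (an assumed granularity)
    encourages the model to rely on appearance cues that are stable over time
    rather than on a fixed frame order.
    """
    b, t, n, d = patch_tokens.shape
    out = patch_tokens.clone()
    for i in range(b):
        for p in range(n):
            if torch.rand(1).item() < shuffle_prob:
                # permute this patch position's features along the temporal axis
                out[i, :, p, :] = patch_tokens[i, torch.randperm(t), p, :]
    return out
```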
The diverse demands of different summarization tasks and their high annotation costs are driving a need for few-shot summarization. However, despite the emergence of many summarization tasks and datasets, the current training paradigm for few-shot summarization systems ignores potentially shareable knowledge in heterogeneous datasets. To this end, we propose \textsc{UniSumm}, a unified few-shot summarization model pre-trained on multiple summarization tasks, which can be prefix-tuned to excel at any few-shot summarization dataset. Meanwhile, to better evaluate few-shot summarization systems, under the principles of diversity and robustness, we assemble and release a new benchmark, \textsc{SummZoo}. It consists of $8$ diverse summarization tasks with multiple sets of few-shot samples for each task, covering both monologue and dialogue domains. Experimental results and ablation studies show that \textsc{UniSumm} outperforms strong baseline systems by a large margin across all tasks in \textsc{SummZoo} under both automatic and human evaluations. We release our code and benchmark at \url{https://github.com/microsoft/UniSumm}.
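As a rough illustration of how prefix-tuning can adapt a frozen pre-trained summarizer to a new few-shot task, the sketch below prepends learnable key/value prefixes inside a single attention layer; the single-head formulation, dimensions, and class name are hypothetical and not taken from the UniSumm implementation.

```python
import torch
import torch.nn as nn

class PrefixAttention(nn.Module):
    """Minimal prefix-tuning sketch for one attention layer (hypothetical dims).

    The frozen backbone supplies queries/keys/values; only `prefix_k`/`prefix_v`
    are trained for a new few-shot summarization task.
    """

    def __init__(self, dim: int = 512, prefix_len: int = 32):
        super().__init__()
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)

    def forward(self, q, k, v):
        # q, k, v: (batch, seq_len, dim) activations from the frozen model
        b = q.size(0)
        k = torch.cat([self.prefix_k.expand(b, -1, -1), k], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), v], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        return attn @ v
```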
Controllable summarization allows users to generate customized summaries with specified attributes. However, due to the lack of designated annotations of controlled summaries, existing works have to craft pseudo datasets by adapting generic summarization benchmarks. Furthermore, most research focuses on controlling single attributes individually (e.g., a short summary or a highly abstractive summary) rather than controlling a mix of attributes together (e.g., a short and highly abstractive summary). In this paper, we propose MACSum, the first human-annotated summarization dataset for controlling mixed attributes. It contains source texts from two domains, news articles and dialogues, with human-annotated summaries controlled by five designed attributes (Length, Extractiveness, Specificity, Topic, and Speaker). We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization based on hard prompt tuning and soft prefix tuning. Results and analysis demonstrate that hard prompt models yield the best performance on all metrics and human evaluations. However, mixed-attribute control is still challenging for summarization tasks. Our dataset and code are available at https://github.com/psunlpgroup/MACSum.
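To illustrate the hard-prompt approach to mixed-attribute control, the sketch below builds a textual control prefix from the five MACSum attributes; the template wording and the helper name `build_control_prompt` are hypothetical and do not reproduce the paper's prompt.

```python
def build_control_prompt(source: str, length: str, extractiveness: str,
                         specificity: str, topic: str, speaker: str) -> str:
    """Prepend the five MACSum control attributes to the source as a hard prompt.

    The exact prompt template used in the paper is not reproduced here; the
    wording below is a hypothetical illustration of hard prompt tuning.
    """
    controls = (f"Length: {length}. Extractiveness: {extractiveness}. "
                f"Specificity: {specificity}. Topic: {topic}. Speaker: {speaker}.")
    return controls + " Summarize: " + source

# Example: request a short, highly abstractive summary focused on one speaker.
prompt = build_control_prompt(
    source="Speaker A: ... Speaker B: ...",
    length="short", extractiveness="low", specificity="normal",
    topic="budget", speaker="Speaker A",
)
```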
Domain generalization aims to learn a model that generalizes well to unseen test datasets, i.e., out-of-distribution data that differ from the training dataset. To address domain generalization in computer vision, we introduce loss-landscape theory into this field. Specifically, we analyze the generalization ability of deep learning models from the loss-landscape perspective along four aspects: the backbone, regularization, the training paradigm, and the learning rate. We validate the proposed theory on the NICO++, PACS, and VLCS datasets through extensive ablation studies and visualizations. Moreover, we apply this theory to the ECCV 2022 NICO Challenge 1 and win third place without using any domain-invariant methods.
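As an example of the kind of loss-landscape diagnostic such an analysis relies on, the sketch below estimates local flatness by measuring the average loss increase under random weight perturbations; it is a generic probe under assumed hyperparameters, not the specific analysis pipeline of this work.

```python
import copy
import torch

@torch.no_grad()
def sharpness_estimate(model, loss_fn, batch, sigma=0.01, n_samples=10):
    """Rough flatness probe: mean loss increase after Gaussian weight perturbations.

    `sigma` and `n_samples` are illustrative values; flatter minima should show
    a smaller average loss increase under perturbation.
    """
    x, y = batch
    base_loss = loss_fn(model(x), y).item()
    increases = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
        increases.append(loss_fn(noisy(x), y).item() - base_loss)
    return sum(increases) / len(increases)
```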
Demand estimation plays an important role in dynamic pricing, where the optimal price can be obtained by maximizing revenue based on the demand curve. On online hotel-booking platforms, the demand (or occupancy) of rooms varies across room types and changes over time, so obtaining accurate occupancy estimates is challenging. In this paper, we propose a novel hotel demand function that explicitly models the price elasticity of demand for occupancy prediction, and design a price-elasticity prediction model to learn the dynamic price-elasticity coefficient from a variety of influencing factors. Our model is composed of carefully designed elasticity-learning modules to alleviate the endogeneity problem, and is trained in a multi-task framework to tackle data sparsity. We conduct comprehensive experiments on real-world datasets and validate that our method outperforms state-of-the-art baselines for both occupancy prediction and dynamic pricing.
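To make the link between price elasticity and revenue maximization concrete, the sketch below assumes a constant-elasticity demand curve $D(p) = D_0 (p/p_0)^{-\epsilon}$ and grid-searches the revenue-maximizing price; the functional form and the numbers are illustrative assumptions, not the learned demand function of the proposed model.

```python
import numpy as np

def occupancy_demand(price, base_price, base_occupancy, elasticity):
    """Constant-elasticity demand curve (a common modelling assumption)."""
    return base_occupancy * (price / base_price) ** (-elasticity)

def revenue_maximizing_price(base_price, base_occupancy, elasticity, capacity):
    """Grid-search the price maximizing revenue = price * min(demand, capacity)."""
    prices = np.linspace(0.5 * base_price, 2.0 * base_price, 301)
    demand = np.minimum(
        occupancy_demand(prices, base_price, base_occupancy, elasticity), capacity)
    revenue = prices * demand
    return prices[np.argmax(revenue)]

# Illustrative numbers only: reference price 100, 60 rooms sold at that price.
best_price = revenue_maximizing_price(base_price=100.0, base_occupancy=60.0,
                                      elasticity=1.8, capacity=80.0)
```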
Accurate and automated analysis of electroencephalography (EEG) would greatly help clinicians monitor and diagnose patients with various brain diseases efficiently. In contrast to supervised learning with labeled disease EEG data, which can train models to analyze specific diseases but fails to monitor previously unseen conditions, anomaly detection based only on normal EEG can detect any potential anomaly in new EEG recordings. Different from existing anomaly detection strategies, which do not consider any property of the unavailable abnormal data during model development, a task-oriented self-supervised learning approach is proposed here that can leverage available normal EEG and expert knowledge about abnormal EEG to train a more effective feature extractor for the subsequent development of anomaly detectors. In addition, a specific two-branch convolutional neural network with larger kernels is designed as the feature extractor, so that it can more easily capture both the larger-scale and small-scale features that often appear in unavailable abnormal EEG. As demonstrated on three EEG datasets, the effectively designed and trained feature extractor extracts better feature representations for developing anomaly detectors based on normal data and for detecting anomalies in future new EEG recordings. The code is available at https://github.com/irining/eeg-ad.
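The sketch below shows one plausible form of such a two-branch feature extractor, with a large-kernel branch and a small-kernel branch over multi-channel EEG; kernel sizes, channel counts, and pooling are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TwoBranchEEGEncoder(nn.Module):
    """Sketch of a two-branch 1D CNN feature extractor for multi-channel EEG.

    One branch uses large kernels to capture slow, large-scale patterns,
    the other uses small kernels for fast, small-scale patterns.
    """

    def __init__(self, in_channels: int = 19, feat_dim: int = 64):
        super().__init__()
        self.large = nn.Sequential(
            nn.Conv1d(in_channels, feat_dim, kernel_size=101, padding=50),
            nn.BatchNorm1d(feat_dim), nn.ReLU(),
        )
        self.small = nn.Sequential(
            nn.Conv1d(in_channels, feat_dim, kernel_size=7, padding=3),
            nn.BatchNorm1d(feat_dim), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):               # x: (batch, channels, time)
        f = torch.cat([self.pool(self.large(x)), self.pool(self.small(x))], dim=1)
        return f.flatten(1)             # (batch, 2 * feat_dim)
```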
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Alternating gradient descent-ascent (AltGDA) is an optimization algorithm that has been widely used for model training in various machine learning applications, and it aims to solve nonconvex minimax optimization problems. However, existing studies show that it suffers from high computational complexity in nonconvex minimax optimization. In this paper, we develop a single-loop, fast AltGDA-type algorithm that leverages proximal gradient updates and momentum acceleration to solve regularized nonconvex minimax optimization problems. By identifying the intrinsic Lyapunov function of this algorithm, we prove that it converges to a critical point of the nonconvex minimax optimization problem and achieves a computational complexity of $\mathcal{O}(\kappa^{1.5}\epsilon^{-2})$, where $\epsilon$ is the desired level of accuracy and $\kappa$ is the condition number of the problem. This computational complexity improves upon the state-of-the-art complexities of single-loop GDA and AltGDA algorithms (see the comparison summary in Table 1). We demonstrate the effectiveness of our algorithm via experiments on adversarial deep learning.
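For intuition, the sketch below implements a generic single-loop alternating proximal-gradient descent-ascent update with heavy-ball momentum on an $\ell_1$-regularized toy minimax problem; the step sizes, momentum schedule, and toy objective are assumptions, and the paper's AltGDA variant may differ in its exact update.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||v||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def alt_prox_gda(grad_x, grad_y, x0, y0, lr_x=0.01, lr_y=0.05,
                 lam=0.01, beta=0.9, iters=1000):
    """Single-loop alternating proximal-gradient descent-ascent with momentum
    on the min player, for min_x max_y f(x, y) + lam * ||x||_1 (a generic sketch)."""
    x, y = x0.copy(), y0.copy()
    mx = np.zeros_like(x)
    for _ in range(iters):
        mx = beta * mx + grad_x(x, y)                    # momentum on the descent direction
        x = soft_threshold(x - lr_x * mx, lr_x * lam)    # proximal descent step in x
        y = y + lr_y * grad_y(x, y)                      # ascent step in y (alternating)
    return x, y

# Toy problem: f(x, y) = 0.5*||x||^2 + x @ y - 0.5*||y||^2 (strongly concave in y).
gx = lambda x, y: x + y
gy = lambda x, y: x - y
x_star, y_star = alt_prox_gda(gx, gy, np.ones(5), np.ones(5))
```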
Standard gradient descent-ascent (GDA)-type algorithms can only find stationary points in nonconvex minimax optimization, which are less optimal than local minimax points. In this work, we develop GDA-type algorithms that globally converge to local minimax points in nonconvex-strongly-concave minimax optimization. We first observe that local minimax points are equivalent to second-order stationary points of a certain envelope function. Then, inspired by the classical cubic-regularization algorithm, we propose Cubic-GDA, a cubic-regularized GDA algorithm for finding local minimax points, and provide a comprehensive convergence analysis by leveraging its intrinsic potential function. Specifically, we establish the global convergence of Cubic-GDA to a local minimax point at a sublinear convergence rate. We further analyze the asymptotic convergence rates of Cubic-GDA over the full spectrum of local gradient-dominance-type nonconvex geometries, establishing rates that are order-wise faster than those of standard GDA. Moreover, we propose a stochastic variant of Cubic-GDA for large-scale minimax optimization and characterize its sample complexity under stochastic subsampling.
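In standard notation, a natural candidate for the envelope function in the strongly-concave case, together with a generic cubic-regularized (Nesterov-Polyak type) update, can be written as follows; this is a sketch for intuition, and the exact envelope and subproblem used by Cubic-GDA may differ.

$$\Phi(x) := \max_{y} f(x, y), \qquad \nabla\Phi(x^*) = 0, \quad \nabla^2\Phi(x^*) \succeq 0 \quad \text{(second-order stationarity)},$$

$$x_{t+1} \in \operatorname*{arg\,min}_{x}\; \nabla\Phi(x_t)^\top (x - x_t) + \tfrac{1}{2}(x - x_t)^\top \nabla^2\Phi(x_t)(x - x_t) + \tfrac{M}{6}\,\|x - x_t\|^3,$$

where $M > 0$ is the cubic-regularization parameter and, in practice, $\nabla\Phi$ and $\nabla^2\Phi$ are evaluated approximately through the inner maximizer over $y$, which is tracked by gradient ascent.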
Reading comprehension is a complex cognitive process involving many human brain activities. A large body of work has studied the reading patterns and attention allocation of reading comprehension in information-retrieval-related scenarios. However, little is known about what happens in the human brain during reading comprehension and how these cognitive activities affect the information retrieval process. Moreover, with the advances of brain-imaging techniques such as electroencephalography (EEG), brain signals can be collected almost in real time, and it is worth exploring whether they can be used as feedback to facilitate information-acquisition performance. In this paper, we carefully design a lab-based user study to investigate brain activities during reading comprehension. Our findings show that neural responses vary with different types of reading content, i.e., content that can satisfy users' information needs and content that cannot. We suggest that various cognitive activities, e.g., cognitive load, semantic-thematic understanding, and inferential processing, support reading comprehension at a micro time scale. From these findings, we illustrate several insights for information retrieval tasks, such as ranking-model construction and interface design. Moreover, we suggest that it is possible to detect the reading comprehension state for proactive real-world systems. To this end, we propose a Unified framework for EEG-based Reading Comprehension Modeling (UERCM). To verify its effectiveness, we conduct extensive experiments based on EEG features for two reading comprehension tasks: answer sentence classification and answer extraction. The results show that it is feasible to improve the performance of both tasks with brain signals.
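As a small illustration of what a downstream task such as answer sentence classification on EEG features might look like, the sketch below trains a logistic-regression classifier on placeholder band-power features; the feature layout, labels, and model choice are hypothetical and do not reflect the UERCM framework or the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical setup: one row of band-power features (e.g., theta/alpha/beta/gamma
# per electrode) per read sentence, labelled 1 if the sentence answers the user's
# question and 0 otherwise. Feature extraction itself is not shown.
rng = np.random.default_rng(0)
eeg_features = rng.normal(size=(200, 4 * 32))    # 200 sentences, 32 electrodes x 4 bands
labels = rng.integers(0, 2, size=200)            # placeholder labels

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, eeg_features, labels, cv=5, scoring="f1")
print("answer-sentence classification F1 (placeholder data):", scores.mean())
```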