A primary objective of news articles is to establish the factual record of an event, frequently achieved by conveying both the details of the event (i.e., the 5 Ws: Who, What, Where, When, and Why) and how people reacted to it (i.e., reported statements). However, existing work on news summarization almost exclusively focuses on event details. In this work, we propose the novel task of summarizing the reactions of different speakers, as expressed by their reported statements, to a given event. To this end, we create a new multi-document summarization benchmark, SUMREN, comprising 745 summaries of reported statements from various public figures, obtained from 633 news articles discussing 132 events. We propose an automatic silver training data generation approach for our task, which helps smaller models like BART achieve GPT-3-level performance. Finally, we introduce a pipeline-based framework for summarizing reported speech, which we empirically show generates summaries that are more abstractive and factual than those of baseline query-focused summarization approaches.
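As a rough illustration of the final summarization stage of such a pipeline, the sketch below condenses one speaker's reported statements with an off-the-shelf BART checkpoint. The checkpoint name and input formatting are placeholders for illustration, not the paper's actual trained model or pipeline; statement extraction and speaker attribution are assumed to happen upstream.

```python
# A minimal sketch, assuming a generic HuggingFace BART checkpoint as a
# stand-in for a model fine-tuned on reported-statement summaries.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def summarize_reactions(speaker: str, statements: list[str], max_len: int = 80) -> str:
    """Condense one speaker's reported statements about an event into a summary."""
    # Hypothetical input format: prepend the speaker to their concatenated quotes.
    source = f"{speaker}: " + " ".join(statements)
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    ids = model.generate(inputs.input_ids, num_beams=4, max_length=max_len)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```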
Successful point cloud registration relies on accurate correspondences established upon powerful descriptors. However, existing neural descriptors either leverage a rotation-variant backbone whose performance degrades under large rotations, or encode local geometry that is less distinctive. To address this issue, we introduce RIGA to learn descriptors that are rotation-invariant by design and globally aware. From the Point Pair Features (PPFs) of sparse local regions, rotation-invariant local geometry is encoded into geometric descriptors. Global awareness of 3D structures and geometric context is then incorporated in a rotation-invariant fashion. More specifically, the 3D structure of the whole frame is first represented by our global PPF signatures, from which structural descriptors are learned to help the geometric descriptors perceive the 3D world beyond local regions. Geometric context from the whole scene is then globally aggregated into the descriptors. Finally, the descriptors of sparse regions are interpolated to dense point descriptors, from which correspondences are extracted for registration. To validate our approach, we conduct extensive experiments on both object- and scene-level data. Under large rotations, RIGA surpasses the state-of-the-art methods by a margin of 8 degrees in terms of the relative rotation error on ModelNet40, and improves the feature matching recall by at least 5 percentage points on 3DLoMatch.
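For concreteness, a point pair feature in the standard sense can be computed as below. This is a generic PPF sketch of two oriented points, not RIGA's exact descriptor construction.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Rotation-invariant PPF of two points with normals:
    (distance, angle(n1, d), angle(n2, d), angle(n1, n2))."""
    d = p2 - p1
    dist = np.linalg.norm(d)

    def angle(a, b):
        # Angle between two vectors, clipped for numerical safety.
        cos = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
        return np.arccos(cos)

    return np.array([dist, angle(n1, d), angle(n2, d), angle(n1, n2)])
```

All four components depend only on relative geometry, so the feature is unchanged when the point pair is rigidly rotated, which is what makes PPFs a natural building block for rotation-invariant descriptors.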
Video temporal grounding (VTG) aims to localize temporal moments in an untrimmed video according to a natural language (NL) description. Since real-world applications provide never-ending video streams, VTG must scale to long-form videos, which poses two main challenges: (1) long video lengths make it difficult to process the entire video without reducing the sample rate, incurring a high computational burden; and (2) accurate multi-modal alignment becomes more challenging as the number of candidate moments grows. To address these challenges, we propose CONE, an efficient window-centric COarse-to-fiNE alignment framework that flexibly handles long-form video inputs with higher inference speed and enhances temporal grounding via a novel coarse-to-fine multi-modal alignment design. Specifically, we slice a long video into candidate windows via a sliding-window approach. CONE (1) learns inter-window (coarse-grained) semantic variance through contrastive learning and speeds up inference by pre-filtering the candidate windows relevant to the NL query, and (2) performs intra-window (fine-grained) candidate-moment ranking using the powerful multi-modal alignment ability of a contrastive vision-text pre-trained model. Extensive experiments on two large-scale VTG benchmarks for long videos consistently show substantial performance gains (from 3.13% to 6.87% on MAD, and from 10.46% to 13.46% on Ego4D-NLQ), and CONE achieves SOTA results on both datasets. Analyses reveal the effectiveness of each component and the higher efficiency of CONE for long-video grounding: our system improves inference speed by 2x on Ego4D-NLQ and 15x on MAD while keeping SOTA performance.
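A minimal sketch of the window-slicing step is shown below, assuming frame indices and a fixed stride; the window size and stride values are illustrative, and the contrastive pre-filtering and fine-grained ranking stages are omitted.

```python
def candidate_windows(num_frames: int, window_size: int = 128, stride: int = 64):
    """Slice a long video into overlapping candidate windows of frame indices."""
    starts = range(0, max(num_frames - window_size, 0) + 1, stride)
    return [(s, min(s + window_size, num_frames)) for s in starts]

# Example: a 500-frame video yields windows (0,128), (64,192), ..., (384,500)-ish
# spans that later stages score against the NL query.
print(candidate_windows(500))
```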
Goal-oriented generative script learning aims to generate subsequent steps based on a goal, which is an essential task for helping robots perform stereotypical activities of daily life. We show that performance on this task can be improved if historical states are captured not only by the linguistic instructions given to people, but also augmented with the additional information provided by accompanying images. We therefore propose a new task, multimedia generative script learning, to generate subsequent steps by tracking historical states in both the text and vision modalities, and introduce the first benchmark containing 2,338 tasks and 31,496 steps. We aim to generate scripts that are visual-state trackable, inductive for unseen tasks, and diverse in their individual steps. We propose to encode visual state changes through a multimedia selective encoder, transfer knowledge from previously observed tasks using a retrieval-augmented decoder, and present distinct information at each step by optimizing a diversity-oriented contrastive learning objective. We define metrics to evaluate both generation quality and inductive quality. Experimental results demonstrate that our approach significantly outperforms strong baselines.
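The exact form of the diversity-oriented contrastive objective is not spelled out here; a generic InfoNCE-style loss of the kind such objectives typically build on might look as follows, with each step embedding matched to its own target and contrasted against the other steps in the batch. The pairing scheme is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def step_contrastive_loss(step_emb: torch.Tensor,
                          target_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE: row i of step_emb should match row i of target_emb
    and be pushed away from every other target in the batch."""
    step_emb = F.normalize(step_emb, dim=-1)
    target_emb = F.normalize(target_emb, dim=-1)
    logits = step_emb @ target_emb.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(step_emb.size(0), device=step_emb.device)
    return F.cross_entropy(logits, labels)
```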
Few-shot class-incremental learning (FSCIL) aims to continually learn new concepts from only a few samples, and it easily suffers from catastrophic forgetting and overfitting. The inaccessibility of old classes and the scarcity of novel samples make it difficult to achieve a trade-off between retaining old knowledge and learning novel concepts. Inspired by the observation that different models memorize different knowledge when learning novel concepts, we propose a Memorizing Complementation Network (MCNet) that ensembles multiple models so that their differently memorized knowledge complements one another on new tasks. Additionally, to update the model with only a few novel samples, we develop a Prototype Smoothing Hard-mining Triplet (PSHT) loss that pushes novel samples away not only from each other within the current task but also from the old distribution. Extensive experiments on three benchmark datasets, i.e., CIFAR100, miniImageNet, and CUB200, demonstrate the superiority of our proposed method.
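As a sketch of the hard-mining core that a PSHT-style loss builds on, the batch-hard triplet loss below mines the hardest positive and hardest negative per anchor. The prototype-smoothing term and the mining against old-class distributions are paper-specific and omitted here.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(emb: torch.Tensor, labels: torch.Tensor,
                            margin: float = 0.2) -> torch.Tensor:
    """Standard batch-hard triplet loss over a batch of embeddings."""
    dist = torch.cdist(emb, emb)                       # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (B, B) same-class mask
    # Hardest positive: farthest sample sharing the anchor's label.
    hardest_pos = dist.masked_fill(~same, float("-inf")).max(dim=1).values
    # Hardest negative: closest sample with a different label.
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```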
Understanding the 3D scene in a single image is fundamental to various tasks, such as robotics, motion planning, and augmented reality. Existing works on 3D perception from a single RGB image tend to focus on geometric reconstruction only, or on geometric reconstruction with semantic segmentation or instance segmentation. Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction: from a single RGB image, predict the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations. We thus propose a new approach for holistic 3D scene understanding from a single RGB image, which learns to lift and propagate 2D features from the input image to a 3D volumetric scene representation. We demonstrate that this holistic view of joint scene reconstruction, semantic segmentation, and instance segmentation is beneficial over treating the tasks independently, thereby outperforming alternative approaches.
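The lifting of 2D features into 3D can be illustrated with standard pinhole back-projection, which maps each pixel's feature to a camera-space 3D point given depth and intrinsics. This is a generic sketch of the idea, not the paper's actual lifting-and-propagation module.

```python
import torch

def backproject_features(feat2d: torch.Tensor, depth: torch.Tensor, K: torch.Tensor):
    """feat2d: (C, H, W) per-pixel features; depth: (H, W); K: (3, 3) intrinsics.
    Returns camera-space points (H*W, 3) and their features (H*W, C)."""
    C, H, W = feat2d.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth
    # Invert the pinhole projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (xs - K[0, 2]) * z / K[0, 0]
    y = (ys - K[1, 2]) * z / K[1, 1]
    points = torch.stack([x, y, z], dim=-1).reshape(-1, 3)
    feats = feat2d.permute(1, 2, 0).reshape(-1, C)
    return points, feats
```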
Recent advances in 3D perception have shown impressive progress in understanding the geometric structure of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints. We introduce an approach to learn view-invariant, geometry-aware representations for network pre-training, based on multi-view RGB-D data, which can then be effectively transferred to downstream 2D tasks. We propose to employ contrastive learning under both multi-view image constraints and image-geometry constraints to encode 3D priors into the learned 2D representations. This not only improves over image-only baselines on image-based tasks such as semantic segmentation, instance segmentation, and object detection, but moreover provides significant improvements in low-data regimes. We show a significant improvement of 6.0% on semantic segmentation with full data, as well as 11.9% over baselines when using only 20% of the data.
In the field of cross-modal retrieval, single-encoder models tend to perform better than dual-encoder models, but they suffer from high latency and low throughput. In this paper, we present a dual-encoder model called BagFormer that utilizes a cross-modal interaction mechanism to improve recall performance without sacrificing latency or throughput. BagFormer achieves this through bag-wise interactions, which transform the text to a more appropriate granularity and incorporate entity knowledge into the model. Our experiments demonstrate that BagFormer achieves results comparable to state-of-the-art single-encoder models on cross-modal retrieval tasks, while also offering efficient training and inference, with 20.72x lower latency and 25.74x higher throughput.
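The abstract does not spell out the bag-wise interaction; one plausible shape, in the spirit of late-interaction retrieval, scores each text bag against its best-matching image token and sums the bag scores. This is an assumption for illustration, not BagFormer's actual mechanism.

```python
import torch

def bag_wise_score(text_bags: torch.Tensor, image_tokens: torch.Tensor) -> torch.Tensor:
    """Hypothetical late-interaction relevance score.
    text_bags: (num_bags, dim); image_tokens: (num_tokens, dim); both L2-normalized."""
    sim = text_bags @ image_tokens.t()        # (num_bags, num_tokens) cosine similarities
    return sim.max(dim=1).values.sum()        # each bag keeps its best-matching token
```

Because each side is encoded independently and only these cheap dot products are computed at query time, such a scheme keeps dual-encoder latency while allowing finer-grained matching than a single global embedding.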
The past few years have witnessed the prevalence of self-supervised representation learning within the language and 2D vision communities. However, such advancements have not been fully migrated to the community of 3D point cloud learning. Different from previous pre-training pipelines for 3D point clouds, which generally fall into the scope of either generative modeling or contrastive learning, in this paper we investigate a translative pre-training paradigm, namely PointVST, driven by a novel self-supervised pretext task of cross-modal translation from an input 3D object point cloud to its diverse forms of 2D rendered images (e.g., silhouette, depth, contour). Specifically, we begin by deducing view-conditioned point-wise embeddings via the insertion of a viewpoint indicator, and then adaptively aggregate a view-specific global codeword, which is further fed into subsequent 2D convolutional translation heads for image generation. We conduct extensive experiments on common task scenarios of 3D shape analysis, where our PointVST shows consistent and prominent performance superiority over current state-of-the-art methods under diverse evaluation protocols. Our code will be made publicly available.
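A rough sketch of the described view-conditioned aggregation is given below: a viewpoint indicator is appended to the per-point features, which are then max-pooled into a view-specific global codeword. The layer sizes and the form of the viewpoint indicator are placeholders, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ViewConditionedPooling(nn.Module):
    """Fuse per-point features with a viewpoint indicator, then max-pool
    into a view-specific global codeword (a minimal sketch)."""
    def __init__(self, feat_dim: int = 256, view_dim: int = 3, code_dim: int = 512):
        super().__init__()
        self.mlp = nn.Linear(feat_dim + view_dim, code_dim)

    def forward(self, point_feats: torch.Tensor, viewpoint: torch.Tensor) -> torch.Tensor:
        # point_feats: (N, feat_dim); viewpoint: (view_dim,), e.g. a unit view direction.
        view = viewpoint.expand(point_feats.size(0), -1)
        fused = torch.relu(self.mlp(torch.cat([point_feats, view], dim=-1)))
        return fused.max(dim=0).values  # (code_dim,) view-specific global codeword
```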
This paper utilizes an anomaly detection algorithm to check whether underwater gliders are operating normally in unknown ocean environments. Glider pilots can be warned of a detected anomaly in real time, allowing them to take over the glider appropriately and avoid further damage to the vehicle. The adopted algorithm is validated on two valuable data sets from real glider deployments: the University of South Florida (USF) glider Stella and the Skidaway Institute of Oceanography (SkIO) glider Angus.
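The abstract does not detail the adopted algorithm; as a generic stand-in, a trailing-window z-score detector over one glider telemetry channel could look like the following sketch.

```python
import numpy as np

def zscore_anomalies(signal: np.ndarray, window: int = 50,
                     threshold: float = 3.0) -> np.ndarray:
    """Flag samples deviating more than `threshold` standard deviations from
    the statistics of a trailing window; a hypothetical illustration, not the
    paper's algorithm."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        mu = signal[i - window:i].mean()
        sigma = signal[i - window:i].std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True  # candidate anomaly to surface to the pilot in real time
    return flags
```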