Dense anticipation aims to predict future actions and their durations. Existing methods rely on fully-labelled data, i.e. sequences annotated with all future actions and their durations. We present a (semi-)weakly supervised method that uses only a small number of fully-labelled sequences and predominantly sequences in which only the upcoming action is labelled. To this end, we propose a framework that generates pseudo-labels for future actions and their durations and adaptively refines them through a refinement module. Given only the upcoming action label as input, these pseudo-labels guide action and duration prediction for the future. We further design an attention mechanism to predict context-aware durations. Experiments on the Breakfast and 50Salads benchmarks verify the efficacy of our approach; we are competitive even when compared to fully supervised state-of-the-art models. We will make our code available at: https://github.com/zhanghaotong1/wslvideodenseantication
We present a semi-supervised learning approach for the temporal action segmentation task. The goal of the task is to temporally detect and segment actions in long, untrimmed procedural videos, where only a small set of videos is densely labelled and a large collection of videos remains unlabelled. To this end, we propose two novel loss functions for the unlabelled data: an action affinity loss and an action continuity loss. The action affinity loss guides learning on the unlabelled samples by imposing action priors induced from the labelled set. The action continuity loss enforces the temporal continuity of actions, which also provides frame-wise classification supervision. In addition, we propose an Adaptive Boundary Smoothing (ABS) method to build coarser action boundaries for more robust and reliable learning. The proposed loss functions and ABS are evaluated on three benchmarks. The results show that they significantly improve action segmentation performance with small amounts of labelled data (5% and 10%) and achieve results comparable to full supervision with 50% labelled data. Furthermore, ABS also succeeds in boosting performance when integrated into fully supervised learning.
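As a rough illustration of the two unlabelled-data losses described above, the sketch below implements a temporal-continuity penalty (a truncated MSE over consecutive frame log-probabilities) and an affinity-style term that pulls an unlabelled video's predicted action distribution toward a prior estimated from the labelled set. The exact formulations in the paper are not given here; the function names, the truncation threshold and the cross-entropy form of the affinity term are assumptions.

```python
import torch
import torch.nn.functional as F

def continuity_loss(logits, tau=4.0):
    """Penalise abrupt frame-to-frame changes in predicted action probabilities.

    logits: (T, C) frame-wise class scores for one unlabelled video.
    A truncated MSE on consecutive log-probabilities, in the spirit of the
    smoothing losses commonly used in temporal action segmentation.
    """
    logp = F.log_softmax(logits, dim=-1)
    diff = (logp[1:] - logp[:-1].detach()) ** 2
    return torch.clamp(diff, max=tau ** 2).mean()

def affinity_loss(logits, action_prior):
    """Match the video-level action distribution to a prior from the labelled set.

    action_prior: (C,) empirical action frequencies estimated from labelled
    videos (an assumption for this sketch).
    """
    video_dist = F.softmax(logits, dim=-1).mean(dim=0)          # (C,)
    return -(action_prior * video_dist.clamp_min(1e-8).log()).sum()
```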
This paper introduces a unified framework for video action segmentation via sequence-to-sequence (seq2seq) translation in both fully supervised and timestamp-supervised settings. In contrast to current state-of-the-art frame-level prediction methods, we view action segmentation as a seq2seq translation task, i.e. mapping a sequence of video frames to a sequence of action segments. Our proposed method involves a series of modifications and auxiliary loss functions on the standard Transformer seq2seq translation model to cope with long input sequences opposed to short output sequences and the relatively small number of videos. We incorporate an auxiliary supervision signal for the encoder via a frame-wise loss and propose a separate alignment decoder for implicit duration prediction. Finally, we extend our framework to the timestamp-supervised setting via a proposed constrained k-medoids algorithm that generates pseudo-segmentations. Our framework performs consistently in both the fully supervised and timestamp-supervised settings, outperforming or competing with the state of the art on several datasets.
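A minimal sketch of how timestamp annotations could be turned into a pseudo-segmentation with a constrained k-medoids-style assignment: each annotated timestamp acts as a medoid, and every frame is assigned to exactly one medoid so that segments stay contiguous and follow the annotated order (the constraint). The distance function and the single-pass boundary search are simplifications; the paper's actual constrained k-medoids algorithm may differ.

```python
import numpy as np

def pseudo_segmentation(features, timestamps):
    """Assign every frame to one annotated timestamp, keeping segments contiguous.

    features:   (T, D) frame-wise features.
    timestamps: sorted frame indices of the annotated timestamps (one per action).
    Returns an array seg of length T where seg[t] is the index of the timestamp
    (i.e. the action instance) that frame t is assigned to.
    """
    T = len(features)
    seg = np.empty(T, dtype=int)
    # Distance of every frame to every medoid (the frame at each timestamp).
    dist = np.linalg.norm(features[:, None, :] - features[timestamps][None, :, :], axis=-1)

    seg[: timestamps[0] + 1] = 0                     # everything before the first stamp
    for i in range(len(timestamps) - 1):
        lo, hi = timestamps[i], timestamps[i + 1]
        # Choose the boundary b in (lo, hi] that minimises the within-segment cost.
        costs = [dist[lo + 1 : b, i].sum() + dist[b : hi + 1, i + 1].sum()
                 for b in range(lo + 1, hi + 1)]
        b = lo + 1 + int(np.argmin(costs))
        seg[lo + 1 : b] = i
        seg[b : hi + 1] = i + 1
    seg[timestamps[-1] :] = len(timestamps) - 1      # everything after the last stamp
    return seg
```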
Temporal action segmentation tags action labels for every frame in an input untrimmed video containing multiple actions in a sequence. For the task of temporal action segmentation, we propose an encoder-decoder-style architecture named C2F-TCN featuring a "coarse-to-fine" ensemble of decoder outputs. The C2F-TCN framework is enhanced with a novel model agnostic temporal feature augmentation strategy formed by the computationally inexpensive strategy of the stochastic max-pooling of segments. It produces more accurate and well-calibrated supervised results on three benchmark action segmentation datasets. We show that the architecture is flexible for both supervised and representation learning. In line with this, we present a novel unsupervised way to learn frame-wise representation from C2F-TCN. Our unsupervised learning approach hinges on the clustering capabilities of the input features and the formation of multi-resolution features from the decoder's implicit structure. Further, we provide the first semi-supervised temporal action segmentation results by merging representation learning with conventional supervised learning. Our semi-supervised learning scheme, called ``Iterative-Contrastive-Classify (ICC)'', progressively improves in performance with more labeled data. The ICC semi-supervised learning in C2F-TCN, with 40% labeled videos, performs similar to fully supervised counterparts.
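The "stochastic max-pooling of segments" augmentation can be pictured roughly as follows: split the temporal axis into a random number of segments with random boundaries and max-pool each segment, producing a shorter, temporally jittered feature sequence. The segment-count range and sampling choices below are placeholders; the authors' exact strategy may differ.

```python
import torch

def stochastic_segment_maxpool(features, min_segments=32, max_segments=64):
    """Temporal feature augmentation by max-pooling randomly chosen segments.

    features: (T, D) frame-wise input features for one video.
    Returns a (num_segments, D) sequence, where num_segments and the segment
    boundaries are sampled anew at every call.
    """
    T = features.size(0)
    num_segments = torch.randint(min_segments, max_segments + 1, (1,)).item()
    num_segments = min(num_segments, T)
    # Random, sorted interior boundaries define contiguous segments covering [0, T).
    cuts = (torch.randperm(T - 1)[: num_segments - 1] + 1).sort().values.tolist()
    bounds = [0] + cuts + [T]
    pooled = [features[bounds[i] : bounds[i + 1]].max(dim=0).values
              for i in range(num_segments)]
    return torch.stack(pooled)
```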
Temporal action segmentation classifies the action of every frame in a (long) video sequence. Because frame-wise labelling is costly, we propose the first semi-supervised method for temporal action segmentation. Our method hinges on unsupervised representation learning, which, for temporal action segmentation, poses unique challenges: actions in untrimmed videos vary in length and have unknown labels and start/end times, and the ordering of actions may also differ across videos. We propose a novel way to learn frame-wise representations from a temporal convolutional network (TCN) by clustering the input features with added time-proximity conditions and multi-resolution similarity. By merging this representation learning with conventional supervised learning, we develop an 'Iterative-Contrastive-Classify (ICC)' semi-supervised learning scheme. With more labelled data, ICC progressively improves in performance; ICC semi-supervised learning with 40% labelled videos performs similarly to its fully supervised counterparts. With 100% labelled videos, our ICC further improves results by {+1.8, +5.6, +2.5}% on the respective benchmarks.
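One simple way to realise "clustering input features with a time-proximity condition", shown as a rough sketch below: append the normalised frame index to each feature vector before k-means, so that frames close in time are more likely to share a cluster. The time weight and the number of clusters are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans

def time_aware_clusters(features, num_clusters=40, time_weight=0.5):
    """Cluster frame-wise features with an added temporal-proximity condition.

    features: (T, D) array of frame features for one video.
    The normalised frame index is appended as an extra dimension so that the
    clustering prefers temporally coherent groups.
    """
    T = features.shape[0]
    num_clusters = min(num_clusters, T)
    t = np.linspace(0.0, 1.0, T)[:, None] * time_weight
    unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    x = np.concatenate([unit, t], axis=1)
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(x)
```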
The temporal action segmentation task segments a video temporally and predicts action labels for all of its frames. Fully supervising such a segmentation model requires dense frame-wise action annotations, which are both expensive and tedious to obtain. This work is the first to propose a Constituent Action Discovery (CAD) framework that requires only video-wise high-level complex activity labels as supervision for temporal action segmentation. The proposed method automatically discovers constituent video actions through an activity classification task. Specifically, we define a finite number of latent action prototypes to build video-level dual representations, and these prototypes are learned jointly with the activity classification training. This setting endows our approach with the capability of discovering potentially shared actions across multiple complex activities. Owing to the lack of action-level supervision, we adopt the Hungarian matching algorithm to relate the latent action prototypes to ground-truth semantic classes for evaluation. We show that, with high-level supervision, Hungarian matching can be extended from the existing video and activity levels to a global level. The global-level matching allows actions to be shared across activities, which has never been considered in the literature. Extensive experiments demonstrate that the discovered actions help both temporal action segmentation and activity recognition.
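For evaluation, linking discovered prototype labels to ground-truth classes with the Hungarian algorithm can be done along the lines of the sketch below, a standard use of scipy's linear_sum_assignment on a co-occurrence matrix; the global-level variant would simply build that matrix over all videos of all activities. The helper name and inputs are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred, gt, num_prototypes, num_classes):
    """Map discovered action prototypes to ground-truth classes for evaluation.

    pred, gt: frame-wise prototype indices and ground-truth class indices,
    concatenated over all frames considered (per video, per activity, or
    globally over the whole dataset).
    Returns a dict prototype -> class that maximises the total frame overlap.
    """
    overlap = np.zeros((num_prototypes, num_classes), dtype=np.int64)
    for p, g in zip(pred, gt):
        overlap[p, g] += 1
    # The Hungarian algorithm minimises cost, so negate the overlap counts.
    rows, cols = linear_sum_assignment(-overlap)
    return dict(zip(rows.tolist(), cols.tolist()))
```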
Recent temporal action segmentation approaches need frame annotations during training to be effective. These annotations are very expensive and time-consuming to obtain. This limits their performance when only limited annotated data is available. In contrast, we can easily collect a large corpus of in-domain unannotated videos by scavenging through the internet. Thus, this paper proposes an approach for the temporal action segmentation task that can simultaneously leverage knowledge from annotated and unannotated video sequences. Our approach uses multi-stream distillation that repeatedly refines and finally combines the frame predictions of the individual streams. Our model also predicts the action order, which is later used as a temporal constraint while estimating frame labels to counter the lack of supervision for unannotated videos. In the end, our evaluation of the proposed approach on two different datasets demonstrates its capability to achieve comparable performance to full supervision despite limited annotation.
In temporal action segmentation, timestamp supervision requires only a handful of labelled frames per video sequence. For the unlabelled frames, previous works rely on assigning hard labels, and performance rapidly collapses under subtle violations of the annotation assumptions. We propose a novel Expectation-Maximization (EM) based approach that leverages the label uncertainty of the unlabelled frames and is robust enough to accommodate possible annotation errors. With accurate timestamp annotations, our proposed method produces SOTA results and even exceeds the fully supervised setup on several metrics and datasets. When applied to timestamp annotations with missing action segments, our method maintains stable performance. To further test the robustness of our formulation, we introduce the new and challenging annotation setup of SkipTag supervision. This setup relaxes the constraints and requires annotating any fixed number of random frames in a video, making it more flexible than timestamp supervision while remaining competitive.
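In broad strokes, an EM-style training loop for timestamp supervision could look like the sketch below: the E-step forms soft frame-label posteriors from the current model (with annotated timestamps clamped), and the M-step trains on those soft targets. This is a schematic reading of the abstract, not the paper's exact formulation; in particular, how temporal structure enters the posterior is left out here.

```python
import torch
import torch.nn.functional as F

def e_step(logits, timestamps, stamp_labels):
    """Soft posteriors over actions for every frame (unlabelled frames stay soft).

    logits: (T, C) current model outputs; timestamps/stamp_labels are the sparse
    annotations. Annotated frames are clamped to one-hot targets.
    """
    q = F.softmax(logits.detach(), dim=-1)
    for t, c in zip(timestamps, stamp_labels):
        q[t] = F.one_hot(torch.tensor(c), q.size(-1)).float()
    return q

def m_step_loss(logits, q):
    """Cross-entropy against the soft targets produced by the E-step."""
    return -(q * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```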
Surgical phase recognition is a fundamental task in computer-assisted surgery systems. Most existing works are supervised by expensive and time-consuming full annotations, which require surgeons to repeatedly watch videos to find the precise start and end time of a surgical phase. In this paper, we introduce timestamp supervision for surgical phase recognition, training the models with timestamp annotations in which surgeons are asked to identify only a single timestamp within the temporal boundary of a phase. This annotation can significantly reduce the manual annotation cost compared to full annotations. To make full use of such timestamp supervision, we propose a novel method called uncertainty-aware temporal diffusion (UATD) to generate trustworthy pseudo labels for training. Our proposed UATD is motivated by the property of surgical videos that phases are long events consisting of consecutive frames. To be specific, UATD diffuses the single labelled timestamp to its corresponding high-confidence (i.e., low-uncertainty) neighbouring frames in an iterative way. Our study uncovers unique insights into surgical phase recognition with timestamp supervision: 1) timestamp annotation can reduce annotation time by 74% compared with full annotation, and surgeons tend to annotate timestamps near the middle of phases; 2) extensive experiments demonstrate that our method can achieve competitive results compared with fully supervised methods while reducing the manual annotation cost; 3) less is more in surgical phase recognition, i.e., fewer but discriminative pseudo labels outperform full ones that contain ambiguous frames; 4) the proposed UATD can be used as a plug-and-play method to clean ambiguous labels near the boundaries between phases and improve the performance of current surgical phase recognition methods.
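A rough sketch of the diffusion idea: starting from each annotated timestamp, extend the pseudo label to neighbouring frames as long as the model's uncertainty for that phase stays low, and stop (leaving frames unlabelled) once it rises. The threshold and the uncertainty measure (here, one minus the predicted probability) are illustrative assumptions. In the iterative scheme described above, this diffusion would be re-run after each round of training as the model's confidence estimates improve.

```python
import numpy as np

def diffuse_timestamp(probs, stamp, label, max_uncertainty=0.3):
    """Grow a pseudo-labelled interval around one annotated timestamp.

    probs: (T, C) frame-wise class probabilities from the current model.
    Returns (start, end), the inclusive interval of frames that receive the
    pseudo label `label`; frames outside it stay unlabelled.
    """
    T = probs.shape[0]
    uncertainty = 1.0 - probs[:, label]
    start = end = stamp
    while start > 0 and uncertainty[start - 1] <= max_uncertainty:
        start -= 1
    while end < T - 1 and uncertainty[end + 1] <= max_uncertainty:
        end += 1
    return start, end
```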
Can we teach a robot to recognize and make predictions for activities that it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the Tasty Videos Dataset V2, a collection of 4022 recipes for zero-shot learning, recognition and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our method for generalization, given limited video data for training models.
We aim to understand how actions are performed and to identify subtle differences, such as 'fold' versus 'fold gently'. To this end, we propose a method to recognise adverbs across different actions. However, such fine-grained annotations are difficult to obtain, and their long-tailed nature makes it challenging to recognise adverbs in rare action-adverb compositions. Our method therefore uses semi-supervised learning with multi-adverb pseudo-labels to leverage videos that only carry action labels. Combined with adaptive thresholding of these pseudo-adverbs, we are able to make efficient use of the available data while addressing the long-tailed distribution. In addition, we collect adverb annotations for three existing video retrieval datasets, which allows us to introduce the new tasks of recognising adverbs in unseen action-adverb compositions and in unseen domains. Experiments demonstrate the effectiveness of our method, which outperforms prior work on adverb recognition as well as semi-supervised approaches adapted for it. We also show how adverbs can relate fine-grained actions.
To balance the annotation labor and the granularity of supervision, single-frame annotation has been introduced in temporal action localization. It provides a rough temporal location for an action but implicitly overstates the supervision from the annotated-frame during training, leading to the confusion between actions and backgrounds, i.e., action incompleteness and background false positives. To tackle the two challenges, in this work, we present the Snippet Classification model and the Dilation-Erosion module. In the Dilation-Erosion module, we expand the potential action segments with a loose criterion to alleviate the problem of action incompleteness and then remove the background from the potential action segments to alleviate the problem of background false positives. Relying on the single-frame annotation and the output of the snippet classification, the Dilation-Erosion module mines pseudo snippet-level ground-truth, hard backgrounds and evident backgrounds, which in turn further trains the Snippet Classification model. It forms a cyclic dependency. Furthermore, we propose a new embedding loss to aggregate the features of action instances with the same label and separate the features of actions from backgrounds. Experiments on THUMOS14 and ActivityNet 1.2 validate the effectiveness of the proposed method. Code has been made publicly available (https://github.com/LingJun123/single-frame-TAL).
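The Dilation-Erosion idea can be pictured as two passes over the snippet-level action scores: a loose threshold first expands the candidate action region around the annotated frame (dilation), and a stricter criterion then trims frames that look like background (erosion). The thresholds below are placeholders, not the paper's values, and the real module also mines hard/evident backgrounds from what is removed.

```python
import numpy as np

def dilation_erosion(scores, annotated_frame, loose_thr=0.2, strict_thr=0.6):
    """Mine a pseudo action segment around a single annotated frame.

    scores: (T,) snippet-level actionness/class scores from the snippet
    classification model. Returns a boolean mask of the mined segment;
    frames dropped by the erosion step can be treated as backgrounds.
    """
    T = len(scores)
    # Dilation: grow the segment with a loose criterion to fight incompleteness.
    left = right = annotated_frame
    while left > 0 and scores[left - 1] >= loose_thr:
        left -= 1
    while right < T - 1 and scores[right + 1] >= loose_thr:
        right += 1
    segment = np.zeros(T, dtype=bool)
    segment[left : right + 1] = True
    # Erosion: drop low-scoring frames inside the dilated segment to fight
    # background false positives (the annotated frame itself is always kept).
    segment &= scores >= strict_thr
    segment[annotated_frame] = True
    return segment
```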
Temporal action segmentation in videos has recently drawn much attention. Timestamp supervision is a cost-effective way to address this task. To obtain more information for optimising the model, existing methods generate pseudo frame-wise labels iteratively based on the output of the segmentation model and the timestamp annotations. However, this practice may introduce noise and oscillation during training and lead to performance degeneration. To solve this problem, we propose a new framework for timestamp-supervised temporal action segmentation that introduces a teacher model running in parallel with the segmentation model to help stabilise model optimisation. The teacher model can be viewed as an ensemble of the segmentation model, which helps suppress noise and improves the stability of the pseudo labels. We further introduce a segmentally smoothing loss that is more focused and cohesive, enforcing a smooth transition of the predicted probabilities within action instances. Experiments on three datasets show that our method outperforms state-of-the-art methods and performs comparably to fully supervised methods at a much lower annotation cost.
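Treating the teacher as a temporal ensemble of the student is often realised with an exponential moving average (EMA) of the student's weights; the sketch below assumes that construction and pairs it with a simple smoothing term confined to within-segment frame pairs. The paper's exact teacher update and segmentally smoothing loss may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.999):
    """EMA update: the teacher averages the student's weights over training."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def segment_smoothing_loss(logits, pseudo_labels, tau=4.0):
    """Smooth predicted probabilities only across frame pairs that lie inside
    the same pseudo-labelled action instance, not across segment boundaries.

    logits: (T, C); pseudo_labels: (T,) current pseudo labels from the teacher.
    """
    logp = F.log_softmax(logits, dim=-1)
    same_segment = (pseudo_labels[1:] == pseudo_labels[:-1]).float().unsqueeze(-1)
    diff = torch.clamp((logp[1:] - logp[:-1].detach()).abs(), max=tau) ** 2
    return (diff * same_segment).mean()
```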
Semi-supervised action recognition is a challenging but important task due to the high cost of data annotation. A common approach to this problem is to assign pseudo-labels to unlabelled data and use them as additional supervision during training. In recent works, pseudo-labels are typically obtained by training a model on the labelled data and then using the model's confident predictions to teach itself. In this work, we propose a more effective pseudo-labelling scheme called Cross-Model Pseudo-Labelling (CMPL). Concretely, in addition to the primary backbone, we introduce a lightweight auxiliary network and let the two predict pseudo-labels for each other. We observe that, owing to their different structural biases, the two models tend to learn complementary representations from the same video clips. Each model can therefore benefit from its counterpart by utilising cross-model predictions as supervision. Experiments on different data-partitioning protocols demonstrate significant improvements of our framework over existing alternatives. For example, CMPL achieves 17.6% and 25.1% Top-1 accuracy on Kinetics-400 and UCF-101 respectively, using only the RGB modality and 1% labelled data, outperforming our baseline model, FixMatch, by 9.0% and 10.3%.
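The cross-model pseudo-labelling step can be sketched as follows: on unlabelled clips, each network is trained on the confident predictions of the other network, in addition to the usual supervised loss on labelled clips. The confidence threshold and the exact masking scheme are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cross_model_loss(logits_a, logits_b, threshold=0.95):
    """Unsupervised loss for cross-model pseudo-labelling (sketch).

    logits_a: predictions of the primary backbone on unlabelled clips.
    logits_b: predictions of the lightweight auxiliary network on the same clips.
    Each model is supervised by the other's confident pseudo labels.
    """
    def pseudo_ce(student_logits, teacher_logits):
        probs = F.softmax(teacher_logits.detach(), dim=-1)
        conf, labels = probs.max(dim=-1)
        mask = (conf >= threshold).float()
        loss = F.cross_entropy(student_logits, labels, reduction="none")
        return (loss * mask).sum() / mask.sum().clamp_min(1.0)

    return pseudo_ce(logits_a, logits_b) + pseudo_ce(logits_b, logits_a)
```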
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels. Recently, two-stage self-training methods have achieved significant improvements by self-generating pseudo labels and self-refining anomaly scores with these labels. As the pseudo labels play a crucial role, we propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training. Specifically, we first design a multi-head classification module (each head serves as a classifier) with a diversity loss to maximize the distribution differences of predicted pseudo labels across heads. This encourages the generated pseudo labels to cover as many abnormal events as possible. We then devise an iterative uncertainty pseudo label refinement strategy, which improves not only the initial pseudo labels but also the updated ones obtained by the desired classifier in the second stage. Extensive experimental results demonstrate the proposed method performs favorably against state-of-the-art approaches on the UCF-Crime, TAD, and XD-Violence benchmark datasets.
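One plausible form of the diversity loss is to penalise similarity between the snippet-level score distributions produced by different heads, e.g. by minimising their pairwise cosine similarity, so that the heads cover different abnormal events. The concrete formulation below is an assumption for illustration, not necessarily the paper's loss.

```python
import torch
import torch.nn.functional as F
from itertools import combinations

def diversity_loss(head_scores):
    """Encourage the pseudo-label distributions of different heads to differ.

    head_scores: list of (T,) anomaly-score vectors over the snippets of one
    video, one entry per classification head.
    """
    pairs = list(combinations(range(len(head_scores)), 2))
    loss = torch.zeros(())
    for i, j in pairs:
        a = F.normalize(head_scores[i], dim=0)
        b = F.normalize(head_scores[j], dim=0)
        loss = loss + (a * b).sum()            # cosine similarity between heads
    return loss / max(len(pairs), 1)
```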
Fully supervised salient object detection (SOD) methods have made considerable progress, but they typically rely on a large amount of pixel-level annotation, which is labour-intensive and time-consuming. In this paper, we focus on a new weakly supervised SOD task under hybrid labels, where the supervision consists of a large number of coarse labels generated by traditional unsupervised methods and a small number of real labels. To address the label noise and quantity-imbalance issues in this task, we design a new pipeline framework with three sophisticated training strategies. In terms of the model framework, we decompose the task into a label-refinement sub-task and a salient-object-detection sub-task, which cooperate with each other and are trained alternately. Specifically, the R-Net is designed as a two-stream encoder-decoder model equipped with a Blender with Guidance and Aggregation mechanisms (BGA), aiming to rectify the coarse labels into more reliable pseudo labels, while the S-Net is a replaceable SOD network supervised by the pseudo labels generated by the current R-Net. Note that only the well-trained S-Net is needed at test time. Moreover, to ensure the effectiveness and efficiency of network training, we design three training strategies, including an alternate-iteration mechanism, a group-wise incremental mechanism and a credibility-verification mechanism. Experiments on five SOD benchmarks show that our method achieves competitive performance against weakly supervised and unsupervised methods, both qualitatively and quantitatively.
We introduce a novel approach for temporal action segmentation with timestamp supervision. Our main contribution is a graph convolutional network, learned in an end-to-end manner, that exploits both frame features and the connections between neighbouring frames to generate dense frame-wise labels from sparse timestamp labels. The generated dense frame-wise labels can then be used to train the segmentation model. In addition, we propose a framework for alternating learning of both the segmentation model and the graph convolutional model, which first initialises and then iteratively refines the learned models. Detailed experiments on four public datasets, including 50Salads, GTEA, Breakfast and Desktop Assembly, show that our method outperforms the multi-layer perceptron baseline, while performing on par with or better than the state of the art in timestamp-supervised temporal action segmentation.
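As a rough sketch of the idea, the graph can be built with one node per frame and edges between temporal neighbours weighted by feature similarity; a plain graph-convolution stack then propagates information so that a frame-wise classifier trained on the sparse timestamps can emit dense labels. The construction below (k temporal neighbours, symmetric normalisation, two layers) is an assumption for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def neighbour_adjacency(features, k=5):
    """Adjacency over frames: edges to the k nearest temporal neighbours,
    weighted by feature cosine similarity, then symmetrically normalised."""
    T = features.size(0)
    unit = nn.functional.normalize(features, dim=1)
    sim = unit @ unit.t()
    adj = torch.zeros(T, T)
    for t in range(T):
        lo, hi = max(0, t - k), min(T, t + k + 1)
        adj[t, lo:hi] = sim[t, lo:hi].clamp_min(0.0)
    deg = adj.sum(dim=1).clamp_min(1e-6).pow(-0.5)
    return deg[:, None] * adj * deg[None, :]

class FrameGCN(nn.Module):
    """Two graph-convolution layers followed by a frame-wise classifier."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, features, adj):
        h = torch.relu(adj @ self.fc1(features))
        return adj @ self.fc2(h)     # (T, num_classes) dense frame-wise logits
```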
While existing semantic segmentation methods achieve impressive results, they still struggle to incrementally update their models as new categories are discovered. In addition, pixel-by-pixel annotation is expensive and time-consuming. This paper proposes a novel framework for weakly incremental learning for semantic segmentation, which aims to learn to segment new classes from cheap and largely available image-level labels. In contrast to existing approaches, which need to generate pseudo labels offline, we use an auxiliary classifier, trained with image-level labels and regularised by the segmentation model, to obtain pseudo supervision online and update the model incrementally. We cope with the inherent noise of the process by using soft labels generated by the auxiliary classifier. We demonstrate the effectiveness of our approach on the Pascal VOC and COCO datasets, outperforming offline weakly supervised methods and obtaining results on par with incremental learning methods that use full supervision.
Interpreting gaze direction in unconstrained environments with limited supervision has been attracting interest over the past few years. Because of data curation and annotation issues, replicating gaze estimation methods on other platforms, such as unconstrained outdoor settings or AR/VR, may lead to significant performance drops due to the insufficient availability of accurately annotated data for model training. In this paper, we explore an interesting yet challenging problem of gaze estimation with limited labelled data. The proposed method distils knowledge from the labelled subset using visual features, including identity-specific appearance, gaze-trajectory consistency and motion features. Given a gaze trajectory, the method utilises label information from only the start and end frames of the gaze sequence. An extension of the proposed method further reduces the requirement for labelled frames to only the start frame, with only a minor drop in the quality of the generated labels. We evaluate the proposed method on four benchmark datasets (CAVE, TabletGaze, MPII and Gaze360) as well as web-crawled YouTube videos. Our proposed method reduces the annotation effort to as little as 2.67% with minor impact on performance, showing the potential of our model for enabling gaze estimation in 'in-the-wild' settings.
Video action segmentation aims to slice the video into several action segments. Recently, timestamp supervision has received much attention due to lower annotation costs. We find the frames near the boundaries of action segments are in the transition region between two consecutive actions and have unclear semantics, which we call ambiguous intervals. Most existing methods iteratively generate pseudo-labels for all frames in each video to train the segmentation model. However, ambiguous intervals are more likely to be assigned with noisy and incorrect pseudo-labels, which leads to performance degradation. We propose a novel framework to train the model under timestamp supervision, which includes the following two parts. First, pseudo-label ensembling generates pseudo-label sequences with ambiguous intervals, where the frames have no pseudo-labels. Second, iterative clustering iteratively propagates the pseudo-labels to the ambiguous intervals by clustering, and thus updates the pseudo-label sequences to train the model. We further introduce a clustering loss, which encourages the features of frames within the same action segment to be more compact. Extensive experiments show the effectiveness of our method.
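The clustering loss can be sketched as pulling each frame's feature toward the centroid of its pseudo-labelled segment; frames inside ambiguous intervals, which carry no pseudo-label, are simply excluded. This is an illustrative form under that assumption, not necessarily the paper's exact loss.

```python
import torch

def clustering_loss(features, pseudo_labels):
    """Make features of frames within the same action segment more compact.

    features: (T, D) frame features; pseudo_labels: (T,) segment ids, with -1
    marking ambiguous-interval frames that currently have no pseudo-label.
    """
    loss, count = features.new_zeros(()), 0
    for seg_id in pseudo_labels[pseudo_labels >= 0].unique():
        mask = pseudo_labels == seg_id
        seg_feats = features[mask]
        centroid = seg_feats.mean(dim=0, keepdim=True).detach()
        loss = loss + (seg_feats - centroid).pow(2).sum(dim=1).mean()
        count += 1
    return loss / max(count, 1)
```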