Video frame interpolation (VFI) algorithms have improved considerably in recent years owing to unprecedented progress in data-driven algorithms and their implementations. Recent research has introduced advanced motion estimation and novel warping methods to address challenging VFI scenarios. However, no published VFI work has considered the spatially non-uniform characteristics of the interpolation error (IE). This work introduces such a solution. By closely examining the correlation between optical flow and IE, the paper proposes novel error prediction metrics that partition the middle frame into distinct regions corresponding to different IE levels. Building on this IE-driven segmentation, and through the use of novel error-controlled loss functions, it introduces an ensemble of spatially adaptive interpolation units that progressively processes and integrates the segmented regions. This spatial ensemble yields an effective and computationally attractive VFI solution. Extensive experiments on popular video interpolation benchmarks indicate that the proposed solution outperforms the current state-of-the-art (SOTA) in applications of current interest.
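To make the IE-driven segmentation concrete, below is a minimal sketch (not the authors' code) of how a middle frame could be partitioned into regions of increasing predicted interpolation error, using optical-flow magnitude as a cheap proxy; the function name, thresholds, and the magnitude-based metric itself are assumptions standing in for the paper's learned flow-IE correlation.

```python
import torch

def error_region_masks(flow, thresholds=(2.0, 8.0)):
    """Partition the middle frame into regions of increasing predicted
    interpolation error (IE), using optical-flow magnitude as a proxy.

    flow:       (B, 2, H, W) optical flow between the input frames.
    thresholds: placeholder flow-magnitude cut points separating IE levels.
    Returns a (B, len(thresholds) + 1, H, W) stack of boolean region masks.
    """
    mag = torch.linalg.vector_norm(flow, dim=1)           # (B, H, W)
    bounds = (0.0, *thresholds, float("inf"))
    masks = [(mag >= lo) & (mag < hi)                     # one mask per IE level
             for lo, hi in zip(bounds[:-1], bounds[1:])]
    return torch.stack(masks, dim=1)

# Each spatially adaptive unit in the ensemble would then process (and be
# supervised on) one of these regions.
masks = error_region_masks(torch.randn(1, 2, 240, 426) * 4.0)
```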
Few-shot class-incremental learning (FSCIL) has been proposed with the aim of enabling a deep learning system to incrementally learn new classes with limited data. Recently, a pioneering work claimed that the commonly used replay-based method in class-incremental learning (CIL) is ineffective and thus not preferred for FSCIL. If true, this would have a significant impact on the FSCIL field. In this paper, we show through empirical results that adopting data replay is surprisingly favorable. However, storing and replaying old data can raise privacy concerns. To address this issue, we propose instead to use data-free replay, which synthesizes data with a generator without accessing real data. Observing the effectiveness of uncertain data for knowledge distillation, we impose entropy regularization in the generator training to encourage more uncertain examples. Moreover, we propose to relabel the generated data with one-hot-like labels. This modification allows the network to learn by solely minimizing the cross-entropy loss, which mitigates the problem of balancing different objectives in the conventional knowledge distillation approach. Finally, we present extensive experimental results and analysis on CIFAR-100, miniImageNet, and CUB-200 to demonstrate the effectiveness of our proposed method.
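The two key ingredients, entropy regularization in generator training and one-hot relabeling of generated data, can be sketched as follows; this is a hedged illustration, with `old_model`, the loss weighting, and all function names being assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def generator_entropy_reward(old_model, fake_images):
    """Entropy of the frozen old model's predictions on synthesized images;
    maximized during generator training so replay samples are uncertain."""
    p = F.softmax(old_model(fake_images), dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

def relabel_one_hot(old_model, fake_images, num_old_classes):
    """Re-label generated data with one-hot-like targets (the old model's
    argmax), so replay training reduces to plain cross-entropy instead of a
    multi-objective distillation loss."""
    with torch.no_grad():
        hard = old_model(fake_images).argmax(dim=1)
    return F.one_hot(hard, num_old_classes).float()

# Incremental step (sketch): cross-entropy on relabeled synthetic data plus
# cross-entropy on the few new-class examples.
# loss = F.cross_entropy(new_model(fake), fake_labels.argmax(1)) \
#      + F.cross_entropy(new_model(x_new), y_new)
```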
Recent self-supervised video representation learning methods have found success by exploring essential properties of videos, e.g., speed, temporal order, etc. This work exploits an essential yet under-explored property of videos, \textit{video continuity}, to obtain supervision signals for self-supervised representation learning. Specifically, we formulate three novel continuity-related pretext tasks, i.e., continuity justification, discontinuity localization, and missing section approximation, which jointly supervise a shared backbone for video representation learning. This self-supervised approach, termed the Continuity Perception Network (CPNet), solves the three tasks together and encourages the backbone network to learn local and long-range motion and context representations. It outperforms prior arts on multiple downstream tasks, such as action recognition, video retrieval, and action localization. In addition, video continuity is complementary to other coarse-grained video properties for representation learning, and integrating the proposed pretext tasks into prior arts yields substantial performance gains.
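A hedged sketch of how one training example for the three continuity pretext tasks might be constructed: deleting a section from a clip yields labels for continuity justification (is the clip continuous?), discontinuity localization (where is the cut?), and missing section approximation (what was removed?). All names and the sampling scheme here are assumptions.

```python
import torch

def make_discontinuous_clip(video, clip_len=16, gap=8):
    """Build one training example for the continuity pretext tasks.

    video: (T, C, H, W) decoded frames, T >= clip_len + gap.
    Returns:
      clip      - clip_len frames with `gap` frames removed at a random point
      is_cont   - 0 here (an unbroken clip would carry label 1)
      break_idx - frame index of the discontinuity (localization target)
      missing   - the removed section (approximation target)
    """
    T = video.shape[0]
    start = torch.randint(0, T - clip_len - gap + 1, (1,)).item()
    break_idx = torch.randint(1, clip_len, (1,)).item()   # where the cut lands
    a = video[start : start + break_idx]                  # frames before the cut
    missing = video[start + break_idx : start + break_idx + gap]
    b = video[start + break_idx + gap : start + clip_len + gap]
    clip = torch.cat([a, b], dim=0)                       # (clip_len, C, H, W)
    return clip, 0, break_idx, missing

clip, is_cont, where, missing = make_discontinuous_clip(torch.randn(64, 3, 112, 112))
```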
Tensor robust principal component analysis (RPCA), which seeks to separate a low-rank tensor from its sparse corruptions, has become crucial in data science and machine learning, where tensor structures are increasingly prevalent. While powerful, existing tensor RPCA algorithms can be difficult to use in practice, as their performance can be sensitive to the choice of additional hyperparameters, which are not straightforward to tune. In this paper, we describe a fast and simple self-supervised model for tensor RPCA that uses deep unfolding and learns only four hyperparameters. Despite its simplicity, our model removes the need for ground-truth labels while maintaining competitive or even better performance compared to supervised deep unfolding. Furthermore, our model can operate in extreme data-starved scenarios. We demonstrate these claims on a mix of synthetic data and real-world tasks, comparing performance against previously studied supervised deep unfolding methods and Bayesian optimization baselines.
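As a rough illustration of deep unfolding with only a few learned scalars, the sketch below unrolls alternating singular-value thresholding (low-rank part) and soft thresholding (sparse part); it uses the matrix case for brevity, whereas the paper operates on tensors, and the parameterization is an assumption rather than the paper's four-hyperparameter design.

```python
import torch
import torch.nn as nn

class UnfoldedRPCA(nn.Module):
    """Minimal deep-unfolded RPCA sketch: K alternating iterations of
    singular-value thresholding (low-rank part L) and elementwise soft
    thresholding (sparse part S), with only two learned scalars."""

    def __init__(self, iters=10):
        super().__init__()
        self.iters = iters
        self.lam = nn.Parameter(torch.tensor(0.1))   # sparse threshold
        self.mu = nn.Parameter(torch.tensor(1.0))    # singular-value threshold

    @staticmethod
    def svt(X, tau):
        # Singular-value thresholding: shrink singular values by tau.
        U, s, Vh = torch.linalg.svd(X, full_matrices=False)
        return U @ torch.diag(torch.relu(s - tau)) @ Vh

    def forward(self, Y):
        L = torch.zeros_like(Y)
        S = torch.zeros_like(Y)
        for _ in range(self.iters):
            L = self.svt(Y - S, torch.relu(self.mu))            # low-rank update
            R = Y - L
            S = torch.sign(R) * torch.relu(R.abs() - self.lam)  # soft threshold
        return L, S

L, S = UnfoldedRPCA()(torch.randn(50, 50))
```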
Length extrapolation is a desirable property that permits training a transformer language model on short sequences and retaining similar perplexities when the model is tested on substantially longer sequences. ALiBi, a relative positional embedding mechanism applied to the transformer self-attention matrix, demonstrates the length extrapolation property and has the widest usage to date. In this paper, we show that ALiBi surprisingly does not utilize tokens further than the training sequence length, which can be explained by its implicit windowed attention effect that aligns the receptive field between the training and testing stages. Inspired by ALiBi and the receptive-field alignment hypothesis, we propose another transformer positional embedding design named~\textbf{Sandwich} that uses information beyond the training sequence length; it is a greatly simplified formulation of the earliest proposed sinusoidal positional embedding. Finally, we show that both ALiBi and Sandwich enable efficient inference thanks to their implicit windowed attention effect.
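ALiBi's bias itself is simple enough to state in a few lines; the sketch below computes it and shows why heavily penalized distant positions amount to an implicit attention window (the formulation follows the published ALiBi recipe; the surrounding attention code is omitted).

```python
import torch

def alibi_bias(n_heads, seq_len):
    """ALiBi bias: head h adds -slope_h * (i - j) to the logit for query i
    attending to key j (j <= i). Geometric head-specific slopes mean distant
    keys are penalized so strongly that each head effectively attends within
    a window -- the implicit windowed attention effect noted above."""
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads)
                           for h in range(n_heads)])
    pos = torch.arange(seq_len)
    dist = (pos[None, :] - pos[:, None]).clamp(max=0)  # j - i; future entries -> 0
    return slopes[:, None, None] * dist[None]          # (heads, q_len, k_len)

bias = alibi_bias(n_heads=8, seq_len=1024)
# logits = q @ k.transpose(-2, -1) / d ** 0.5 + bias, then causal mask + softmax
```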
Privacy policies provide individuals with information about their rights and how their personal information is handled. Natural language understanding (NLU) technologies can help individuals and practitioners better understand the privacy practices described in lengthy and complex documents. However, existing efforts that use NLU technologies are limited in that each processes the language for a single task targeting certain privacy practices. To this end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating privacy policy language understanding across various tasks. We also collect a large corpus of privacy policies to enable privacy-policy domain-specific language model pre-training. We demonstrate that domain-specific pre-training offers performance improvements across all tasks. We release the benchmark to encourage future research in this domain.
Well-designed prompts can guide text-to-image models to generate amazing images. However, the performant prompts are often model-specific and misaligned with user input. Instead of laborious human engineering, we propose prompt adaptation, a general framework that automatically adapts original user input to model-preferred prompts. Specifically, we first perform supervised fine-tuning with a pretrained language model on a small collection of manually engineered prompts. Then we use reinforcement learning to explore better prompts. We define a reward function that encourages the policy to generate more aesthetically pleasing images while preserving the original user intentions. Experimental results on Stable Diffusion show that our method outperforms manual prompt engineering in terms of both automatic metrics and human preference ratings. Moreover, reinforcement learning further boosts performance, especially on out-of-domain prompts. The pretrained checkpoints are available at https://aka.ms/promptist. The demo can be found at https://aka.ms/promptist-demo.
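A hedged sketch of such a reward function: an aesthetic-gain term compares images generated from the adapted versus the original prompt, and a relevance term ties the image back to the user's intent. Here `generate`, `clip_sim`, `aesthetic_score`, and the weighting are placeholders, not the paper's exact components.

```python
# Hedged sketch; `clip_sim` and `aesthetic_score` stand in for a text-image
# similarity model and a learned aesthetic predictor, both assumptions here.
def prompt_reward(user_prompt, adapted_prompt, generate, clip_sim,
                  aesthetic_score, alpha=1.0):
    """Reward = aesthetic gain of images from the adapted prompt over the
    original prompt, plus a relevance term keeping the generated image
    faithful to the user's original intent."""
    img_adapted = generate(adapted_prompt)
    img_original = generate(user_prompt)
    aesthetic_gain = aesthetic_score(img_adapted) - aesthetic_score(img_original)
    relevance = clip_sim(user_prompt, img_adapted)   # intent preservation
    return aesthetic_gain + alpha * relevance
```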
Open-retrieval conversational machine reading comprehension (OCMRC) simulates real-life conversational interaction scenes. Machines are required to make a "Yes/No/Inquire" decision, or to generate a follow-up question when the decision is "Inquire", based on retrieved rule texts, the user scenario, the user question, and the dialogue history. Recent studies have explored methods to reduce the information gap between decision-making and question generation and thus improve generation performance. However, the information gap persists because these pipelines remain split into three stages: decision-making, span extraction, and question rephrasing. Decision-making and generation reason separately, and the entailment reasoning used in decision-making is hard to share across all stages. To tackle this problem, we propose a novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to bridge the information gap between decision-making and generation in a global understanding manner. Extensive experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on the OR-ShARC benchmark.
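One way a one-stage model could fuse all inputs is to serialize them into a single sequence so that decision-making and question generation share one encoder-decoder pass; the sketch below illustrates this idea with made-up segment markers, which should not be read as EFT's actual serialization.

```python
def build_eft_input(rule_texts, scenario, question, history):
    """Flatten retrieved rules, user scenario, user question, and dialogue
    history into one T5-style input so a single encoder-decoder pass can both
    make the Yes/No/Inquire decision and generate a follow-up question.
    The segment markers here are illustrative assumptions."""
    parts = [f"rule: {r}" for r in rule_texts]
    parts += [f"scenario: {scenario}", f"question: {question}"]
    parts += [f"history: {turn}" for turn in history]
    return " ".join(parts)

src = build_eft_input(
    rule_texts=["You qualify if you worked 26 weeks."],
    scenario="I was employed for two years.",
    question="Do I qualify for the benefit?",
    history=["Q: Did you work full time? A: Yes."],
)
# target (sketch): "Inquire: Did you work at least 26 weeks?"
# -- the decision token and the generated question share one output sequence.
```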
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Image-text retrieval (ITR) is a challenging task in the field of multimodal information processing due to the semantic gap between different modalities. In recent years, researchers have made great progress in exploring accurate alignment between image and text. However, existing works mainly focus on the fine-grained alignment between image regions and sentence fragments, which ignores the guiding significance of context background information. In fact, integrating local fine-grained information and global context background information can provide more semantic clues for retrieval. In this paper, we propose a novel Hierarchical Graph Alignment Network (HGAN) for image-text retrieval. First, to capture comprehensive multimodal features, we construct feature graphs for the image and text modalities respectively. Then, a multi-granularity shared space is established with a designed Multi-granularity Feature Aggregation and Rearrangement (MFAR) module, which enhances the semantic correspondence between local and global information and obtains more accurate feature representations for the image and text modalities. Finally, the final image and text features are further refined through three-level similarity functions to achieve hierarchical alignment. To validate the proposed model, we perform extensive experiments on the MS-COCO and Flickr30K datasets. Experimental results show that the proposed HGAN outperforms the state-of-the-art methods on both datasets, which demonstrates the effectiveness and superiority of our model.
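To illustrate what a three-level (hierarchical) similarity could look like, the sketch below combines a local region-word score, a global image-sentence score, and a fused score; the concrete similarity functions, pooling, and weights in HGAN differ, so everything here is an assumption.

```python
import torch
import torch.nn.functional as F

def hierarchical_similarity(local_img, local_txt, glob_img, glob_txt,
                            weights=(1.0, 1.0, 1.0)):
    """Combine three alignment levels into one retrieval score:
    local (region-to-word), global (image-to-sentence), and a fused
    local+global level.

    local_img: (R, D) region features    local_txt: (W, D) word features
    glob_img:  (D,)  image feature       glob_txt:  (D,)  sentence feature
    """
    sim = F.normalize(local_img, dim=-1) @ F.normalize(local_txt, dim=-1).T
    s_local = sim.max(dim=1).values.mean()        # best word per region, averaged
    s_global = F.cosine_similarity(glob_img, glob_txt, dim=0)
    fused_i = F.normalize(local_img.mean(0) + glob_img, dim=0)
    fused_t = F.normalize(local_txt.mean(0) + glob_txt, dim=0)
    s_fused = fused_i @ fused_t
    w = torch.tensor(weights)
    return w[0] * s_local + w[1] * s_global + w[2] * s_fused

score = hierarchical_similarity(torch.randn(36, 256), torch.randn(12, 256),
                                torch.randn(256), torch.randn(256))
```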