Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g. J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset which contains thousands of diverse videos with high-quality object masks, and an associated benchmark with six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison, and hence more effectively pool knowledge from different methods across different tasks. Furthermore, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/ali2500/burst-benchmark.
Transformers have become prevalent in computer vision due to their performance and flexibility in modelling complex operations. Of particular significance is the "cross-attention" operation, which allows a vector representation (e.g. of an object in an image) to be learned by attending to an arbitrarily sized set of input features. Recently, "Masked Attention" was proposed, in which a given object representation only attends to those image pixel features for which the segmentation mask of that object is active. This specialization of attention proved beneficial for various image and video segmentation tasks. In this paper, we propose another specialization of attention which enables attending over "soft-masks" (those with continuous mask probabilities instead of binary values), and is also differentiable through these mask probabilities, thus allowing the mask used for attention to be learned within the network without requiring direct loss supervision. This can be useful for several applications. Specifically, we employ our "Differentiable Soft-Masked Attention" for the task of Weakly-Supervised Video Object Segmentation (VOS), where we develop a transformer-based network for VOS which only requires a single annotated image frame for training, but can also benefit from cycle consistency training on videos with just one annotated frame. Although there is no loss supervision for masks in the unlabeled frames, the network is still able to segment objects in those frames due to our novel attention formulation. Code: https://github.com/ali2500/hodor/blob/main/main/hodor/modelling/encoder/soft_masked_attention.py
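The soft-mask biasing described in this abstract can be sketched as follows. This is a minimal illustrative formulation, not the authors' exact implementation from the linked file: the function names are assumptions, and the specific choice of adding log mask probabilities to the attention logits (so a probability of 1 leaves the logit unchanged while a probability near 0 suppresses it, and gradients flow through the probabilities) is one plausible way to realize a differentiable soft mask.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_masked_attention(q, k, v, mask_prob, eps=1e-6):
    """Cross-attention where each query's logits are biased by the log of a
    continuous soft-mask probability per pixel. Pixels with mask probability
    near zero receive near-zero attention weight, yet the operation stays
    differentiable w.r.t. mask_prob, so the mask itself can be learned.

    q: (num_queries, d), k: (num_pixels, d), v: (num_pixels, d_v)
    mask_prob: (num_queries, num_pixels) with values in [0, 1]
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)              # standard scaled dot-product
    logits = logits + np.log(mask_prob + eps)  # soft-mask bias (eps avoids log 0)
    attn = softmax(logits, axis=-1)
    return attn @ v, attn
```

With binary mask probabilities this reduces to ordinary masked attention (masked-out logits are pushed toward negative infinity); with continuous probabilities the mask acts as a learnable soft gate on where each object representation looks.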
Existing state-of-the-art methods for Video Object Segmentation (VOS) learn low-level pixel-to-pixel correspondences between frames in order to propagate object masks across a video. This requires a large amount of densely annotated video data, which is costly to annotate and largely redundant, since frames within a video are highly correlated. In light of this, we propose HODOR: a novel method that tackles VOS by effectively leveraging annotated static images for understanding object appearance and scene context. We encode object instances and scene information from an image frame into robust high-level descriptors which can then be used to re-segment those objects in different frames. As a result, HODOR achieves state-of-the-art performance on the DAVIS and YouTube-VOS benchmarks compared to existing methods trained without video annotations. Without any architectural modification, HODOR can also learn from video context around single annotated video frames by utilizing cycle consistency, whereas other methods rely on dense, temporally consistent annotations.
Despite receiving significant attention from the research community, the task of segmenting and tracking objects in monocular video still has much room for improvement. Existing works have demonstrated the efficacy of dilated and deformable convolutions for various image-level segmentation tasks. This makes it intuitive that 3D extensions of such convolutions should also yield improvements for video-level segmentation tasks; however, this aspect has not yet been thoroughly explored in the existing literature. In this paper, we propose Dynamic Dilated Convolutions (D^2Conv3D): a novel type of convolution which draws inspiration from dilated and deformable convolutions and extends them to the 3D (spatio-temporal) domain. We experimentally show that D^2Conv3D can be used to improve the performance of multiple 3D CNN architectures across multiple video segmentation benchmarks, simply by employing D^2Conv3D as a drop-in replacement for standard convolutions. We further show that D^2Conv3D outperforms trivial extensions of existing dilated and deformable convolutions to 3D. Lastly, we set a new state-of-the-art on the DAVIS 2016 Unsupervised Video Object Segmentation benchmark. Code is publicly available at https://github.com/schmiddo/d2conv3d.
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
In this paper, we reduce the complexity of approximating the correlation clustering problem from $O(m\times\left( 2+ \alpha (G) \right)+n)$ to $O(m+n)$ for any given value of $\varepsilon$ for a complete signed graph with $n$ vertices and $m$ positive edges, where $\alpha(G)$ is the arboricity of the graph. Our approach gives the same output as the original algorithm and makes it possible to implement the algorithm in a fully dynamic setting where edge sign flipping and vertex addition/removal are allowed. Constructing this index costs $O(m)$ memory and $O(m\times\alpha(G))$ time. We also study the structural properties of the non-agreement measure used in the approximation algorithm. The theoretical results are accompanied by a full set of experiments on seven real-world graphs. These results show the superiority of our index-based algorithm over the non-indexed one, with an average decrease of 34% in running time.
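For background on the problem family this abstract addresses, the classic randomized Pivot algorithm (Ailon, Charikar, and Newman's 3-approximation for correlation clustering on complete signed graphs) can be sketched as below. This is the textbook baseline, not the paper's indexed $\varepsilon$-approximation or its dynamic variant; the function name and representation of the graph as a list of positive edges are assumptions for illustration.

```python
import random

def cc_pivot(n, positive_edges, seed=0):
    """Randomized Pivot for correlation clustering: repeatedly pick a random
    unclustered vertex as pivot and place it in a cluster together with all
    of its still-unclustered positive neighbours. Returns a partition of
    the vertex set {0, ..., n-1} as a list of sets."""
    rng = random.Random(seed)
    adj = {u: set() for u in range(n)}
    for u, v in positive_edges:
        adj[u].add(v)
        adj[v].add(u)
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        pivot = rng.choice(sorted(unclustered))  # sorted for determinism under a fixed seed
        cluster = {pivot} | (adj[pivot] & unclustered)
        clusters.append(cluster)
        unclustered -= cluster
    return clusters
```

Each vertex is examined once as a pivot or neighbour, which is what makes near-linear $O(m+n)$ behaviour the natural target that index-based accelerations, like the one in this abstract, aim for.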
This paper proposes a novel self-supervised based Cut-and-Paste GAN to perform foreground object segmentation and generate realistic composite images without manual annotations. We accomplish this goal by a simple yet effective self-supervised approach coupled with the U-Net based discriminator. The proposed method extends the ability of the standard discriminators to learn not only the global data representations via classification (real/fake) but also learn semantic and structural information through pseudo labels created using the self-supervised task. The proposed method empowers the generator to create meaningful masks by forcing it to learn informative per-pixel as well as global image feedback from the discriminator. Our experiments demonstrate that our proposed method significantly outperforms the state-of-the-art methods on the standard benchmark datasets.
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with such. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotation entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
Finding and localizing conceptual changes between two images of the same scene captured at different times, in terms of the presence or removal of objects, is of great significance in special-care applications. This is mainly because the addition or removal of important objects can be harmful in some environments. As a result, there is a need to design a program that locates these differences using machine vision. The most important challenges in this problem are changes in lighting conditions and the presence of shadows in the scene; the proposed methods must therefore be robust to these challenges. In this article, a method based on deep convolutional neural networks using transfer learning is introduced, which is trained with an intelligent data-synthesis process. The results of this method are tested and presented on the dataset provided for this purpose. It is shown that the presented method is more efficient than other methods and can be used in a variety of real industrial environments.
Simulation-based falsification is a practical testing method to increase confidence that the system will meet safety requirements. Because full-fidelity simulations can be computationally demanding, we investigate the use of simulators with different levels of fidelity. As a first step, we express the overall safety specification in terms of environmental parameters and structure this safety specification as an optimization problem. We propose a multi-fidelity falsification framework using Bayesian optimization, which is able to determine at which level of fidelity we should conduct a safety evaluation in addition to finding possible instances from the environment that cause the system to fail. This method allows us to automatically switch between inexpensive, inaccurate information from a low-fidelity simulator and expensive, accurate information from a high-fidelity simulator in a cost-effective way. Our experiments on various environments in simulation demonstrate that multi-fidelity Bayesian optimization has falsification performance comparable to single-fidelity Bayesian optimization but with much lower cost.
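The cost-saving idea in this abstract — spend cheap low-fidelity simulation first and reserve the expensive high-fidelity simulator for promising candidates — can be caricatured as below. This toy sketch uses random sampling with a fixed screening threshold rather than the paper's Bayesian-optimization framework; all names, the robustness-function interface, and the threshold are illustrative assumptions.

```python
def falsify_multifidelity(lo_rob, hi_rob, sampler, n_trials=200, screen=0.1):
    """Toy multi-fidelity falsification loop. `lo_rob` and `hi_rob` map an
    environment parameter to a robustness value (negative = the safety
    specification is violated). The cheap low-fidelity estimate screens
    candidates; only those below `screen` pay for a high-fidelity check.
    Returns (confirmed counterexamples, number of high-fidelity calls)."""
    counterexamples, hi_calls = [], 0
    for _ in range(n_trials):
        theta = sampler()
        if lo_rob(theta) < screen:       # cheap, inaccurate screen
            hi_calls += 1
            if hi_rob(theta) < 0.0:      # expensive, accurate confirmation
                counterexamples.append(theta)
    return counterexamples, hi_calls
```

The multi-fidelity Bayesian optimization in the abstract replaces both the random sampler and the fixed threshold with a surrogate model that chooses, per query, which fidelity is worth its cost; the sketch only shows why the two-tier structure reduces high-fidelity simulator calls.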