Continual deep learning is an emerging field in which much progress has been made. However, most approaches have so far only been evaluated on the task of image classification, which is of little relevance to the field of intelligent vehicles. Only recently have methods for class-incremental semantic segmentation been proposed, and all of them are based on some form of knowledge distillation. Replay-based methods, which are commonly used for object recognition in a continual setting, have not yet been investigated. At the same time, although unsupervised domain adaptation for semantic segmentation has gained a lot of traction, domain-incremental learning in a continual setting remains understudied. The goal of our work is therefore to evaluate and adapt established solutions for continual object recognition to the semantic segmentation task, and to provide baseline methods and evaluation protocols for the task of continual semantic segmentation. First, we introduce evaluation protocols for class- and domain-incremental segmentation and analyze selected approaches. We show that the nature of the semantic segmentation task changes which methods are most effective in mitigating forgetting compared to image classification. In particular, in class-incremental learning knowledge distillation proves to be a vital tool, whereas in domain-incremental learning replay methods are the most effective approach.
Continual learning for Semantic Segmentation (CSS) is a fast-emerging field in which the capabilities of a segmentation model are incrementally improved by learning new classes or new domains. A central challenge in continual learning is overcoming the effects of catastrophic forgetting, which refers to the sudden drop in accuracy on previously learned tasks after the model is trained on new classes or domains. In continual classification this challenge is commonly overcome by replaying a small selection of samples from previous tasks, but replay has rarely been considered in CSS. Therefore, we investigate the influence of various replay strategies for semantic segmentation and evaluate them in class- and domain-incremental settings. Our findings suggest that in a class-incremental setting, it is critical to achieve a uniform distribution of the different classes in the buffer to avoid a bias towards newly learned classes. In the domain-incremental setting, it is most effective to select buffer samples uniformly from the distribution of learned feature representations or by choosing samples with median entropy. Finally, we observe that effective sampling methods help to decrease the representation shift in the early layers, which is a major cause of forgetting in domain-incremental learning.
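One of the buffer-selection heuristics discussed above, picking samples whose prediction entropy lies near the median, is compact enough to sketch in a few lines. The following is a minimal illustration under our own assumptions; shapes, names, and the exact ranking rule are not taken from the paper's code:

```python
import numpy as np

def median_entropy_selection(prob_maps, buffer_size):
    """Pick replay candidates whose mean prediction entropy is closest to
    the median entropy over all candidates (illustrative sketch).

    prob_maps: (N, C, H, W) softmax outputs for N candidate images.
    Returns the indices of the selected images.
    """
    eps = 1e-12
    # per-image mean of the pixel-wise prediction entropy
    entropy = -(prob_maps * np.log(prob_maps + eps)).sum(axis=1).mean(axis=(1, 2))
    median = np.median(entropy)
    # rank images by how close their entropy is to the median
    order = np.argsort(np.abs(entropy - median))
    return order[:buffer_size]
```

Selecting near-median-entropy samples favors "typical" images over trivially easy or pathologically hard ones, which is one plausible reading of the selection criterion above.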
Class-incremental learning for semantic segmentation (CISS) is presently a highly researched field which aims at updating a semantic segmentation model by sequentially learning new semantic classes. A major challenge in CISS is overcoming the effects of catastrophic forgetting, which describes the sudden drop of accuracy on previously learned classes after the model is trained on a new set of classes. Despite recent advances in mitigating catastrophic forgetting, the underlying causes of forgetting specifically in CISS are not well understood. Therefore, in a set of experiments and representational analyses, we demonstrate that the semantic shift of the background class and a bias towards new classes are the major causes of forgetting in CISS. Furthermore, we show that both causes mostly manifest themselves in the deeper classification layers of the network, while the early layers of the model are not affected. Finally, we demonstrate how both causes can be effectively mitigated by utilizing the information contained in the background, with the help of knowledge distillation and an unbiased cross-entropy loss.
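The unbiased cross-entropy mentioned at the end can be illustrated per pixel: for background-labeled pixels, the probabilities of the background and all old classes are aggregated, so the model is not pushed to suppress old-class responses. A minimal sketch under our own assumptions (background at index 0, per-pixel logits; not the authors' implementation):

```python
import numpy as np

def unbiased_ce(logits, target, old_classes):
    """Per-pixel unbiased cross-entropy sketch.

    logits: (C,) logits for one pixel over all classes seen so far.
    target: ground-truth label for this pixel (0 = background).
    old_classes: indices of classes learned in previous steps.
    """
    # numerically stable softmax
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    if target == 0:
        # background pixel: merge background with all old-class probabilities
        prob = p[[0] + list(old_classes)].sum()
    else:
        prob = p[target]
    return -np.log(prob)
```

With this aggregation, a pixel of a forgotten-but-unlabeled old object that the model still recognizes incurs almost no loss, whereas a naive cross-entropy against the background label would heavily penalize it.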
Deep neural networks suffer from the major limitation of catastrophically forgetting old tasks when learning new ones. In this paper, we focus on class-incremental continual learning in semantic segmentation, where new categories are made available over time while previous training data is not retained. The proposed continual learning scheme shapes the latent space to reduce forgetting whilst improving the recognition of novel classes. Our framework is driven by three novel components which can also be effortlessly combined with existing techniques. First, prototypes matching enforces latent space consistency on old classes, constraining the encoder to produce similar latent representations for previously seen classes in subsequent steps. Second, features sparsification makes room in the latent space to accommodate novel classes. Finally, contrastive learning clusters features according to their semantics while tearing apart those of different classes. Extensive evaluation on the Pascal VOC2012 and ADE20K datasets demonstrates the effectiveness of our approach, significantly outperforming state-of-the-art methods.
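The prototypes-matching component can be pictured as pulling the current centroid of each old class towards the centroid stored at the previous step. A rough sketch under our own assumptions; the paper's actual distance function and prototype update may differ:

```python
import numpy as np

def prototypes_matching_loss(features, labels, old_prototypes):
    """Pull current old-class centroids towards stored prototypes.

    features: (N, D) per-pixel (or per-region) feature vectors.
    labels: (N,) class labels for those vectors.
    old_prototypes: dict mapping old-class id -> (D,) stored prototype.
    """
    loss, count = 0.0, 0
    for cls, proto in old_prototypes.items():
        mask = labels == cls
        if mask.any():
            current = features[mask].mean(axis=0)  # current class centroid
            loss += np.sum((current - proto) ** 2)  # squared L2 distance
            count += 1
    return loss / max(count, 1)
```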
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
Although unsupervised domain adaptation methods have achieved remarkable performance in semantic scene segmentation in visual perception for self-driving cars, these approaches remain impractical in real-world use cases. In practice, the segmentation models may encounter new data that have not been seen yet. Also, the previous training data of the segmentation models may be inaccessible due to privacy concerns. Therefore, to address these problems, in this work, we propose a Continual Unsupervised Domain Adaptation (CONDA) approach that allows the model to continuously learn and adapt with respect to the presence of the new data. Moreover, our proposed approach is designed without the requirement of accessing previous training data. To avoid the catastrophic forgetting problem and maintain the performance of the segmentation models, we present a novel Bijective Maximum Likelihood loss to impose the constraint of predicted segmentation distribution shifts. The experimental results on the benchmark of continual unsupervised domain adaptation demonstrate the advanced performance of the proposed CONDA method.
Class-Incremental Learning is a challenging problem in machine learning that aims to extend previously trained neural networks with new classes. This is especially useful if the system is able to classify new objects despite the original training data being unavailable. While the semantic segmentation problem has received less attention than classification, it poses distinct problems and challenges since previous and future target classes can be unlabeled in the images of a single increment. In this case, the background, past and future classes are correlated and there exists a background shift. In this paper, we address the problem of how to model unlabeled classes while avoiding spurious feature clustering of future uncorrelated classes. We propose to use Evidential Deep Learning to model the evidence of the classes as a Dirichlet distribution. Our method factorizes the problem into a separate foreground class probability, calculated by the expected value of the Dirichlet distribution, and an unknown class (background) probability corresponding to the uncertainty of the estimate. In our novel formulation, the background probability is implicitly modeled, avoiding the feature space clustering that comes from forcing the model to output a high background score for pixels that are not labeled as objects. Experiments on the incremental Pascal VOC and ADE20k benchmarks show that our method is superior to the state of the art, especially when repeatedly learning new classes with an increasing number of increments.
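The Dirichlet factorization described above is compact enough to sketch directly: non-negative class evidence defines the concentration parameters, the foreground probabilities are the Dirichlet mean, and the leftover uncertainty mass plays the role of the background probability. A minimal illustration (not the authors' code):

```python
import numpy as np

def evidential_probs(evidence):
    """Evidential parameterization sketch for K known foreground classes.

    evidence: (..., K) non-negative class evidence e_k.
    Returns (foreground, background) where foreground is the Dirichlet
    mean alpha_k / S and background is the uncertainty mass K / S.
    """
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.shape[-1]                 # number of known classes
    alpha = evidence + 1.0                 # Dirichlet concentration alpha_k
    s = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength S
    foreground = alpha / s                 # expected class probabilities
    background = k / s.squeeze(-1)         # uncertainty -> background prob
    return foreground, background
```

With zero evidence the background probability is 1, and it shrinks as evidence for known classes accumulates, which is the "implicitly modeled background" behavior the abstract describes.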
Unsupervised Domain Adaptation (UDA) aims at reducing the domain gap between training and testing data and is, in most cases, carried out in an offline manner. However, domain changes may occur continuously and unpredictably during deployment (e.g., sudden weather changes). In such conditions, deep neural networks witness dramatic drops in accuracy, and offline adaptation may not be enough to counter them. In this paper, we tackle Online Domain Adaptation (OnDA) for semantic segmentation. We design a pipeline that is robust to domain shifts, whether gradual or sudden, and we evaluate it on rainy and foggy scenarios. Our experiments show that our framework can effectively adapt to new domains during deployment, while not being affected by catastrophic forgetting of the previous domains.
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny ImageNet, large-scale unbalanced iNaturalist, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
In recent years, tremendous progress has been made in the field of semantic segmentation. However, one remaining challenging problem is that segmentation models do not generalize to unseen domains. To overcome this problem, one either has to label large amounts of data covering the whole variety of domains, which is often infeasible in practice, or apply unsupervised domain adaptation (UDA), which only requires labeled source data. In this work, we focus on UDA and additionally address the case of adapting not only to a single domain, but to a sequence of target domains. This requires mechanisms preventing the model from forgetting its previously learned knowledge. To adapt the segmentation model to a target domain, we follow the idea of utilizing light-weight style transfer to convert the style of labeled source images into the style of the target domain, while retaining the source content. To mitigate the distributional shift between the source and the target domain, the model is fine-tuned on the transferred source images in a second step. Existing light-weight style transfer approaches relying on adaptive instance normalization (AdaIN) or Fourier transformation still lack performance and do not substantially improve upon common data augmentation, such as color jittering. The reason for this is that these methods do not focus on region- or class-specific differences, but mainly capture the most salient style. Therefore, we propose a simple and light-weight framework that incorporates two class-conditional AdaIN layers. To extract the class-specific target moments needed for the transfer layers, we use unfiltered pseudo-labels, which we show to be an effective approximation compared to real labels. We extensively validate our approach (CACE) on a synthetic sequence and further propose a challenging sequence consisting of real domains. CACE outperforms existing methods visually and quantitatively.
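The class-conditional AdaIN transfer can be sketched on flattened features: each class region is normalized with its source statistics and re-styled with target-domain class moments. All shapes and names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def class_conditional_adain(feat, labels, target_moments):
    """Class-conditional AdaIN sketch.

    feat: (N, C) flattened feature vectors of one source image.
    labels: (N,) (pseudo-)labels assigning each vector to a class.
    target_moments: dict class -> (mean (C,), std (C,)) target moments.
    """
    out = feat.copy()
    for cls, (t_mean, t_std) in target_moments.items():
        mask = labels == cls
        if not mask.any():
            continue
        region = feat[mask]                  # features of this class region
        mu = region.mean(axis=0)
        sigma = region.std(axis=0) + 1e-5
        # normalize with source class moments, re-style with target moments
        out[mask] = (region - mu) / sigma * t_std + t_mean
    return out
```

Compared to plain AdaIN over the whole image, the per-class moments let, e.g., road and vegetation pixels be re-styled independently, which is the region-specific behavior motivated above.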
Continual Learning is a step towards lifelong intelligence where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup with clear task boundaries and unlimited computational budget. This work explores Online Domain-Incremental Continual Segmentation (ODICS), a real-world problem that arises in many applications, e.g., autonomous driving. In ODICS, the model is continually presented with batches of densely labeled images from different domains; computation is limited and no information about the task boundaries is available. In autonomous driving, this may correspond to the realistic scenario of training a segmentation model over time on a sequence of cities. We analyze several existing continual learning methods and show that they do not perform well in this setting despite working well in class-incremental segmentation. We propose SimCS, a parameter-free method complementary to existing ones that leverages simulated data as a continual learning regularizer. Extensive experiments show consistent improvements over different types of continual learning methods that use regularizers and even replay.
Semantic segmentation networks are usually pre-trained and not updated during deployment. As a consequence, misclassifications commonly occur if the distribution of the training data deviates from the one encountered during the robot's operation. We propose to mitigate this problem by adapting the neural network to the robot's environment during deployment, without any need for external supervision. Leveraging complementary data representations, we generate a supervision signal by probabilistically accumulating consecutive 2D semantic predictions in a volumetric 3D map. We then retrain the network on renderings of the accumulated semantic map, effectively resolving ambiguities and enforcing multi-view consistency through the 3D representation. To preserve the previously learned knowledge while performing network adaptation, we employ a continual learning strategy based on experience replay. Through extensive experimental evaluation, we show successful adaptation to real-world indoor scenes both on the ScanNet dataset and on in-house data recorded with an RGB-D sensor. Our method increases the segmentation performance on average by 11.8% compared to the fixed pre-trained neural network, while effectively retaining knowledge from the pre-training dataset.
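The probabilistic accumulation of consecutive 2D predictions can be approximated by summing per-voxel log-probabilities, a standard Bayesian fusion scheme; the paper's exact update rule may differ. A minimal sketch:

```python
import numpy as np

def accumulate_semantics(voxel_log_probs, predictions):
    """Fuse consecutive 2D semantic predictions into a voxel map.

    voxel_log_probs: (V, C) accumulated per-voxel class log-probabilities.
    predictions: list of (V, C) softmax outputs, each already projected
    onto the V voxels they observe.
    Returns the normalized per-voxel class probabilities after fusion.
    """
    for pred in predictions:
        # independent observations multiply, i.e. log-probs add
        voxel_log_probs += np.log(pred + 1e-12)
    # normalize back to per-voxel class probabilities (stable softmax)
    p = np.exp(voxel_log_probs - voxel_log_probs.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)
```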
For the semantic segmentation of images, state-of-the-art deep neural networks (DNNs) achieve high segmentation accuracy if the task is restricted to a closed set of classes. However, as of now, DNNs have a limited ability to operate in an open world, where they are tasked to identify pixels belonging to unknown objects and eventually to learn novel classes incrementally. Humans have the capability to say: I don't know what that is, but I've already seen something like that. Therefore, it is desirable to perform such an incremental learning task in an unsupervised fashion. We introduce a method where unknown objects are clustered based on visual similarity. Those clusters are utilized to define new classes and serve as training data for unsupervised incremental learning. More precisely, the connected components of a predicted semantic segmentation are assessed by a segmentation quality estimate. Connected components with a low estimated prediction quality are candidates for a subsequent clustering. Additionally, the component-wise quality assessment allows for obtaining predicted segmentation masks for those regions of the image where a potentially unknown object occurs. The respective pixels of such masks are pseudo-labeled and afterwards used for re-training the DNN, i.e., without the use of ground truth generated by humans. In our experiments we demonstrate that, without access to ground truth and even with few data, a DNN's class space can be extended by a novel class, achieving considerable segmentation accuracy.
Despite significant advances, the performance of state-of-the-art continual learning approaches hinges on the unrealistic scenario of fully labeled data. In this paper, we tackle this challenge and propose an approach for continual semi-supervised learning -- a setting where not all the data samples are labeled. An underlying issue in this scenario is the model forgetting representations of unlabeled data and overfitting the labeled ones. We leverage the power of nearest-neighbor classifiers to non-linearly partition the feature space and learn a strong representation for the current task, as well as distill relevant information from previous tasks. We perform a thorough experimental evaluation and show that our method outperforms all the existing approaches by large margins, setting a strong state of the art on the continual semi-supervised learning paradigm. For example, on CIFAR100 we surpass several others even when using at least 30 times less supervision (0.8% vs. 25% of annotations).
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally; in fact, they are mostly used only as a pre-training phase over IID data. In this work we investigate self-supervised methods in continual learning regimes without any additional memory or replay. To prevent forgetting of previous knowledge, we propose the use of functional regularization. We show that naive functional regularization, also known as feature distillation, leads to low plasticity and therefore severely limits continual learning performance. To address this problem, we propose projected functional regularization, where a separate projection network ensures that the newly learned feature space preserves information of the previous feature space while allowing for the learning of new features. This lets us prevent forgetting while maintaining the plasticity of the learner. Evaluation against other incremental learning approaches applied to self-supervision demonstrates that our method obtains competitive performance in different scenarios and on multiple datasets.
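The difference between plain feature distillation and the projected variant is that a learnable projector sits between the new features and the old ones, so the new feature space may change as long as the old information remains recoverable through the projection. A toy sketch, with a plain weight matrix standing in for the projection network (all names are ours):

```python
import numpy as np

def projected_feature_regularization(f_new, f_old, proj):
    """Projected functional regularization sketch.

    f_new: (N, D) features from the current model.
    f_old: (N, D) features from the frozen previous model.
    proj:  (D, D) weight matrix standing in for the projection network.
    Only the *projected* new features must match the old ones.
    """
    projected = f_new @ proj                  # map new space onto old space
    return np.mean((projected - f_old) ** 2)  # distillation distance
```

If the projector can absorb a rotation or permutation of the feature space, the regularizer is zero even though the raw features changed, which is exactly the extra plasticity the abstract claims over naive feature distillation.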
Continually learning to segment more and more types of image regions is a desired capability for many intelligent systems. However, such continual semantic segmentation suffers from the same catastrophic forgetting issue as in continual classification learning. While multiple knowledge distillation strategies originally for continual classification have been well adapted to continual semantic segmentation, they only consider transferring old knowledge based on the outputs from one or more layers of deep fully convolutional networks. Different from existing solutions, this study proposes to transfer a new type of information relevant to knowledge, i.e. the relationships between elements (e.g., pixels or small local regions) within each image which can capture both within-class and between-class knowledge. The relationship information can be effectively obtained from the self-attention maps in a Transformer-style segmentation model. Considering that pixels belonging to the same class in each image often share similar visual properties, a class-specific region pooling is applied to provide more efficient relationship information for knowledge transfer. Extensive evaluations on multiple public benchmarks support that the proposed self-attention transfer method can further effectively alleviate the catastrophic forgetting issue, and its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
Environment perception in autonomous driving vehicles often heavily relies on deep neural networks (DNNs), which are subject to domain shifts, leading to greatly decreased performance during DNN deployment. Usually, this problem is addressed by unsupervised domain adaptation (UDA) approaches trained either simultaneously on source and target domain datasets, or even source-free only on target data, in an offline fashion. In this work, we further expand a source-free UDA approach to a continual and therefore online-capable UDA on a single-image basis for semantic segmentation. Accordingly, our method only requires the pre-trained model from the supplier (trained in the source domain) and the current (unlabeled target domain) camera image. Our method Continual BatchNorm Adaptation (CBNA) modifies the source domain statistics in the batch normalization layers, using target domain images in an unsupervised fashion, which yields consistent performance improvements during inference. Thereby, in contrast to existing works, our approach can be applied to improve a DNN continuously on a single-image basis during deployment without access to source data, without algorithmic delay, and with nearly no computational overhead. We show the consistent effectiveness of our method across a wide variety of source/target domain settings for semantic segmentation. Code is available at https://github.com/ifnspaml/CBNA.
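The core of a batch-norm statistics adaptation like the one described can be sketched as mixing the stored source statistics with those of the current target image before normalizing. The mixing weight below is an assumed hyperparameter and the rule a simplification, not the paper's exact formulation:

```python
import numpy as np

def cbna_forward(x, source_mean, source_var, alpha=0.1, eps=1e-5):
    """Continual batch-norm adaptation sketch for one BN layer.

    x: (N, C) activations of the current target image.
    source_mean, source_var: (C,) stored source-domain BN statistics.
    alpha: assumed mixing weight between source and image statistics.
    """
    img_mean = x.mean(axis=0)
    img_var = x.var(axis=0)
    # blend source-domain statistics with the current image's statistics
    mean = (1 - alpha) * source_mean + alpha * img_mean
    var = (1 - alpha) * source_var + alpha * img_var
    return (x - mean) / np.sqrt(var + eps)
```

With alpha = 0 this reduces to the frozen source-domain normalization; with alpha = 1 it normalizes purely with the current image, i.e. full single-image adaptation.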
Catastrophic forgetting is a significant problem hindering the deployment of deep learning algorithms in continual learning settings. Numerous methods have been proposed to address the catastrophic forgetting problem, in which an agent loses its generalization power on old tasks while learning new ones. We put forward an alternative strategy to handle catastrophic forgetting with knowledge amalgamation (CFA), which learns a student network from multiple heterogeneous teacher models specializing in previous tasks and can be applied on top of current offline methods. The knowledge amalgamation process is carried out in a single-head manner with only a selected number of memorized samples and no annotations. The teachers and student do not need to share the same network structure, allowing heterogeneous tasks to be adapted to a compact or sparse data representation. We compare our method with competitive baselines from different strategies, demonstrating the advantages of our approach.
The staple of human intelligence is the capability of acquiring knowledge in a continuous fashion. In stark contrast, deep networks forget catastrophically and, for this reason, the sub-field of class-incremental continual learning fosters methods that learn a sequence of tasks incrementally, blending sequentially gained knowledge into a comprehensive prediction. This work aims at assessing and overcoming the pitfalls of our previous proposal Dark Experience Replay (DER), a simple and effective approach that combines rehearsal and knowledge distillation. Inspired by the way our minds constantly rewrite past recollections and set expectations for the future, we endow our model with the abilities to i) revise its replay memory to welcome novel information regarding past data and ii) pave the way for learning yet-unseen classes. We show that the application of these strategies leads to remarkable improvements; indeed, the resulting method, termed eXtended-DER (X-DER), outperforms the state of the art on both standard benchmarks (such as CIFAR-100 and miniImageNet) and a novel one here introduced. To gain a better understanding, we further provide extensive ablation studies that corroborate and extend the findings of our previous research (e.g., the value of knowledge distillation and flatter minima in continual learning setups).
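Dark Experience Replay, the method this work extends, combines the task loss with an MSE term that pulls the network's current outputs on replayed samples towards the logits stored when those samples entered the buffer (a form of self-distillation). A minimal per-sample sketch; the names and the scalar form are our simplifications:

```python
import numpy as np

def softmax_ce(logits, target):
    """Numerically stable cross-entropy for a single sample."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])

def der_objective(batch_logits, batch_target, replay_logits, stored_logits,
                  alpha=0.5):
    """DER-style objective sketch.

    batch_logits/batch_target: current-task sample and its label.
    replay_logits: current network outputs on a replayed buffer sample.
    stored_logits: logits saved in the buffer for that sample (soft target).
    """
    task = softmax_ce(batch_logits, batch_target)
    # self-distillation: match current outputs to the stored logits
    distill = np.mean((replay_logits - stored_logits) ** 2)
    return task + alpha * distill
```

X-DER's memory revision can then be read as allowing `stored_logits` to be updated over time, rather than staying frozen at insertion.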
With the rapid advances of generative adversarial networks (GANs), the visual quality of synthesized scenes keeps improving, including for complex urban scenes with applications to automated driving. We tackle in this work a continual scene generation setup, in which GANs are trained on a stream of distinct domains; ideally, the learned models should eventually be able to generate new scenes in all seen domains. This setup reflects the real-life scenario where data are continuously acquired in different places at different times. In such a continual setup, we aim for learning with zero forgetting, i.e., with no degradation of synthesis quality over earlier domains due to catastrophic forgetting. To this end, we introduce a novel framework that not only (i) enables seamless knowledge transfer in continual training, but also (ii) guarantees zero forgetting at a small overhead cost. While being more memory efficient, thanks to continual learning our model obtains better synthesis quality than the brute-force solution that trains one full model per domain. Especially, under extreme low-data regimes, our approach outperforms the brute-force one by a large margin.