Modern machine learning suffers from catastrophic forgetting when learning new classes incrementally: performance degrades dramatically because the data of the old classes is missing. Incremental learning methods have been proposed to retain the knowledge acquired from the old classes, by using knowledge distillation and keeping a few exemplars from the old classes. However, these methods struggle to scale up to a large number of classes. We believe this is due to the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes. Distinguishing between an increasing number of visually similar classes is particularly challenging when the training data is unbalanced. We propose a simple and effective method to address this data imbalance issue. We found that the last fully connected layer has a strong bias towards the new classes, and this bias can be corrected by a linear model. With two bias parameters, our method performs remarkably well on two large datasets: ImageNet (1000 classes) and MS-Celeb-1M (10000 classes), outperforming the state-of-the-art algorithms by 11.1% and 13.2% respectively.
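A minimal sketch of the kind of two-parameter linear correction described above, assuming it is applied only to the new-class logits once the backbone and classifier are frozen; module and variable names are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn as nn

class BiasCorrection(nn.Module):
    """Two-parameter linear correction applied only to the new-class logits."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))   # scale for new-class logits
        self.beta = nn.Parameter(torch.zeros(1))   # shift for new-class logits

    def forward(self, logits, num_old_classes):
        old = logits[:, :num_old_classes]
        new = self.alpha * logits[:, num_old_classes:] + self.beta
        return torch.cat([old, new], dim=1)

# Illustrative usage: with the backbone and classifier frozen, alpha and beta
# would be fitted on a small balanced set of exemplar and new-class samples.
correction = BiasCorrection()
corrected = correction(torch.randn(4, 110), num_old_classes=100)  # 100 old + 10 new
```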
Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is because current neural network architectures require the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model, a requirement that quickly becomes unsustainable as the number of classes grows. We address this issue with our approach to learning deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.
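As a rough illustration of such a combined objective, the sketch below pairs a cross-entropy term on the ground-truth labels with a temperature-scaled distillation term on the old-class outputs of a stored copy of the previous model; the temperature and weighting are assumptions, not the paper's values:

```python
import torch
import torch.nn.functional as F

def incremental_loss(logits, old_model_logits, targets, num_old_classes,
                     temperature=2.0, distill_weight=1.0):
    """Cross-entropy over all classes plus distillation on the old-class outputs."""
    ce = F.cross_entropy(logits, targets)
    # Soften the current and the stored old-model predictions for the old classes.
    log_p = F.log_softmax(logits[:, :num_old_classes] / temperature, dim=1)
    q = F.softmax(old_model_logits[:, :num_old_classes] / temperature, dim=1)
    distill = F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2
    return ce + distill_weight * distill
```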
This paper studies the use of Vision Transformers (ViT) in class-incremental learning. Surprisingly, naively applying ViT as a replacement for convolutional neural networks (CNNs) leads to a performance drop. Our analysis reveals three issues with naively using ViT: (a) ViT converges very slowly when the number of classes is small, (b) more bias towards new classes is observed in ViT-based models than in CNN-based ones, and (c) the proper learning rate of ViT is too low to learn a good classifier. Based on this analysis, we show that these issues can be addressed simply by using existing techniques: a convolutional stem, balanced finetuning to correct the bias, and a higher learning rate for the classifier. Our simple solution, named ViTIL (ViT for Incremental Learning), achieves a new state of the art by a clear margin for all three class-incremental learning settings, providing a strong baseline for the research community. For example, on ImageNet-1000, our ViTIL achieves 69.20% top-1 accuracy for the protocol of 500 initial classes with 5 incremental steps (100 new classes each), outperforming LUCIR+DDE by 1.69%. For the more challenging protocol of 10 incremental steps (100 new classes each), our method outperforms PODNet by 7.27% (65.13% vs. 57.86%).
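The third fix, a higher learning rate for the classifier than for the backbone, can be expressed with standard optimizer parameter groups; the toy modules and the 10x ratio below are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

# Toy stand-ins for a ViT backbone and its classification head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.GELU())
classifier = nn.Linear(256, 100)

# Give the classifier a higher learning rate than the backbone.
optimizer = torch.optim.SGD(
    [{"params": backbone.parameters(), "lr": 0.01},
     {"params": classifier.parameters(), "lr": 0.1}],
    momentum=0.9,
)
```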
In this ever-changing world, the ability to continually learn new concepts is essential. However, deep neural networks suffer from catastrophic forgetting when learning new categories. Many works have been proposed to alleviate this phenomenon, but most of them either fall into the stability-plasticity dilemma or incur excessive computational or storage overhead. Inspired by the gradient boosting algorithm, which gradually fits the residual between the target model and the previous ensemble model, we propose a novel two-stage learning paradigm, FOSTER, which enables the model to adaptively learn new categories. Specifically, we first dynamically expand new modules to fit the residual between the target and the output of the original model. Next, we remove redundant parameters and feature dimensions through an effective distillation strategy to maintain a single backbone model. We validate our method, FOSTER, on CIFAR-100 and ImageNet-100/1000 under different settings. Experimental results show that our method achieves state-of-the-art performance. Code is available at: https://github.com/g-u-n/eccv22-foster.
Deep learning models suffer from catastrophic forgetting when incrementally learning new tasks. Incremental learning has been proposed to retain the knowledge of old classes while learning to recognize new ones. A typical approach is to use a few exemplars to avoid forgetting old knowledge. In this setting, the data imbalance between the old and new classes is a key issue that degrades model performance. Several strategies have been designed to correct the bias towards the new classes caused by this imbalance. However, they rely heavily on assumptions about the bias relation between the old and new classes, making them unsuitable for complex real-world applications. In this study, we propose an assumption-agnostic method, multi-granularity re-balancing (MGRB), to address this problem. Re-balancing methods are used to mitigate the effect of data imbalance; however, we empirically find that they tend to under-fit the new classes. To this end, we further design a novel multi-granularity regularization term that allows the model to also consider the correlations among classes in addition to re-balancing the data. A class hierarchy is first constructed by grouping semantically or visually similar classes. The multi-granularity regularization then transforms the one-hot label vector into a continuous label distribution, which reflects the relations between the target class and other classes based on the constructed class hierarchy. Thus, the model can learn inter-class relational information, which helps to enhance the learning of both old and new classes. Experimental results on public datasets and a real-world fault diagnosis dataset verify the effectiveness of the proposed method.
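One way to picture the multi-granularity regularization is as a relabeling step that moves some probability mass from the target class to classes sharing the same parent in the hierarchy; the hierarchy encoding and smoothing coefficient below are illustrative assumptions rather than the paper's exact formulation:

```python
import torch

def hierarchical_soft_labels(targets, parent_of, num_classes, sibling_mass=0.2):
    """Turn one-hot labels into distributions that also weight hierarchy siblings."""
    labels = torch.zeros(len(targets), num_classes)
    for i, t in enumerate(targets.tolist()):
        siblings = [c for c in range(num_classes)
                    if c != t and parent_of[c] == parent_of[t]]
        labels[i, t] = 1.0 - (sibling_mass if siblings else 0.0)
        for s in siblings:
            labels[i, s] = sibling_mass / len(siblings)
    return labels

# Example hierarchy: classes 0-2 share the superclass "vehicle", 3-4 share "animal".
parent_of = {0: "vehicle", 1: "vehicle", 2: "vehicle", 3: "animal", 4: "animal"}
soft = hierarchical_soft_labels(torch.tensor([0, 3]), parent_of, num_classes=5)
```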
We consider the class-incremental learning (CIL) problem, in which a learning agent continuously learns new classes from incrementally arriving batches of training data and aims to predict well on all the classes learned so far. The main challenge of the problem is catastrophic forgetting, and for exemplar-memory-based methods it is generally known that forgetting is commonly caused by a classification score bias injected by the data imbalance between the new classes and the old classes (in the exemplar memory). Although several methods have been proposed to correct this score bias with some additional post-processing (e.g., score re-scaling or balanced fine-tuning), there has been no systematic analysis of its root cause. To this end, we analyze that the main cause of the bias is computing the softmax probability by combining the output scores of all old and new classes. We then propose a new method, called Separated Softmax for Incremental Learning (SS-IL), which consists of a separated softmax (SS) output layer combined with task-wise knowledge distillation (TKD) to resolve such bias. Through extensive experiments on several large-scale CIL benchmark datasets, we show that our SS-IL achieves strong state-of-the-art accuracy by attaining much more balanced prediction scores without any additional post-processing.
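A compact sketch of the separated-softmax idea, under the assumption that the old-class and new-class logit blocks are normalized independently so that exemplar and new-task samples do not compete across blocks; the distillation term and other details are omitted:

```python
import torch
import torch.nn.functional as F

def separated_softmax_ce(logits, targets, num_old_classes):
    """Cross-entropy within the old-class block for exemplar samples and within
    the new-class block for current-task samples."""
    is_old = targets < num_old_classes
    loss = logits.new_zeros(())
    if is_old.any():
        loss = loss + F.cross_entropy(logits[is_old, :num_old_classes],
                                      targets[is_old])
    if (~is_old).any():
        loss = loss + F.cross_entropy(logits[~is_old, num_old_classes:],
                                      targets[~is_old] - num_old_classes)
    return loss
```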
Regularization-based methods are beneficial for alleviating the catastrophic forgetting problem in class-incremental learning. Due to the lack of old-task images, they often assume that the old knowledge is well preserved as long as the classifier produces similar outputs on new images. In this paper, we find that their effectiveness largely depends on the nature of the old classes: they work well on classes that are easily distinguishable from one another, but may fail on more fine-grained ones, e.g., boy and girl. In spirit, such methods project new data onto the feature space spanned by the weight vectors of the fully connected layer corresponding to the old classes. The resulting projections are similar across fine-grained old classes, and as a consequence the new classifier gradually loses its discriminative ability on these classes. To address this issue, we propose a memory-free generative replay strategy that preserves fine-grained old-class characteristics by generating representative old images directly from the old classifier and combining them with new data for new-classifier training. To solve the homogenization problem of the generated samples, we also propose a diversity loss that maximizes the Kullback-Leibler (KL) divergence between generated samples. Our method is best complemented by prior regularization-based methods, which are proven to be effective for easily distinguishable old classes. We validate the above designs and insights on CUB-200-2011, Caltech-101, CIFAR-100 and Tiny ImageNet, and show that our strategy outperforms existing memory-free methods with a clear margin. Code is available at https://github.com/xmengxin/mfgr
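The diversity term can be sketched as the negative of the average pairwise KL divergence between the predicted distributions of generated samples, so that minimizing it pushes the generations apart; this is an illustrative reading of the loss, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def diversity_loss(gen_logits):
    """Negative mean pairwise KL divergence between the predicted distributions
    of generated samples (assumes a batch of at least two samples)."""
    log_p = F.log_softmax(gen_logits, dim=1)
    p = log_p.exp()
    # kl[i, j] = KL(p_i || p_j), computed for every ordered pair of samples.
    kl = (p.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)
    n = gen_logits.size(0)
    return -kl.sum() / (n * (n - 1))
```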
Conventionally, deep neural networks are trained offline, relying on a large dataset prepared in advance. This paradigm is often challenged in real-world applications, e.g. online services that involve continuous streams of incoming data. Recently, incremental learning has received increasing attention, and is considered a promising solution to the practical challenges mentioned above. However, it has been observed that incremental learning is subject to a fundamental difficulty: catastrophic forgetting, namely adapting a model to new data often results in severe performance degradation on previous tasks or classes. Our study reveals that the imbalance between previous and new data is a crucial cause of this problem. In this work, we develop a new framework for incrementally learning a unified classifier, i.e. a classifier that treats both old and new classes uniformly. Specifically, we incorporate three components, cosine normalization, less-forget constraint, and inter-class separation, to mitigate the adverse effects of the imbalance. Experiments show that the proposed method can effectively rebalance the training process, thus obtaining superior performance compared to existing methods. On CIFAR-100 and ImageNet, our method can reduce the classification errors by more than 6% and 13% respectively, under the incremental setting of 10 phases.
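Of the three components, cosine normalization is the most self-contained; a sketch of a classifier that scores inputs by scaled cosine similarity between normalized features and normalized class weights follows (the initialization and learnable scale are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Scores are scaled cosine similarities between L2-normalized features and
    L2-normalized class weight vectors."""
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features) * 0.01)
        self.scale = nn.Parameter(torch.tensor(10.0))  # learnable temperature

    def forward(self, features):
        return self.scale * F.linear(F.normalize(features, dim=1),
                                     F.normalize(self.weight, dim=1))
```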
The dynamic expansion architecture is becoming popular in class incremental learning, mainly due to its advantages in alleviating catastrophic forgetting. However, task confusion is not well assessed within this framework, e.g., the discrepancy between classes of different tasks is not well learned (i.e., inter-task confusion, ITC), and certain priority is still given to the latest class batch (i.e., old-new confusion, ONC). We empirically validate the side effects of these two types of confusion. Meanwhile, a novel solution called Task Correlated Incremental Learning (TCIL) is proposed to encourage discriminative and fair feature utilization across tasks. TCIL performs multi-level knowledge distillation to propagate the knowledge learned from old tasks to the new one. It establishes information flow paths at both the feature and logit levels, enabling the learning process to be aware of old classes. In addition, an attention mechanism and classifier re-scoring are applied to generate fairer classification scores. We conduct extensive experiments on the CIFAR100 and ImageNet100 datasets. The results demonstrate that TCIL consistently achieves state-of-the-art accuracy. It mitigates both ITC and ONC, while showing advantages in combating catastrophic forgetting even when no rehearsal memory is reserved.
Deep neural networks are prone to catastrophic forgetting when incrementally trained on new classes or new tasks, as the adaptation to new data leads to a drastic drop in performance on the old classes and tasks. By using a small memory for rehearsal together with knowledge distillation, recent methods have proven effective at mitigating catastrophic forgetting. However, due to the limited size of the memory, a large imbalance between the amount of data available for the old and new classes still remains, which leads to a deterioration of the model's overall accuracy. To address this problem, we propose the use of a balanced softmax cross-entropy loss and show that it can be combined with existing methods, in some cases also reducing the computational cost of the training procedure while improving their performance. Experiments on the competitive ImageNet, subImageNet and CIFAR100 datasets show state-of-the-art results.
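A balanced softmax cross-entropy of this kind can be sketched by shifting each logit by the log of its class prior before applying the usual cross-entropy; the per-class counts in the example are made up for illustration:

```python
import torch
import torch.nn.functional as F

def balanced_softmax_ce(logits, targets, class_counts):
    """Shift each logit by the log of its class prior before cross-entropy, so
    minority (old) classes are not drowned out by the abundant new classes."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + prior.clamp_min(1e-12).log().unsqueeze(0)
    return F.cross_entropy(adjusted, targets)

# Example: 20 exemplars per old class (100 classes), 1300 images per new class (10).
counts = torch.cat([torch.full((100,), 20), torch.full((10,), 1300)])
loss = balanced_softmax_ce(torch.randn(8, 110), torch.randint(0, 110, (8,)), counts)
```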
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time, and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.
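For context, iCaRL classifies with a nearest-mean-of-exemplars rule on top of the learned representation; a minimal sketch, assuming L2-normalized features and class means precomputed from the stored exemplars:

```python
import torch
import torch.nn.functional as F

def nearest_mean_of_exemplars(features, class_means):
    """Predict the class whose exemplar mean is closest to the normalized feature."""
    features = F.normalize(features, dim=1)
    means = F.normalize(class_means, dim=1)
    return torch.cdist(features, means).argmin(dim=1)   # (batch,) class indices
```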
Class-incremental learning (CIL) struggles with catastrophic forgetting when learning new knowledge, and data-free CIL (DFCIL) is even more challenging without access to the training data of previously learned classes. Although recent DFCIL works introduce techniques such as model inversion to synthesize data for previous classes, they fail to overcome forgetting due to the severe domain gap between the synthetic and real data. To address this issue, this paper proposes relation-guided representation learning (RRL) for DFCIL, dubbed R-DFCIL. In RRL, we introduce relational knowledge distillation to flexibly transfer the structural relations of new data from the old model to the current model. Our RRL-boosted DFCIL can guide the current model to learn representations of new classes that are better compatible with the representations of previous classes, which greatly reduces forgetting while improving plasticity. To avoid the mutual interference between representation and classifier learning, we employ a local classification loss instead of a global classification loss during RRL. After RRL, the classification head is refined with a global class-balanced classification loss to address the data imbalance issue and learn the decision boundaries between new and previous classes. Extensive experiments on CIFAR100, Tiny-ImageNet200 and ImageNet100 demonstrate that our R-DFCIL significantly surpasses previous approaches and achieves a new state-of-the-art performance for DFCIL. Code is available at https://github.com/jianzhangcs/r-dfcil.
In the class-incremental learning (CIL) setting, groups of classes are introduced to a model in each learning phase. The goal is to learn a unified model that performs well on all the classes observed so far. Given the recent popularity of Vision Transformers (ViTs) in conventional classification settings, an interesting question is to study their continual learning behaviour. In this work, we develop a debiased dual distilled transformer for CIL, dubbed $\textrm{D}^3\textrm{Former}$. The proposed model leverages a hybrid nested ViT design to ensure data efficiency and scalability to both small and large datasets. In contrast to recent ViT-based CIL approaches, our $\textrm{D}^3\textrm{Former}$ does not dynamically expand its architecture when learning new tasks and remains suitable for a large number of incremental tasks. The improved CIL behaviour of $\textrm{D}^3\textrm{Former}$ owes to two fundamental changes to the ViT design. First, we treat incremental learning as a long-tailed classification problem, in which the abundant samples of the new classes vastly outnumber the limited exemplars available for the old classes. To avoid bias against the minority old classes, we propose to dynamically adjust the logits to emphasize retaining the representations relevant to the old tasks. Second, we propose to preserve the configuration of spatial attention maps as learning progresses across tasks. This helps to reduce catastrophic forgetting by constraining the model to retain attention on the most discriminative regions. $\textrm{D}^3\textrm{Former}$ obtains favourable results on incremental versions of the CIFAR-100, MNIST, SVHN and ImageNet datasets.
Although recent deep learning-based calibration methods can predict extrinsic and intrinsic camera parameters from a single image, their generalization remains limited by the number and distribution of training data samples. The huge computational and space requirements prevent convolutional neural networks (CNNs) from being implemented in resource-constrained environments. This challenge motivated us to learn a CNN gradually, by training on new data while maintaining performance on previously learned data. Our approach builds upon a CNN architecture to automatically estimate camera parameters (focal length, pitch, and roll), using different incremental learning strategies to preserve knowledge when updating the network for new data distributions. Precisely, we adapt four common incremental learning methods, namely LwF, iCaRL, LUCIR, and BiC, by modifying their loss functions for our regression problem. We evaluate on two datasets containing 299,008 indoor and outdoor images. Experimental results were significant and indicated which method was better suited to camera calibration estimation.
While the concept of catastrophic forgetting is straightforward, there is a lack of study on its causes. In this paper, we systematically explore and reveal three causes of catastrophic forgetting in class-incremental learning (CIL). From the perspective of representation learning, (i) intra-phase forgetting happens when the learner fails to correctly align the data of the same phase as training proceeds, and (ii) inter-phase confusion happens when the learner confuses the current-phase data with that of previous phases. From the task-specific perspective, the CIL model suffers from (iii) the problem of classifier bias. After investigating existing strategies, we observe a lack of research on how to prevent inter-phase confusion. To initiate research on this specific issue, we propose a simple yet effective framework, Contrastive Class Concentration for CIL (C4IL). Our framework leverages the class concentration effect of contrastive learning, yielding a representation distribution with better intra-class compactness and inter-class separability. Empirically, we observe that C4IL significantly lowers the probability of inter-phase confusion and, as a result, improves performance on multiple CIL settings across multiple datasets.
When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques and performs similarly to multitask learning that uses the original task data we assume to be unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.
The continual appearance of new objects in the visual world poses considerable challenges for current deep learning methods in real-world deployments. The challenge of new task learning is often exacerbated by the scarcity of data for the new categories due to rarity or cost. Here, we explore the important task of few-shot class-incremental learning (FSCIL) and its extreme data-scarcity condition. An ideal FSCIL model needs to perform well on all classes, regardless of their presentation order or the paucity of data. It also needs to be robust to open-ended real-world conditions and easily adaptable to the new tasks that continually arise in the field. In this paper, we first reevaluate the current task setting and propose a more comprehensive and practical setting for the FSCIL task. Then, inspired by the similarity between the objectives of FSCIL and modern face recognition systems, we propose our method, Augmented Angular Loss Incremental Classification (ALICE). In ALICE, we propose to use an angular penalty loss to obtain well-clustered features. Since the obtained features not only need to be compact but also diverse enough to maintain generalization to future incremental classes, we further discuss how class augmentation, data augmentation and data balancing affect classification performance. Experiments on benchmark datasets, including CIFAR100, miniImageNet and CUB200, demonstrate the improved performance of ALICE over state-of-the-art FSCIL methods.
Neural networks are prone to catastrophic forgetting when trained incrementally on different tasks. Popular incremental learning methods mitigate such forgetting by retaining a subset of previously seen samples and replaying them during the training on subsequent tasks. However, this is not always possible, e.g., due to data protection regulations. In such restricted scenarios, one can employ generative models to replay either artificial images or hidden features to a classifier. In this work, we propose Genifer (GENeratIve FEature-driven image Replay), where a generative model is trained to replay images that must induce the same hidden features as real samples when they are passed through the classifier. Our technique therefore incorporates the benefits of both image and feature replay, i.e.: (1) unlike conventional image replay, our generative model explicitly learns the distribution of features that are relevant for classification; (2) in contrast to feature replay, our entire classifier remains trainable; and (3) we can leverage image-space augmentations, which increase distillation performance while also mitigating overfitting during the training of the generative model. We show that Genifer substantially outperforms the previous state of the art for various settings on the CIFAR-100 and CUB-200 datasets.
The counting task, which plays a fundamental role in numerous applications (e.g., crowd counting, traffic statistics), aims to predict the number of objects with various densities. Existing object counting tasks are designed for a single object class. However, encountering newly arriving data with new classes is inevitable in the real world. We name this scenario \textit{evolving object counting}. In this paper, we build the first evolving object counting dataset and propose a unified object counting network as a first attempt to address this task. The proposed model consists of two key components: a class-agnostic mask module and a class-increment module. The class-agnostic mask module learns a generic object-occupation prior by predicting a class-agnostic binary mask (e.g., 1 denotes that an object exists at the considered position in an image and 0 otherwise). The class-increment module is used to handle newly arriving classes and provides discriminative class guidance for density map prediction. The combined outputs of the class-agnostic mask module and the image feature extractor are used to predict the final density map. When new classes arrive, we first add new neural nodes to the last regression and classification layers of this module. Then, instead of retraining the model from scratch, we utilize knowledge distillation to help the model remember what it has already learned about previous object classes. We also employ a support sample bank to store a small number of typical training samples of each class, which are used to prevent the model from forgetting key information about old data. With this design, our model can efficiently and effectively adapt to newly arriving classes while maintaining good performance on already-seen data without large-scale retraining. Extensive experiments on the collected dataset demonstrate the favorable performance.
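The step of adding new output nodes for newly arriving classes while keeping the weights learned for previous classes can be sketched as below; the helper name and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

def expand_linear_head(old_head: nn.Linear, num_new_classes: int) -> nn.Linear:
    """Add output nodes for newly arriving classes, copying the old-class weights."""
    new_head = nn.Linear(old_head.in_features,
                         old_head.out_features + num_new_classes)
    with torch.no_grad():
        new_head.weight[:old_head.out_features] = old_head.weight
        new_head.bias[:old_head.out_features] = old_head.bias
    return new_head

head = expand_linear_head(nn.Linear(256, 3), num_new_classes=2)  # 3 -> 5 outputs
```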
We study the new task of class-incremental Novel Class Discovery (class-iNCD), which refers to the problem of discovering novel categories in an unlabelled dataset by leveraging a pre-trained model that has been trained on a labelled dataset containing disjoint yet related categories. Apart from discovering novel classes, we also aim to maintain the model's ability to recognize previously seen base categories. Inspired by rehearsal-based incremental learning methods, in this paper we propose a novel approach to prevent forgetting of past information about the base classes by jointly exploiting base-class feature prototypes and feature-level knowledge distillation. We also propose a self-training clustering strategy that simultaneously clusters novel categories and trains a joint classifier for both the base and novel classes. This enables our method to operate in a class-incremental setting. Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches. Code is available at https://github.com/oatmealliu/class-incd