Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI. However, existing CL benchmarks, e.g., permuted and split variants of standard datasets, rely on artificial temporal variation and do not align with or generalize to the real world. In this paper, we introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world, spanning a decade (2004-2014). We build CLEAR from an existing large-scale image collection (YFCC100M) through a novel and scalable low-cost approach to visio-linguistic dataset curation. Our pipeline makes use of pretrained vision-language models (e.g., CLIP) to interactively build labeled datasets, which are further validated with crowdsourcing to remove erroneous and even inappropriate images (hidden in the original YFCC100M). The major strength of CLEAR over prior CL benchmarks is the smooth temporal evolution of visual concepts with real-world imagery, including both high-quality labeled data and abundant unlabeled samples per time period for continual semi-supervised learning. We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms that only utilize fully supervised data. Our analysis also reveals that mainstream CL evaluation protocols that train and test on iid data artificially inflate the performance of CL systems. To address this, we propose novel "streaming" protocols for CL that always test on the (near) future. Interestingly, streaming protocols (a) can simplify dataset curation, since today's test set can be repurposed as tomorrow's training set, and (b) can produce more generalizable models with more accurate performance estimates, since all labeled data per time period is used for both training and testing (unlike the classic iid train-test split).
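A minimal sketch of the "streaming" evaluation idea described above, assuming scikit-learn and synthetic yearly buckets whose class prototypes drift over time; the bucket construction and the logistic-regression learner are illustrative stand-ins, not the CLEAR pipeline itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_classes, dim = 5, 32
prototypes = rng.standard_normal((n_classes, dim))   # fixed visual concepts
drift_dir = rng.standard_normal(dim)                 # direction of slow real-world drift

def make_bucket(year, n=300):
    """One time bucket: class prototypes shifted a little further each year."""
    y = rng.integers(0, n_classes, size=n)
    X = prototypes[y] + 0.15 * year * drift_dir + rng.standard_normal((n, dim))
    return X, y

buckets = [make_bucket(t) for t in range(10)]        # e.g., ten yearly buckets

# Streaming protocol: train on all data seen so far, always test on the next bucket.
# Today's test bucket is folded into tomorrow's training set, so no labeled data is wasted.
for t in range(len(buckets) - 1):
    X_tr = np.concatenate([b[0] for b in buckets[: t + 1]])
    y_tr = np.concatenate([b[1] for b in buckets[: t + 1]])
    X_te, y_te = buckets[t + 1]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"trained on buckets 0..{t}, tested on {t + 1}: acc={clf.score(X_te, y_te):.3f}")
```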
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the disruption of previously acquired knowledge, a drawback of deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing both stability and plasticity. It would also allow us to avoid the onerous cost of retraining these architectures from scratch on the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer matters more than the quality of that data. Secondly, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches, and propose an effective novel asymmetric loss. We conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field, followed by future directions and closing remarks.
Prior work on self-supervised pre-training focuses on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained. Unfortunately, such a problem setting is often impractical if not infeasible, since many real-world tasks rely on sequential learning, e.g., data are decentralized or collected in a streaming fashion. In this paper, we conduct the first thorough and dedicated investigation of self-supervised pre-training with streaming data, aiming to shed light on model behavior under this overlooked setting. Specifically, we pre-train over 500 models on four categories of pre-training streaming data drawn from ImageNet and DomainNet and evaluate them on three types of downstream tasks and 12 different downstream datasets. Our study shows that, somewhat beyond our expectation, with simple data replay or parameter regularization, sequential self-supervised pre-training turns out to be an efficient alternative to joint pre-training, as the performance of the former is mostly on par with that of the latter. Moreover, catastrophic forgetting, a common issue in sequential supervised learning, is largely alleviated in sequential self-supervised learning (SSL), which is well justified through our comprehensive empirical analysis of the representations and of the sharpness of minima in the loss landscape. Our findings therefore suggest that, in practice, cumbersome joint training for SSL can largely be replaced by sequential learning, which in turn enables a much broader spectrum of potential application scenarios.
Despite significant advances, the performance of state-of-the-art continual learning approaches hinges on the unrealistic scenario of fully labeled data. In this paper, we tackle this challenge and propose an approach for continual semi-supervised learning -- a setting where not all the data samples are labeled. An underlying issue in this scenario is the model forgetting representations of unlabeled data and overfitting the labeled ones. We leverage the power of nearest-neighbor classifiers to non-linearly partition the feature space and learn a strong representation for the current task, as well as distill relevant information from previous tasks. We perform a thorough experimental evaluation and show that our method outperforms all the existing approaches by large margins, setting a strong state of the art on the continual semi-supervised learning paradigm. For example, on CIFAR100 we surpass several others even when using at least 30 times less supervision (0.8% vs. 25% of annotations).
The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge. Typical methods rely on a rehearsal buffer or known task identity at test time to retrieve learned knowledge and address forgetting, whereas this work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time. Our method learns to dynamically prompt (L2P) a pre-trained model to learn tasks sequentially under different task transitions. In our proposed framework, prompts are small learnable parameters that are maintained in a memory space. The objective is to optimize prompts to instruct the model prediction and explicitly manage task-invariant and task-specific knowledge while maintaining model plasticity. We conduct comprehensive experiments on popular image classification benchmarks with different challenging continual learning settings, where L2P consistently outperforms prior state-of-the-art methods. Surprisingly, L2P achieves competitive results even without a rehearsal buffer and is directly applicable to challenging task-agnostic continual learning. Source code is available at https://github.com/google-research/l2p.
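The prompt-pool idea above can be sketched roughly as follows in PyTorch: a pool of learnable prompts is kept outside the frozen backbone, a query derived from the input selects the closest prompt keys, and the selected prompts are prepended to the token embeddings. Names such as `PromptPool` and the pool/selection sizes are illustrative assumptions, not the released L2P code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """A small pool of learnable prompts selected by query-key matching."""
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=3):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query):          # query: [B, dim], e.g. a frozen-encoder feature
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        topk = sim.topk(self.top_k, dim=1)                 # [B, top_k]
        picked = self.prompts[topk.indices]                # [B, top_k, L, dim]
        prompts = picked.flatten(1, 2)                     # [B, top_k*L, dim]
        # Auxiliary term pulling selected keys toward their queries so prompts specialize.
        key_loss = (1.0 - topk.values).mean()
        return prompts, key_loss

# Usage: prepend selected prompts to patch embeddings of a frozen ViT-style backbone.
pool = PromptPool()
query = torch.randn(4, 768)            # query features from the frozen encoder
tokens = torch.randn(4, 196, 768)      # patch embeddings
prompts, key_loss = pool(query)
extended = torch.cat([prompts, tokens], dim=1)   # fed into the frozen transformer blocks
print(extended.shape, key_loss.item())
```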
Self-supervised models have been shown to produce comparable or even better visual representations than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy is catastrophically reduced in a Continual Learning (CL) scenario where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for continual self-supervised visual representation learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach by training six popular self-supervised models in various CL settings.
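A rough PyTorch sketch of the distillation idea described above: a small predictor maps the current backbone's features onto the frozen past backbone's features, and a similarity loss keeps that mapping possible without freezing the current representation. The architecture sizes and the cosine form of the loss are assumptions for illustration, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
frozen_past = copy.deepcopy(backbone).eval()     # snapshot taken at the last task boundary
for p in frozen_past.parameters():
    p.requires_grad_(False)

# Predictor g: maps current features into the space of the past features.
predictor = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))

def distill_loss(x):
    """Negative cosine similarity between predicted-current and past features."""
    z_now = backbone(x)
    with torch.no_grad():
        z_past = frozen_past(x)
    p = predictor(z_now)
    return -(F.normalize(p, dim=-1) * F.normalize(z_past, dim=-1)).sum(-1).mean()

x = torch.randn(32, 512)       # a batch from the current task
loss = distill_loss(x)         # added on top of the usual self-supervised loss
loss.backward()
print(loss.item())
```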
Modern ML methods excel when training data is iid, large-scale, and well labeled. Learning in less ideal conditions remains an open challenge. The sub-fields of few-shot, continual, transfer, and representation learning have made substantial strides in learning under adverse conditions, each offering distinct advantages through its methods and insights. These methods address different challenges, such as data arriving sequentially or scarce training examples; however, the difficult conditions an ML system will face often cannot be anticipated prior to deployment. Therefore, general ML systems that can handle the many learning challenges of realistic settings are needed. To foster research towards the goal of general ML methods, we introduce a new unified evaluation framework - FLUID (Flexible Sequential Data). FLUID integrates the objectives of few-shot, continual, transfer, and representation learning while enabling comparison and integration of techniques across these sub-fields. In FLUID, a learner faces a stream of data and must make sequential predictions while choosing how to update itself, adapting quickly to novel classes, and dealing with changing data distributions, all while accounting for total compute. We conduct experiments on a broad set of methods that shed new insight on the strengths and weaknesses of current solutions and indicate new research problems to solve. As a starting point towards more general methods, we present two new baselines that outperform the other evaluated methods on FLUID. Project page: https://raivn.cs.washington.edu/projects/fluid/.
The rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors for a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and CL in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, and the resulting latent space affect CL performance. To this end, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity, and learning depend on the input data characteristics and not necessarily on the CL algorithm. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute. We then show how models pre-trained on broader data yield better performance for various replay sizes. We explain this with the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that differ from the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_cl.
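One concrete instance of "CL in the latent space of a pre-trained encoder with a non-parametric classifier" is a nearest-class-mean classifier over frozen features, sketched below. The randomly initialized frozen encoder is a placeholder for an actual pre-trained checkpoint, and the NCM classifier is an assumption consistent with the abstract, not the paper's full benchmark.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a frozen pre-trained encoder (in practice: a ResNet/ViT checkpoint).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256)).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

class NearestClassMean:
    """Non-parametric classifier in latent space: one running mean per class."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, feats, labels):          # called once per task, no replay needed
        for f, y in zip(feats, labels.tolist()):
            self.sums[y] = self.sums.get(y, torch.zeros_like(f)) + f
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, feats):
        classes = sorted(self.sums)
        means = torch.stack([self.sums[c] / self.counts[c] for c in classes])
        dists = torch.cdist(feats, means)     # [N, num_classes_seen_so_far]
        return torch.tensor(classes)[dists.argmin(dim=1)]

ncm = NearestClassMean()
for task in range(3):                         # three synthetic class-incremental "tasks"
    x = torch.randn(100, 3, 32, 32)
    y = torch.randint(task * 2, task * 2 + 2, (100,))
    with torch.no_grad():
        ncm.update(encoder(x), y)

x_test = torch.randn(10, 3, 32, 32)
print(ncm.predict(encoder(x_test)))
```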
Continual Learning is a step towards lifelong intelligence where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup with clear task boundaries and unlimited computational budget. This work explores Online Domain-Incremental Continual Segmentation~(ODICS), a real-world problem that arises in many applications, \eg, autonomous driving. In ODICS, the model is continually presented with batches of densely labeled images from different domains; computation is limited and no information about the task boundaries is available. In autonomous driving, this may correspond to the realistic scenario of training a segmentation model over time on a sequence of cities. We analyze several existing continual learning methods and show that they do not perform well in this setting despite working well in class-incremental segmentation. We propose SimCS, a parameter-free method complementary to existing ones that leverages simulated data as a continual learning regularizer. Extensive experiments show consistent improvements over different types of continual learning methods that use regularizers and even replay.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
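The pre-training objective described above can be sketched as a symmetric contrastive loss over an image-text similarity matrix; the tiny random "encoders" below are placeholders for CLIP's actual image and text towers, and the shapes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, batch = 512, 8
image_encoder = nn.Linear(2048, dim)    # placeholder for a ResNet/ViT image tower
text_encoder = nn.Linear(768, dim)      # placeholder for a transformer text tower
logit_scale = nn.Parameter(torch.tensor(2.659))   # learnable temperature (log-scale)

images = torch.randn(batch, 2048)
texts = torch.randn(batch, 768)

img = F.normalize(image_encoder(images), dim=-1)
txt = F.normalize(text_encoder(texts), dim=-1)

# Pairwise cosine similarities; the i-th caption belongs to the i-th image.
logits = logit_scale.exp() * img @ txt.t()
targets = torch.arange(batch)

# Symmetric cross-entropy: match images to captions and captions to images.
loss = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
loss.backward()
print(loss.item())
```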
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Many real-world learning scenarios face the challenge of slow concept drift, where data distributions change gradually over time. In this setting, we pose the problem of learning temporally sensitive importance weights for training data, in order to optimize predictive accuracy. We propose a class of temporal reweighting functions that can capture multiple timescales of change in the data, as well as instance-specific characteristics. We formulate a bi-level optimization criterion, and an associated meta-learning algorithm, by which these weights can be learned. In particular, our formulation trains an auxiliary network to output weights as a function of training instances, thereby compactly representing the instance weights. We validate our temporal reweighting scheme on a large real-world dataset of 39M images spread over a 9 year period. Our extensive experiments demonstrate the necessity of instance-based temporal reweighting in the dataset, and achieve significant improvements to classical batch-learning approaches. Further, our proposal easily generalizes to a streaming setting and shows significant gains compared to recent continual learning methods.
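A hedged sketch of the instance-reweighting idea: an auxiliary network maps per-example features and an age/timestamp to a nonnegative weight used to reweight the training loss, and the weight network is trained so that a one-step-updated model does well on recent data. The single differentiable inner step below is a simplification of the paper's bi-level formulation; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim, n_classes = 16, 4

# Auxiliary network: (features, example age in years) -> nonnegative instance weight.
weight_net = nn.Sequential(nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())

# Simple linear classifier kept as a raw tensor so the inner update stays differentiable.
W = torch.zeros(n_classes, dim, requires_grad=True)
opt_w = torch.optim.Adam(weight_net.parameters(), lr=1e-3)
opt_model = torch.optim.SGD([W], lr=0.1)

def ce(logits, y):
    return F.cross_entropy(logits, y, reduction="none")

for step in range(100):
    # Historical data with per-example ages, plus a small recent batch that stands in
    # for the distribution we ultimately care about predicting on.
    x_hist, y_hist = torch.randn(64, dim), torch.randint(0, n_classes, (64,))
    age = torch.rand(64, 1) * 9.0
    x_new, y_new = torch.randn(32, dim), torch.randint(0, n_classes, (32,))

    # Inner step: one differentiable gradient step on W under the weighted loss.
    weights = weight_net(torch.cat([x_hist, age], dim=1)).squeeze(1)
    inner_loss = (weights * ce(x_hist @ W.t(), y_hist)).mean()
    grad_W, = torch.autograd.grad(inner_loss, W, create_graph=True)
    W_updated = W - 0.1 * grad_W

    # Outer step: the updated model should do well on recent data; backprop into weight_net.
    outer_loss = ce(x_new @ W_updated.t(), y_new).mean()
    opt_w.zero_grad()
    outer_loss.backward()
    opt_w.step()

    # Finally take a real (non-differentiable) step on the model with detached weights.
    opt_model.zero_grad()
    weights = weight_net(torch.cat([x_hist, age], dim=1)).squeeze(1).detach()
    (weights * ce(x_hist @ W.t(), y_hist)).mean().backward()
    opt_model.step()
```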
Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
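A simplified sketch of the MER recipe described above: reservoir-sampled experience replay combined with Reptile-style weight interpolation within and across replayed batches, which is one way to encourage gradient alignment across examples. The toy linear model, random stream, and hyperparameters are assumptions; this is not the authors' released implementation.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0); random.seed(0)
model = nn.Linear(20, 5)
alpha, beta, gamma = 0.03, 0.3, 0.3      # inner lr, within- and across-batch Reptile rates
memory, max_mem, seen = [], 200, 0

def clone_params():
    return [p.detach().clone() for p in model.parameters()]

def reptile_step(old, rate):
    # Interpolate current weights back toward a saved copy (Reptile-style meta-update).
    with torch.no_grad():
        for p, o in zip(model.parameters(), old):
            p.copy_(o + rate * (p - o))

def sgd_step(x, y):
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    model.zero_grad(); loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= alpha * p.grad

# Stream of examples (random here; in practice a non-stationary task sequence).
for step in range(500):
    x, y = torch.randn(20), torch.randint(0, 5, (1,)).squeeze()

    before_all = clone_params()
    for _ in range(2):                                  # a few meta-batches per example
        batch = random.sample(memory, k=min(5, len(memory))) + [(x, y)]
        before_batch = clone_params()
        for xb, yb in batch:                            # sequential SGD inside the batch
            sgd_step(xb, yb)
        reptile_step(before_batch, beta)                # within-batch interpolation
    reptile_step(before_all, gamma)                     # across-batch interpolation

    # Reservoir sampling keeps the buffer an unbiased sample of the stream.
    seen += 1
    if len(memory) < max_mem:
        memory.append((x, y))
    else:
        j = random.randrange(seen)
        if j < max_mem:
            memory[j] = (x, y)
```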
According to the Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans achieve effective \emph{continual learning} through two complementary systems: a fast learning system centered on the hippocampus for rapid learning of the specifics of individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks and a slow learning system for learning task-agnostic general representations via Self-Supervised Learning (SSL). DualNets can seamlessly incorporate both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Via extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, DualNets remain competitive on the CTrL~\cite{veniat2020} benchmark, matching state-of-the-art dynamic-architecture strategies~\cite{ostapenko2021-continual}. Furthermore, we conduct comprehensive ablation studies to validate the efficacy, robustness, and scalability of DualNets. Code is publicly available at \url{https://github.com/phquang/DualNet}.
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community. Recent research efforts have quickly led to the design of novel algorithms able to reduce the impact of the catastrophic forgetting phenomenon in deep neural networks. Due to this surge of interest in the field, many competitions have been held in recent years, as they are an excellent opportunity to stimulate research in promising directions. This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022. The focus of this competition is the complex continual object detection task, which is still underexplored in literature compared to classification tasks. The challenge is based on the challenge version of the novel EgoObjects dataset, a large-scale egocentric object dataset explicitly designed to benchmark continual learning algorithms for egocentric category-/instance-level object understanding, which covers more than 1k unique main objects and 250+ categories in around 100k video frames.
Lifelong learners must recognize concept vocabularies that evolve over time. A common yet underexplored scenario is learning with class labels over time that refine/expand old classes. For example, humans learn to recognize ${\tt dog}$ before dog breeds. In practical settings, dataset $\textit{versioning}$ often introduces refinement to ontologies, such as autonomous vehicle benchmarks that refine a previous ${\tt vehicle}$ class into ${\tt school-bus}$ as autonomous operations expand to new cities. This paper formalizes a protocol for studying the problem of $\textit{Learning with Evolving Class Ontology}$ (LECO). LECO requires learning classifiers in distinct time periods (TPs); each TP introduces a new ontology of "fine" labels that refines old ontologies of "coarse" labels (e.g., dog breeds that refine the previous ${\tt dog}$). LECO explores such questions as whether to annotate new data or relabel the old, how to leverage coarse labels, and whether to finetune the previous TP's model or train from scratch. To answer these questions, we leverage insights from related problems such as class-incremental learning. We validate them under the LECO protocol through the lens of image classification (CIFAR and iNaturalist) and semantic segmentation (Mapillary). Our experiments lead to surprising conclusions; while the current status quo is to relabel existing datasets with new ontologies (such as COCO-to-LVIS or Mapillary1.2-to-2.0), LECO demonstrates that a far better strategy is to annotate $\textit{new}$ data with the new ontology. However, this produces an aggregate dataset with inconsistent old-vs-new labels, complicating learning. To address this challenge, we adopt methods from semi-supervised and partial-label learning. Such strategies can surprisingly be made near-optimal, approaching an "oracle" that learns on the aggregate dataset exhaustively labeled with the newest ontology.
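One way to "leverage coarse labels" when old data only carries the previous ontology, as mentioned above, is to treat each coarse label as a partial label over its fine-grained children: the loss maximizes the total probability mass that the new fine-grained classifier assigns to that subset. The toy ontology, mapping, and shapes below are illustrative assumptions, not LECO's datasets or exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy ontology: old coarse classes map to disjoint sets of new fine classes.
coarse_to_fine = {0: [0, 1, 2], 1: [3, 4]}     # e.g. dog -> {husky, beagle, pug}
n_fine = 5
model = nn.Linear(64, n_fine)

def fine_loss(logits, fine_labels):
    return F.cross_entropy(logits, fine_labels)

def coarse_partial_loss(logits, coarse_labels):
    """-log of the summed probability of all fine classes under the given coarse label."""
    log_probs = F.log_softmax(logits, dim=-1)
    losses = []
    for lp, c in zip(log_probs, coarse_labels.tolist()):
        members = torch.tensor(coarse_to_fine[c])
        losses.append(-torch.logsumexp(lp[members], dim=0))
    return torch.stack(losses).mean()

# New-TP data carries fine labels; old-TP data only carries coarse labels.
x_new, y_fine = torch.randn(16, 64), torch.randint(0, n_fine, (16,))
x_old, y_coarse = torch.randn(16, 64), torch.randint(0, 2, (16,))

loss = fine_loss(model(x_new), y_fine) + coarse_partial_loss(model(x_old), y_coarse)
loss.backward()
print(loss.item())
```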
Continual Learning (CL) investigates how to train deep networks on a stream of tasks without incurring forgetting. The CL settings proposed in the literature assume that every incoming example is paired with ground-truth annotations. However, this clashes with many real-world applications. This work explores Continual Semi-Supervised Learning (CSSL): here, only a small fraction of the input examples shown to the learner are labeled. We assess how current CL methods (e.g., EWC, LwF, iCaRL, ER, GDumb, DER) perform in this novel and challenging scenario, where overfitting entangles with forgetting. Subsequently, we design a novel CSSL method that exploits metric learning and consistency regularization to leverage unlabeled examples while learning. We show that our proposal exhibits higher resilience to diminishing supervision and, even more surprisingly, that relying on only 25% supervision suffices to outperform SOTA methods trained under full supervision.
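The sketch below illustrates only the consistency-regularization component mentioned above (predictions on two stochastic views of the same unlabeled example are pulled together), not the full method with metric learning; the noise augmentation, MSE form, and loss weight are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

def augment(x):
    # Placeholder augmentation; in practice random crops/flips/color jitter on images.
    return x + 0.1 * torch.randn_like(x)

# Small labeled batch plus a larger unlabeled batch from the current task.
x_lab, y_lab = torch.randn(8, 128), torch.randint(0, 10, (8,))
x_unlab = torch.randn(64, 128)

sup_loss = F.cross_entropy(model(x_lab), y_lab)

# Consistency: two stochastic views of the same unlabeled input should agree.
p1 = F.softmax(model(augment(x_unlab)), dim=-1)
p2 = F.softmax(model(augment(x_unlab)), dim=-1)
consistency = F.mse_loss(p1, p2)

loss = sup_loss + 1.0 * consistency    # weighting of the unsupervised term is a free choice
loss.backward()
print(loss.item())
```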
Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past pristine examples for experience replay, which, however, limits their practical value due to privacy and memory constraints. In this work, we present a simple yet effective framework, DualPrompt, which learns a tiny set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially without buffering past examples. DualPrompt presents a novel approach for attaching complementary prompts to the pre-trained backbone and then formulates the objective as learning task-invariant and task-specific "instructions". With extensive experimental validation, DualPrompt consistently sets state-of-the-art performance under the challenging class-incremental setting. In particular, DualPrompt outperforms recent advanced continual learning methods that use relatively large buffer sizes. We also introduce a more challenging benchmark, Split ImageNet-R, to help generalize rehearsal-free continual learning research. Source code is available at https://github.com/google-research/l2p.
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems. Instead of tackling continual learning via the use of external memory, growing models, or regularization, EBMs change the underlying training objective to cause less interference with previously learned information. Our proposed version of EBMs for continual learning is simple, efficient, and outperforms baseline methods by a large margin on several benchmarks. Moreover, our proposed contrastive divergence-based training objective can be combined with other continual learning methods, resulting in substantial boosts in their performance. We further show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks. These observations point towards EBMs as a useful building block for future continual learning methods.
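A rough sketch of how an energy function over (input, label) pairs can be trained with a contrastive objective restricted to the classes present in the current batch, which is one way the training objective can cause less interference with classes from other tasks. The multiplicative fusion of features and label embeddings and all sizes are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes, feat_dim = 10, 64
feature_net = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU())
label_emb = nn.Embedding(n_classes, feat_dim)
energy_head = nn.Linear(feat_dim, 1)

def energy(x, classes):
    """E(x, y) for every class in `classes`; lower energy = more compatible pair."""
    feats = feature_net(x).unsqueeze(1)            # [B, 1, D]
    embs = label_emb(classes).unsqueeze(0)         # [1, C, D]
    return energy_head(feats * embs).squeeze(-1)   # [B, C]

x = torch.randn(16, 128)
y = torch.randint(0, 3, (16,))                     # labels from the current task only

# Softmax over negative energies, restricted to the classes present in the batch:
# the true class's energy is pushed down relative to in-batch negatives, while the
# energies of classes belonging to other tasks are left untouched (less interference).
classes_in_batch = y.unique()                      # sorted unique class ids
targets = torch.searchsorted(classes_in_batch, y)  # index of y within that subset
logits = -energy(x, classes_in_batch)
loss = F.cross_entropy(logits, targets)
loss.backward()
print(loss.item())
```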
The ability to dynamically adapt neural networks to newly-available data without performance deterioration would revolutionize deep learning applications. Streaming learning (i.e., learning from one data example at a time) has the potential to enable such real-time adaptation, but current approaches i) freeze a majority of network parameters during streaming and ii) are dependent upon offline, base initialization procedures over large subsets of data, which damages performance and limits applicability. To mitigate these shortcomings, we propose Cold Start Streaming Learning (CSSL), a simple, end-to-end approach for streaming learning with deep networks that uses a combination of replay and data augmentation to avoid catastrophic forgetting. Because CSSL updates all model parameters during streaming, the algorithm is capable of beginning streaming from a random initialization, making base initialization optional. Going further, the algorithm's simplicity allows theoretical convergence guarantees to be derived using analysis of the Neural Tangent Random Feature (NTRF). In experiments, we find that CSSL outperforms existing baselines for streaming learning in experiments on CIFAR100, ImageNet, and Core50 datasets. Additionally, we propose a novel multi-task streaming learning setting and show that CSSL performs favorably in this domain. Put simply, CSSL performs well and demonstrates that the complicated, multi-step training pipelines adopted by most streaming methodologies can be replaced with a simple, end-to-end learning approach without sacrificing performance.