Event detection helps people quickly identify the domain of complex texts and provides strong support for downstream natural language processing tasks. Existing methods only achieve fixed-type learning based on large amounts of data; when extended to new classes, they usually need to retain the original data and retrain the model. Event detection can learn new classes in a lifelong manner, but most existing methods either require retaining large amounts of original data or suffer from catastrophic forgetting. In addition, owing to the lack of practical data, it is hard to obtain enough data for model training. To address the above problems, we define a new task in the event detection field: few-shot incremental event detection. This task requires the model to retain previously learned event types while learning new ones from only a limited number of inputs. We rebuild and release a benchmark dataset for the few-shot incremental event detection task based on FewEvent; the released dataset is more suitable for this new task than other datasets. Furthermore, we propose two benchmark approaches, IFSED-K and IFSED-KP, which tackle the task in different ways. Experimental results show that our approaches achieve higher F1 scores and are more stable than the baselines.
The ability to incrementally learn new classes is vital to all real-world artificial intelligence systems. A large portion of high-impact applications, such as social media, recommender systems, and e-commerce platforms, can be represented by graph models. In this paper, we investigate the challenging yet practical problem of graph few-shot class-incremental (Graph FCL) learning, where the graph model is tasked to classify both newly encountered classes and previously learned classes. To this end, we put forward a graph pseudo-incremental learning paradigm that recurrently samples tasks from the base classes, producing an arbitrary number of training episodes for our model to practice the incremental-learning skill. Furthermore, we design a hierarchical-attention-based graph meta-learning framework, HAG-Meta. We introduce a task-sensitive regularizer, computed from task-level attention and node class prototypes, to mitigate overfitting onto either the novel or the base classes. To exploit topological knowledge, we add a node-level attention module to adjust the prototype representations. Our model not only achieves greater stability in consolidating old knowledge but also attains favorable adaptability to new knowledge with very limited data samples. Extensive experiments on three real-world datasets, including Amazon-Clothing, Reddit, and DBLP, show that our framework has remarkable advantages over the baselines and other related state-of-the-art methods.
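The pseudo-incremental idea above, repeatedly carving disjoint "sessions" out of the base classes so the model can rehearse incremental learning, can be illustrated with a short sketch. The dictionary layout, session count, and way/shot parameters below are illustrative assumptions, not the paper's API.

```python
import random

def sample_pseudo_incremental_tasks(base_data, n_sessions=3, n_way=5, k_shot=5):
    """Split base classes into a sequence of disjoint pseudo-sessions.

    base_data: dict mapping class id -> list of node indices (assumed layout).
    Returns a list of sessions, each holding n_way classes with k_shot nodes,
    mimicking the incremental arrival of new classes during meta-training.
    """
    classes = random.sample(list(base_data), n_sessions * n_way)
    sessions = []
    for s in range(n_sessions):
        session_classes = classes[s * n_way:(s + 1) * n_way]
        sessions.append({
            c: random.sample(base_data[c], min(k_shot, len(base_data[c])))
            for c in session_classes
        })
    return sessions
```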
Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model, a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.
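The loss composition this abstract describes, distillation on the old-class outputs plus cross-entropy over all classes, might look roughly like the following PyTorch sketch. The temperature T, weight alpha, and the KL-divergence form of the distillation term are our assumptions; the paper's exact formulation may differ.

```python
import torch.nn.functional as F

def incremental_loss(logits, labels, old_logits, num_old, T=2.0, alpha=1.0):
    # cross-entropy over all (old + new) classes to learn the new ones
    ce = F.cross_entropy(logits, labels)
    # distill the frozen previous model's soft predictions over the
    # num_old old classes to retain the knowledge acquired from them
    log_p = F.log_softmax(logits[:, :num_old] / T, dim=1)
    q = F.softmax(old_logits.detach() / T, dim=1)
    kd = F.kl_div(log_p, q, reduction="batchmean") * (T * T)
    return ce + alpha * kd
```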
New classes arise frequently in our ever-changing world, e.g., emerging topics on social media and new types of products in e-commerce. A model should recognize new classes while maintaining discriminability over old classes. Under severe circumstances, only limited novel instances are available to incrementally update the model. The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL). In this work, we propose a new meta-learning-based paradigm for FSCIL that learns multi-phase incremental tasks (LIMIT), synthesizing fake FSCIL tasks from the base dataset. The data format of the fake tasks is consistent with that of the "real" incremental tasks, so we can build a generalizable feature space through meta-learning. Moreover, LIMIT constructs a transformer-based calibration module that calibrates the old-class classifiers and the new-class prototypes to the same scale and fills in the semantic gap. The calibration module also adaptively contextualizes instance-specific embeddings with a set-to-set function. LIMIT efficiently adapts to new classes while resisting forgetting of old classes. Experiments on three benchmark datasets (CIFAR100, miniImageNet, and CUB200) and a large-scale dataset, ImageNet ILSVRC2012, validate that LIMIT achieves state-of-the-art performance.
The continual appearance of new objects in the visual world poses considerable challenges for current deep learning methods in real-world deployments. The challenge of learning new tasks is often exacerbated by the scarcity of data for the new categories due to rarity or cost. Here, we explore the important task of few-shot class-incremental learning (FSCIL) and its extreme data-scarcity condition. An ideal FSCIL model needs to perform well on all classes, regardless of their presentation order or the paucity of their data. It also needs to be robust to open-set real-world conditions and easily adaptable to the new tasks that continually arise in the field. In this paper, we first re-evaluate the current task setting and propose a more comprehensive and practical setting for the FSCIL task. Then, inspired by the similarity between the goals of FSCIL and those of modern face recognition systems, we propose our method, Augmented Angular Loss Incremental Classification (ALICE). In ALICE, we propose to use an angular penalty loss to obtain well-clustered features. As the obtained features not only need to be compact but also diverse enough to maintain generalization to future incremental classes, we further discuss how class augmentation, data augmentation, and data balancing affect classification performance. Experiments on benchmark datasets, including CIFAR100, miniImageNet, and CUB200, demonstrate the performance improvement of ALICE over the state-of-the-art FSCIL methods.
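A minimal sketch of an angular penalty loss of the kind ALICE borrows from face recognition is shown below. This is a CosFace-style additive cosine margin; the scale s and margin m are illustrative values, and ALICE's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def angular_penalty_loss(features, weight, labels, s=30.0, m=0.4):
    # cosine similarity between L2-normalized features and class weights
    cos = F.normalize(features) @ F.normalize(weight).t()
    # subtract an additive margin from the target-class cosine only,
    # which forces features of each class into a tighter angular cluster
    margin = m * F.one_hot(labels, cos.size(1)).float()
    return F.cross_entropy(s * (cos - margin), labels)
```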
Many modern computer vision algorithms suffer from two major bottlenecks: scarcity of data and learning new tasks incrementally. While training the model with new batches of data, the model loses its ability to classify the previous data judiciously, which is termed catastrophic forgetting. Conventional methods have tried to mitigate catastrophic forgetting of the previously learned data, but at the cost of compromised training in the current session. The state-of-the-art generative-replay-based approaches use complicated structures such as a generative adversarial network (GAN) to deal with catastrophic forgetting. Additionally, training a GAN with few samples may lead to instability. In this work, we present a novel method to deal with these two major hurdles. Our method identifies a better embedding space with an improved contrastive loss to make classification more robust. Moreover, our approach is able to retain previously acquired knowledge in the embedding space even when trained with new classes. We update previous-session class prototypes during training in such a way that they remain able to represent the true class mean. This is of prime importance as our classification rule is based on the nearest class mean classification strategy. We demonstrate our results by showing that the embedding space remains intact after training the model with new classes. We show that our method performed better than the existing state-of-the-art algorithms in terms of accuracy across different sessions.
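Since the classification rule here is nearest class mean, the decision step reduces to a distance computation like the sketch below. Euclidean distance is our assumption; a cosine distance would work similarly.

```python
import torch

def nearest_class_mean(embeddings, prototypes):
    # assign each embedding to the class whose prototype (class mean)
    # is closest in the embedding space
    dists = torch.cdist(embeddings, prototypes)  # (batch, num_classes)
    return dists.argmin(dim=1)
```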
Few-shot class-incremental learning (FSCIL) focuses on designing learning algorithms that can continually learn a sequence of new tasks from a few samples without forgetting old ones. The difficulty is that training on a sequence of limited data from new tasks leads to severe overfitting and causes the well-known catastrophic forgetting problem. Existing research mainly exploits image information, for example by storing image knowledge of previous tasks or restricting classifier updates. However, it ignores the informative and less noisy text information of class labels. In this work, we propose to leverage label-text information by adopting memory prompts. A memory prompt can learn new data sequentially while storing previous knowledge. Furthermore, to optimize the memory prompts without damaging the stored knowledge, we propose a stimulation-based training strategy. It optimizes the memory prompts according to the image-embedding stimulation, i.e., the distribution of the embedding elements. Experiments show that our proposed method outperforms all previous state-of-the-art approaches, significantly alleviating the catastrophic forgetting and overfitting problems.
Graph learning is a popular approach for performing machine learning on graph-structured data. It has revolutionized the ability of machine learning to model graph data for downstream tasks. Its applications are wide, owing to the availability of graph data ranging from all types of networks to information systems. Most graph learning methods assume that the graph is static and that its complete structure is known during training. This limits their applicability, since they cannot be applied to problems where the underlying graph grows over time and/or new tasks emerge incrementally. Such applications require a lifelong learning approach that can learn the graph continuously and accommodate new information whilst retaining previously learned knowledge. Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure. As a result, graph lifelong learning is gaining attention from the research community. This survey paper provides a comprehensive overview of recent advancements in graph lifelong learning, including a categorization of existing methods and discussions of potential applications and open research problems.
We propose a zero-shot learning relation classification (ZSLRC) framework that improves on state-of-the-art frameworks through its ability to recognize novel relations not present in the training data. The zero-shot learning approach mimics the way humans learn and recognize new concepts without prior knowledge. To this end, ZSLRC uses modified advanced prototypical networks to utilize weighted side (auxiliary) information. ZSLRC's side information is constructed from keywords, hypernyms of named entities, and labels and their synonyms. ZSLRC also includes an automatic hypernym extraction framework that acquires hypernyms of various named entities directly from the web. ZSLRC improves on state-of-the-art few-shot learning relation classification methods that rely on labeled training data, and is therefore applicable even in real-world scenarios where some relations have no corresponding labeled training examples. We show results from extensive experiments on two public datasets (NYT and FewRel) and show that ZSLRC significantly outperforms state-of-the-art methods on supervised learning, few-shot learning, and zero-shot learning tasks. Our experimental results also demonstrate the effectiveness and robustness of our proposed model.
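One plausible reading of the weighted side-information prototype is a convex combination of the mean support embedding and an embedding built from the side information, as sketched below. The mixing weight alpha and the input layout are hypothetical, and for unseen relations only the side-information term would be available.

```python
import torch

def side_info_prototype(support_embs, side_emb, alpha=0.7):
    # prototype = weighted blend of the mean support embedding and an
    # embedding of side information (keywords, hypernyms, synonyms);
    # zero-shot relations have no labeled support, so only side info is used
    if support_embs is None:
        proto = side_emb
    else:
        proto = alpha * support_embs.mean(dim=0) + (1 - alpha) * side_emb
    return proto / proto.norm()
```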
Few-shot learning (FSL) aims to learn models that generalize to novel classes with limited training samples. Recent works advance FSL towards a scenario where unlabeled examples are also available, proposing semi-supervised FSL methods. Another line of work also cares about the performance of the base classes in addition to the novel ones, and thus establishes the incremental FSL scenario. In this paper, we generalize the above two settings into a more realistic yet complex one, named Semi-Supervised Incremental Few-Shot Learning (S2I-FSL). To tackle the task, we propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating the ambiguity between base and novel classes caused by unreliable pseudo labels, and (2) a model adaptation mechanism to learn discriminative features for the novel classes while preserving base knowledge using a few labeled and all the unlabeled data. Extensive experiments on standard FSL, semi-supervised FSL, incremental FSL, and the first constructed S2I-FSL benchmarks demonstrate the effectiveness of our proposed method.
Conventionally, deep neural networks are trained offline, relying on a large dataset prepared in advance. This paradigm is often challenged in real-world applications, e.g. online services that involve continuous streams of incoming data. Recently, incremental learning has received increasing attention, and is considered a promising solution to the practical challenges mentioned above. However, it has been observed that incremental learning is subject to a fundamental difficulty: catastrophic forgetting, namely that adapting a model to new data often results in severe performance degradation on previous tasks or classes. Our study reveals that the imbalance between previous and new data is a crucial cause of this problem. In this work, we develop a new framework for incrementally learning a unified classifier, i.e. a classifier that treats both old and new classes uniformly. Specifically, we incorporate three components, cosine normalization, a less-forget constraint, and inter-class separation, to mitigate the adverse effects of the imbalance. Experiments show that the proposed method can effectively rebalance the training process, thus obtaining superior performance compared to the existing methods. On CIFAR-100 and ImageNet, our method reduces the classification errors by more than 6% and 13% respectively, under the incremental setting of 10 phases.
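The cosine-normalization component can be sketched as a classifier whose logits are scaled cosine similarities, so old- and new-class weights live on a comparable magnitude scale. This is a minimal sketch; the learnable-scale initialization is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Linear classifier over L2-normalized features and weights."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_dim) * 0.01)
        self.scale = nn.Parameter(torch.tensor(10.0))  # learnable temperature

    def forward(self, x):
        # logits depend only on the angle between feature and class weight,
        # removing the magnitude bias between old and new class weights
        return self.scale * F.normalize(x) @ F.normalize(self.weight).t()
```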
Deep learning models suffer from catastrophic forgetting when learning new tasks incrementally. Incremental learning has been proposed to retain the knowledge of old classes while learning to identify new ones. A typical approach is to use a few exemplars to avoid forgetting old knowledge. In this setting, the data imbalance between old and new classes is a key issue that leads to performance degradation. Several strategies have been designed to rectify the bias towards the new classes caused by the data imbalance. However, they rely heavily on assumptions about the bias relation between the old and new classes, and are therefore unsuitable for complex real-world applications. In this study, we propose an assumption-agnostic method, multi-granularity regularized re-balancing (MGRB), to address this problem. Re-balancing methods are used to alleviate the influence of data imbalance; however, we empirically find that they tend to under-fit the new classes. To this end, we further design a novel multi-granularity regularization term that enables the model to also consider the correlations between classes in addition to re-balancing the data. A class hierarchy is first constructed by grouping semantically or visually similar classes. The multi-granularity regularization then transforms the one-hot label vector into a continuous label distribution, which reflects the relations between the target class and the other classes based on the constructed class hierarchy. Thus, the model can learn inter-class relational information, which helps enhance the learning of both the old and the new classes. Experimental results on public datasets and on a real-world fault diagnosis dataset verify the effectiveness of the proposed method.
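The label transformation described above, one-hot labels softened toward hierarchy neighbors, can be illustrated with a hierarchy-aware label smoothing sketch. This is not MGRB's exact formula; the sibling_mask input and the eps mass are our assumptions.

```python
import torch
import torch.nn.functional as F

def hierarchical_soft_labels(labels, sibling_mask, eps=0.1):
    """labels: (batch,) class ids; sibling_mask: (C, C) bool, True where two
    classes share a group in the class hierarchy (assumed precomputed)."""
    num_classes = sibling_mask.size(0)
    onehot = F.one_hot(labels, num_classes).float()
    sib = sibling_mask[labels].float()
    sib = sib / sib.sum(dim=1, keepdim=True).clamp(min=1.0)
    # keep most mass on the target, spread eps over its hierarchy siblings
    return (1.0 - eps) * onehot + eps * sib
```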
Incremental few-shot semantic segmentation (IFSS) aims to incrementally expand a model's capacity to segment new classes of images, supervised by only a few samples. However, the features learned on old classes may drift significantly, causing catastrophic forgetting. Moreover, the few pixel-level segmentation samples of the new classes lead to a notorious overfitting problem in each learning session. In this paper, we explicitly represent class-based knowledge for semantic segmentation as a category embedding and a hyper-class embedding, where the former describes exclusive semantic properties and the latter expresses hyper-class knowledge as class-shared semantic properties. To solve IFSS, we propose EHNet, an Embedding-adaptive and Hyper-class-representation Network, which works from two aspects. First, we propose an embedding-adaptive strategy to avoid feature drift, which maintains old knowledge through the hyper-class representation and adaptively updates the category embeddings with a class-attention scheme to involve the new classes learned in individual sessions. Second, to resist the overfitting problem caused by the few training samples, the hyper-class embedding is learned by clustering all category embeddings for initialization and is aligned with the category embedding of a new class for enhancement, so that the learned knowledge assists the learning of new knowledge, alleviating the dependence of performance on the scale of the training data. Significantly, these two designs provide representation capability for classes with sufficient semantics and limited bias, enabling segmentation tasks that require high semantic dependence. Experiments on the PASCAL-5i and COCO datasets show that EHNet achieves new state-of-the-art performance with remarkable advantages.
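The hyper-class initialization step, clustering all category embeddings, could be approximated with a few k-means iterations as in the sketch below; the cluster count, iteration budget, and random seeding are illustrative, not the paper's procedure.

```python
import torch

def init_hyper_class_embeddings(category_embs, num_hyper, iters=10):
    # simple k-means over category embeddings; the cluster centers serve
    # as the initial hyper-class (class-shared) embeddings
    idx = torch.randperm(category_embs.size(0))[:num_hyper]
    centers = category_embs[idx].clone()
    for _ in range(iters):
        assign = torch.cdist(category_embs, centers).argmin(dim=1)
        for k in range(num_hyper):
            members = category_embs[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(dim=0)
    return centers
```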
The continual learning (CL) ability of humans is closely related to the stability-plasticity dilemma, which describes how humans achieve ongoing learning capability while preserving learned information. The notion of CL has been present in artificial intelligence (AI) since its inception. This paper presents a comprehensive review of CL. Different from previous reviews, which mainly focus on the catastrophic forgetting phenomenon in CL, this paper surveys CL from a more macroscopic perspective based on the stability-plasticity mechanism. Analogous to its biological counterpart, an "intelligent" AI agent should i) remember previously learned information (information retrospection); ii) continually infer new information (information prospection); and iii) transfer useful information (information transfer), to achieve high-level CL. Accordingly, we survey the taxonomy, evaluation metrics, algorithms, and applications, as well as some open problems. Our main contributions are i) re-examining CL from the level of artificial general intelligence; ii) providing a detailed and extensive overview of the topic of CL; and iii) presenting some novel ideas on the potential development of CL.
Metric-based meta-learning is one of the de facto standards in few-shot learning. It consists of representation learning and metric calculation designs. Previous works construct class representations in different ways, varying from mean output embeddings to covariances and distributions. However, using point embeddings in space lacks expressivity and cannot capture class information robustly, while complex statistical modeling makes metric design difficult. In this work, we use tensor fields (``areas'') to model classes from a geometrical perspective for few-shot learning. We present a simple and effective method, dubbed hypersphere prototypes (HyperProto), where class information is represented by hyperspheres of dynamic size with two sets of learnable parameters: the hypersphere's center and its radius. Extending from points to areas, hyperspheres are much more expressive than embeddings. Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than with statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere. Following this idea, we also develop two variants of prototypes under other measurements. Extensive experiments and analysis on few-shot learning tasks across NLP and CV and comparisons with 20+ competitive baselines demonstrate the effectiveness of our approach.
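Classification with hypersphere prototypes only needs the distance from a point to each sphere's surface. A minimal sketch, assuming that distance is |(||x - c|| - r)|; the paper's exact measurement may differ.

```python
import torch

def hypersphere_surface_distance(x, centers, radii):
    # x: (batch, d); centers: (C, d); radii: (C,)
    d_center = torch.cdist(x, centers)            # distance to each center
    return (d_center - radii.unsqueeze(0)).abs()  # distance to each surface

def classify(x, centers, radii):
    # nearest-surface classification over the hypersphere prototypes
    return hypersphere_surface_distance(x, centers, radii).argmin(dim=1)
```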
Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. This contrasts with the more realistic continual-learning paradigm, in which data arrive incrementally in the form of tasks containing disjoint classes. In this paper, we consider this problem of incremental meta-learning (IML), in which classes are presented incrementally in discrete tasks. We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with exemplars from previous tasks when sampling episodes for meta-learning. These episodes are then used for knowledge distillation to minimize catastrophic forgetting. Experiments on four datasets demonstrate that ERD surpasses the state of the art. In particular, on the more challenging one-shot, long-task-sequence incremental meta-learning scenarios, we reduce the gap between IML and joint training from the 3.5% / 10.1% / 13.4% of the current state of the art to 2.6% / 2.9% / 5.0% on Tiered-ImageNet / Mini-ImageNet / CIFAR100, respectively.
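The episode construction ERD describes, drawing classes from both the current task and an exemplar memory of earlier tasks, might look like the sketch below; the dict-based memory layout and uniform class sampling are our assumptions.

```python
import random

def sample_replay_episode(current_data, exemplar_memory, n_way, k_shot):
    # mix classes from the current task with replayed classes from the
    # exemplar memory, then sample a few examples per class to form one
    # meta-learning episode used for distillation against the old model
    pool = {**exemplar_memory, **current_data}
    classes = random.sample(list(pool), n_way)
    return {c: random.sample(pool[c], min(k_shot, len(pool[c])))
            for c in classes}
```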
Object detection is a fundamental task in computer vision and image processing. Deep-learning-based object detectors have been highly successful given abundant labeled data. But in real life, it is not guaranteed that every object category has enough labeled samples for training. These large object detectors are prone to overfitting when the training data is limited. It is therefore necessary to introduce few-shot learning and zero-shot learning into object detection, which together can be called low-shot object detection. Low-shot object detection (LSOD) aims to detect objects from few or even zero labeled instances, and can be divided into few-shot object detection (FSOD) and zero-shot object detection (ZSD), respectively. This paper conducts a comprehensive survey of deep-learning-based FSOD and ZSD. First, the survey classifies the methods of FSOD and ZSD into different categories and discusses their pros and cons. Second, it reviews the dataset settings and evaluation metrics of FSOD and ZSD, and then analyzes the performance of different methods on these benchmarks. Finally, it discusses future challenges and promising directions for FSOD and ZSD.
The counting task, which plays a fundamental role in numerous applications (e.g., crowd counting, traffic statistics), aims to predict the number of objects with various densities. Existing object counting tasks are designed for a single object class. However, it is inevitable to encounter newly arriving data with new classes in the real world. We name this scenario \textit{evolving object counting}. In this paper, we build the first evolving object counting dataset and propose a unified object counting network as a first attempt to address this task. The proposed model consists of two key components: a class-agnostic mask module and a class-increment module. The class-agnostic mask module learns a generic object-occupation prior via predicting a class-agnostic binary mask (e.g., 1 denotes there is an object at the considered position in an image and 0 otherwise). The class-increment module is used to handle newly arriving classes and provides discriminative class guidance for density map prediction. The combined outputs of the class-agnostic mask module and the image feature extractor are used to predict the final density map. When new classes arrive, we first add new neural nodes into the last regression and classification layers of this module. Then, instead of retraining the model from scratch, we utilize knowledge distillation to help the model remember what it has already learned about previous object classes. We also employ a support sample bank to store a small number of typical training samples of each class, which are used to prevent the model from forgetting key information of old data. With this design, our model can efficiently and effectively adapt to newly arriving classes while keeping good performance on already seen data without large-scale retraining. Extensive experiments on the collected dataset demonstrate favorable performance.
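Adding new neural nodes to the last classification layer while keeping the old-class weights intact is a generic PyTorch pattern, sketched below; this illustrates the mechanism, not the paper's exact code.

```python
import torch
import torch.nn as nn

def expand_classifier(old_fc: nn.Linear, num_new: int) -> nn.Linear:
    # grow the output layer for newly arriving classes while copying the
    # already-learned weights and biases of the old classes unchanged
    new_fc = nn.Linear(old_fc.in_features, old_fc.out_features + num_new)
    with torch.no_grad():
        new_fc.weight[:old_fc.out_features] = old_fc.weight
        new_fc.bias[:old_fc.out_features] = old_fc.bias
    return new_fc
```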
This paper considers incremental few-shot learning, which requires a model to continually recognize new categories with only a few examples. Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning that is aggravated by the data scarcity and imbalance of the few-shot setting. Our analysis further suggests that, to prevent catastrophic forgetting, action needs to be taken in the primitive stage, the training on base classes, rather than in the later few-shot learning sessions. We therefore propose to search for flat local minima of the base-training objective function and then fine-tune the model parameters within the flat region on new tasks. In this way, the model can efficiently learn new classes while preserving the old ones. Comprehensive experimental results demonstrate that our approach outperforms all existing state-of-the-art methods and is very close to the approximate upper bound. The source code is available at https://github.com/moukamisama/f2m.
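The flat-minima idea can be approximated by averaging the loss under small random weight perturbations, as in the hedged sketch below. The noise scale and sample count are assumptions, and the actual F2M procedure should be taken from the released code.

```python
import torch

def flat_region_loss(model, loss_fn, x, y, sigma=0.01, n_samples=2):
    # average the loss at the current weights and at randomly perturbed
    # weights, favoring minima that stay low across a small flat region
    total = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(n_samples):
        noise = [torch.randn_like(p) * sigma for p in params]
        with torch.no_grad():
            for p, n in zip(params, noise):
                p.add_(n)            # perturb the weights in place
        total = total + loss_fn(model(x), y)
        with torch.no_grad():
            for p, n in zip(params, noise):
                p.sub_(n)            # restore the original weights
    return total / (n_samples + 1)
```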
Humans can continually learn new knowledge. However, machine learning models suffer a drastic drop in performance on previous tasks after learning new ones. Cognitive science points out that competition between similar knowledge is an important cause of forgetting. In this paper, we design a paradigm for lifelong learning based on the meta-learning and associative mechanisms of the brain. It tackles the problem from two aspects: extracting knowledge and memorizing knowledge. First, we disrupt the background distribution of the samples through a background attack, which strengthens the model's ability to extract the key features of each task. Second, according to the similarity between the incremental knowledge and the base knowledge, we design an adaptive fusion of incremental knowledge, which helps the model allocate capacity to knowledge of different difficulty. It is theoretically shown that the proposed learning paradigm can make the models of different tasks converge to the same optimum. The proposed method is validated on the MNIST, CIFAR100, CUB200, and ImageNet100 datasets.