Contrastive Language-Image Pretraining (CLIP) has received widespread attention since its learned representations transfer well to various downstream tasks. During CLIP training, the InfoNCE objective aligns positive image-text pairs and separates negative ones. In this paper, we show a representation grouping effect underlying this process: the InfoNCE objective indirectly groups semantically similar representations together via randomly emerged within-modal anchors. We introduce Prototypical Contrastive Language Image Pretraining (ProtoCLIP) to enhance such grouping, improving its efficiency and increasing its robustness against the modality gap. Specifically, ProtoCLIP sets up prototype-level discrimination between the image and text spaces, which efficiently transfers higher-level structural knowledge. We further propose Prototypical Back Translation (PBT) to decouple representation grouping from representation alignment, which effectively learns meaningful representations under a large modality gap. PBT also enables us to introduce additional external teachers with richer prior knowledge. ProtoCLIP is trained with an online episodic training strategy, which can scale to unlimited amounts of data. Combining the above novel designs, we train ProtoCLIP on Conceptual Captions and achieve a +5.81% ImageNet linear-probing improvement and a +2.01% ImageNet zero-shot classification improvement. Code is available at https://github.com/megvii-research/protoclip.
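To make the prototype-level discrimination concrete, here is a minimal sketch (not the released implementation): embeddings are grouped by externally supplied cluster assignments, and each modality's samples are classified against the other modality's prototypes. The clustering step, temperature, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prototype_discrimination_loss(img_emb, txt_emb, assignments, n_proto, tau=0.1):
    """Classify each sample against the other modality's prototypes,
    transferring cluster-level structure across the modality gap."""
    # Prototype = normalized mean embedding of each cluster, per modality.
    img_protos = F.normalize(torch.stack(
        [img_emb[assignments == k].mean(0) for k in range(n_proto)]), dim=-1)
    txt_protos = F.normalize(torch.stack(
        [txt_emb[assignments == k].mean(0) for k in range(n_proto)]), dim=-1)
    logits_i2t = F.normalize(img_emb, dim=-1) @ txt_protos.T / tau
    logits_t2i = F.normalize(txt_emb, dim=-1) @ img_protos.T / tau
    return 0.5 * (F.cross_entropy(logits_i2t, assignments)
                  + F.cross_entropy(logits_t2i, assignments))

# Toy usage: 32 image-text pairs, 8 clusters, 64-d embeddings. Real
# assignments would come from a clustering step; here they are synthetic.
img, txt = torch.randn(32, 64), torch.randn(32, 64)
assignments = torch.arange(32) % 8
print(prototype_discrimination_loss(img, txt, assignments, n_proto=8))
```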
Heterogeneous graphs, which contain multiple types of nodes and edges, are prevalent in various domains, including bibliographic networks, social media, and knowledge graphs. As a fundamental task in analyzing heterogeneous graphs, relevance measurement aims to calculate the relevance between two objects of different types, and has been used in many applications such as web search, recommendation, and community detection. Most existing relevance measures focus on homogeneous networks where objects are of the same type; a few measures have been developed for heterogeneous graphs, but they usually require predefined meta-paths. Defining meaningful meta-paths requires substantial domain knowledge, which largely limits their applications, especially on schema-rich heterogeneous graphs such as knowledge graphs. Recently, graph neural networks (GNNs) have been widely applied to many graph mining tasks, but they have not yet been applied to measuring relevance. To address the aforementioned problems, we propose a novel GNN-based relevance measure, namely GSim. Specifically, we first provide a theoretical analysis showing that GNNs effectively measure the relevance of nodes in a graph. We then propose a context-path-based graph neural network (CP-GNN) to automatically leverage the semantics in heterogeneous graphs. Moreover, we exploit CP-GNN to support relevance measurement between two objects of any type. Extensive experiments demonstrate that GSim outperforms existing measures.
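As a rough illustration of why message passing can serve as a relevance measure (this is not the CP-GNN architecture; the parameter-free mean aggregation and the toy bibliographic graph below are illustrative assumptions):

```python
import numpy as np

def propagate(adj, feats, hops=2):
    """Simple mean-aggregation message passing (no learned weights)."""
    deg = adj.sum(1, keepdims=True).clip(min=1)
    h = feats
    for _ in range(hops):
        h = adj @ h / deg                 # average neighbor features
        h = h / np.linalg.norm(h, axis=1, keepdims=True).clip(min=1e-8)
    return h

def relevance(adj, feats, u, v, hops=2):
    h = propagate(adj, feats, hops)
    return float(h[u] @ h[v])             # cosine similarity of embeddings

# Toy heterogeneous graph: nodes 0-1 are "authors", 2-4 are "papers".
adj = np.array([[0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1],
                [1, 0, 0, 0, 0],
                [1, 1, 0, 0, 0],
                [0, 1, 0, 0, 0]], dtype=float)
feats = np.eye(5)
print(relevance(adj, feats, 0, 2))        # relevance of author 0 to paper 2
```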
Reward design is a critical part of applying reinforcement learning, and its performance largely depends on how well the reward signal frames the designer's goal and how well the signal assesses progress toward that goal. In many cases, the extrinsic rewards provided by the environment (e.g., winning or losing a game) are very sparse, making it difficult to train agents directly. In practice, researchers usually assist agent learning by adding auxiliary rewards. However, designing auxiliary rewards often turns into a trial-and-error search for reward settings that produce acceptable results. In this paper, we propose to automatically generate goal-consistent intrinsic rewards for the learning agent, such that maximizing them also maximizes the expected cumulative extrinsic rewards. To this end, we introduce the concept of motivation, which captures the underlying goal of maximizing certain rewards, and propose a motivation-based reward design method. The basic idea is to shape the intrinsic rewards by minimizing the distance between the intrinsic and extrinsic motivations. We conduct extensive experiments and show that our method performs better than state-of-the-art methods in handling delayed rewards, exploration, and credit assignment problems.
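A toy sketch of the motivation-matching idea as we read it: treat "motivation" as the expected discounted return under a reward, and fit a parametric intrinsic reward so its motivation tracks the sparse extrinsic one, which spreads credit over the whole trajectory. The linear reward model, the hidden goal direction, and the synthetic trajectories are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, lr = 0.99, 0.01
w = np.zeros(4)                                  # intrinsic reward r_int(s) = w @ s
w_true = np.array([1.0, -1.0, 0.5, 0.0])         # hidden goal direction (illustrative)

def disc_return(rewards):
    g = 0.0
    for r in reversed(list(rewards)):
        g = r + gamma * g
    return g

for _ in range(400):
    grad = np.zeros(4)
    for _b in range(64):                         # batch of sampled trajectories
        states = rng.normal(size=(20, 4))
        r_ext = np.zeros(20)
        r_ext[-1] = float(states[-1] @ w_true > 0)   # sparse terminal success signal
        # Motivation distance: squared gap between discounted returns.
        gap = disc_return(states @ w) - disc_return(r_ext)
        grad += 2 * gap * sum((gamma ** t) * states[t] for t in range(20))
    w -= lr * grad / 64

# The learned dense intrinsic reward should roughly track the goal direction.
print("intrinsic direction:", np.round(w / np.linalg.norm(w), 2))
print("goal direction:     ", np.round(w_true / np.linalg.norm(w_true), 2))
```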
Federated learning, which solves the data-silo problem by connecting multiple computing devices into a decentralized system, has become a promising paradigm for privacy-preserving machine learning. This paper studies vertical federated learning (VFL), which addresses scenarios where collaborating organizations share the same set of users but hold disjoint features. Contemporary VFL methods are mainly designed for static scenarios in which the active party and the passive parties hold all their data from the beginning and the data never change. However, real-life data often change dynamically. To alleviate this problem, we propose a novel vertical federated learning method, DVFL, which adapts to dynamic data distribution changes through knowledge distillation. In DVFL, most computations are kept local to improve data security and model efficiency. Our extensive experimental results show that DVFL not only obtains results close to those of existing VFL methods in static scenarios, but also adapts to changes in data distribution in dynamic scenarios.
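The abstract does not detail the distillation setup, so the following is a hedged toy sketch of one plausible arrangement: two parties hold disjoint feature halves of the same users, the active party fuses both parties' intermediate outputs into a "teacher", and a local student is distilled from the fused logits so it can keep serving on local features alone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x_a, x_b = torch.randn(64, 5), torch.randn(64, 5)   # party A / party B features
y = torch.randint(0, 2, (64,))                      # labels held by the active party

enc_a, enc_b = nn.Linear(5, 8), nn.Linear(5, 8)     # party-local encoders
head = nn.Linear(16, 2)                             # active party's fusion head
student = nn.Linear(5, 2)                           # A's local student model

opt = torch.optim.SGD([*enc_a.parameters(), *enc_b.parameters(),
                       *head.parameters(), *student.parameters()], lr=0.1)
for _ in range(100):
    teacher_logits = head(torch.cat([enc_a(x_a), enc_b(x_b)], dim=1))
    loss_task = F.cross_entropy(teacher_logits, y)
    # Distillation: the student on A's features alone mimics the joint model.
    loss_kd = F.kl_div(F.log_softmax(student(x_a), dim=1),
                       F.softmax(teacher_logits.detach(), dim=1),
                       reduction="batchmean")
    opt.zero_grad(); (loss_task + loss_kd).backward(); opt.step()
print("done; the student can now serve locally on party A's features")
```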
We develop a novel framework that adds the regularizers of the sparse group lasso to a family of adaptive optimizers in deep learning, such as Momentum, AdaGrad, Adam, AMSGrad, and AdaHessian, creating new optimizers accordingly named Group Momentum, Group AdaGrad, Group Adam, Group AMSGrad, and Group AdaHessian. We establish theoretically proven convergence guarantees in the stochastic convex setting, based on primal-dual methods. We evaluate the regularization effect of the new optimizers on three large-scale real-world ad click datasets with state-of-the-art deep learning models. The experimental results show that, compared with a post-processing procedure using magnitude pruning, the performance of the models can be significantly improved at the same sparsity level. Furthermore, compared with the cases without magnitude pruning, our methods achieve extremely high sparsity with significantly better or highly competitive performance.
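The core mechanism, an adaptive update followed by the sparse group lasso proximal operator so that whole parameter groups (e.g., one embedding row per feature) can shrink exactly to zero, can be sketched as follows. This is a toy "Group Momentum"-style loop on a least-squares problem; the step size and regularization strengths are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def prox_sparse_group_lasso(W, lam1, lam2, lr):
    """Rows of W are groups. lam1: L1 (element sparsity), lam2: group L2."""
    # Element-wise soft threshold (lasso part).
    W = np.sign(W) * np.maximum(np.abs(W) - lr * lam1, 0.0)
    # Group-wise shrinkage (group lasso part): scale or zero each row.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lr * lam2 / np.maximum(norms, 1e-12), 0.0)
    return W * scale

rng = np.random.default_rng(0)
X, Wtrue = rng.normal(size=(200, 10)), np.zeros((10, 3))
Wtrue[:3] = rng.normal(size=(3, 3))              # only 3 groups are truly active
Y = X @ Wtrue
W, m, lr = np.zeros((10, 3)), 0.0, 0.05
for _ in range(500):
    g = X.T @ (X @ W - Y) / len(X)
    m = 0.9 * m + g                              # momentum-style update
    W = W - lr * m
    W = prox_sparse_group_lasso(W, lam1=1e-3, lam2=1e-2, lr=lr)

# Expect the first three groups to keep clearly larger norms, with the
# rest driven to (near) zero by the proximal step.
print("per-group norms:", np.round(np.linalg.norm(W, axis=1), 4))
```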
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
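The abstract does not give the exact block layout, so the following is a hedged sketch of one plausible iRMB-style block: an inverted residual that mixes a depthwise convolution for short-range dependency with multi-head self-attention for long-range interaction. The expansion ratio, head count, and where the attention sits are illustrative assumptions, not the released design.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)             # pointwise expand
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, 1)            # pointwise project
        self.act = nn.GELU()

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        z = self.act(self.expand(x))
        z = self.act(self.dw(z))                            # local mixing
        t = z.flatten(2).transpose(1, 2)                    # (B, HW, hidden)
        t, _ = self.attn(t, t, t)                           # global mixing
        z = z + t.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(z)                          # inverted residual

block = iRMBSketch(dim=32)
print(block(torch.randn(2, 32, 14, 14)).shape)              # -> (2, 32, 14, 14)
```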
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs to train a language model for a state-of-the-art BERT-based QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates while the objects (or subjects) serve as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves on-par performance with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
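A hedged sketch of the triple-to-question step: the OpenIE extraction itself is omitted (the triples below are hand-written stand-ins for its output on paraphrased passages), and the wh-templates are illustrative assumptions rather than PIE-QG's actual generation rules.

```python
def questions_from_triple(subj, pred, obj):
    # Templates assume a past-tense predicate; a real system would inflect.
    return [
        (f"Who or what {pred} {obj}?", subj),    # answer is the subject
        (f"What was {pred} by {subj}?", obj),    # answer is the object
    ]

# Hand-written triples standing in for OpenIE output.
triples = [("Marie Curie", "discovered", "polonium"),
           ("the Wright brothers", "built", "the first airplane")]

training_pairs = []
for s, p, o in triples:
    training_pairs.extend(questions_from_triple(s, p, o))

for q, a in training_pairs:
    print(f"Q: {q}  A: {a}")
```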
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such a dataset is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically designed for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task: the Transformer must identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer is driven to distill transformation-invariant features from the perturbed tokens, simultaneously performing difficulty measurement and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
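A minimal sketch of the two-branch objective as we read the abstract: small MLPs stand in for the Transformer, Gaussian noise for the token perturbations, and the EMA-updated target follows a BYOL-style reading of "online and target branches"; the difficulty head must say which of the two views carried the stronger perturbation. All of these stand-ins are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim = 32
online = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
target = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
target.load_state_dict(online.state_dict())
for p in target.parameters():
    p.requires_grad_(False)
difficulty_head = nn.Linear(2 * dim, 2)          # which view is harder?

opt = torch.optim.Adam([*online.parameters(), *difficulty_head.parameters()], 1e-3)
x = torch.randn(16, dim)                         # stand-in for patch tokens
for _ in range(50):
    weak = x + 0.05 * torch.randn_like(x)        # mild perturbation
    strong = x + 0.5 * torch.randn_like(x)       # harder perturbation
    z_on, z_tg = online(strong), target(weak).detach()
    loss_ssl = 2 - 2 * F.cosine_similarity(z_on, z_tg, dim=-1).mean()
    # Difficulty ranking: present the two views in random order.
    z_weak = online(weak)
    swap = torch.rand(16) < 0.5
    first = torch.where(swap.unsqueeze(1), z_on, z_weak)
    second = torch.where(swap.unsqueeze(1), z_weak, z_on)
    logits = difficulty_head(torch.cat([first, second], dim=-1))
    loss_rank = F.cross_entropy(logits, swap.long())   # 1 = strong view first
    opt.zero_grad(); (loss_ssl + loss_rank).backward(); opt.step()
    with torch.no_grad():                        # EMA update of the target branch
        for pt, po in zip(target.parameters(), online.parameters()):
            pt.mul_(0.99).add_(0.01 * po)
print("sketch trained")
```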
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability added by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights in the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
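A hedged sketch of the score interpolation described above: TransE is an illustrative choice of base scorer, and the hard-coded retrieved analog plus the fixed blending weight stand in for AnKGE's learned retriever and adaptive weights.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(6, 8))      # entity embeddings (toy, random)
R = rng.normal(size=(2, 8))      # relation embeddings

def transe_score(h, r, t):
    return -np.linalg.norm(E[h] + R[r] - E[t])

def analogy_score(h, r, t, analog_t):
    # Score the triple as if the tail were its retrieved analogical object.
    return transe_score(h, r, analog_t)

def ankge_style_score(h, r, t, analog_t, alpha=0.7):
    # alpha is adaptive in AnKGE; a fixed blend is used here for simplicity.
    return alpha * transe_score(h, r, t) + (1 - alpha) * analogy_score(h, r, t, analog_t)

# Rank candidate tails for query (h=0, r=1, ?), with entity 3 as retrieved analog.
scores = {t: ankge_style_score(0, 1, t, analog_t=3) for t in range(6)}
print("best tail:", max(scores, key=scores.get))
```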
Digital engineering transformation is a crucial process for the engineering paradigm shifts in the fourth industrial revolution (4IR), and artificial intelligence (AI) is a critical enabling technology in digital engineering transformation. This article discusses the following research questions: What are the fundamental changes in the 4IR? More specifically, what are the fundamental changes in engineering? What is digital engineering? What are the main uncertainties there? What is trustworthy AI? Why is it important today? What are emerging engineering paradigm shifts in the 4IR? What is the relationship between the data-intensive paradigm and digital engineering transformation? What should we do for digitalization? From investigating the pattern of industrial revolutions, this article argues that ubiquitous machine intelligence (uMI) is the defining power brought by the 4IR. Digitalization is a condition to leverage ubiquitous machine intelligence. Digital engineering transformation towards Industry 4.0 has three essential building blocks: digitalization of engineering, leveraging ubiquitous machine intelligence, and building digital trust and security. The engineering design community at large is facing an excellent opportunity to bring the new capabilities of ubiquitous machine intelligence and trustworthy AI principles, as well as digital trust, together in various engineering systems design to ensure the trustworthiness of systems in Industry 4.0.