Domain adaptation aims to transfer the knowledge acquired by models trained on (data-rich) source domains to (low-resource) target domains, and a popular method for doing so is invariant representation learning. While such methods have been studied extensively for classification and regression problems, how they apply to ranking problems, where both the data and the metrics have a list structure, is not well understood. Theoretically, we establish a domain adaptation generalization bound for ranking under listwise metrics such as MRR and NDCG. The bound suggests an adaptation method via learning list-level domain-invariant feature representations, whose benefits are demonstrated empirically by unsupervised domain adaptation experiments on real-world ranking tasks, including passage reranking. A key message is that, for domain adaptation, the representations should be analyzed at the same level at which the metric is computed: we show that learning invariant representations at the list level is most effective for adaptation on ranking problems.
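To make the suggested adaptation method concrete, below is a minimal sketch of learning list-level domain-invariant representations with a domain-adversarial objective (gradient reversal). The mean pooling, the discriminator, and all names are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward
    # pass, so the encoder learns to *confuse* the domain discriminator.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def list_domain_loss(item_encoder, discriminator, src_lists, tgt_lists):
    # src_lists / tgt_lists: (batch, list_len, feat_dim) item features.
    src = item_encoder(src_lists).mean(dim=1)   # pool items -> list-level vector
    tgt = item_encoder(tgt_lists).mean(dim=1)
    feats = GradReverse.apply(torch.cat([src, tgt], dim=0))
    logits = discriminator(feats).squeeze(-1)   # per-list domain logits
    labels = torch.cat([torch.ones(src.size(0)), torch.zeros(tgt.size(0))])
    return F.binary_cross_entropy_with_logits(logits, labels)
```

In training, this loss would be added to the source-domain ranking loss with a trade-off weight, so the encoder both ranks well on the source and produces list-level features that are indistinguishable across domains.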
Adder Neural Networks (AdderNets) provide a new way of developing energy-efficient neural networks by replacing the expensive multiplications in convolution with cheaper additions (i.e., the $\ell_1$-norm). To achieve higher hardware efficiency, it is necessary to further study low-bit quantization of AdderNets. Because the commutative law of multiplication does not hold for the $\ell_1$-norm, the well-established quantization methods for convolutional networks cannot be applied to AdderNets. Existing AdderNet quantization techniques therefore use only one shared scale to quantize both the weights and the activations simultaneously. Admittedly, such an approach preserves the commutative law during $\ell_1$-norm quantization, but the accuracy drop after low-bit quantization cannot be ignored. To this end, we first thoroughly analyze the difference between the distributions of weights and activations in AdderNets, and then propose a new quantization algorithm that redistributes the weights and the activations. Specifically, the pre-trained full-precision weights in different kernels are clustered into different groups, so that intra-group shared and inter-group independent scales can be adopted. To further compensate for the accuracy drop caused by the distribution difference, we develop a lossless range clamp scheme for weights and a simple yet effective outlier clamp strategy for activations. Thus, the functionality of the full-precision weights and the representation ability of the full-precision activations can be fully preserved. The effectiveness of the proposed quantization method for AdderNets is verified on several benchmarks; e.g., our 4-bit post-training quantized adder ResNet-18 achieves a 66.5% top-1 accuracy on ImageNet with comparable energy efficiency, which is about 8.5% higher than that of previous AdderNet quantization methods.
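As an illustration of intra-group shared and inter-group independent scales, the sketch below groups kernels by their dynamic range and quantizes each group with its own scale. The quantile-based grouping and the symmetric rounding scheme are simple stand-ins for the paper's clustering and clamp details:

```python
import torch

def group_quantize(weight, num_groups=4, num_bits=4):
    # weight: (out_channels, in_channels, kh, kw) full-precision weights
    qmax = 2 ** (num_bits - 1) - 1
    ranges = weight.abs().flatten(1).max(dim=1).values       # per-kernel range
    edges = torch.quantile(ranges, torch.linspace(0, 1, num_groups + 1))
    group = torch.bucketize(ranges, edges[1:-1])             # kernel -> group id
    q = torch.empty_like(weight)
    scales = torch.ones(num_groups)
    for g in range(num_groups):
        mask = group == g
        if not mask.any():                                   # skip empty groups
            continue
        scales[g] = ranges[mask].max().clamp(min=1e-8) / qmax  # shared in-group scale
        q[mask] = (weight[mask] / scales[g]).round().clamp(-qmax, qmax)
    return q, scales, group   # integer-valued weights + independent group scales
```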
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
This paper investigates a phenomenon in which query-based object detectors mispredict at the last decoding stage while predicting correctly at an intermediate stage. We review the training process and attribute this overlooked phenomenon to two limitations: a lack of training emphasis on later stages, and cascading errors along the decoding sequence. We design and present Selective Query Recollection (SQR), a simple and effective training strategy for query-based object detectors. It cumulatively collects intermediate queries as the decoding stages go deeper and selectively forwards those queries to downstream stages in addition to the sequential path. In this way, SQR places training emphasis on later stages and allows later stages to work directly with intermediate queries from earlier stages. SQR can easily be plugged into various query-based object detectors and significantly enhances their performance while leaving the inference pipeline unchanged. We apply SQR to Adamixer, DAB-DETR, and Deformable-DETR across various settings (backbone, number of queries, training schedule), where it consistently brings a 1.4-2.8 AP improvement.
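The sketch below illustrates the recollection idea on a generic DETR-style decoder: during training, each stage additionally consumes queries recollected from the two most recent collections, so supervision accumulates on later stages. The exact selection rule and the stage interface are assumptions:

```python
import copy
import torch.nn as nn

class SQRDecoder(nn.Module):
    # Sketch of a DETR-style decoder trained with Selective Query
    # Recollection: each stage also processes queries recollected from the
    # two most recent collections (Fibonacci-like growth in query paths).
    def __init__(self, stage: nn.Module, num_stages: int = 6):
        super().__init__()
        self.stages = nn.ModuleList(copy.deepcopy(stage) for _ in range(num_stages))

    def forward(self, queries, memory):
        if not self.training:               # inference pipeline is unchanged
            for stage in self.stages:
                queries = stage(queries, memory)
            return [queries]
        prev, curr, outputs = [], [queries], []
        for stage in self.stages:
            recollected = prev + curr       # selective query recollection
            outs = [stage(q, memory) for q in recollected]
            outputs.extend(outs)
            prev, curr = curr, outs
        return outputs                      # supervise every collected output
```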
The combination of transformers and the masked image modeling (MIM) pre-training framework has shown great potential on various vision tasks. However, the pre-training computational budget is too heavy, preventing MIM from becoming a practical training paradigm. This paper presents FastMIM, a simple and generic framework that expedites masked image modeling via two steps: (i) pre-training vision backbones with low-resolution input images; and (ii) reconstructing Histograms of Oriented Gradients (HOG) features instead of the original RGB values of the input images. In addition, we propose FastMIM-P, which progressively enlarges the input resolution during the pre-training stage to further enhance the transfer results of high-capacity models. We point out that: (i) a wide range of input resolutions in the pre-training phase can lead to similar performance in the fine-tuning phase and on downstream tasks such as detection and segmentation; (ii) the shallow layers of the encoder matter more during pre-training, and discarding the last several layers can speed up training with no harm to fine-tuning performance; (iii) the decoder should match the size of the selected network; and (iv) HOG is more stable than RGB values when the resolution changes. Equipped with FastMIM, all kinds of vision backbones can be pre-trained efficiently. For example, we achieve 83.8%/84.1% top-1 accuracy on ImageNet-1K with ViT-B/Swin-B as backbones. Compared to previous relevant approaches, we achieve comparable or better top-1 accuracy while accelerating the training procedure by $\sim$5$\times$. Code can be found at https://github.com/ggjy/FastMIM.pytorch.
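For step (ii), the sketch below computes per-cell HOG reconstruction targets from grayscale inputs. Unsigned gradients, 9 orientation bins, and per-cell L2 normalization are assumptions; the paper's exact HOG variant may differ:

```python
import math
import torch
import torch.nn.functional as F

def hog_targets(images, cell=8, bins=9):
    # images: (B, 1, H, W) grayscale in [0, 1]
    # returns: (B, bins, H / cell, W / cell) per-cell orientation histograms
    kx = torch.tensor([[[[-1.0, 0.0, 1.0]]]])          # horizontal gradient filter
    gx = F.conv2d(images, kx, padding=(0, 1))
    gy = F.conv2d(images, kx.transpose(2, 3), padding=(1, 0))
    mag = (gx ** 2 + gy ** 2).sqrt()
    ang = torch.atan2(gy, gx) % math.pi                # unsigned orientation
    idx = (ang / math.pi * bins).long().clamp_(max=bins - 1)
    hist = images.new_zeros(images.shape[0], bins, *images.shape[2:])
    hist.scatter_add_(1, idx, mag)                     # orientation-binned magnitudes
    hist = F.avg_pool2d(hist, cell) * cell * cell      # sum within each cell
    return F.normalize(hist, dim=1)                    # L2-normalize per cell
```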
Generalist models, which are capable of performing diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Although they are a hopeful route toward general-purpose AI, existing generalist models are still at an early stage, with limited modality and task coverage. To empower multi-modal task scaling and speed up this line of research, we release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively, even with just a single line of code. The system automatically generates task plans from such instructions for training and inference, and it also facilitates multi-task training for diverse multi-modal workloads. As a starting point, we provide presets of 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop a first-of-its-kind single model, OFA+, that can handle text, image, speech, video, and motion data. The single OFA+ model achieves 95% of the performance of 15 task-finetuned models on average with only 16% of their parameters, showcasing the performance reliability of the multi-modal task scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
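To convey the declarative flavor, here is a hypothetical illustration of a one-line multi-modal instruction plus a toy parser. The slot syntax, `TEMPLATE`, and `parse_slots` are invented for illustration only; the real OFASys syntax lives in the linked repository:

```python
import re

# A one-line template names the modalities and slots of a captioning task;
# a generic parser could turn such a declaration into a task plan, keeping
# the task definition decoupled from the model implementation.
TEMPLATE = '[IMAGE:img] what does the image describe? -> [TEXT:caption]'

def parse_slots(template: str):
    # Each [MODALITY:name] slot declares one input/output of the task.
    return re.findall(r'\[([A-Z]+):(\w+)\]', template)

print(parse_slots(TEMPLATE))   # [('IMAGE', 'img'), ('TEXT', 'caption')]
```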
Humans have an innate ability to identify and differentiate instances that they are not familiar with, by utilizing and adapting the knowledge they have acquired so far. Importantly, they achieve this without deteriorating the performance of their earlier learning. Inspired by this, we identify and formulate a new, pragmatic problem setting, NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model with incrementally discovering novel categories of instances from unlabeled data, while maintaining its performance on previously seen categories. We propose 1) a method to generate pseudo-latent representations which act as a proxy for the (no longer available) labeled data, thereby mitigating forgetting, 2) a mutual-information-based regularizer which enhances unsupervised discovery of novel classes, and 3) a simple known-class identifier which aids generalized inference when the test data contains instances of both seen and unseen categories. We introduce experimental protocols based on CIFAR-10, CIFAR-100 and ImageNet-1000 to measure the trade-off between knowledge retention and novel class discovery. Our extensive evaluations reveal that existing models catastrophically forget previously seen categories while identifying novel ones, whereas our method is able to balance effectively between the competing objectives. We hope our work will attract further research into this newly identified pragmatic problem setting.
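A minimal sketch of point 1), assuming the pseudo-latent generator reduces to sampling from class-conditional Gaussians fitted to old-class features; the actual generation mechanism in the paper may differ:

```python
import torch
import torch.nn.functional as F

def pseudo_latent_replay_loss(classifier, mu, sigma, n_per_class=32):
    # mu: (num_old_classes, feat_dim) class means; sigma: scalar std.
    num_classes, dim = mu.shape
    labels = torch.arange(num_classes).repeat_interleave(n_per_class)
    latents = mu[labels] + sigma * torch.randn(len(labels), dim)
    logits = classifier(latents)            # old-class head on pseudo-features
    return F.cross_entropy(logits, labels)  # preserves old decision boundaries
```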
Object detection using single-point supervision has received increasing attention over the years. In this paper, we attribute the large performance gap between point-supervised and bounding-box-supervised detectors to the failure to generate high-quality proposal bags, which are crucial for multiple instance learning (MIL). To address this problem, we introduce a lightweight alternative to off-the-shelf proposal (OTSP) methods and thereby create the Point-to-Box Network (P2BNet), which can construct an inter-object balanced proposal bag by generating proposals in an anchor-like way. By fully exploiting the accurate position information, P2BNet further constructs an instance-level bag, avoiding the mixture of multiple objects. Finally, a coarse-to-fine policy, applied in a cascade fashion, is used to improve the IoU between the proposals and the ground truth (GT). Benefiting from these strategies, P2BNet is able to produce high-quality instance-level bags for object detection. P2BNet improves the mean average precision (AP) by more than 50% relative to the previous best PSOD method on the MS COCO dataset. It also demonstrates great potential for bridging the performance gap between point-supervised and bounding-box-supervised detectors. The code will be released at github.com/ucas-vg/p2bnet.
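A minimal sketch of building an anchor-like proposal bag around a single annotated point; the scale and aspect-ratio grids are assumptions, not the paper's settings:

```python
import itertools
import torch

def proposal_bag(point_xy, scales=(32, 64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    # point_xy: the single annotated point for one object instance
    x, y = point_xy
    boxes = []
    for s, r in itertools.product(scales, ratios):
        w, h = s * r ** 0.5, s / r ** 0.5       # area s*s, aspect ratio w/h = r
        boxes.append([x - w / 2, y - h / 2, x + w / 2, y + h / 2])
    return torch.tensor(boxes)                   # (len(scales)*len(ratios), 4) xyxy
```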
Network architecture plays a key role in deep-learning-based computer vision systems. The widely used convolutional neural networks and transformers treat an image as a grid or a sequence, structures that are not flexible enough to capture irregular and complex objects. In this paper, we propose to represent an image as a graph and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split an image into a number of patches, which are viewed as nodes, and then construct a graph by connecting nearest neighbors. Based on this graph representation of the image, we build the ViG model to transform and exchange information among all nodes. ViG consists of two basic modules: a Grapher module with graph convolution for aggregating and updating graph information, and an FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built with different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs for general visual tasks will provide useful inspiration and experience for future research. The PyTorch code is available at https://github.com/huawei-noah/Efficient-AI-Backbones and the MindSpore code is available at https://gitee.com/mindspore/models.
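A minimal sketch of the ViG pipeline on patch features: build a k-nearest-neighbor graph, then aggregate each node's neighbors with a max-relative-style update. The layer details and shapes are assumptions:

```python
import torch

def knn_graph(x, k=9):
    # x: (num_patches, C) patch features; returns (num_patches, k) neighbor ids
    dist = torch.cdist(x, x)                                 # pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]    # drop the self-loop

def max_relative_conv(x, idx, w):
    # w: (2C, C) projection; max-relative style neighbor aggregation
    rel = (x[idx] - x.unsqueeze(1)).max(dim=1).values        # strongest relative feature
    return torch.relu(torch.cat([x, rel], dim=-1) @ w)       # update node features
```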
Contemporary deep-learning object detection methods for autonomous driving typically assume a fixed set of common traffic participants, such as pedestrians and cars. Most existing detectors are unable to detect uncommon objects and corner cases (e.g., a dog crossing the street), which may lead to severe accidents in some situations, making the real-world deployment of reliable autonomous driving uncertain. A main reason hindering the development of truly reliable self-driving systems is the lack of public datasets for evaluating the performance of object detectors on corner cases. To fill this gap, we introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors. The dataset consists of 1500 carefully selected real-world driving scenes, each containing four object-level corner cases on average, spanning more than 30 object categories. On CODA, the performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly, to no more than 12.8% in mAR. Moreover, we experiment with a state-of-the-art open-world object detector and find that it, too, fails to reliably identify the novel objects in CODA, suggesting that a robust perception system for autonomous driving may still be far from reach. We hope our CODA dataset contributes to further research on reliable detection for real-world autonomous driving. Our dataset will be released at https://coda-dataset.github.io.