The currently popular two-stream, two-stage tracking framework extracts the template and search-region features separately and then performs relation modeling; as a result, the extracted features lack target awareness and have limited target-background discriminability. To tackle this issue, we propose a novel one-stream tracking (OSTrack) framework that unifies feature learning and relation modeling by bridging the template-search image pairs with bidirectional information flows. In this way, discriminative target-oriented features can be dynamically extracted by mutual guidance. Since no extra heavy relation modeling module is needed and the implementation is highly parallelized, the proposed tracker runs at a fast speed. To further improve inference efficiency, an in-network candidate early elimination module is proposed based on the strong similarity prior calculated in the one-stream framework. As a unified framework, OSTrack achieves state-of-the-art performance on multiple benchmarks; in particular, it shows impressive results on the one-shot tracking benchmark GOT-10k, achieving 73.7% AO and improving the existing best result (SwinTrack) by 4.3%. Besides, our method maintains a good performance-speed trade-off and shows faster convergence. The code and models are available at https://github.com/botaoye/OSTrack.
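As an illustration of the one-stream idea, the following PyTorch sketch (module names, dimensions, and the keep-ratio are assumptions, not taken from the paper) concatenates template and search-region tokens so that self-attention performs feature extraction and relation modeling jointly, and reuses the attention the template pays to each search token as a similarity prior for early candidate elimination.

```python
import torch
import torch.nn as nn

class OneStreamBlock(nn.Module):
    """Joint self-attention over the concatenated template + search tokens."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, z, x):
        # z: (B, Nz, C) template tokens, x: (B, Nx, C) search-region tokens
        tokens = torch.cat([z, x], dim=1)
        h = self.norm1(tokens)
        out, w = self.attn(h, h, h, need_weights=True)   # w: (B, Nz+Nx, Nz+Nx), head-averaged
        tokens = tokens + out
        tokens = tokens + self.ffn(self.norm2(tokens))
        nz = z.size(1)
        # similarity prior: how much attention each search token receives from the template
        sim = w[:, :nz, nz:].mean(dim=1)                 # (B, Nx)
        return tokens[:, :nz], tokens[:, nz:], sim

def eliminate_candidates(x, sim, keep_ratio=0.7):
    """Keep only the search tokens with the highest template-attention scores."""
    k = max(1, int(x.size(1) * keep_ratio))
    idx = sim.topk(k, dim=1).indices.unsqueeze(-1).expand(-1, -1, x.size(-1))
    return x.gather(1, idx)

block = OneStreamBlock()
z, x = torch.randn(2, 64, 256), torch.randn(2, 256, 256)
z, x, sim = block(z, x)
x = eliminate_candidates(x, sim)   # candidates unlikely to contain the target are dropped early
```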
Teaching machines to recognize a new category from only a few training samples, especially a single one, remains challenging because novel classes suffer from a severe lack of data. Humans, however, can learn new classes quickly even from a few samples, because they can tell the discriminative features of each category based on both visual and semantic prior knowledge. To better exploit such prior knowledge, we propose a Semantic Guided Attention (SEGA) mechanism in which semantic knowledge guides visual perception in a top-down manner, indicating which visual features should be attended to when distinguishing a category from the others. As a result, the embedding of a novel class can be more discriminative even with few samples. Specifically, with the help of visual prior knowledge transferred from the base classes, a feature extractor is trained to embed the few images of each novel class into a visual prototype. We then learn a network that maps semantic knowledge to a class-specific attention vector, which is used to perform feature selection and enhance the visual prototype. Extensive experiments on miniImageNet, tieredImageNet, CIFAR-FS, and CUB show that our semantic guided attention realizes its anticipated function and outperforms state-of-the-art results.
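A minimal PyTorch sketch of the semantic-guided attention step described above (dimensions and the two-layer mapping network are illustrative assumptions): the class word embedding is mapped to a class-specific attention vector that re-weights, i.e. selects, the dimensions of the visual prototype.

```python
import torch
import torch.nn as nn

class SemanticGuidedAttention(nn.Module):
    """Map a class word embedding to a class-specific attention vector that
    re-weights (selects) the dimensions of the visual prototype."""
    def __init__(self, sem_dim=300, feat_dim=640):
        super().__init__()
        self.to_attention = nn.Sequential(
            nn.Linear(sem_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.Sigmoid())

    def forward(self, support_feats, class_embedding):
        # support_feats: (K, D) features of the K support images of one novel class
        # class_embedding: (S,) semantic embedding (e.g. a word vector) of that class
        prototype = support_feats.mean(dim=0)        # visual prototype (D,)
        attention = self.to_attention(class_embedding)
        return prototype * attention                 # semantically enhanced prototype

sega = SemanticGuidedAttention()
proto = sega(torch.randn(5, 640), torch.randn(300))  # 5-shot prototype of one class
# queries are then classified by (cosine) similarity to the enhanced prototypes
```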
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer, which produces low-resolution representations at high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet), together with local-window self-attention that performs self-attention over small non-overlapping image windows, to improve memory and computational efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on human pose estimation and semantic segmentation tasks; for example, HRFormer achieves strong results on COCO pose estimation with 50% fewer parameters and 30% fewer FLOPs. Code is available at: https://github.com/hrnet/hRFormer.
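The two ingredients mentioned above can be sketched in PyTorch as follows (window size, dimensions, and tensor layout are assumptions for illustration): self-attention restricted to non-overlapping windows, and an FFN whose 3x3 depth-wise convolution exchanges information across the otherwise disconnected windows.

```python
import torch
import torch.nn as nn

def window_attention(x, attn, window=8):
    """Self-attention restricted to non-overlapping windows. x: (B, H, W, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window, window, W // window, window, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)   # (B*nWin, win*win, C)
    x, _ = attn(x, x, x, need_weights=False)
    x = x.view(B, H // window, W // window, window, window, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

class ConvFFN(nn.Module):
    """FFN with a 3x3 depth-wise convolution that exchanges information
    between the otherwise disconnected windows."""
    def __init__(self, dim=256, hidden=1024):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, hidden), nn.Linear(hidden, dim)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.act = nn.GELU()

    def forward(self, x):                                # x: (B, H, W, C)
        h = self.act(self.fc1(x)).permute(0, 3, 1, 2)    # (B, hidden, H, W) for the conv
        h = self.act(self.dw(h)).permute(0, 2, 3, 1)
        return self.fc2(h)

attn = nn.MultiheadAttention(256, 8, batch_first=True)
x = torch.randn(2, 32, 32, 256)
x = x + window_attention(x, attn)                        # attention inside each 8x8 window
x = x + ConvFFN()(x)                                     # conv-FFN mixes information across windows
```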
Image-level weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years. Most advanced solutions exploit the class activation map (CAM). However, CAMs can hardly serve as the object mask due to the gap between full and weak supervision. In this paper, we propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap. Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation, whose pixel-level labels undergo the same spatial transformation as the input images during data augmentation. However, this constraint is lost on CAMs trained by image-level supervision. Therefore, we propose consistency regularization on CAMs predicted from variously transformed images to provide self-supervision for network learning. Moreover, we propose a pixel correlation module (PCM), which exploits contextual appearance information and refines the prediction of the current pixel by its similar neighbors, leading to further improvement in CAM consistency. Extensive experiments on the PASCAL VOC 2012 dataset demonstrate that our method outperforms state-of-the-art methods using the same level of supervision. The code is released online.
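The equivariance constraint can be illustrated with a short sketch (PyTorch; the stand-in CAM head and the choice of rescaling as the spatial transformation are assumptions): the CAM predicted on a transformed image is regularized to match the same transformation applied to the CAM of the original image.

```python
import torch
import torch.nn.functional as F

def equivariant_cam_loss(cam_model, images, scale=0.5):
    """Consistency regularization: the CAM of a rescaled image should equal
    the rescaled CAM of the original image (equivariance to the transform)."""
    cam = cam_model(images)                                     # (B, classes, H, W)
    cam_of_small = cam_model(F.interpolate(images, scale_factor=scale,
                                           mode='bilinear', align_corners=False))
    small_of_cam = F.interpolate(cam, size=cam_of_small.shape[-2:],
                                 mode='bilinear', align_corners=False)
    return F.l1_loss(cam_of_small, small_of_cam)

cam_model = torch.nn.Conv2d(3, 21, 1)        # stand-in CAM head (21 PASCAL VOC classes)
loss = equivariant_cam_loss(cam_model, torch.randn(2, 3, 64, 64))
```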
Few-shot classification aims to recognize unlabeled samples from unseen classes given only a few labeled samples. The unseen classes and the low-data problem make few-shot classification very challenging. Many existing approaches extract features from labeled and unlabeled samples independently; as a result, the features are not discriminative enough. In this work, we propose a novel Cross Attention Network to address the challenging problems in few-shot classification. Firstly, a Cross Attention Module is introduced to deal with the problem of unseen classes. The module generates cross attention maps for each pair of class feature and query sample feature so as to highlight the target object regions, making the extracted features more discriminative. Secondly, a transductive inference algorithm is proposed to alleviate the low-data problem; it iteratively uses the unlabeled query set to augment the support set, thereby making the class features more representative. Extensive experiments on two benchmarks show that our method is a simple, effective, and computationally efficient framework that outperforms the state of the art.
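A rough sketch of a cross attention map (PyTorch; the reduction of the correlation matrix to a single attention map per feature is a simplification, not the paper's module): positions of the class feature map and the query feature map are correlated, and the correlations re-weight both maps so that the common target regions are highlighted.

```python
import torch
import torch.nn.functional as F

def cross_attention(class_feat, query_feat):
    """Correlate every position of the class feature map with every position of
    the query feature map, then use the correlations to re-weight both maps so
    that the common target regions are highlighted."""
    C, H, W = class_feat.shape
    p = F.normalize(class_feat.view(C, -1), dim=0)          # (C, HW), unit channel vectors
    q = F.normalize(query_feat.view(C, -1), dim=0)          # (C, HW)
    corr = p.t() @ q                                        # (HW, HW) cosine correlations
    attn_p = torch.softmax(corr.mean(dim=1), dim=0).view(1, H, W)
    attn_q = torch.softmax(corr.mean(dim=0), dim=0).view(1, H, W)
    return class_feat * attn_p, query_feat * attn_q

class_map, query_map = torch.randn(64, 10, 10), torch.randn(64, 10, 10)
class_map, query_map = cross_attention(class_map, query_map)
```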
Rotation-invariant face detection, i.e., detecting faces with arbitrary rotation-in-plane (RIP) angles, is widely required in unconstrained applications but remains a challenging task due to the large variations of face appearance. Most existing methods compromise either speed or accuracy to handle the large RIP variations. To address this problem more efficiently, we propose Progressive Calibration Networks (PCN) to perform rotation-invariant face detection in a coarse-to-fine manner. PCN consists of three stages, each of which not only distinguishes faces from non-faces but also calibrates the RIP orientation of each face candidate progressively toward upright. By dividing the calibration process into several progressive steps and predicting only coarse orientations in the early stages, PCN achieves precise and fast calibration. By performing binary face vs. non-face classification with gradually decreasing RIP ranges, PCN can accurately detect faces over the full 360° range of RIP angles. These designs lead to a real-time rotation-invariant face detector. Experiments on the multi-oriented FDDB and a challenging subset of WIDER FACE containing rotated faces in the wild show that our PCN achieves very promising performance.
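The coarse-to-fine calibration can be sketched as follows (NumPy; the stage classifiers are stand-ins and the rotation/angle conventions are illustrative assumptions): each stage narrows the possible RIP range by rotating the candidate window toward upright before the next, finer stage.

```python
import numpy as np

def calibrate_progressively(window, stage1, stage2, stage3):
    """Coarse-to-fine calibration of one face candidate: each stage narrows the
    rotation-in-plane (RIP) range by rotating the window toward upright."""
    angle = 0.0
    # Stage 1: roughly upright or upside-down?  (RIP in [-180, 180])
    if stage1(window) == 'down':
        window, angle = np.rot90(window, 2), angle + 180
    # Stage 2: tilted left, right, or neither?  (RIP now in [-90, 90])
    tilt = stage2(window)
    if tilt == 'left':
        window, angle = np.rot90(window, -1), angle - 90
    elif tilt == 'right':
        window, angle = np.rot90(window, 1), angle + 90
    # Stage 3: regress the residual fine angle  (RIP now in [-45, 45])
    angle += stage3(window)
    return window, angle % 360

# stand-in stage classifiers; the real PCN stages also reject non-face candidates
win, ang = calibrate_progressively(np.zeros((24, 24)),
                                   stage1=lambda w: 'up',
                                   stage2=lambda w: 'right',
                                   stage3=lambda w: 12.0)
```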
Facial attribute editing aims to manipulate single or multiple attributes of a face image, i.e., to generate a new face with the desired attributes while preserving other details. Recently, generative adversarial networks (GANs) and encoder-decoder architectures are usually combined to handle this task with promising results. Based on the encoder-decoder architecture, facial attribute editing is achieved by decoding the latent representation of the given face conditioned on the desired attributes. Some existing methods attempt to establish an attribute-independent latent representation for further attribute editing. However, such an attribute-independent constraint on the latent representation is excessive because it restricts the capacity of the latent representation and may result in information loss, leading to over-smoothed and distorted generation. Instead of imposing constraints on the latent representation, in this work we apply an attribute classification constraint to the generated image to just guarantee the correct change of the desired attributes, i.e., to "change what you want". Meanwhile, reconstruction learning is introduced to preserve attribute-excluding details, in other words, to "only change what you want". Besides, adversarial learning is employed for visually realistic editing. These three components cooperate with each other, forming an effective framework for high-quality facial attribute editing, referred to as AttGAN. Furthermore, our method is also directly applicable to attribute intensity control and can be naturally extended to attribute style manipulation. Experiments on the CelebA dataset show that our method outperforms the state of the art in realistic attribute editing with facial details well preserved.
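The three training signals can be sketched as follows (PyTorch; the module names, signatures, and loss weights are placeholders, not the paper's implementation): an attribute classification loss on the edited image, a reconstruction loss under the source attributes, and an adversarial loss.

```python
import torch
import torch.nn.functional as F

def attgan_generator_losses(enc, dec, clf, disc, x, a_src, a_tgt):
    """Three training signals for the generator (encoder enc, decoder dec),
    given an attribute classifier clf and a discriminator disc (placeholders)."""
    z = enc(x)
    x_edit = dec(z, a_tgt)          # edit toward the desired (target) attributes
    x_rec = dec(z, a_src)           # reconstruct under the original attributes
    # 1) attribute classification constraint: "change what you want"
    loss_cls = F.binary_cross_entropy_with_logits(clf(x_edit), a_tgt)
    # 2) reconstruction: "only change what you want" (preserve attribute-excluding details)
    loss_rec = F.l1_loss(x_rec, x)
    # 3) adversarial loss for visually realistic editing
    logits_fake = disc(x_edit)
    loss_adv = F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
    return loss_cls + 10.0 * loss_rec + loss_adv      # the weights are illustrative
```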
Diffractive optical networks provide rich opportunities for visual computing tasks since the spatial information of a scene can be directly accessed by a diffractive processor without requiring any digital pre-processing steps. Here we present data class-specific transformations all-optically performed between the input and output fields-of-view (FOVs) of a diffractive network. The visual information of the objects is encoded into the amplitude (A), phase (P), or intensity (I) of the optical field at the input, which is all-optically processed by a data class-specific diffractive network. At the output, an image sensor-array directly measures the transformed patterns, all-optically encrypted using the transformation matrices pre-assigned to different data classes, i.e., a separate matrix for each data class. The original input images can be recovered by applying the correct decryption key (the inverse transformation) corresponding to the matching data class, while applying any other key leads to loss of information. The class-specificity of these all-optical diffractive transformations creates opportunities where different keys can be distributed to different users; each user can only decode the acquired images of one data class, serving multiple users in an all-optically encrypted manner. We numerically demonstrated all-optical class-specific transformations covering A-->A, I-->I, and P-->I transformations using various image datasets. We also experimentally validated the feasibility of this framework by fabricating a class-specific I-->I transformation diffractive network using two-photon polymerization and successfully tested it at 1550 nm wavelength. Data class-specific all-optical transformations provide a fast and energy-efficient method for image and data encryption, enhancing data security and privacy.
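The linear-algebra picture behind the class-specific encryption can be illustrated numerically (NumPy; in the actual system the transformation is performed all-optically by the diffractive network, and the matrices here are random stand-ins): each data class is pre-assigned an invertible transformation, and only the matching inverse recovers the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8 * 8                                           # flattened field-of-view size

# one pre-assigned, invertible transformation ("key") per data class
keys = {c: rng.standard_normal((n, n)) + 2 * np.eye(n) for c in range(3)}

image = rng.random(n)                               # an input object of data class 1
encrypted = keys[1] @ image                         # the pattern measured at the output

recovered = np.linalg.inv(keys[1]) @ encrypted      # correct decryption key: image recovered
garbage = np.linalg.inv(keys[2]) @ encrypted        # any other key: information is lost
print(np.allclose(recovered, image))                # True
```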
A matrix-free and a low-rank approximation preconditioner are proposed to accelerate the convergence of stochastic gradient descent (SGD) by exploiting curvature information sampled from Hessian-vector products or from finite differences of parameters and gradients, similar to the BFGS algorithm. Both preconditioners are fitted in an online manner by minimizing a criterion that is free of line search and robust to stochastic gradient noise, and they are further constrained to lie on certain connected Lie groups to preserve the corresponding symmetry or invariance, e.g., the orientation of coordinates under the connected general linear group with positive determinants. The Lie group's equivariance property facilitates preconditioner fitting, and its invariance property removes the need for damping, which is common in second-order optimizers but difficult to tune. The learning rate for parameter updating and the step size for preconditioner fitting are naturally normalized, and their default values work well in most situations.
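A toy sketch of the idea (NumPy; a diagonal preconditioner is used here as the simplest Lie-group-constrained example, and the update rule is a simplification of the paper's fitting criterion): the preconditioner is fitted online from differences of parameters and gradients, and a multiplicative update keeps it inside the group of positive diagonal matrices.

```python
import numpy as np

def fit_diag_preconditioner(p, dtheta, dgrad, lr=0.1, eps=1e-12):
    """Online fit of a positive diagonal preconditioner from one pair of
    parameter and gradient differences (a finite-difference curvature probe).
    The criterion E[dg' P dg + dtheta' P^-1 dtheta] is minimized at
    p_i = |dtheta_i| / |dgrad_i|; a multiplicative step moves p toward it
    while keeping it inside the group of positive diagonal matrices."""
    target = np.abs(dtheta) / (np.abs(dgrad) + eps)
    return p * (target / (p + eps)) ** lr

# toy usage: preconditioned SGD on a badly scaled quadratic f(x) = 0.5 * x' diag(h) x
h = np.array([1.0, 25.0])
x, p = np.array([1.0, 1.0]), np.ones(2)
x_prev = g_prev = None
for _ in range(100):
    g = h * x                                        # (stochastic) gradient
    if g_prev is not None:
        p = fit_diag_preconditioner(p, x - x_prev, g - g_prev)
    x_prev, g_prev = x.copy(), g.copy()
    x = x - 0.1 * p * g                              # preconditioned SGD step
print(x)                                             # both coordinates approach 0
```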
Pathological diagnosis relies on the visual inspection of histologically stained thin tissue samples, where different types of stains are used to contrast and highlight various desired histological features. However, the destructive histochemical staining procedures are usually irreversible, making it difficult to obtain multiple stains on the same tissue section. Here, we demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN) to digitally transform hematoxylin and eosin (H&E) stained tissue images into other types of histological stains. Unlike a single network structure that only takes one stain type as input and outputs a digital image of another stain type, the C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E and then performs stain transfer from H&E to another stain domain in a cascaded manner. This cascaded structure in the training phase allows the model to directly exploit histochemically stained image data of both H&E and the target special stain. This advantage alleviates the challenge of paired data acquisition and improves the image quality and color accuracy of the virtual stain transfer from H&E to another stain. We validated the superior performance of this C-DNN approach using kidney needle core biopsy tissue sections and successfully transferred H&E stained tissue images into virtual PAS (periodic acid-Schiff) stain. This method provides high-quality virtual images of special stains using existing, histochemically stained slides and creates new opportunities in digital pathology by performing highly accurate stain-to-stain transformations.
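The cascaded inference path can be sketched as follows (PyTorch; the networks are stand-in modules, not the trained C-DNN): a first network virtually stains an autofluorescence image into H&E, and a second network transfers the virtual H&E image into the target special stain.

```python
import torch
import torch.nn as nn

def cascaded_stain_transfer(autofluorescence, net_af_to_he, net_he_to_special):
    """Cascaded inference path: autofluorescence -> virtual H&E -> special stain."""
    virtual_he = net_af_to_he(autofluorescence)      # stage 1: virtual H&E staining
    return net_he_to_special(virtual_he)             # stage 2: H&E -> special stain (e.g. PAS)

# stand-ins for the two trained stain-transfer networks
net_af_to_he = nn.Conv2d(1, 3, 1)
net_he_to_special = nn.Conv2d(3, 3, 1)
virtual_pas = cascaded_stain_transfer(torch.randn(1, 1, 256, 256),
                                      net_af_to_he, net_he_to_special)
```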