Learning-based infrared small object detection methods currently rely heavily on classification backbone networks. This tends to result in the loss of tiny objects and limited feature distinguishability as the network depth increases. Furthermore, small objects in infrared images frequently appear either bright or dim against their backgrounds, placing severe demands on obtaining precise object contrast information. For this reason, in this paper we propose a simple and effective ``U-Net in U-Net'' framework, UIU-Net for short, to detect small objects in infrared images. As the name suggests, UIU-Net embeds a tiny U-Net into a larger U-Net backbone, enabling multi-level and multi-scale representation learning of objects. Moreover, UIU-Net can be trained from scratch, and the learned features can enhance global and local contrast information effectively. More specifically, the UIU-Net model is divided into two modules: the resolution-maintenance deep supervision (RM-DS) module and the interactive-cross attention (IC-A) module. RM-DS integrates Residual U-blocks into a deep supervision network to generate deep multi-scale resolution-maintenance features while learning global context information. Further, IC-A encodes the local context information between low-level details and high-level semantic features. Extensive experiments conducted on two infrared single-frame image datasets, i.e., the SIRST and Synthetic datasets, show the effectiveness and superiority of the proposed UIU-Net in comparison with several state-of-the-art infrared small object detection methods. The proposed UIU-Net also generalizes well to video-sequence infrared small object datasets, e.g., the ATR ground/air video sequence dataset. The code of this work is openly available at \url{https://github.com/danfenghong/IEEE_TIP_UIU-Net}.
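To make the nesting idea concrete, the following is a minimal PyTorch sketch of a tiny residual U-block used as the basic unit of a larger encoder-decoder. Channel sizes, depths, and module names are illustrative assumptions and do not reproduce the authors' RM-DS/IC-A implementation.

```python
# A minimal sketch of the "U-Net in U-Net" idea: a small residual U-block serves as
# the building unit of an outer encoder-decoder. Sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUBlock(nn.Module):
    """A small U-Net used as a building block (resolution-maintaining unit)."""
    def __init__(self, ch):
        super().__init__()
        self.enc1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.enc2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.dec1 = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        e1 = F.relu(self.enc1(x))
        e2 = F.relu(self.enc2(F.max_pool2d(e1, 2)))
        d1 = F.interpolate(e2, size=e1.shape[-2:], mode="bilinear", align_corners=False)
        d1 = F.relu(self.dec1(torch.cat([d1, e1], dim=1)))
        return d1 + x  # residual connection helps preserve small-object detail

class OuterUNet(nn.Module):
    """Outer network whose stages are tiny U-blocks (the nested U-Net idea)."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.enc = TinyUBlock(ch)
        self.dec = TinyUBlock(ch)
        self.head = nn.Conv2d(ch, 1, 1)  # per-pixel small-object score map

    def forward(self, x):
        f = F.relu(self.stem(x))
        f = self.dec(self.enc(f))
        return torch.sigmoid(self.head(f))

x = torch.randn(2, 1, 128, 128)   # a batch of single-channel infrared images
print(OuterUNet()(x).shape)       # torch.Size([2, 1, 128, 128])
```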
In this paper, we introduce a new algorithm based on archetypal analysis for blind hyperspectral unmixing, assuming a linear mixing of endmembers. Archetypal analysis is a natural formulation for this task. The method does not require the presence of pure pixels (i.e., pixels containing a single material) but instead represents endmembers as convex combinations of a few pixels present in the original hyperspectral image. Our approach leverages an entropic gradient descent strategy, which (i) provides better solutions for hyperspectral unmixing than traditional archetypal analysis algorithms and (ii) leads to an efficient GPU implementation. Since running a single instance of our algorithm is fast, we also propose an ensembling mechanism along with an appropriate model selection procedure, which makes our method robust to hyperparameter choices while keeping the computational complexity reasonable. Using six standard real-world datasets, we show that our approach outperforms state-of-the-art matrix factorization and recent deep learning methods. We also provide an open-source PyTorch implementation: https://github.com/inria-thoth/edaa.
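As a rough illustration of the unmixing formulation, the NumPy sketch below performs archetypal analysis with exponentiated (entropic mirror-descent) updates, which keep the simplex constraints on both factors by construction. The alternating scheme, step sizes, and initialization are simplified assumptions, not the EDAA implementation.

```python
# Toy archetypal analysis for linear unmixing: X ~ (X B) A with columns of A and B
# on the simplex; entropic (multiplicative) updates preserve the constraints.
import numpy as np

def entropic_step(W, grad, lr):
    W = W * np.exp(-lr * grad)                 # mirror-descent (exponentiated) update
    return W / W.sum(axis=0, keepdims=True)    # renormalize columns onto the simplex

def archetypal_unmix(X, k, n_iter=200, lr=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    d, n = X.shape
    A = rng.random((k, n)); A /= A.sum(0)      # abundances, columns on the simplex
    B = rng.random((n, k)); B /= B.sum(0)      # endmembers as convex combos of pixels
    for _ in range(n_iter):
        E = X @ B                              # current endmembers (d x k)
        R = E @ A - X                          # residual of the linear mixing model
        A = entropic_step(A, E.T @ R, lr)      # gradient of 0.5*||X B A - X||^2 w.r.t. A
        R = X @ B @ A - X
        B = entropic_step(B, X.T @ (R @ A.T), lr)  # gradient w.r.t. B
    return X @ B, A                            # estimated endmembers and abundances

X = np.abs(np.random.default_rng(1).normal(size=(100, 500)))  # 100 bands, 500 pixels
E_hat, A_hat = archetypal_unmix(X, k=4)
print(E_hat.shape, A_hat.shape)                # (100, 4) (4, 500)
```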
Deep learning has proven to be a very effective approach for hyperspectral image (HSI) classification. However, deep neural networks require large annotated datasets to generalize well. This limits the applicability of deep learning to HSI classification, where manually labeling thousands of pixels for every scene is impractical. In this paper, we propose to leverage self-supervised learning (SSL) for HSI classification. We show that by pre-training an encoder on unlabeled pixels with Barlow Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with only a few labels. Experimental results demonstrate that this approach significantly outperforms vanilla supervised learning.
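For reference, a compact sketch of the Barlow Twins objective mentioned above is given below: the cross-correlation matrix between embeddings of two augmented views is pushed toward the identity. The normalization details, the redundancy weight, and the encoder producing the embeddings are placeholders rather than the paper's exact setup.

```python
# Barlow Twins loss: decorrelate embedding dimensions while making the two views agree.
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)   # standardize along the batch
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                           # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()          # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction
    return on_diag + lambd * off_diag

# z1, z2 would be encoder outputs for two augmented views of the same pixels/patches
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(barlow_twins_loss(z1, z2).item())
```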
Pansharpening refers to the fusion of a panchromatic image with high spatial resolution and a multispectral image with low spatial resolution, aiming to obtain a high-spatial-resolution multispectral image. In this paper, we propose a novel deep neural network architecture by considering the following double-type structures, \emph{i.e.}, double-level, double-branch, and double-direction, called the triple-double network (TDNet). With the structure of TDNet, the spatial details of the panchromatic image can be fully exploited and progressively injected into the low-spatial-resolution multispectral image, thus yielding a high-spatial-resolution output. The specific network design is motivated by the physical formulation of traditional multi-resolution analysis (MRA) methods. Hence, an effective MRA fusion module is also integrated into TDNet. In addition, we adopt a few ResNet blocks and some multi-scale convolution kernels to deepen and widen the network, effectively enhancing the feature extraction ability and robustness of the proposed TDNet. Extensive experiments on reduced- and full-resolution datasets acquired by the WorldView-3, QuickBird, and GaoFen-2 sensors demonstrate the superiority of the proposed TDNet compared with recent state-of-the-art pansharpening approaches. An ablation study also corroborates the effectiveness of the proposed method.
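As a plain reference point, the sketch below shows the classical MRA-style detail injection that motivates the fusion module: high-frequency detail extracted from the panchromatic image is added to the upsampled multispectral bands with per-band gains. The low-pass filter and constant gains are simple placeholders, not TDNet's learned modules.

```python
# Generic MRA-style pansharpening: inject PAN high-pass detail into upsampled MS bands.
import torch
import torch.nn.functional as F

def mra_fusion(ms_lr, pan, gains):
    # ms_lr: (B, C, h, w) low-resolution MS; pan: (B, 1, H, W) panchromatic
    ms_up = F.interpolate(ms_lr, size=pan.shape[-2:], mode="bicubic", align_corners=False)
    pan_low = F.avg_pool2d(pan, kernel_size=5, stride=1, padding=2)  # crude low-pass
    detail = pan - pan_low                                           # spatial detail
    return ms_up + gains.view(1, -1, 1, 1) * detail                  # per-band injection

ms = torch.rand(1, 4, 64, 64)                 # 4-band MS at reduced resolution
pan = torch.rand(1, 1, 256, 256)              # panchromatic at full resolution
fused = mra_fusion(ms, pan, gains=torch.ones(4))
print(fused.shape)                            # torch.Size([1, 4, 256, 256])
```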
Multimodal data provide complementary information about natural phenomena by integrating data from various domains with very different statistical properties. Capturing the intra-modality and cross-modality information of multimodal data is a fundamental capability of multimodal learning methods. Geometry-aware data analysis approaches provide such capabilities by implicitly representing data of various modalities based on their underlying geometric structure. In addition, in many applications data are explicitly defined on an intrinsic geometric structure. Deep learning on non-Euclidean domains is an emerging research field that has recently been investigated in many studies, yet most of the popular approaches are developed for unimodal data. This paper proposes a multimodal multi-scale graph wavelet convolutional network (M-GWCN) as an end-to-end network. M-GWCN simultaneously finds intra-modality representations by applying multi-scale graph wavelet transforms, which provide useful localization properties in the graph domain of each modality, and cross-modality representations by learning permutations that encode the correlations among the various modalities. M-GWCN is not limited to homogeneous modalities with the same number of data points, nor does it require any prior knowledge indicating correspondences between modalities. Several semi-supervised node classification experiments have been conducted on three popular unimodal explicit graph datasets and five multimodal implicit ones. Experimental results demonstrate the superiority and effectiveness of the proposed method compared with spectral graph-domain convolutional neural networks and state-of-the-art multimodal approaches.
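For intuition, the following NumPy sketch shows multi-scale graph wavelet filtering, the basic operation behind graph wavelet convolutions: node features are filtered by spectral kernels of the graph Laplacian at several scales. The heat-kernel generator and the exact eigendecomposition are illustrative choices; M-GWCN's learned, multimodal architecture is far richer than this snippet.

```python
# Multi-scale graph wavelet filtering via the Laplacian spectrum (small graphs only).
import numpy as np

def graph_wavelet_transform(L, X, scales):
    lam, U = np.linalg.eigh(L)                    # eigendecomposition of the Laplacian
    outs = []
    for s in scales:
        g = np.exp(-s * lam)                      # heat-kernel wavelet at scale s
        outs.append(U @ (g[:, None] * (U.T @ X))) # filter node features X (n x f)
    return outs

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A                         # combinatorial graph Laplacian
X = np.random.default_rng(0).normal(size=(4, 3))  # 4 nodes, 3 features
for Y in graph_wavelet_transform(L, X, scales=[0.5, 1.0, 2.0]):
    print(Y.shape)                                # (4, 3) at each scale
```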
Hyperspectral imaging offers new perspectives for diverse applications, including environmental monitoring using airborne or satellite remote sensing, precision farming, food safety, planetary exploration, and astrophysics. Unfortunately, the spectral diversity of the information comes at the expense of various sources of degradation, and the lack of accurate ground-truth "clean" hyperspectral signals at acquisition time makes the restoration task challenging. In particular, training deep neural networks for restoration is difficult, in contrast to traditional RGB imaging problems where deep models tend to shine. In this paper, we advocate a hybrid approach based on sparse coding principles that retains the interpretability of classical techniques encoding domain knowledge with handcrafted image priors, while allowing the model parameters to be trained without massive amounts of data. We show on various denoising benchmarks that our method is computationally efficient and significantly outperforms the state of the art.
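To illustrate the sparse-coding principle behind such hybrid models, the sketch below runs a plain ISTA loop with a fixed dictionary and soft-thresholding; in an unrolled, trainable variant the dictionary and thresholds would become learnable parameters. The dictionary size, step size, and iteration count are assumptions and do not reflect the paper's spectral handling.

```python
# Sparse-coding denoising with ISTA: solve min_z 0.5*||D z - y||^2 + tau*||z||_1.
import torch

def soft_threshold(x, tau):
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

def ista(y, D, n_iter=50, tau=0.1):
    # y: (d,) noisy signal, D: (d, k) dictionary with unit-norm atoms
    L = torch.linalg.matrix_norm(D, ord=2) ** 2     # Lipschitz constant of the gradient
    z = torch.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        z = soft_threshold(z - grad / L, tau / L)   # proximal gradient step
    return D @ z                                    # reconstructed (denoised) signal

d, k = 64, 128
D = torch.randn(d, k); D = D / D.norm(dim=0, keepdim=True)
y = D[:, 0] + 0.05 * torch.randn(d)                 # one atom plus noise
print((ista(y, D) - D[:, 0]).norm().item())         # small reconstruction error
```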
Hyperspectral (HS) images are characterized by approximately contiguous spectral information, enabling the fine identification of materials by capturing subtle spectral discrepancies. Owing to their excellent local contextual modeling ability, convolutional neural networks (CNNs) have proven to be powerful feature extractors in HS image classification. However, CNNs fail to mine and represent the sequential attributes of spectral signatures well, due to the limitations of their inherent network backbone. To address this issue, we rethink HS image classification from a sequential perspective with transformers and propose a novel backbone network called \ul{SpectralFormer}. Beyond the band-wise representations of classic transformers, SpectralFormer is capable of learning spectrally local sequence information from neighboring bands of HS images, yielding groupwise spectral embeddings. More importantly, to reduce the possibility of losing valuable information in the layer-wise propagation process, we devise cross-layer skip connections to convey memory-like components from shallow to deep layers by adaptively learning to fuse "soft" residuals across layers. It is worth noting that the proposed SpectralFormer is a highly flexible backbone network that is applicable to both pixel-wise and patch-wise inputs. We evaluate the classification performance of the proposed SpectralFormer by conducting extensive experiments on three HS datasets, showing its superiority over classic transformers and achieving significant improvements compared with state-of-the-art backbone networks. The code of this work will be available at https://github.com/danfenghong/IEEE_TGRS_SpectralFormer for the sake of reproducibility.
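The groupwise spectral embedding idea can be sketched briefly: instead of embedding each band independently, overlapping groups of neighboring bands are projected into token embeddings for a transformer encoder. The group size, overlap, and dimensions below are illustrative assumptions, not the SpectralFormer configuration.

```python
# Groupwise spectral embedding: project overlapping band groups into transformer tokens.
import torch
import torch.nn as nn

class GroupwiseSpectralEmbedding(nn.Module):
    def __init__(self, group_size=7, embed_dim=64):
        super().__init__()
        self.group_size = group_size
        self.proj = nn.Linear(group_size, embed_dim)   # shared projection per band group

    def forward(self, spectra):
        # spectra: (batch, bands) pixel-wise spectral vectors
        pad = self.group_size // 2
        x = nn.functional.pad(spectra, (pad, pad))                    # zero-pad band edges
        groups = x.unfold(dimension=1, size=self.group_size, step=1)  # (batch, bands, group)
        return self.proj(groups)                        # (batch, bands, embed_dim) tokens

tokens = GroupwiseSpectralEmbedding()(torch.randn(8, 200))  # 200-band pixel spectra
print(tokens.shape)                                         # torch.Size([8, 200, 64])
```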
Surrogate models are necessary to optimize meaningful quantities in physical dynamics as their recursive numerical resolutions are often prohibitively expensive. This is notably the case for fluid dynamics and the resolution of the Navier-Stokes equations. However, despite the fast-growing field of data-driven models for physical systems, reference datasets representing real-world phenomena are lacking. In this work, we develop AirfRANS, a dataset for studying the two-dimensional incompressible steady-state Reynolds-Averaged Navier-Stokes equations over airfoils at a subsonic regime and for different angles of attack. We also introduce metrics on the stress forces at the surface of geometries and visualizations of boundary layers to assess the capability of models to accurately predict the meaningful information of the problem. Finally, we propose deep learning baselines on four machine learning tasks to study AirfRANS under different constraints for generalization considerations: big and scarce data regimes, Reynolds number extrapolation, and angle-of-attack extrapolation.
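As a hint of the kind of surface-force metric targeted here, the sketch below integrates pressure and wall shear stress over a discretized airfoil surface to obtain force coefficients. The variable names and panel discretization are generic assumptions and do not correspond to the AirfRANS API.

```python
# Integrate surface pressure and wall shear stress into aerodynamic force coefficients.
import numpy as np

def surface_forces(p, tau_w, normals, ds):
    # p: (m,) surface pressure, tau_w: (m, 2) wall shear stress vectors,
    # normals: (m, 2) outward unit normals, ds: (m,) panel lengths
    pressure_force = -(p[:, None] * normals * ds[:, None]).sum(axis=0)
    viscous_force = (tau_w * ds[:, None]).sum(axis=0)
    return pressure_force + viscous_force       # total aerodynamic force (Fx, Fy)

def force_coefficients(force, rho, u_inf, chord):
    q = 0.5 * rho * u_inf**2 * chord            # dynamic pressure times reference length
    return force / q                            # (Cd, Cl) if x is the freestream direction

m = 100
force = surface_forces(np.random.rand(m), np.zeros((m, 2)),
                       np.random.rand(m, 2), np.full(m, 0.01))
print(force_coefficients(force, rho=1.184, u_inf=30.0, chord=1.0))
```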
We present sketched linear discriminant analysis, an iterative randomized approach to binary-class Gaussian model linear discriminant analysis (LDA) for very large data. We harness a least squares formulation and mobilize the stochastic gradient descent framework. Therefore, we obtain a randomized classifier with performance that is very comparable to that of full data LDA while requiring access to only one row of the training data at a time. We present convergence guarantees for the sketched predictions on new data within a fixed number of iterations. These guarantees account for both the Gaussian modeling assumptions on the data and algorithmic randomness from the sketching procedure. Finally, we demonstrate performance with varying step-sizes and numbers of iterations. Our numerical experiments demonstrate that sketched LDA can offer a very viable alternative to full data LDA when the data may be too large for full data analysis.
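A minimal sketch of the least-squares view of binary LDA fitted with row-at-a-time stochastic gradient descent, which is the spirit of the approach described above, is given below. The ±1 label coding, step size, and stopping rule are simplifying assumptions and do not reproduce the paper's sketching procedure or its convergence guarantees.

```python
# Least-squares formulation of binary LDA fitted by SGD, touching one data row at a time.
import numpy as np

def sgd_lda(rows, labels, dim, lr=1e-2, epochs=5):
    w, b = np.zeros(dim), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):          # y coded as +1 / -1
            err = x @ w + b - y                 # least-squares residual for one row
            w -= lr * err * x                   # gradient step uses one row at a time
            b -= lr * err
    return w, b

rng = np.random.default_rng(0)
n, d = 2000, 20
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])
w, b = sgd_lda(X, y, d)
print((np.sign(X @ w + b) == y).mean())         # training accuracy of the sketched rule
```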
Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, so precise localization depends highly on the context formed by their surrounding areas. In addition, the required precision is usually higher than that of segmentation and object detection tasks. Therefore, localization poses unique challenges different from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is utilized to learn the contextualized features at different scales. Then, an attentive fusion module is adopted to aggregate multi-scale features, which consists of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module that integrates the multi-ROI features and non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
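As an illustration of attention-based fusion over multiple ROI feature vectors, the sketch below scores each ROI, takes a softmax-weighted sum, and concatenates the result with a non-ROI (global) feature. The scoring network and dimensions are placeholders; ZIAN's co-attention and zoom-in stages are not shown.

```python
# Attention-based fusion: weight multiple ROI features and combine with a global feature.
import torch
import torch.nn as nn

class AttentiveROIFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)                  # learns a relevance score per ROI

    def forward(self, roi_feats, global_feat):
        # roi_feats: (batch, n_rois, dim), global_feat: (batch, dim)
        weights = torch.softmax(self.score(roi_feats), dim=1)   # (batch, n_rois, 1)
        fused_rois = (weights * roi_feats).sum(dim=1)           # weighted ROI summary
        return torch.cat([fused_rois, global_feat], dim=-1)     # join ROI and non-ROI parts

fusion = AttentiveROIFusion(dim=256)
out = fusion(torch.randn(4, 3, 256), torch.randn(4, 256))
print(out.shape)                                        # torch.Size([4, 512])
```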