Deep learning techniques have made considerable progress in image inpainting, restoration, and reconstruction in the last few years. Image outpainting, also known as image extrapolation, has received less attention and lacks practical approaches, owing to the difficulties caused by large-scale missing regions and scarce legitimate neighboring information. These difficulties make the outpainted images produced by most existing models unrealistic to human eyes and spatially inconsistent. When upsampling through deconvolution to generate fake content, naive generation methods may produce results lacking high-frequency details and structural authenticity. Therefore, to handle image outpainting problems, we introduce as our novelties a structural prior as a condition to optimize the generation quality and a new semantic embedding term to enhance perceptual sanity. We propose a deep learning method based on a Generative Adversarial Network (GAN) that conditions on edges as a structural prior to assist the generation. We use a multi-phase adversarial training scheme that comprises edge inference training, content inpainting training, and joint training. The newly added semantic embedding loss proves effective in practice.
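A minimal sketch of what a semantic embedding loss of this kind could look like, assuming a frozen pretrained encoder (VGG-16 features here, our assumption rather than the paper's stated choice) and a cosine-distance penalty between embeddings of the outpainted and ground-truth images:

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    # Hypothetical semantic embedding loss: pull the embedding of the
    # outpainted image toward that of the ground truth. The encoder choice
    # (VGG-16 conv features) is an illustrative assumption.
    encoder = vgg16(weights="IMAGENET1K_V1").features.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)

    def semantic_embedding_loss(fake, real):
        # fake, real: (B, 3, H, W) images in the encoder's expected range
        emb_fake = encoder(fake).flatten(1)
        emb_real = encoder(real).flatten(1)
        # cosine distance = 1 - cosine similarity between the embeddings
        return (1 - F.cosine_similarity(emb_fake, emb_real, dim=1)).mean()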
Recent years have witnessed the rapid progress of image captioning. However, the demands for large memory storage and heavy computational burden prevent these captioning models from being deployed on mobile devices. The main obstacles lie in the heavyweight visual feature extractors (i.e., object detectors) and complicated cross-modal fusion networks. To this end, we propose LightCap, a lightweight image captioner for resource-limited devices. The core design is built on the recent CLIP model for efficient image captioning. To be specific, on the one hand, we leverage the CLIP model to extract compact grid features without relying on time-consuming object detectors. On the other hand, we transfer the image-text retrieval design of CLIP to image captioning scenarios by devising a novel visual concept extractor and a cross-modal modulator. We further optimize the cross-modal fusion model and parallel prediction heads via sequential and ensemble distillations. With the carefully designed architecture, our model contains merely 40M parameters, reducing the model size by more than 75% and the FLOPs by more than 98% in comparison with current state-of-the-art methods. In spite of the low capacity, our model still exhibits state-of-the-art performance on prevalent datasets, e.g., 136.6 CIDEr on the COCO Karpathy test split. Tested on a smartphone with only a single CPU, the proposed LightCap exhibits a fast inference speed of 188ms per image, which is ready for practical applications.
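As a rough illustration of the detector-free feature extraction, the patch tokens of a CLIP vision encoder can be read out as compact grid features. This sketch uses the Hugging Face CLIP API and an off-the-shelf checkpoint for illustration only; it is not LightCap's exact pipeline:

    import torch
    from PIL import Image
    from transformers import CLIPVisionModel, CLIPImageProcessor

    # Illustrative only: take CLIP's patch-token features as "grid features",
    # avoiding a region-based object detector. Checkpoint is an assumption.
    name = "openai/clip-vit-base-patch32"
    model = CLIPVisionModel.from_pretrained(name).eval()
    processor = CLIPImageProcessor.from_pretrained(name)

    image = Image.open("example.jpg")  # hypothetical input image
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # Drop the [CLS] token; the rest is a 7x7 grid of 768-d patch features.
    grid_features = out.last_hidden_state[:, 1:, :]  # shape (1, 49, 768)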
Graph neural networks (GNNs) are popular tools for modeling relational data. Existing GNNs are not designed for attribute-incomplete graphs, making missing-attribute imputation a pressing issue. Recently, many works have observed that GNNs suffer from spectral concentration, meaning the spectrum obtained by GNNs concentrates on a local part of the spectral domain, e.g., the low frequencies, due to the oversmoothing issue. As a consequence, GNNs may be seriously flawed for reconstructing graph attributes, as graph spectral concentration tends to cause low imputation precision. In this work, we present a regularized graph autoencoder for graph attribute imputation, named MEGAE, which aims at mitigating the spectral concentration problem by maximizing the graph spectral entropy. Notably, we first present a method for estimating the graph spectral entropy without the eigen-decomposition of the Laplacian matrix and provide a theoretical upper error bound. A maximum-entropy regularization then acts in the latent space, which directly increases the graph spectral entropy. Extensive experiments show that MEGAE outperforms all other state-of-the-art imputation methods on a variety of benchmark datasets.
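For reference, the graph spectral entropy being maximized can be written down naively via the Laplacian eigen-decomposition, which is exactly what MEGAE's estimator avoids; a numpy sketch of the definition, under the assumption that the spectral energy of the node attributes defines the distribution:

    import numpy as np

    def graph_spectral_entropy(L, X, eps=1e-12):
        # L: (n, n) graph Laplacian; X: (n, d) node attributes.
        # Naive O(n^3) definition via eigen-decomposition, shown only for
        # clarity; MEGAE estimates this quantity without the decomposition.
        w, U = np.linalg.eigh(L)           # graph Fourier basis
        coeffs = U.T @ X                   # spectral coefficients, (n, d)
        energy = (coeffs ** 2).sum(axis=1) # energy at each graph frequency
        p = energy / (energy.sum() + eps)  # normalize to a distribution
        return -(p * np.log(p + eps)).sum()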
Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path in a U-Net style network with a parameter-free, model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, our Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22% of its parameters and 6.66% of its mult-adds, achieves a 0.6% higher Dice score and an 11.48× faster inference speed. Code is available at https://github.com/xi-jia/Fourier-Net.
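The model-driven decoder itself is simple enough to sketch: zero-pad the predicted band-limited Fourier coefficients back to full resolution, then apply an inverse FFT. A minimal sketch for one displacement component, assuming the low-frequency block is centered; a full field would stack one such decode per spatial dimension:

    import torch

    def model_driven_decode(low_freq, full_shape):
        # low_freq: complex tensor of centered, band-limited Fourier
        # coefficients, e.g., (d, h, w); full_shape: target spatial size.
        # Zero-pad around the low-frequency block (the zero-padding layer)...
        padded = torch.zeros(full_shape, dtype=low_freq.dtype)
        starts = [(F - s) // 2 for F, s in zip(full_shape, low_freq.shape)]
        slices = tuple(slice(a, a + s) for a, s in zip(starts, low_freq.shape))
        padded[slices] = low_freq
        # ...then inverse DFT back to a dense spatial displacement component.
        return torch.fft.ifftn(torch.fft.ifftshift(padded)).real

Because both steps are fixed linear operations, this decoder adds no learnable parameters, which is where the parameter and mult-add savings come from.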
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, exhibiting low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models was evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 500 FPS and power consumption as low as 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Accurate and fast binucleated cell (BC) detection plays a significant role in predicting the risk of leukemia and other malignant tumors. However, manual microscopy counting is time-consuming and lacks objectivity. Moreover, limited by the staining quality and diversity of BC microscopy whole slide images (WSIs), traditional image processing approaches fall short. To overcome this challenge, we propose a two-stage detection method based on deep learning and inspired by the structure of BCs, which cascades coarse detection of BCs at the WSI level with fine-grained classification at the patch level. The coarse detection network is a multi-task detection framework based on circular bounding boxes for cell detection and center keypoints for nucleus detection. The circle representation reduces the degrees of freedom, mitigates the effect of surrounding impurities compared with the usual rectangular boxes, and can be rotation-invariant in WSIs. Detecting keypoints in the nucleus assists the network's perception and is later used for unsupervised color-layer segmentation in the fine-grained classification. The fine-grained classification network consists of a background region suppression module supervised by color-layer masks and a transformer-based key region selection module with global modeling capability. In addition, an unsupervised and unpaired cytoplasm generator network is proposed for the first time to expand the long-tail-distributed dataset. Finally, experiments are conducted on a multi-center BC dataset. The proposed BC fine detection method outperforms other benchmarks on almost all evaluation criteria, providing clarification and support for tasks such as cancer screening.
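To make the circular-box representation concrete, the overlap between two circles has a closed form, so an IoU for matching circular boxes can be computed directly from centers and radii. This is a hedged sketch of that geometry, not the authors' released code:

    import math

    def circle_iou(c1, c2):
        # c1, c2: circles as (x, y, r); returns intersection-over-union.
        (x1, y1, r1), (x2, y2, r2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d >= r1 + r2:                 # disjoint circles
            inter = 0.0
        elif d <= abs(r1 - r2):          # one circle inside the other
            inter = math.pi * min(r1, r2) ** 2
        else:                            # partial overlap: circular-lens area
            a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
            a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
            a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                                 * (d - r1 + r2) * (d + r1 + r2))
            inter = a1 + a2 - a3
        union = math.pi * (r1**2 + r2**2) - inter
        return inter / union

A circle needs only three parameters instead of a rectangle's four, and its IoU is invariant to rotation of the slide, which matches the motivation given above.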
This paper reviews the challenge on super-resolution of compressed image and video at AIM 2022. The challenge includes two tracks. Track 1 targets the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state-of-the-art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Expansion methods explore the possibility of a performance bottleneck in the input length of deep learning methods. In this work, we introduce Block Static Expansion, which distributes and processes the input over a heterogeneous and arbitrarily large collection of sequences characterized by different lengths compared to the input. Building on this approach, we introduce a new model called ExpansionNet v2, trained with our new training strategy, which is not only effective but also about 6x faster than the standard approaches in recent image captioning. Our new model achieves state-of-the-art performance on the MS-COCO 2014 captioning challenge, scoring 143.7 CIDEr-D on the offline test split, 140.8 CIDEr-D on the online evaluation server, and 72.9 all-CIDEr on the nocaps validation set. Source code is available at: https://github.com/jchenghu/expansionnet_v2
Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction from randomly sampled trajectories without using additional fully-sampled training data. However, existing UNN-based methods do not fully exploit the physical priors of MR images, resulting in poor performance in some common scenarios (e.g., partial Fourier, regular sampling, etc.) and lacking theoretical guarantees on reconstruction accuracy. To bridge this gap, we propose a safeguarded k-space interpolation method driven by a specially designed UNN, characterized by three physical priors of MR images (or k-space data): sparsity, coil sensitivity smoothness, and phase smoothness. We also prove that the proposed method guarantees a tight bound on the accuracy of the interpolated k-space data. Finally, ablation experiments show that the proposed method characterizes the physical priors of MR images more accurately than existing traditional methods. Moreover, under a series of commonly used sampling trajectories, experiments also show that the proposed method consistently outperforms traditional parallel imaging methods and existing UNNs, and in some cases even exceeds state-of-the-art supervised-trained k-space deep learning methods.
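To make the role of the priors concrete, here is a hedged, single-coil sketch of how such terms could enter an interpolation objective; the penalty forms and weights are illustrative assumptions (coil-sensitivity smoothness is omitted), not the paper's guaranteed formulation:

    import torch

    def interp_objective(img, kspace_meas, mask, lam_sparse=0.01, lam_phase=0.1):
        # img: complex-valued image estimate from the untrained network;
        # kspace_meas: acquired k-space samples; mask: binary sampling mask.
        k = torch.fft.fft2(img)
        # data fit on the acquired k-space locations only
        data_fit = ((k - kspace_meas) * mask).abs().pow(2).mean()
        # sparsity prior: l1 penalty on the image magnitude (illustrative)
        sparse = img.abs().mean()
        # phase-smoothness prior: total variation of the phase map
        # (illustrative; ignores phase wrap-around)
        ph = torch.angle(img)
        tv = (ph[1:, :] - ph[:-1, :]).abs().mean() \
             + (ph[:, 1:] - ph[:, :-1]).abs().mean()
        return data_fit + lam_sparse * sparse + lam_phase * tv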
Recently, deep learning has been extensively studied to accelerate dynamic magnetic resonance (MR) imaging, with encouraging progress. However, without fully sampled reference data for training, current approaches may have limited ability to recover fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction from undersampled k-space data. The proposed framework is equipped with three important components: dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework can be flexibly integrated with both data-driven networks and model-based iterative unrolled networks. Our method has been evaluated on an in-vivo dataset and compared with four state-of-the-art methods. The results show that our method possesses a strong capability to capture the essential and inherent representations directly from undersampled k-space data, and thus enables high-quality and fast dynamic MR imaging.
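A hedged sketch of what the dual-network co-training objective could look like for a single coil and frame; the exact loss terms and re-undersampling scheme in SelfCoLearn may differ:

    import torch

    def co_training_loss(recon_a, recon_b, kspace, mask, lam=1.0):
        # recon_a, recon_b: complex image estimates from the two networks,
        # each fed a different re-undersampling of the same acquired data.
        k_a = torch.fft.fft2(recon_a)
        k_b = torch.fft.fft2(recon_b)
        # data consistency: both branches must match the acquired samples
        dc = ((k_a - kspace) * mask).abs().mean() \
             + ((k_b - kspace) * mask).abs().mean()
        # collaboration: the two reconstructions should agree with each other
        agree = (recon_a - recon_b).abs().mean()
        return dc + lam * agree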