Efficiently segmenting foreground text from the background in degraded color document images is an active research topic. Because ancient documents are imperfectly preserved over long periods, various types of degradation, including staining, yellowing, and ink seepage, seriously affect the results of image binarization. In this paper, a three-stage method is proposed for image enhancement and binarization of degraded color document images using discrete wavelet transform (DWT) and generative adversarial networks (GANs). In Stage-1, we apply DWT and retain the LL subband images to achieve image enhancement. In Stage-2, the original input image is split into four single-channel images (red, green, blue, and gray), each of which is used to train an independent adversarial network; the trained models then extract the color foreground information from the images. In Stage-3, to combine global and local features, the output image from Stage-2 and the original input image are used to train another independent adversarial network for document binarization. Experimental results demonstrate that the proposed method outperforms many classical and state-of-the-art (SOTA) methods on the Document Image Binarization Contest (DIBCO) datasets. We release our implementation code at https://github.com/abcpp12383/ThreeStageBinarization.
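As a minimal sketch of the Stage-1 idea, the snippet below applies a one-level 2-D DWT and keeps only the LL subband; the wavelet choice, single decomposition level, reconstruction step, and file names are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of Stage-1: enhance a degraded document image via a 2-D DWT,
# keeping only the low-frequency LL subband as the abstract describes.
# Assumes the PyWavelets and OpenCV packages; 'haar' and one level are
# illustrative choices, and the input path is a placeholder.
import cv2
import pywt

img = cv2.imread("degraded_doc.png", cv2.IMREAD_GRAYSCALE)

# One-level 2-D DWT: LL holds the smoothed approximation; LH/HL/HH hold
# high-frequency detail, where much of the degradation noise lives.
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")

# Keep LL, zero the detail subbands (None is treated as zeros), and
# reconstruct an enhanced image at the original size.
enhanced = pywt.idwt2((LL, (None, None, None)), "haar")
cv2.imwrite("enhanced_doc.png", enhanced.clip(0, 255).astype("uint8"))
```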
Convolutional neural networks (CNNs) increase depth by stacking convolutional layers, and deeper network models perform better at image recognition. Empirical studies show that simply stacking convolutional layers does not make a network train better, whereas skip connections (residual learning) can improve network performance. For image classification, models with a globally dense connection architecture perform well on large datasets such as ImageNet, but are not suitable for small datasets such as CIFAR-10 and SVHN. Unlike dense connections, we propose two new algorithms for connecting layers. The baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. Experimental results on image classification on CIFAR-10 and SVHN show that ShortNet1 has a 5% lower test error rate and 25% faster inference time than the baseline, while ShortNet2 speeds up inference time by 40% with a smaller loss in test accuracy.
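The abstract does not specify the two connection algorithms, so the sketch below only illustrates the densely connected baseline that ShortNet1 and ShortNet2 are compared against; the framework (PyTorch) and layer sizes are assumptions.

```python
# Hedged sketch of a densely connected baseline block: each layer sees the
# concatenation of all earlier feature maps. Channel counts are illustrative.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(in_channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.layers:
            # Dense connectivity: concatenate every earlier feature map.
            features.append(torch.relu(conv(torch.cat(features, dim=1))))
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=16, growth=12, num_layers=4)
print(block(torch.randn(1, 16, 32, 32)).shape)  # (1, 16 + 4*12, 32, 32)
```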
Thanks to the strong performance of deep learning algorithms in the field of computer vision (CV), convolutional neural network (CNN) architectures have become the main backbone of computer vision tasks. With the widespread use of mobile devices, neural network models for platforms with low computing power are gradually attracting attention; however, because of these computing limits, deep learning algorithms are often unavailable on mobile devices. This paper proposes a lightweight convolutional neural network, TripleNet, that runs easily on a Raspberry Pi. Adopting the block-connection concept from ThreshNet, the newly proposed network model compresses and accelerates the network, reduces the number of parameters, and shortens the inference time per image while preserving accuracy. We run image classification experiments with the proposed TripleNet and other state-of-the-art (SOTA) neural networks on a Raspberry Pi using the CIFAR-10 and SVHN datasets. The experimental results show that, compared with GhostNet, MobileNet, ThreshNet, EfficientNet, and HarDNet, the inference time per image is shortened by 15%, 16%, 17%, 24%, and 30%, respectively.
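A hedged sketch of how per-image inference time in such comparisons might be measured on-device; the stand-in model (MobileNetV2) and CIFAR-10-sized input are assumptions, since TripleNet's definition is not given in the abstract.

```python
# Hedged sketch of a per-image CPU inference benchmark, the metric the
# abstract compares. Model, input size, and run counts are illustrative.
import time
import torch
import torchvision.models as models

model = models.mobilenet_v2(num_classes=10).eval()  # stand-in, not TripleNet
x = torch.randn(1, 3, 32, 32)                       # one CIFAR-10-sized image

with torch.no_grad():
    for _ in range(10):                 # warm-up runs, excluded from timing
        model(x)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"inference time per image: {1000 * elapsed / runs:.2f} ms")
```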
With the continuous development of neural networks for computer vision tasks, more and more network architectures have achieved outstanding success. As one of the state-of-the-art neural network architectures, DenseNet, which shortcuts all feature maps, can solve the problem of model depth. Although this architecture has excellent accuracy at a low MAC (multiply-accumulate) count, it requires excessive inference time. To solve this problem, HarDNet reduces the connections between feature maps so that the remaining connections resemble harmonic waves. However, this compression method can reduce model accuracy and increase MACs and model size; the architecture only reduces memory access time, and its overall performance needs improvement. We therefore propose a new network architecture, ThreshNet, that uses a threshold mechanism to further optimize the connection method: different numbers of connections are discarded at different convolutional layers to compress the feature maps. The proposed architecture is evaluated on image classification using three datasets: CIFAR-10, CIFAR-100, and SVHN. Experimental results show that ThreshNet reduces inference time by up to 60% compared with DenseNet, while training up to 35% faster and lowering the error rate by 20% compared with HarDNet on these datasets.
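A minimal sketch of the threshold idea as described here, where each layer concatenates only a limited number of its most recent feature maps; the keep-count schedule and channel widths are illustrative assumptions rather than ThreshNet's actual configuration.

```python
# Hedged sketch: each layer keeps only its `keep` most recent feature maps,
# discarding the rest of the dense connections. Schedule below is illustrative.
import torch
import torch.nn as nn

class ThresholdBlock(nn.Module):
    def __init__(self, channels: int, keep_counts: list[int]):
        super().__init__()
        self.keep_counts = keep_counts
        self.layers = nn.ModuleList(
            # Input width is bounded by the keep count instead of growing
            # with depth as in a fully dense block.
            nn.Conv2d(channels * min(k, i + 1), channels, 3, padding=1)
            for i, k in enumerate(keep_counts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv, k in zip(self.layers, self.keep_counts):
            kept = features[-k:]  # connections beyond the threshold are dropped
            features.append(torch.relu(conv(torch.cat(kept, dim=1))))
        return features[-1]

block = ThresholdBlock(channels=16, keep_counts=[1, 2, 2, 3])
print(block(torch.randn(1, 16, 32, 32)).shape)  # (1, 16, 32, 32)
```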
Deep neural networks have made significant progress in the field of computer vision. Recent studies show that the depth, width, and shortcut connections of a neural network architecture play a crucial role in its performance. DenseNet, one of the state-of-the-art architectures, achieves an excellent convergence rate through dense connections, but it still has obvious shortcomings in memory usage. In this paper, we introduce a new type of pruning tool, the threshold, which borrows the principle of the threshold voltage in MOSFETs. This work uses the method to connect blocks of different depths in different ways to reduce memory usage; the resulting network is named ThreshNet. We evaluate ThreshNet and other networks on the CIFAR-10 dataset. Experiments show that HarDNet is twice as fast as DenseNet, and on this basis ThreshNet is 10% faster than HarDNet with a 10% lower error rate.
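A back-of-the-envelope illustration of the memory argument: under dense connectivity a layer's input width grows linearly with depth, while keeping only the last k feature maps bounds it. The growth rate and k below are assumptions, not the paper's figures.

```python
# Hedged arithmetic sketch: per-layer input channel counts under dense
# connectivity vs. a threshold that keeps only the last k feature maps.
growth, num_layers, k = 32, 12, 3

dense = [(i + 1) * growth for i in range(num_layers)]         # all predecessors
thresh = [min(i + 1, k) * growth for i in range(num_layers)]  # last k only

print("dense  input channels per layer:", dense)
print("thresh input channels per layer:", thresh)
print(f"total: dense={sum(dense)}, thresh={sum(thresh)}")
```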
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth and hence a lower axial resolution. Traditionally, image-processing-based techniques have been used to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making and improve patient outcomes, reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
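A minimal sketch of the described degradation model: a Gaussian window applied to spectral-domain A-scan data narrows the effective bandwidth and thus lowers axial resolution after reconstruction. The synthetic spectrum and window width are assumptions.

```python
# Hedged sketch: simulate reduced axial resolution by Gaussian windowing an
# A-scan's spectrum before the inverse FFT. All parameters are illustrative.
import numpy as np

n = 2048                                                  # spectral samples
spectrum = np.random.randn(n) + 1j * np.random.randn(n)   # stand-in raw spectrum

# A narrower Gaussian window means a narrower effective bandwidth, and hence
# a lower axial resolution in the reconstructed A-scan.
sigma = 0.15 * n
window = np.exp(-0.5 * ((np.arange(n) - n / 2) / sigma) ** 2)

full_ascan = np.abs(np.fft.ifft(spectrum))              # original axial profile
reduced_ascan = np.abs(np.fft.ifft(spectrum * window))  # simulated low resolution

print(full_ascan.shape, reduced_ascan.shape)
```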
Weakly-supervised temporal action localization (WTAL) learns to detect and classify action instances with only category labels. Most methods adopt off-the-shelf Classification-Based Pre-training (CBP) to generate video features for action localization. However, the differing optimization objectives of classification and localization cause the temporally localized results to suffer from severe incompleteness. To tackle this issue without additional annotations, this paper considers distilling free action knowledge from Vision-Language Pre-training (VLP), since we surprisingly observe that the localization results of vanilla VLP have an over-completeness issue, which is exactly complementary to the CBP results. To fuse this complementarity, we propose a novel distillation-collaboration framework with two branches acting as CBP and VLP, respectively. The framework is optimized through a dual-branch alternate training strategy: during the B step, we distill confident background pseudo-labels from the CBP branch; during the F step, we distill confident foreground pseudo-labels from the VLP branch. As a result, the dual-branch complementarity is effectively fused into a strong alliance. Extensive experiments and ablation studies on THUMOS14 and ActivityNet1.2 show that our method significantly outperforms state-of-the-art methods.
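A structural sketch of the alternate training strategy as described: the B step distills confident background pseudo-labels from the CBP branch to supervise the VLP branch, and the F step does the reverse with foreground pseudo-labels. The branch objects, confidence threshold tau, and loss methods (bg_loss, fg_loss) are placeholders, not the paper's implementation.

```python
# Hedged structural sketch of dual-branch alternate training with
# pseudo-label distillation. Branches and losses are placeholders.
import torch

def alternate_train(cbp_branch, vlp_branch, loader, optimizer, tau=0.9, epochs=10):
    for epoch in range(epochs):
        for video_feats, _labels in loader:
            if epoch % 2 == 0:  # B step: background pseudo-labels from CBP
                with torch.no_grad():
                    scores = cbp_branch(video_feats)       # per-snippet fg scores
                pseudo_bg = (scores < 1 - tau).float()     # confident background
                loss = vlp_branch.bg_loss(video_feats, pseudo_bg)  # placeholder
            else:               # F step: foreground pseudo-labels from VLP
                with torch.no_grad():
                    scores = vlp_branch(video_feats)
                pseudo_fg = (scores > tau).float()         # confident foreground
                loss = cbp_branch.fg_loss(video_feats, pseudo_fg)  # placeholder
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```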
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues, i.e., by modeling the relationship between surface orientation and intensity at each pixel. Photometric stereo offers superior per-pixel resolution and fine reconstruction detail. However, it is a complicated problem because of the non-linear relationship introduced by non-Lambertian surface reflectance. Recently, various deep learning methods have shown powerful capabilities for photometric stereo on non-Lambertian surfaces. This paper provides a comprehensive review of existing deep-learning-based calibrated photometric stereo methods. We first analyze these methods from different perspectives, including input processing, supervision, and network architecture. We then summarize the performance of deep learning photometric stereo models on the most widely used benchmark dataset, demonstrating the advanced performance of deep-learning-based photometric stereo methods. Finally, we give suggestions and propose future research directions based on the limitations of existing models.
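For context, a minimal sketch of the classical calibrated baseline the reviewed deep methods extend: under a Lambertian model, intensity is $I = \rho\,(\mathbf{n} \cdot \mathbf{l})$, so with three or more known lights the normals follow from per-pixel least squares. The synthetic data here is an assumption for the demo, and attached shadows are ignored.

```python
# Hedged sketch of classical calibrated (Lambertian) photometric stereo:
# solve L g = I per pixel for g = rho * n. Data below is synthetic.
import numpy as np

m, h, w = 8, 4, 4                                # images, height, width
L = np.random.randn(m, 3)
L /= np.linalg.norm(L, axis=1, keepdims=True)    # known, calibrated lights

# Ground-truth unit normals (albedo 1) used to synthesize observations;
# attached shadows are ignored for this demo.
n_true = np.random.randn(h * w, 3)
n_true /= np.linalg.norm(n_true, axis=1, keepdims=True)
I = n_true @ L.T                                 # (h*w, m) intensities

g, *_ = np.linalg.lstsq(L, I.T, rcond=None)      # (3, h*w) least squares
rho = np.linalg.norm(g, axis=0)                  # recovered albedo
normals = (g / rho).T                            # recovered unit normals

print(np.allclose((normals * n_true).sum(axis=1), 1, atol=1e-6))
```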
With the success of Vision Transformers (ViTs) in computer vision tasks, recent works have tried to optimize the performance and complexity of ViTs to enable efficient deployment on mobile devices. Multiple approaches have been proposed to accelerate the attention mechanism, improve inefficient designs, or incorporate mobile-friendly lightweight convolutions to form hybrid architectures. However, ViT and its variants still have higher latency or considerably more parameters than lightweight CNNs, even compared with the years-old MobileNet. In practice, latency and size are both crucial for efficient deployment on resource-constrained hardware. In this work, we investigate a central question: can transformer models run as fast as MobileNet and maintain a similar size? We revisit the design choices of ViTs and propose an improved supernet with low latency and high parameter efficiency. We further introduce a fine-grained joint search strategy that finds efficient architectures by optimizing latency and the number of parameters simultaneously. The proposed models, EfficientFormerV2, achieve about $4\%$ higher top-1 accuracy than MobileNetV2 and MobileNetV2$\times1.4$ on ImageNet-1K with similar latency and parameters. We demonstrate that properly designed and optimized vision transformers can achieve high performance with MobileNet-level size and speed.
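A hedged sketch of the kind of joint objective such a fine-grained search might optimize, scoring candidate subnets on accuracy, latency, and parameter count together; the scoring form and example numbers are assumptions, not the paper's exact search criterion.

```python
# Hedged sketch: rank supernet candidates by a joint accuracy/latency/size
# objective. The multiplicative form and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float    # top-1 on a validation split
    latency_ms: float  # measured on the target mobile device
    params_m: float    # parameters, in millions

def joint_score(c: Candidate, alpha: float = 1.0, beta: float = 1.0) -> float:
    # Reward accuracy; penalize latency and size multiplicatively, so a
    # candidate must be efficient on both axes to rank highly.
    return c.accuracy / ((c.latency_ms ** alpha) * (c.params_m ** beta))

pool = [
    Candidate("subnet_a", 0.78, 1.6, 3.5),
    Candidate("subnet_b", 0.80, 2.4, 6.1),
    Candidate("subnet_c", 0.77, 1.2, 3.1),
]
print(max(pool, key=joint_score).name)
```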
Recent efforts on Neural Radiance Fields (NeRF) have shown impressive results for novel view synthesis by using implicit neural representations to represent 3D scenes. Because of the volumetric rendering process, NeRF inference is extremely slow, limiting its application on resource-constrained hardware such as mobile devices. Many works have sought to reduce the latency of running NeRF models, but most still require high-end GPUs for acceleration or extra storage memory, neither of which is available on mobile devices. Another emerging direction uses the neural light field (NeLF) for speedup, since only one forward pass is performed per ray to predict the pixel color. Nevertheless, to reach rendering quality similar to NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real time on mobile devices for neural rendering. We follow the NeLF setting to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, saving $15\times \sim 24\times$ storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., $18.04$ ms (iPhone 13) to render one $1008\times756$ image of a real 3D scene. Additionally, we achieve image quality similar to NeRF and better than MobileNeRF (PSNR $26.15$ vs. $25.91$ on the real-world forward-facing dataset).
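A minimal sketch of the NeLF setting this work follows: one network forward pass maps a ray directly to a pixel color, with no per-sample volumetric integration as in NeRF. The tiny MLP below is an illustrative assumption, far smaller than a real NeLF backbone.

```python
# Hedged sketch of the NeLF formulation: ray in, pixel color out, in a
# single forward pass. Layer sizes and ray encoding are illustrative.
import torch
import torch.nn as nn

class TinyNeLF(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),     # ray = origin (3) + direction (3)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rays: torch.Tensor) -> torch.Tensor:
        return self.mlp(rays)  # one forward pass per ray -> pixel color

rays = torch.randn(1008 * 756, 6)  # all rays of one 1008x756 image
colors = TinyNeLF()(rays)          # no per-sample volume rendering needed
print(colors.shape)                # (1008*756, 3)
```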