The massive collection and explosive growth of medical data demand effective compression for efficient storage, transmission, and sharing. Readily available visual data compression techniques have been studied extensively, but they are tailored for natural images/videos and thus show limited performance on medical data, which have different characteristics. The emerging implicit neural representation (INR) is gaining momentum and shows high promise for fitting diverse visual data in a target-data-specific manner, but a general compression scheme covering diverse medical data is so far absent. To address this issue, we first derive a mathematical explanation for INR's spectrum concentration property and an analytical insight on the design of a compression-oriented INR architecture. We then design a funnel-shaped neural network capable of covering the broad spectrum of complex medical data while achieving a high compression ratio. Based on this design, we perform compression via optimization under a given budget and propose an adaptive compression approach, SCI, which adaptively partitions the target data into blocks matching the concentrated spectrum envelope of the adopted INR and allocates parameters to achieve high representation accuracy under a given compression ratio. Experiments show SCI's superior performance over conventional techniques and its wide applicability across diverse medical data.
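To make the block-wise INR fitting concrete, here is a minimal sketch that fits a toy funnel-shaped sinusoidal MLP to a single 16^3 data block with a plain Adam loop; the layer widths, block size, and training schedule are illustrative assumptions, not the SCI implementation.

```python
# Minimal sketch (not the authors' code): fit a small sinusoidal MLP (INR)
# to one 3-D data block under a fixed parameter budget.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class BlockINR(nn.Module):
    """Funnel-shaped MLP: widths shrink layer by layer (an assumption here)."""
    def __init__(self, widths=(64, 48, 32)):
        super().__init__()
        layers, in_f = [], 3                      # (x, y, z) coordinates
        for w in widths:
            layers.append(SineLayer(in_f, w))
            in_f = w
        layers.append(nn.Linear(in_f, 1))         # scalar intensity
        self.net = nn.Sequential(*layers)
    def forward(self, coords):
        return self.net(coords)

# One 16^3 block with normalized coordinates in [-1, 1].
g = torch.linspace(-1, 1, 16)
coords = torch.stack(torch.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
block = torch.rand(coords.shape[0], 1)            # stand-in for real voxel values

model = BlockINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                           # compression = overfitting the block
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), block)
    loss.backward()
    opt.step()
print("params:", sum(p.numel() for p in model.parameters()), "final MSE:", loss.item())
```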
Increasing the layer number of on-chip photonic neural networks (PNNs) is essential for improving their model performance. However, successively cascading network hidden layers leads to larger integrated photonic chip areas. To address this issue, we propose the optical neural ordinary differential equations (ON-ODE) architecture, which parameterizes the continuous dynamics of hidden layers with optical ODE solvers. The ON-ODE comprises PNNs followed by a photonic integrator and an optical feedback loop, which can be configured to represent residual neural networks (ResNet) and recurrent neural networks while effectively reducing the chip area occupancy. For the interference-based optoelectronic nonlinear hidden layer, numerical experiments show that a single-hidden-layer ON-ODE achieves roughly the same accuracy as a two-layer optical ResNet on image classification tasks. Furthermore, the ON-ODE improves the model classification accuracy for the diffraction-based all-optical linear hidden layer. The time-dependent dynamics property of the ON-ODE is further applied to trajectory prediction with high accuracy.
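The core idea, parameterizing hidden-layer dynamics with an ODE and integrating them with a solver, can be sketched in software; the fixed-step Euler integrator and layer sizes below are assumptions for illustration only and do not model the photonic hardware.

```python
# Minimal sketch (assumed, not the photonic implementation): a neural-ODE block whose
# hidden state evolves under learned dynamics f(h, t), integrated with fixed-step Euler.
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, dim), nn.Tanh(), nn.Linear(dim, dim))
    def forward(self, h, t):
        t_col = torch.full_like(h[:, :1], t)      # broadcast the scalar time to the batch
        return self.net(torch.cat([h, t_col], dim=1))

class ODEBlock(nn.Module):
    """Plays the role of integrator + feedback loop: one parameter set reused over depth."""
    def __init__(self, dim=32, steps=8, t1=1.0):
        super().__init__()
        self.f = Dynamics(dim)
        self.steps = steps
        self.dt = t1 / steps
    def forward(self, h):
        t = 0.0
        for _ in range(self.steps):               # Euler step: h <- h + dt * f(h, t)
            h = h + self.dt * self.f(h, t)
            t += self.dt
        return h

x = torch.randn(4, 32)
print(ODEBlock()(x).shape)                        # torch.Size([4, 32])
```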
Under low-light conditions, handheld photography suffers from severe camera shake caused by long exposure settings. Although existing deblurring algorithms have shown promising performance on well-exposed blurry images, they still cannot cope with low-light snapshots. Sophisticated noise and saturation regions are the two dominating challenges in practical low-light deblurring. In this work, we propose a novel non-blind deblurring method, the image and feature space Wiener deconvolution network (INFWIDE), to address these problems systematically. In terms of algorithm design, INFWIDE adopts a two-branch architecture that explicitly removes noise and hallucinates saturated regions in the image space, suppresses ringing artifacts in the feature space, and integrates the two complementary outputs through a delicate multi-scale fusion network for high-quality night photograph deblurring. For effective network training, we design a set of loss functions that integrate a forward imaging model and backward reconstruction to form a close-loop regularization, ensuring good convergence of the deep neural network. Furthermore, to optimize INFWIDE's applicability in real low-light conditions, a physical-process-based low-light noise model is employed to synthesize realistic noisy night photographs for model training. Leveraging the physically driven characteristics of the traditional Wiener deconvolution algorithm and the representation ability of deep neural networks, INFWIDE can recover fine details while suppressing unpleasant artifacts during deblurring. Extensive experiments on both synthetic and real data demonstrate the superior performance of the proposed approach.
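The physically driven component the abstract refers to is classical Wiener deconvolution; a minimal frequency-domain sketch on synthetic data follows, where the box blur kernel, noise level, and SNR constant are assumptions rather than INFWIDE's settings.

```python
# Minimal sketch (assumed): the classical Wiener deconvolution step that INFWIDE-style
# pipelines build on, applied in the Fourier domain to a synthetically blurred image.
import numpy as np

def wiener_deconv(blurred, kernel, snr=100.0):
    """X_hat = conj(K) / (|K|^2 + 1/snr) * Y, all in the frequency domain."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr) * Y
    return np.real(np.fft.ifft2(X))

# Synthetic example: blur a random "image" with a box kernel, add noise, then invert.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
blurred += 0.01 * rng.standard_normal(img.shape)
restored = wiener_deconv(blurred, kernel, snr=100.0)
print("MSE blurred:", np.mean((blurred - img) ** 2),
      "MSE restored:", np.mean((restored - img) ** 2))
```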
Non-invasive optical imaging through dynamic scattering media has numerous important biomedical applications but remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescent emission, it is also well established that the temporal correlation of scattered coherent light diffuses through tissue much like the light intensity does. To date, however, few works have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics. In this work, we utilize a single-photon avalanche diode (SPAD) array camera to simultaneously monitor, at the single-photon level, the temporal dynamics of speckle fluctuations from 12 different phantom tissue locations delivered via a customized fiber bundle array. We then apply a deep neural network to convert the acquired single-photon measurements into video of the scattering dynamics beneath rapidly decorrelating tissue phantoms. We demonstrate the ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring beneath a decorrelating tissue phantom at millimeter-scale resolution, and highlight how our model can flexibly extend to monitoring flow speed within buried phantom vessels.
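The signal the reconstruction relies on is the temporal correlation of speckle. Below is a small sketch of the normalized intensity autocorrelation g2(tau) computed from a synthetic SPAD count trace; the AR(1) intensity model and count rate are assumptions used only to produce stand-in data.

```python
# Minimal sketch (assumed data): normalized intensity autocorrelation g2(tau) of a
# SPAD photon-count time trace, i.e., the speckle-decorrelation signal described above.
import numpy as np

def g2(counts, max_lag):
    """g2(tau) = <I(t) I(t+tau)> / <I>^2 for lags 1..max_lag."""
    counts = np.asarray(counts, dtype=float)
    mean_sq = counts.mean() ** 2
    return np.array([
        np.mean(counts[:-lag] * counts[lag:]) / mean_sq
        for lag in range(1, max_lag + 1)
    ])

# Synthetic stand-in: slowly decorrelating intensity sampled as Poisson photon counts.
rng = np.random.default_rng(1)
x = np.zeros(5000)
for i in range(1, 5000):                       # AR(1) process, decorrelation ~100 frames
    x[i] = 0.99 * x[i - 1] + 0.1 * rng.standard_normal()
intensity = 1.0 + 0.3 * x
counts = rng.poisson(np.clip(5.0 * intensity, 0, None))
curve = g2(counts, max_lag=50)
print("g2 at lag 1:", curve[0], " g2 at lag 50:", curve[-1])
```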
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. This study yields two insights: 1) the input representation plays a crucial role in robustness, and different representations behave quite differently under specific corruptions; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are considerably less robust when dealing with noisy data. Finally, based on these observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We hope our benchmark, comprehensive analysis, and observations can advance future research in robust LiDAR semantic segmentation for safety-critical applications.
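As a rough illustration of what such out-of-domain corruptions look like in code, the sketch below applies coordinate jitter and random point dropout to a synthetic scan; the corruption forms and parameters are assumptions, not SemanticKITTI-C's exact definitions.

```python
# Minimal sketch (assumed parameters, not the benchmark's exact recipe):
# two simple LiDAR corruptions, jittering point coordinates and dropping points.
import numpy as np

def jitter_points(points, sigma=0.03, rng=None):
    """Simulate measurement noise by adding Gaussian jitter to XYZ coordinates."""
    rng = rng or np.random.default_rng()
    return points + rng.normal(0.0, sigma, size=points.shape)

def drop_points(points, drop_ratio=0.2, rng=None):
    """Simulate sparser returns (e.g., cross-device discrepancy) by random dropout."""
    rng = rng or np.random.default_rng()
    keep = rng.random(len(points)) >= drop_ratio
    return points[keep]

rng = np.random.default_rng(42)
cloud = rng.uniform(-50, 50, size=(120_000, 3))    # stand-in for one LiDAR scan (x, y, z)
corrupted = drop_points(jitter_points(cloud, sigma=0.05, rng=rng), drop_ratio=0.3, rng=rng)
print(cloud.shape, "->", corrupted.shape)
```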
In recent years, arbitrary image style transfer has attracted increasing attention. Given a pair of content and style images, the goal is to produce a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features: stylizing an image with sufficient style patterns may damage the content details, sometimes to the point where objects in the image can no longer be clearly distinguished. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
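One plausible form of such an edge loss, assuming fixed Sobel filters on grayscale inputs and an L1 comparison (not necessarily the paper's exact formulation), is sketched below.

```python
# Minimal sketch (assumed form): an edge loss comparing Sobel edge maps of the
# content image and the stylized output, penalizing blurred or lost contours.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """img: (B, 1, H, W) grayscale. Returns the per-pixel gradient magnitude."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(stylized_gray, content_gray):
    return F.l1_loss(sobel_edges(stylized_gray), sobel_edges(content_gray))

content = torch.rand(2, 1, 64, 64)
stylized = torch.rand(2, 1, 64, 64, requires_grad=True)
loss = edge_loss(stylized, content)
loss.backward()                                   # usable as an extra training term
print(float(loss))
```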
With the increasing capability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the ability of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work encourages more research on uncovering how ICL works and on improving ICL in future work.
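A minimal sketch of how an in-context prompt is typically assembled, a few labeled demonstrations followed by the query, is shown below; the sentiment task and prompt format are illustrative conventions, not tied to any particular model or API.

```python
# Minimal sketch (assumed prompt format): assembling an in-context learning prompt
# from a few labeled demonstrations followed by the query to be completed by an LLM.
def build_icl_prompt(demonstrations, query, instruction="Classify the sentiment."):
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
print(build_icl_prompt(demos, "A surprisingly touching film."))
# The resulting string is sent to the LLM, which predicts the label from context alone.
```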
Gaze estimation is the fundamental basis for many visual tasks. Yet the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel head-eye redirection parametric model based on Neural Radiance Fields, which enables dense gaze data generation with view consistency and accurate gaze direction. Moreover, our head-eye redirection parametric model decouples the face and eyes for separate neural rendering, so it can separately control face attributes such as identity, illumination, and eye gaze direction. Diverse 3D-aware gaze datasets can thus be obtained by manipulating the latent codes belonging to different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method in domain generalization and domain adaptation for gaze estimation tasks.
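Purely as an illustration of decoupled latent control, and not the paper's NeRF model, the sketch below splits a latent code into identity, illumination, and gaze segments feeding two separate decoders, so the gaze segment can be edited while the other attributes are held fixed.

```python
# Minimal sketch (illustrative only): a latent code partitioned into identity,
# illumination, and gaze segments, with separate "face" and "eye" decoders.
import torch
import torch.nn as nn

class DecoupledRenderer(nn.Module):
    def __init__(self, d_id=16, d_light=8, d_gaze=4, out_dim=3 * 32 * 32):
        super().__init__()
        self.face_dec = nn.Sequential(nn.Linear(d_id + d_light, 128), nn.ReLU(),
                                      nn.Linear(128, out_dim))
        self.eye_dec = nn.Sequential(nn.Linear(d_id + d_gaze, 128), nn.ReLU(),
                                     nn.Linear(128, out_dim))
        self.dims = (d_id, d_light, d_gaze)
    def forward(self, z):
        d_id, d_light, d_gaze = self.dims
        z_id, z_light, z_gaze = torch.split(z, [d_id, d_light, d_gaze], dim=1)
        face = self.face_dec(torch.cat([z_id, z_light], dim=1))
        eyes = self.eye_dec(torch.cat([z_id, z_gaze], dim=1))
        return face + eyes                        # crude composition of the two branches

model = DecoupledRenderer()
z = torch.randn(2, 16 + 8 + 4)
img_a = model(z)
z_edit = z.clone()
z_edit[:, -4:] = torch.randn(2, 4)                # edit only the gaze segment
img_b = model(z_edit)
print(img_a.shape, torch.allclose(img_a, img_b))  # the edit changes output via the eye branch only
```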
Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress on generalization through synthetic forgery data augmentation. In this work, we explore another path to improving generalization. Our goal is to reduce the features that are easy to learn in the training phase, so as to reduce the risk of overfitting to specific forgery types. Specifically, in our method a teacher network takes the face images as input and generates an attention map over the deep features using a ViT with diverse multi-head attention. The attention map guides a student network to focus on the low-attended features by suppressing the highly attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method achieves promising performance on unseen forgeries and highly compressed data.
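The feature-domain mixup idea can be sketched as a convex combination of real and forged deep features; the Beta-distributed mixing coefficient below follows standard mixup and is an assumption about the exact recipe.

```python
# Minimal sketch (assumed form): mixup applied in the deep-feature domain to
# synthesize "forged" features, rather than mixing raw images.
import torch

def feature_mixup(feat_real, feat_fake, alpha=0.5):
    """Convexly mix two feature batches; lam ~ Beta(alpha, alpha) as in standard mixup."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * feat_real + (1.0 - lam) * feat_fake, lam

feat_real = torch.randn(8, 256)        # deep features of pristine faces (stand-in)
feat_fake = torch.randn(8, 256)        # deep features of forged faces (stand-in)
mixed, lam = feature_mixup(feat_real, feat_fake)
print(mixed.shape, float(lam))         # mixed features (and label weight) feed the student
```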
The development of deep learning models in medical image analysis is largely limited by the lack of large, well-annotated datasets. Unsupervised learning does not require labels and is better suited to medical image analysis problems; however, most current unsupervised learning methods need to be applied to large datasets. To make unsupervised learning applicable to small datasets, we propose Swin MAE, a masked autoencoder with Swin Transformer as its backbone. Even on a dataset of only a few thousand medical images and without using any pre-trained models, Swin MAE is still able to learn useful semantic features purely from images. In terms of transfer learning on downstream tasks, it can equal or even slightly outperform a supervised Swin Transformer model trained on ImageNet. The code will be publicly available soon.
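The masking step at the heart of a masked autoencoder can be sketched as follows, using generic MAE-style random patch masking; the 75% mask ratio and token shapes are common defaults, not necessarily Swin MAE's exact settings.

```python
# Minimal sketch (generic MAE-style masking, not necessarily Swin MAE's exact recipe):
# randomly hide a large fraction of image patches; the encoder sees only visible patches
# and the decoder must reconstruct the hidden ones.
import torch

def random_masking(patches, mask_ratio=0.75):
    """patches: (B, N, D). Returns visible patches, their indices, and a binary mask."""
    B, N, D = patches.shape
    n_keep = int(N * (1.0 - mask_ratio))
    noise = torch.rand(B, N)                         # random score per patch
    ids_shuffle = noise.argsort(dim=1)               # low score = keep
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, ids_keep, 0.0)                  # 0 = visible, 1 = masked
    return visible, ids_keep, mask

x = torch.randn(2, 196, 768)                         # e.g., 14x14 patches, 768-dim tokens
vis, ids, mask = random_masking(x)
print(vis.shape, mask.sum(dim=1))                    # (2, 49, 768); 147 masked per image
```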