Recently, deep-learning-based super-resolution methods have achieved promising performance, but they mainly focus on training a single generalized deep network by feeding it numerous samples. Intuitively, however, each image has its own representation and should be handled by an adaptive model. To this end, we propose a novel image-specific convolutional kernel modulation (IKM) that exploits the global contextual information of an image or feature to generate attention weights for adaptively modulating the convolutional kernels, which outperforms vanilla convolution and several existing attention mechanisms when embedded into state-of-the-art architectures, without any additional parameters. In particular, to optimize our IKM in mini-batch training, we introduce an image-specific optimization (ISO) algorithm that is more effective than conventional mini-batch SGD optimization. Furthermore, we investigate the effect of IKM on state-of-the-art architectures and exploit a new backbone with U-style residual learning and hourglass dense-block learning, termed the U-Hourglass Dense Network (U-HDN), which theoretically and experimentally maximizes the effectiveness of IKM. Extensive experiments on single-image super-resolution demonstrate that the proposed method achieves superior performance over existing methods. Code is available at github.com/yuanfeihuang/ikm.
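To make the kernel-modulation idea concrete, here is a minimal, hypothetical 1-D sketch: an attention weight computed from the input's global context rescales a shared kernel before convolution. The function name, the mean-based context, and the sigmoid gating are our illustrative assumptions, not the paper's exact IKM formulation.

```python
import math

def ikm_conv1d(signal, kernel):
    """Toy 1-D illustration of image-specific kernel modulation (IKM):
    an attention weight derived from the input's global context rescales
    the shared kernel before a 'valid' convolution."""
    # global context: mean activation of the whole signal
    ctx = sum(signal) / len(signal)
    # parameter-free attention weight in (0, 1) -- an assumption
    attn = 1.0 / (1.0 + math.exp(-ctx))
    # modulate the shared kernel for this specific input
    mod_kernel = [attn * w for w in kernel]
    n, k = len(signal), len(mod_kernel)
    return [sum(signal[i + j] * mod_kernel[j] for j in range(k))
            for i in range(n - k + 1)]
```

Because the attention weight comes from the input itself, two different images convolved with the same layer effectively see two different kernels.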
translated by 谷歌翻译
Relying heavily on iteratively estimating or optimizing degradation models from scratch, existing blind super-resolution (SR) methods are generally time-consuming and inefficient, since the degradation estimation proceeds from a blind initialization and lacks interpretable degradation priors. To address this, this paper proposes a transitive learning method for blind SR using an end-to-end network, without any additional iterations at inference, and explores an effective representation of unknown degradations. First, we analyze and demonstrate the transitivity of degradations as interpretable prior information for indirectly inferring an unknown degradation model, covering the widely used additive and convolutive degradations. We then propose a novel Transitive Learning method for blind Super-Resolution (TLSR), which adaptively infers a transitive transformation function to handle unknown degradations without any iterative operations at inference. Specifically, the end-to-end TLSR network consists of a degree-of-transitivity (DoT) estimation network, an identity feature extraction network, and a transitive learning module. Quantitative and qualitative evaluations on blind SR tasks demonstrate that the proposed TLSR achieves superior performance with lower complexity than state-of-the-art blind SR methods. The code is available at github.com/yuanfeihuang/tlsr.
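As a loose illustration of the transitive idea, the sketch below blends the outputs of two branches specialized for known degradation endpoints, weighted by an estimated degree of transitivity (DoT). The linear interpolation and all names are our assumptions for illustration, not the TLSR architecture itself.

```python
def tlsr_blend(sr_known_a, sr_known_b, dot):
    """Illustrative sketch of transitive learning for blind SR: given
    outputs from two branches specialized for known degradations, a
    degree of transitivity (DoT) in [0, 1] interpolates a transitive
    transformation toward the unknown degradation. The linear blend is
    a simplifying assumption."""
    assert 0.0 <= dot <= 1.0
    return [(1.0 - dot) * a + dot * b
            for a, b in zip(sr_known_a, sr_known_b)]
```

At DoT = 0 or 1 the output reduces to one of the known-degradation branches; intermediate values cover degradations between the two endpoints without any iterative re-estimation.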
Schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. We hypothesize that applying deep learning to a structural neuroimaging dataset could detect disease-related alterations and improve classification and diagnostic accuracy. We tested this hypothesis using a single, widely available, and conventional T1-weighted MRI scan, from which we extracted the 3D whole-brain structure using standard post-processing methods. A deep learning model was then developed, optimized, and evaluated on three open datasets with T1-weighted MRI scans of patients with schizophrenia. Our proposed model outperformed the benchmark model, which was also trained on structural MR images using a 3D CNN architecture. Our model is able to almost perfectly (area under the ROC curve = 0.987) distinguish schizophrenia patients from healthy controls on unseen structural MRI scans. Regional analysis localized subcortical regions and the ventricles as the most predictive brain regions. Subcortical structures serve a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. Our findings corroborate that schizophrenia is associated with widespread alterations in subcortical brain structures, and that subcortical structural information provides prominent features for diagnostic classification. Together, these results further demonstrate the potential of deep learning to improve schizophrenia diagnosis and identify its structural neuroimaging signatures from a single, standard T1-weighted brain MRI.
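The headline result is an area under the ROC curve of 0.987. For reference, this metric can be computed with the rank-based (Mann-Whitney) formulation sketched below; this is a generic metric implementation, not the authors' evaluation code.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.987 therefore means a randomly drawn patient scan outranks a randomly drawn control scan about 98.7% of the time.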
Background: The assessment of left ventricular (LV) function by myocardial perfusion SPECT (MPS) relies on accurate myocardial segmentation. The aim of this paper is to develop and validate a new method that incorporates deep learning with shape priors to accurately extract the LV myocardium for automatic measurement of LV functional parameters. Methods: A segmentation architecture that integrates a three-dimensional (3D) V-Net with a shape deformation module was developed. Shape priors generated by a dynamic programming (DP) algorithm were then used to constrain and guide the model output during training, for quick convergence and improved performance. A stratified 5-fold cross-validation was used to train and validate our model. Results: The results of our proposed method agree well with the ground truth. Our proposed model achieved a Dice similarity coefficient (DSC) of 0.9573 (0.0244), 0.9821 (0.0137), and 0.9903 (0.0041), and a Hausdorff distance (HD) of 6.7529 (2.7334) mm, 7.2507 (3.1952) mm, and 7.6122 (3.0134) mm, for extracting the endocardium, myocardium, and epicardium, respectively. Conclusion: Our proposed method achieves high accuracy in extracting LV myocardial contours and assessing LV function.
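The two reported metrics, the Dice similarity coefficient and the Hausdorff distance, can be sketched generically as follows; these are the standard definitions, not the authors' implementation.

```python
import math

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flattened lists of 0/1: twice the overlap over the total mass."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (tuples of
    coordinates): the largest of all nearest-neighbour distances."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

DSC rewards volumetric overlap, while HD penalizes the single worst contour deviation, which is why the paper reports both.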
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; and 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
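Finding 1), distilling token relations, can be illustrated with a minimal pure-Python sketch: build each model's token-token relation map and penalize their KL divergence. The single-head, unscaled softmax similarity used here is a simplification of the paper's formulation, and all names are ours.

```python
import math

def token_relation(tokens):
    """Row-softmax of the token-token dot-product similarities."""
    rel = []
    for q in tokens:
        logits = [sum(a * b for a, b in zip(q, k)) for k in tokens]
        m = max(logits)
        exps = [math.exp(v - m) for v in logits]
        s = sum(exps)
        rel.append([e / s for e in exps])
    return rel

def relation_distill_loss(student_tokens, teacher_tokens):
    """Token-relation distillation sketch: mean row-wise KL divergence
    between the teacher's and the student's relation maps."""
    t = token_relation(teacher_tokens)
    s = token_relation(student_tokens)
    eps = 1e-12
    return sum(ti * math.log((ti + eps) / (si + eps))
               for trow, srow in zip(t, s)
               for ti, si in zip(trow, srow)) / len(t)
```

Matching relation maps instead of raw features lets a shallow, narrow student imitate how the teacher's tokens attend to one another without needing the same embedding dimension.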
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
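The style-aware adaptation mechanism, a style code adjusting the feed-forward weights, can be sketched as a single adaptively modulated linear layer. The per-output affine modulation below is our simplifying assumption, not the paper's exact design; in the real model the gains and biases would be predicted from the encoded style code.

```python
def style_adaptive_linear(x, weight, style_gain, style_bias):
    """Sketch of a style-aware adaptive feed-forward layer: per-output
    gains and biases derived from a style code rescale the shared
    weights, so one layer can render different speaking styles."""
    out = []
    for row, g, b in zip(weight, style_gain, style_bias):
        modulated = [g * w for w in row]  # style modulates the weights
        out.append(sum(w * xi for w, xi in zip(modulated, x)) + b)
    return out
```

Swapping in a different style code changes the effective weights at decode time without retraining the decoder, which is what allows one portrait to speak in arbitrary reference styles.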
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
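As a rough illustration of what an OTU-style splitter might do, the sketch below cuts a function's instruction list into fragments at control-flow instructions. The opcode set and the splitting rule are assumptions for illustration only, not NeurDP's actual algorithm.

```python
BRANCH_OPS = {"jmp", "je", "jne", "call", "ret"}  # illustrative subset

def split_translation_units(instructions):
    """Sketch of an OTU-style splitter: partition a function's
    instruction list into smaller fragments, ending a fragment after
    each control-flow instruction, so each piece is short enough to
    translate well."""
    units, current = [], []
    for ins in instructions:
        current.append(ins)
        if ins.split()[0] in BRANCH_OPS:  # close the fragment here
            units.append(current)
            current = []
    if current:
        units.append(current)
    return units
```

Shorter, control-flow-bounded fragments keep the translation model's input within a tractable length and align fragments with semantic units of the source program.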
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
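The GRN layer admits a compact description: compute each channel's global response, normalize it by the mean response across channels, and use the result to recalibrate the features with a residual connection, which encourages inter-channel competition. Below is a pure-Python sketch on a (channels x positions) feature map; the scalar gamma/beta are a simplification of the layer's learnable per-channel parameters.

```python
import math

def global_response_norm(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Sketch of Global Response Normalization (GRN): per-channel global
    L2 responses are divided by their cross-channel mean, and the
    normalized responses recalibrate the input with a residual path."""
    # per-channel global L2 response
    g = [math.sqrt(sum(v * v for v in ch)) for ch in x]
    mean_g = sum(g) / len(g)
    # divisive normalization across channels
    n = [gi / (mean_g + eps) for gi in g]
    # feature recalibration with residual connection
    return [[gamma * v * ni + beta + v for v in ch]
            for ch, ni in zip(x, n)]
```

Channels whose global response is above average get amplified and the rest get suppressed, which is the inter-channel competition the abstract refers to.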
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, one peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled utilizing a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of 31.6.
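The two PSCV ingredients, frequency-based predicate sampling and consensus voting, can be sketched as follows. The group fractions and the plain majority vote are our simplifications of the paper's strategy.

```python
from collections import Counter

def split_by_frequency(predicate_counts, head_frac=0.3, tail_frac=0.3):
    """Sketch of predicate sampling: rank predicate classes by frequency
    and partition them into head / body / tail groups, which can then be
    recombined and fed to different peers. Fractions are illustrative."""
    ranked = [p for p, _ in sorted(predicate_counts.items(),
                                   key=lambda kv: -kv[1])]
    n = len(ranked)
    h = max(1, int(n * head_frac))
    t = max(1, int(n * tail_frac))
    return ranked[:h], ranked[h:n - t], ranked[n - t:]

def consensus_vote(peer_predictions):
    """Majority vote over the peers' predicate predictions (ties broken
    by first occurrence) -- a simplification of consensus voting."""
    return Counter(peer_predictions).most_common(1)[0][0]
```

Training each peer on a different head/body/tail combination makes their errors complementary, so the majority vote tends to cancel the head-class bias any single peer would have.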