Partitioning an image into superpixels based on the similarity of pixels with respect to features such as colour or spatial location can significantly reduce data complexity and improve subsequent image processing tasks. Initial algorithms for unsupervised superpixel generation solely relied on local cues without prioritizing significant edges over arbitrary ones. On the other hand, more recent methods based on unsupervised deep learning either fail to properly address the trade-off between superpixel edge adherence and compactness or lack control over the generated number of superpixels. By using random images with strong spatial correlation as input, i.e., blurred noise images, in a non-convolutional image decoder we can reduce the expected number of contrasts and enforce smooth, connected edges in the reconstructed image. We generate edge-sparse pixel embeddings by encoding additional spatial information into the piece-wise smooth activation maps from the decoder's last hidden layer and use a standard clustering algorithm to extract high-quality superpixels. Our proposed method reaches state-of-the-art performance on the BSDS500, PASCAL-Context and a microscopy dataset.
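The pipeline this abstract describes — spatially correlated noise, per-pixel embeddings with appended coordinates, then an off-the-shelf clustering step — can be sketched minimally in NumPy. Everything below is a stand-in under stated assumptions: the "decoder" is an untrained random per-pixel MLP, the blur is a separable box filter, and the k-means loop is deliberately tiny; none of these are the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def blurred_noise(h, w, c=8, k=5):
    """Spatially correlated noise: white noise smoothed with a separable box filter."""
    x = rng.standard_normal((h, w, c))
    kernel = np.ones(k) / k
    x = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, x)
    x = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, x)
    return x

def pixel_embeddings(feat, xy_weight=0.5):
    """Append scaled (row, col) coordinates to per-pixel features.

    xy_weight trades off compactness (high) against edge adherence (low).
    """
    h, w, c = feat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy / h, xx / w], axis=-1) * xy_weight
    return np.concatenate([feat, coords], axis=-1).reshape(h * w, c + 2)

def kmeans(x, k, iters=10):
    """Tiny k-means; returns one cluster label per row of x."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

# "decoder activations" stand-in, then cluster into at most 16 superpixels
feat = blurred_noise(32, 32)
labels = kmeans(pixel_embeddings(feat), k=16).reshape(32, 32)
```

Because the input noise is spatially smooth, neighbouring pixels get similar embeddings and the clusters tend to form connected regions — the mechanism the abstract exploits with a learned decoder instead of raw noise features.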
Blind deconvolution is an ill-posed problem arising in a variety of fields, from microscopy to astronomy. The ill-posed nature of the problem requires adequate priors to arrive at a desirable solution. Recently, it has been shown that deep learning architectures can serve as an image-generation prior during unsupervised blind deconvolution optimization, although they exhibit performance fluctuations even on a single image. We propose to use Wiener deconvolution to guide the image generator during optimization, using an auxiliary kernel estimate that starts from a Gaussian. We observe that the high-frequency artifacts of deconvolution are reproduced with a delay compared to low-frequency features. In addition, the image generator reproduces the low-frequency features of the deconvolved image faster than those of the blurry image. We embed the computational process in a constrained optimization framework and show that the proposed method yields higher stability and performance across multiple datasets. Furthermore, we provide code.
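A minimal NumPy sketch of the Wiener deconvolution used here as guidance. It assumes a known blur kernel and a constant noise-to-signal ratio `nsr` — both simplifications, since the method above estimates the kernel during optimization rather than knowing it in advance:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Frequency-domain Wiener filter: X = conj(K) * Y / (|K|^2 + NSR),
    where NSR is an assumed constant noise-to-signal ratio."""
    K = np.fft.fft2(kernel, s=blurred.shape)  # zero-pad kernel to image size
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# toy example: blur a point source with a small Gaussian kernel, then restore it
img = np.zeros((64, 64))
img[32, 32] = 1.0
g = np.exp(-np.arange(-3, 4) ** 2 / 4.0)
kernel = np.outer(g, g)
kernel /= kernel.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel)
```

The `nsr` term regularizes the division where `|K|` is small, which is exactly where high-frequency content (and its artifacts) lives — consistent with the frequency-dependent behaviour the abstract describes.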
Recent works have shown that convolutional neural network (CNN) architectures have a spectral bias towards lower frequencies, which has been exploited for various image restoration tasks in the deep image prior (DIP) framework. The benefit of the inductive bias the network imposes in the DIP framework depends on the architecture. Therefore, researchers have studied how to automate the search to determine the best-performing model. However, common neural architecture search (NAS) techniques are resource- and time-intensive. Moreover, the best-performing model is determined for an entire dataset of images instead of independently for each image, which would be prohibitively expensive. In this work, we first show that the optimal neural architecture in the DIP framework is image-dependent. Leveraging this insight, we then propose an image-specific NAS strategy for the DIP framework that requires substantially less training than typical NAS approaches, effectively enabling image-specific NAS. For a given image, noise is fed to a large set of untrained CNNs, and the power spectral density (PSD) of their outputs is compared with that of the corrupted image using various metrics. Based on this, a small cohort of image-specific architectures is selected and trained to reconstruct the corrupted image. From this cohort, the model whose reconstruction is closest to the average of the reconstructed images is chosen as the final model. We justify the proposed strategy by (1) demonstrating its performance on a NAS benchmark comprising more than 500 models from a particular search space and (2) conducting extensive experiments on image denoising, inpainting, and super-resolution tasks. Our experiments show that image-specific metrics can reduce the search space to a small cohort of models, of which the best model outperforms current NAS approaches for image restoration.
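The selection step above — feed noise to untrained candidates and rank them by how closely their output PSD matches the corrupted image's — can be sketched as follows. Assumptions are labeled in the code: the "architectures" are repeated box blurs (a stand-in for untrained CNNs with different spectral biases), and the PSD distance is a plain log-scale Euclidean distance on radially averaged spectra, not any of the paper's specific metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

def psd(img):
    """Radially averaged power spectral density (coarse sketch)."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), weights=f.ravel()) / np.maximum(counts, 1)

def candidate_output(noise, smoothing):
    """Stand-in for an untrained CNN: repeated separable box blur of input noise.
    More smoothing ~= a stronger low-frequency spectral bias."""
    out = noise.copy()
    k = np.ones(3) / 3
    for _ in range(smoothing):
        out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, out)
        out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)
    return out

# "corrupted image": one noise draw with a known amount of smoothing
corrupted = candidate_output(rng.standard_normal((64, 64)), smoothing=4)
noise = rng.standard_normal((64, 64))

# rank candidate "architectures" by PSD distance to the corrupted image
target = np.log1p(psd(corrupted))
scores = {s: float(np.linalg.norm(np.log1p(psd(candidate_output(noise, s))) - target))
          for s in (0, 2, 4, 8)}
best = min(scores, key=scores.get)
```

No candidate is ever trained during this ranking — only a forward pass on noise is needed — which is where the claimed cost saving over conventional NAS comes from.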
Multimodal deep learning has been used to predict clinical endpoints and diagnoses from clinical routine data. However, these models suffer from scaling issues: they have to learn pairwise interactions between each piece of information in each data type, thereby escalating model complexity beyond manageable scales. This has so far precluded the widespread use of multimodal deep learning. Here, we present a new technical approach of "learnable synergies", in which the model only selects relevant interactions between data modalities and keeps an "internal memory" of relevant data. Our approach is easily scalable and naturally adapts to multimodal data inputs from clinical routine. We demonstrate this approach on three large multimodal datasets from radiology and ophthalmology and show that it outperforms state-of-the-art models in clinically relevant diagnosis tasks. Our new approach is transferable and will allow the application of multimodal deep learning to a broad set of clinically relevant problems.
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We cover the latest popular architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
The success of deep learning applications critically depends on the quality and scale of the underlying training data. Generative adversarial networks (GANs) can generate arbitrarily large datasets, but diversity and fidelity are limited, which has recently been addressed by denoising diffusion probabilistic models (DDPMs), whose superiority has been demonstrated on natural images. In this study, we propose Medfusion, a conditional latent DDPM for medical images. We compare our DDPM-based model against GAN-based models, which constitute the current state of the art in the medical domain. Medfusion was trained and compared with (i) StyleGAN-3 on n=101,442 images from the AIROGS challenge dataset to generate fundoscopies with and without glaucoma, (ii) ProGAN on n=191,027 images from the CheXpert dataset to generate radiographs with and without cardiomegaly, and (iii) wGAN on n=19,557 images from the CRCMS dataset to generate histopathological images with and without microsatellite stability. In the AIROGS, CRCMS, and CheXpert datasets, Medfusion achieved lower (=better) FID than the GANs (11.63 versus 20.43, 30.03 versus 49.26, and 17.28 versus 84.31). Also, fidelity (precision) and diversity (recall) were higher (=better) for Medfusion in all three datasets. Our study shows that DDPMs are a superior alternative to GANs for image synthesis in the medical domain.
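The FID numbers quoted above are Fréchet distances between Gaussians fitted to features of real and generated images (in the standard metric, Inception-network features). A minimal NumPy sketch of that distance, applied here to synthetic feature vectors rather than actual Inception embeddings:

```python
import numpy as np

def frechet_distance(feat_a, feat_b):
    """FID-style Frechet distance between Gaussians fitted to two feature sets:
    ||mu_a - mu_b||^2 + Tr(cov_a + cov_b - 2 (cov_a cov_b)^(1/2))."""
    mu_a, mu_b = feat_a.mean(0), feat_b.mean(0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # Tr((cov_a cov_b)^(1/2)) via the eigenvalues of the (non-symmetric) product;
    # they are real and non-negative in exact arithmetic, so clip numerical noise
    eig = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b) - 2 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.standard_normal((1000, 8))
close = rng.standard_normal((1000, 8)) * 1.05      # similar distribution -> low FID
far = rng.standard_normal((1000, 8)) * 2.0 + 3.0   # shifted + rescaled -> high FID
```

Lower values mean the fitted feature distributions are closer, which is why the smaller Medfusion numbers above are marked "=better".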
Recent advances in computer vision have shown promising results in image generation. Diffusion probabilistic models in particular have generated realistic images from textual input, as demonstrated by DALL-E 2, Imagen and Stable Diffusion. However, their use in medicine, where image data typically comprises three-dimensional volumes, has not been systematically evaluated. Synthetic images may play a crucial role in privacy-preserving artificial intelligence and can also be used to augment small datasets. Here we show that diffusion probabilistic models can synthesize high-quality medical imaging data, which we demonstrate for Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images. We provide quantitative measurements of their performance through a reader study with two medical experts who rated the quality of the synthesized images in three categories: realistic image appearance, anatomical correctness and consistency between slices. Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice score 0.91 without vs. 0.95 with synthetic data).
Automatically predicting the outcome of subjective listening tests is a challenging task. Ratings may vary from person to person even if preferences are consistent across listeners. While previous work focused on predicting listeners' ratings of individual stimuli (mean opinion scores), we focus on the simpler task of predicting subjective preference given two speech stimuli of the same text. We propose a model based on anti-symmetric twin neural networks, trained on pairs of waveforms and their corresponding preference scores. We explore attention and recurrent neural nets to account for the fact that the stimuli in a pair are not time-aligned. To obtain a large training set, we convert listeners' ratings from MUSHRA tests into values that reflect how often one stimulus in a pair was rated higher than the other. Specifically, we evaluate data obtained from twelve MUSHRA evaluations conducted over five years, containing different TTS systems trained on data from different speakers. Our results compare favorably to a state-of-the-art model trained to predict MOS scores.
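The anti-symmetric twin structure mentioned above can be illustrated compactly: both stimuli pass through the same scoring branch, and the preference logit is the difference of the two scores, so swapping the pair flips the logit exactly. The sketch below uses untrained random weights on fixed-size feature vectors — a structural illustration, not the paper's waveform model with attention or recurrence.

```python
import numpy as np

rng = np.random.default_rng(0)

class TwinPreference:
    """Anti-symmetric twin model: preference logit is g(a) - g(b) for a
    shared scoring network g, guaranteeing P(a > b) + P(b > a) = 1."""

    def __init__(self, dim, hidden=16):
        # untrained random weights (sketch of the structure only)
        self.w1 = rng.standard_normal((dim, hidden)) / np.sqrt(dim)
        self.w2 = rng.standard_normal(hidden) / np.sqrt(hidden)

    def g(self, x):
        """Shared scoring branch applied to each stimulus independently."""
        return np.tanh(x @ self.w1) @ self.w2

    def prefer(self, a, b):
        """P(stimulus a preferred over stimulus b), via a sigmoid on the logit."""
        return 1.0 / (1.0 + np.exp(-(self.g(a) - self.g(b))))

model = TwinPreference(dim=32)
a, b = rng.standard_normal(32), rng.standard_normal(32)
```

Training such a model on MUSHRA-derived pairwise frequencies (as the abstract describes) would fit `g` so that its score differences match observed preference rates; the anti-symmetry holds by construction regardless of the weights.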
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, from machine translation to self-directed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing the development of deep-learning-based approaches that promise further improvements in automatic readability assessment. In this contribution, we study the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combine these models with linguistic features and investigate the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel perform better than equally sized ensembles consisting only of GBERT or only of GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 shared task on text complexity assessment of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
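The benefit of averaging several regressors, as in the ensembles above, can be demonstrated with a toy experiment. The "models" below are stand-ins (the true target plus independent noise, not fine-tuned GBERT or GPT-2-Wechsel predictions); the point is that the RMSE of the averaged prediction is never worse than the average RMSE of the members, and is much better when their errors are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

y = rng.standard_normal(200)  # ground-truth readability scores (toy data)
# stand-ins for eight fine-tuned regressors: target plus independent noise
preds = [y + rng.standard_normal(200) * 0.5 for _ in range(8)]

def rmse(p, t):
    """Root mean squared error, the metric reported in the shared task."""
    return float(np.sqrt(np.mean((p - t) ** 2)))

individual = [rmse(p, y) for p in preds]
ensemble = rmse(np.mean(preds, axis=0), y)  # average the predictions, then score
```

With independent noise of standard deviation 0.5, averaging 8 members shrinks the error roughly by a factor of sqrt(8); in practice errors of similar models are correlated, which is one motivation for mixing two different architectures as the abstract reports.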
This paper reports on the second GENEA Challenge, to benchmark data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. Motion generated by all these systems was rendered to video using a standardized visualization pipeline and evaluated in several large crowdsourced user studies. Unlike when comparing different research papers, differences in results are due only to differences between methods, enabling direct comparisons between systems. This year's dataset was based on 18 hours of full-body motion capture, including fingers, of different persons engaging in dyadic conversations. Ten teams participated in the challenge across two tiers: full-body and upper-body gestures. For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluation decouples human-likeness from gesture appropriateness, which has been a major challenge in the field. The evaluation results are a revolution and a revelation: some synthetic conditions were rated as significantly more human-like than human motion capture. To the best of our knowledge, this has never before been demonstrated on a high-fidelity avatar. On the other hand, all synthetic motion was found to be vastly less appropriate for the speech than the original motion-capture recordings. Additional material is available via the project website https://youngwoo-yoon.github.io/geneachallenge2022/