Pansharpening enhances the spatial detail of a multispectral image of high spectral resolution using features from a panchromatic image of high spatial resolution. Many traditional pansharpening methods exist, but producing images that exhibit both high spectral and spatial fidelity remains an open problem. Recently, deep learning has been used to produce promising pansharpened images; however, most of these methods treat the multispectral and panchromatic images alike, using the same network for feature extraction of both. In this work, we propose a novel dual-attention-based two-stream network. It first performs feature extraction of the two images using two separate networks, encoders equipped with attention mechanisms that recalibrate the extracted features. The features are then fused into a compact representation that is fed into an image reconstruction network to produce the pansharpened image. Experimental results on the Pléiades dataset, using standard quantitative evaluation metrics and visual inspection, show that the proposed method produces pansharpened images of better quality than the compared methods.
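To make the two-stream design concrete, here is a minimal sketch in PyTorch of two separate attention-equipped encoder streams whose recalibrated features are fused and passed to a reconstruction network. Layer sizes, the squeeze-and-excitation form of the attention, and the class names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a dual-attention two-stream pansharpening network
# (hypothetical layer sizes; the paper's exact architecture may differ).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style recalibration of feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)

class Stream(nn.Module):
    """One encoder stream with attention-based feature recalibration."""
    def __init__(self, in_ch, feat=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.attn = ChannelAttention(feat)
    def forward(self, x):
        return self.attn(self.encode(x))

class TwoStreamPansharpener(nn.Module):
    def __init__(self, ms_bands=4, feat=32):
        super().__init__()
        self.ms_stream = Stream(ms_bands, feat)   # multispectral branch
        self.pan_stream = Stream(1, feat)         # panchromatic branch
        self.reconstruct = nn.Sequential(         # fuse and reconstruct
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, ms_bands, 3, padding=1))
    def forward(self, ms_up, pan):
        # ms_up: multispectral image upsampled to the panchromatic grid
        fused = torch.cat([self.ms_stream(ms_up), self.pan_stream(pan)], dim=1)
        return self.reconstruct(fused)
```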
Continuous monitoring of foot ulcer healing is needed to ensure the efficacy of a given treatment and to avoid any deterioration. Foot ulcer segmentation is an important step in wound diagnosis. We developed a model that is similar in spirit to the well-established encoder-decoder and residual convolutional neural networks. Our model includes residual connections along with channel and spatial attention integrated within each convolutional block. A simple approach of patch-based training, test-time augmentation, and majority voting over the obtained predictions leads to superior performance. Our model does not leverage any readily available backbone architecture, pre-training on a similar external dataset, or any transfer learning technique. With a total of about 5 million network parameters, it is a significantly lightweight model compared to the available state-of-the-art models for the foot ulcer segmentation task. Our experiments present results at both the patch level and the image level. Applied to the publicly available Foot Ulcer Segmentation (FUSeg) Challenge dataset from MICCAI 2021, our model achieves state-of-the-art image-level performance of 88.22% in terms of the Dice similarity score and ranks second on the official challenge leaderboard. We thereby show that a very simple solution can compare favorably with far more advanced architectures.
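A minimal sketch of the kind of building block described follows, assuming PyTorch and a CBAM-style formulation of the channel and spatial attention; the actual block in the paper may differ in ordering and sizes.

```python
# Sketch of a residual convolutional block with channel and spatial
# attention (hypothetical sizes; CBAM-style attention is an assumption).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        # Pool across channels, then predict a per-pixel attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class AttentiveResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        self.spatial_attn = SpatialAttention()
    def forward(self, x):
        y = self.body(x)
        y = y * self.channel_attn(y)   # channel attention
        y = self.spatial_attn(y)       # spatial attention
        return torch.relu(x + y)       # residual connection
```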
Due to their increasing spread, confidence in neural network predictions is becoming more and more important. However, basic neural networks do not deliver certainty estimates and may suffer from over- or under-confidence. Many researchers have worked to understand and quantify the uncertainty in neural network predictions. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, separated into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and the different branches of these fields as well as the latest developments are discussed. For practical applications, we discuss different measures of uncertainty, approaches for calibrating neural networks, and give an overview of existing baselines and implementations. Examples from a wide range of challenging fields give an idea of the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
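As a concrete example of one of the surveyed families, a minimal sketch of ensemble-based uncertainty estimation follows: disagreement between independently trained ensemble members approximates the reducible model uncertainty, while the average per-member entropy reflects the irreducible data uncertainty. PyTorch is assumed, and the decomposition shown is the standard mutual-information split, not any single surveyed method.

```python
# Minimal sketch of ensemble-based uncertainty estimation; the models
# passed in are placeholders for independently trained classifiers.
import torch

def ensemble_uncertainty(models, x):
    """Return mean prediction, model uncertainty, and data uncertainty."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    mean = probs.mean(dim=0)                               # ensemble prediction
    total = -(mean * mean.clamp_min(1e-12).log()).sum(-1)  # total entropy
    per_model = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    data_unc = per_model.mean(dim=0)       # expected entropy (data uncertainty)
    model_unc = total - data_unc           # mutual information (model uncertainty)
    return mean, model_unc, data_unc
```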
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
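As a concrete illustration of the quantitative evaluation step, the following sketch scores a predicted GUI description against reference descriptions with BLEU via NLTK; the example captions are invented, and the study's exact metric configuration may differ.

```python
# Scoring a captioning model's predicted GUI description against human
# references with a standard machine translation metric (BLEU via NLTK).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "screen for logging into the application".split(),
    "login page with username and password fields".split(),
]
candidate = "login screen with username and password inputs".split()

score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```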
In this paper, we reduce the complexity of approximating the correlation clustering problem from $O(m\times\left( 2+ \alpha (G) \right)+n)$ to $O(m+n)$ for any given value of $\varepsilon$ for a complete signed graph with $n$ vertices and $m$ positive edges, where $\alpha(G)$ is the arboricity of the graph. Our approach gives the same output as the original algorithm and makes it possible to implement the algorithm in a fully dynamic setting where edge sign flipping and vertex addition/removal are allowed. Constructing this index costs $O(m)$ memory and $O(m\times\alpha(G))$ time. We also study the structural properties of the non-agreement measure used in the approximation algorithm. The theoretical results are accompanied by a full set of experiments on seven real-world graphs. These results show the superiority of our index-based algorithm over the non-index one, with a 34% decrease in running time on average.
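Since the abstract does not spell out the measure, the sketch below illustrates one common formulation of non-agreement — the normalised symmetric difference of closed positive neighbourhoods — together with the kind of neighbour-set index that supports dynamic updates; the exact definition and normalisation used by the paper are assumptions here.

```python
# Hedged sketch of a non-agreement measure on a signed graph, backed by a
# vertex -> positive-neighbour-set index (O(m) memory overall).
def non_agreement(index, u, v):
    nu, nv = index[u] | {u}, index[v] | {v}      # closed positive neighbourhoods
    return len(nu ^ nv) / max(len(nu), len(nv))  # symmetric difference, normalised

index = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(non_agreement(index, 1, 2))  # 0.0: vertices 1 and 2 fully agree
print(non_agreement(index, 1, 4))  # 1.0: vertices 1 and 4 are in non-agreement

# Dynamic update when edge (a, b) flips from negative to positive:
def add_positive_edge(index, a, b):
    index[a].add(b); index[b].add(a)
```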
This paper proposes a novel self-supervised based Cut-and-Paste GAN to perform foreground object segmentation and generate realistic composite images without manual annotations. We accomplish this goal by a simple yet effective self-supervised approach coupled with the U-Net based discriminator. The proposed method extends the ability of the standard discriminators to learn not only the global data representations via classification (real/fake) but also learn semantic and structural information through pseudo labels created using the self-supervised task. The proposed method empowers the generator to create meaningful masks by forcing it to learn informative per-pixel as well as global image feedback from the discriminator. Our experiments demonstrate that our proposed method significantly outperforms the state-of-the-art methods on the standard benchmark datasets.
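The compositing step at the core of the cut-and-paste idea is simple to state; the following sketch (tensor shapes are illustrative assumptions) shows the generator's soft mask cutting a foreground out of one image and pasting it onto another, producing the composite that the U-Net discriminator must judge.

```python
# Sketch of the cut-and-paste compositing step.
import torch

def cut_and_paste(fg_image, mask, bg_image):
    """Composite = mask * foreground + (1 - mask) * background.

    fg_image, bg_image: (B, 3, H, W) in [0, 1]
    mask: (B, 1, H, W) soft mask predicted by the generator
    """
    return mask * fg_image + (1.0 - mask) * bg_image

fg = torch.rand(2, 3, 64, 64)
bg = torch.rand(2, 3, 64, 64)
mask = torch.rand(2, 1, 64, 64)
composite = cut_and_paste(fg, mask, bg)  # fed to the U-Net discriminator
```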
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with them. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
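The abstract does not give the paper's exact estimator; as a hedged illustration, the sketch below approximates inter-rater reliability as the mean pairwise Dice similarity between annotators' segmentation masks.

```python
# Hypothetical inter-rater reliability estimate: mean pairwise Dice
# similarity over all annotator pairs for one image.
import itertools
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * (a & b).sum() + eps) / (a.sum() + b.sum() + eps)

def inter_rater_reliability(masks):
    scores = [dice(a, b) for a, b in itertools.combinations(masks, 2)]
    return float(np.mean(scores))

annotators = [np.random.rand(128, 128) > 0.5 for _ in range(3)]
print(inter_rater_reliability(annotators))  # ~0.5 for random masks
```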
Finding and localizing conceptual changes between two scenes, in terms of the presence or removal of objects in two images of the same scene taken at different times, is of great significance in special care applications. This is mainly because the addition or removal of important objects in some environments can be harmful. As a result, there is a need for a program that locates these differences using machine vision. The most important challenges of this problem are changes in lighting conditions and the presence of shadows in the scene; the proposed methods must therefore be robust to these challenges. In this article, a method based on deep convolutional neural networks using transfer learning is introduced, trained with an intelligent data synthesis process. The results of this method are tested and presented on the dataset provided for this purpose. It is shown that the presented method is more efficient than other methods and can be used in a variety of real industrial environments.
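A hedged sketch of a transfer-learning setup of this kind follows: a frozen pretrained backbone (ResNet-18 here, an illustrative choice) extracts features from both images, and a small head trained on synthesized pairs turns their feature difference into a coarse change map. None of these specifics are claimed to be the paper's.

```python
# Transfer-learning sketch for change localisation between image pairs.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone = nn.Sequential(*list(backbone.children())[:-2])  # keep conv features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # freeze pretrained weights

change_head = nn.Conv2d(512, 1, kernel_size=1)  # trained on synthesized pairs

def change_map(img_before, img_after):
    f1, f2 = backbone(img_before), backbone(img_after)
    return torch.sigmoid(change_head(torch.abs(f1 - f2)))

before = torch.rand(1, 3, 224, 224)
after = torch.rand(1, 3, 224, 224)
print(change_map(before, after).shape)  # (1, 1, 7, 7) coarse change map
```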
Simulation-based falsification is a practical testing method to increase confidence that the system will meet safety requirements. Because full-fidelity simulations can be computationally demanding, we investigate the use of simulators with different levels of fidelity. As a first step, we express the overall safety specification in terms of environmental parameters and structure this safety specification as an optimization problem. We propose a multi-fidelity falsification framework using Bayesian optimization, which is able to determine at which level of fidelity we should conduct a safety evaluation in addition to finding possible instances from the environment that cause the system to fail. This method allows us to automatically switch between inexpensive, inaccurate information from a low-fidelity simulator and expensive, accurate information from a high-fidelity simulator in a cost-effective way. Our experiments on various environments in simulation demonstrate that multi-fidelity Bayesian optimization has falsification performance comparable to single-fidelity Bayesian optimization but with much lower cost.
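To illustrate the cost trade-off (not the paper's exact acquisition; the Bayesian-optimization surrogate is omitted and random search stands in for candidate proposal), the sketch below probes candidates with the cheap low-fidelity simulator and escalates to the expensive high-fidelity one only when a violation looks plausible.

```python
# Hedged sketch of multi-fidelity falsification with a cost budget.
import random

def falsify(rob_low, rob_high, sample_env, budget=100.0,
            cost_low=1.0, cost_high=10.0, escalate_below=0.2):
    """rob_*: robustness of the safety spec (negative = failure)."""
    spent = 0.0
    while spent < budget:
        env = sample_env()
        r = rob_low(env); spent += cost_low        # cheap, inaccurate probe
        if r < escalate_below:                     # promising: verify expensively
            r = rob_high(env); spent += cost_high  # accurate evaluation
            if r < 0:
                return env                         # counterexample found
    return None

# Toy problem: the system fails when the environment parameter exceeds 0.95.
counterexample = falsify(
    rob_low=lambda e: 0.95 - e + random.gauss(0, 0.05),  # noisy cheap model
    rob_high=lambda e: 0.95 - e,                         # exact model
    sample_env=lambda: random.random())
print(counterexample)
```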
Ensemble learning combines results from multiple machine learning models in order to provide a better and optimised predictive model with reduced bias, variance and improved predictions. However, in federated learning it is not feasible to apply centralised ensemble learning directly due to privacy concerns. Hence, a mechanism is required to combine the results of local models to produce a global model. Most distributed consensus algorithms, such as Byzantine fault tolerance (BFT), do not normally perform well in such applications. This is because, in such methods, the predictions of some of the peers are disregarded, so a majority of peers can win without even considering other peers' decisions. Additionally, the confidence score of each peer's result is not normally taken into account, although it is an important feature to consider for ensemble learning. Moreover, the problem of a tie event is often left unaddressed by methods such as BFT. To fill these research gaps, we propose PoSw (Proof of Swarm), a novel distributed consensus algorithm for ensemble learning in a federated setting, inspired by particle-swarm-based algorithms for solving optimisation problems. The proposed algorithm is theoretically proved to always converge in a relatively small number of steps and has mechanisms to resolve tie events while trying to achieve sub-optimum solutions. We experimentally validated the performance of the proposed algorithm using ECG classification as an example application in healthcare, showing that the ensemble learning model outperformed all local models and even the FL-based global model. To the best of our knowledge, the proposed algorithm is the first attempt to reach consensus over the output results of distributed models trained using federated learning.
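To make the ingredients concrete, the sketch below shows confidence-weighted voting with a deterministic tie-break — the problem setting PoSw addresses; it is not the PoSw algorithm itself, and all names and values are illustrative.

```python
# Confidence-weighted majority voting with deterministic tie-breaking.
from collections import defaultdict

def weighted_consensus(votes):
    """votes: list of (peer_id, predicted_label, confidence in [0, 1])."""
    tally = defaultdict(float)
    for _, label, conf in votes:
        tally[label] += conf              # unlike BFT, no peer's vote is discarded
    best = max(tally.values())
    winners = sorted(l for l, s in tally.items() if s == best)
    return winners[0]                     # deterministic tie-break on label order

votes = [("p1", "normal", 0.9), ("p2", "afib", 0.6), ("p3", "afib", 0.55)]
print(weighted_consensus(votes))  # "afib": two moderately confident peers win
```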