The pairwise compatibility measure (CM) is a key component in solving the jigsaw puzzle problem (JPP) and many of its recently proposed variants. With the rapid rise of deep neural networks (DNNs), the trade-off between performance (i.e., accuracy) and computational efficiency has become a very significant issue. Whereas an end-to-end DNN-based CM model exhibits high performance, it becomes virtually infeasible on very large puzzles due to its highly intensive computation. On the other hand, exploiting the concept of embeddings to significantly reduce the computational cost has resulted in degraded performance, according to recent studies. This paper derives an advanced CM model (based on modified embeddings and a new loss function, called hard batch triplet loss) for closing the above gap between speed and accuracy; namely, a CM model that achieves state-of-the-art (SOTA) results in terms of performance and efficiency combined. We evaluated our newly derived CM on three commonly used datasets, and obtained a reconstruction improvement of 5.8% and 19.5% for the so-called Type-1 and Type-2 problem variants, respectively, compared to the best known results of previous CMs.
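For readers unfamiliar with the loss family referenced above, the minimal sketch below shows a generic batch-hard ("hardest-in-batch") triplet loss in PyTorch. It is only an illustration of the idea: the paper's hard batch triplet loss may differ in its details, and the margin value, normalization, and batch layout here are assumptions.

```python
# Sketch of a generic batch-hard triplet loss (not the paper's exact formulation).
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.2) -> torch.Tensor:
    """embeddings: (B, D) L2-normalized vectors; labels: (B,) class ids.
    Assumes every class in the batch has at least two samples."""
    dist = torch.cdist(embeddings, embeddings, p=2)           # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)         # same-label mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: the farthest sample sharing the anchor's label (excluding self).
    pos_dist = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    # Hardest negative: the closest sample with a different label.
    neg_dist = dist.masked_fill(same, float('inf')).min(dim=1).values

    return F.relu(pos_dist - neg_dist + margin).mean()

# Toy usage with random embeddings:
emb = F.normalize(torch.randn(8, 64), dim=1)
lbl = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = batch_hard_triplet_loss(emb, lbl)
```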
This paper introduces the novel CNN-based encoder Twin Embedding Network (TEN) for the jigsaw puzzle problem (JPP), which represents a puzzle piece with respect to its boundary in a latent embedding space. Combining this latent representation with a simple distance measure, we demonstrate improved accuracy levels of our newly proposed pairwise compatibility measure (CM), compared to those of various classical methods, for degraded puzzles with eroded tile boundaries. We focus on this problem instance for our case study, as it serves as an appropriate testbed for real-world scenarios. Specifically, we demonstrate an improvement of up to 8.5% and 16.8% in reconstruction accuracy for the so-called Type-1 and Type-2 problem variants, respectively. Furthermore, we also demonstrate that TEN is faster by a few orders of magnitude, on average, than a typical deep neural network (NN) model, i.e., it is as fast as the classical methods. In this regard, the paper makes a significant first attempt at bridging the gap between the relatively low accuracy (of classical methods) and the intensive computational complexity (of NN models) for practical, real-world puzzle-like problems.
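As an illustration of the twin-encoder idea, the sketch below pairs two small CNN branches with a negative Euclidean distance as the compatibility score. The layer sizes, input handling, and the choice of distance are assumptions made for illustration, not TEN's exact architecture.

```python
# Hedged sketch of a twin-embedding compatibility measure: two encoder branches
# embed adjacent pieces into a shared space; compatibility is a simple distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PieceEncoder(nn.Module):
    """Tiny CNN that maps a piece crop to an L2-normalized embedding."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):                                   # x: (B, 3, H, W)
        return F.normalize(self.net(x), dim=1)

left_encoder, right_encoder = PieceEncoder(), PieceEncoder()  # the two "twin" branches

def compatibility(left_piece: torch.Tensor, right_piece: torch.Tensor) -> torch.Tensor:
    """Higher scores mean the two pieces are more likely to be adjacent."""
    e_left, e_right = left_encoder(left_piece), right_encoder(right_piece)
    return -torch.norm(e_left - e_right, dim=1)              # negative Euclidean distance
```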
Learning the similarity between remote sensing (RS) images forms the basis of content-based RS image retrieval (CBIR). Recently, deep metric learning approaches that map the semantic similarity of images into an embedding (metric) space have become very popular. A common approach for learning the metric space relies on the selection of triplets of similar (positive) and dissimilar (negative) images with respect to a reference image, called the anchor. Selecting triplets is a difficult task, particularly for multi-label RS CBIR, where each training image is annotated by multiple class labels. To address this problem, in this paper we propose a novel triplet sampling method in the framework of deep neural networks (DNNs) defined for multi-label RS CBIR. The proposed method selects a small set of the most representative and informative triplets based on two main steps. In the first step, a set of anchors that are diverse from each other in the embedding space is selected from the current mini-batch using an iterative algorithm. In the second step, different positive and negative images are chosen for each anchor by evaluating the relevancy, hardness, and diversity of the images with respect to each other based on a novel strategy. Experimental results obtained on two multi-label benchmark archives show that selecting the most informative and representative triplets in the context of DNNs leads to: i) reducing the computational complexity of the DNN training phase without any significant loss in performance; and ii) an increase in learning speed, since informative triplets allow fast convergence. The code of the proposed method is publicly available at https://git.tu-berlin.de/rsim/image-reetrieval-from-tropls.
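The first step described above — choosing anchors that are mutually diverse in the embedding space — can be illustrated with a greedy farthest-point selection. This is only a sketch of the anchor-selection idea under that assumption; the paper's iterative algorithm and its selection criteria may differ.

```python
# Illustrative greedy selection of mutually distant anchors from a mini-batch.
import numpy as np

def select_diverse_anchors(embeddings: np.ndarray, num_anchors: int) -> list:
    """embeddings: (B, D) mini-batch embeddings; returns indices of mutually distant anchors."""
    chosen = [0]                                         # start from an arbitrary sample
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(chosen) < num_anchors:
        nxt = int(np.argmax(dists))                      # sample farthest from the current anchor set
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return chosen

# Example: pick 4 mutually distant anchors from a batch of 32 embeddings.
batch = np.random.randn(32, 128)
anchor_idx = select_diverse_anchors(batch, num_anchors=4)
```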
Handwritten digit recognition (HDR) is one of the most challenging tasks in the field of optical character recognition (OCR). Regardless of the language, HDR poses some inherent challenges, mostly due to variations in writing style across individuals, variations in the writing medium and environment, the inability to maintain the same strokes when writing any digit repeatedly, and so on. In addition, the structural complexities of the digits of a particular language may lead to ambiguity in HDR. Over the years, researchers have developed numerous offline and online HDR pipelines in which different image processing techniques are combined with traditional machine learning (ML)-based and/or deep learning (DL)-based architectures. Although there is evidence of extensive review studies on HDR in the literature for languages such as English, Arabic, Indian, Farsi, Chinese, etc., there is hardly any survey of Bengali HDR (BHDR) covering its challenges, the underlying recognition process, and possible future directions. In this paper, the characteristics and inherent ambiguities of Bengali handwritten digits are analyzed, along with a comprehensive insight into the state-of-the-art datasets and approaches for offline BHDR over the last two decades. Furthermore, several real-world application-specific studies involving BHDR are also discussed in detail. This paper will also serve as a compilation for researchers interested in the science behind offline BHDR, stimulating the exploration of new avenues of related research that may further lead to better offline recognition of Bengali handwritten digits in different application areas.
In this paper, we propose a robust sample generation scheme for constructing informative triplets. The proposed hard sample generation is a two-stage synthesis framework that produces hard samples through effective positive and negative sample generators over two stages. The first stage extends the anchor-positive pairs with a piecewise linear operation and improves the quality of the generated samples by carefully designing a conditional generative adversarial network to reduce the risk of mode collapse. The second stage exploits an adaptive reverse metric constraint to generate the final hard samples. Extensive experiments on several benchmark datasets verify that our method achieves superior performance over existing hard sample generation algorithms. In addition, we also find that our proposed hard sample generation method, combined with existing triplet mining strategies, can further improve deep metric learning performance.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Recent years have witnessed the breakthrough of face recognition with deep convolutional neural networks. Dozens of papers in the field of face recognition (FR) are published every year. Some of them have been applied in industry and play an important role in everyday life, in applications such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to examine the effect of backbone size and data distribution. This survey serves as material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
Genome engineering is undergoing unprecedented development and is now widely accessible. To ensure responsible biotechnology innovation and to reduce the misuse of engineered DNA sequences, it is crucial to identify the lab of origin of engineered plasmids. Genetic engineering attribution (GEA), the ability to make sequence-to-lab associations, would support forensic experts in this process. Here, we propose a metric learning-based method that ranks the most likely labs of origin while simultaneously producing embeddings for both plasmid sequences and labs. These embeddings can be used to perform various downstream tasks, such as clustering DNA sequences and labs, as well as using them as features in machine learning models. Our approach employs a circular-shift augmentation method and is able to correctly rank the lab of origin within its top-10 predictions 90% of the time, outperforming all recent state-of-the-art approaches. We also demonstrate that we can perform few-shot learning using only 10% of the sequences and obtain a top-10 accuracy of 76%. This means that we outperform the previous CNN-based approach using only a tenth of the data. We also demonstrate that we are able to extract key signatures in plasmid sequences for specific labs, allowing for an interpretable examination of the model's outputs.
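A hedged sketch of the ranking step: given learned embeddings for a query plasmid sequence and for each candidate lab, the labs can be ranked by similarity in the shared space. The cosine-similarity choice, array shapes, and the random stand-in embeddings below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative lab-of-origin ranking from a shared sequence/lab embedding space.
import numpy as np

def rank_labs(seq_embedding: np.ndarray, lab_embeddings: np.ndarray, top_k: int = 10) -> np.ndarray:
    """seq_embedding: (D,) query plasmid embedding; lab_embeddings: (num_labs, D).
    Returns the indices of the top-k most similar labs."""
    seq = seq_embedding / np.linalg.norm(seq_embedding)
    labs = lab_embeddings / np.linalg.norm(lab_embeddings, axis=1, keepdims=True)
    sims = labs @ seq                                    # cosine similarity of each lab to the query
    return np.argsort(-sims)[:top_k]                     # most similar labs first

# Example with random vectors standing in for learned embeddings:
top10 = rank_labs(np.random.randn(256), np.random.randn(1300, 256))
```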
The jigsaw puzzle problem, the construction of a coherent whole from a set of non-overlapping, unordered visual fragments, is fundamental to numerous applications; yet most of the literature of the past two decades has focused on less realistic puzzles whose pieces are identical squares. Here we formalize a new type of jigsaw puzzle in which the pieces are general convex polygons, generated by cutting a global polygonal shape/image with an arbitrary number of straight cuts, a generation model inspired by the celebrated lazy caterer's sequence. We analyze the theoretical properties of such puzzles, including the inherent challenges of solving them when the pieces are contaminated with geometric noise. To cope with such difficulties and obtain tractable solutions, we abstract the problem as a multi-body spring-mass dynamical system endowed with hierarchical loop constraints and a hierarchical reconstruction process. We define evaluation metrics and present experimental results on both apictorial and pictorial puzzles to show that they are solvable completely automatically.
This paper proposes a solution based on generative adversarial networks (GANs) for solving jigsaw puzzles. The problem assumes that an image is divided into equal square pieces and asks to recover the image according to the information provided by the pieces. Conventional jigsaw puzzle solvers often determine piece relationships based on the boundaries of the pieces, which ignores important semantic information. In this paper, we propose a GAN-based auxiliary learning method for solving jigsaw puzzles with unpaired images (i.e., with no prior knowledge of the initial images). We design a multi-task pipeline that includes (1) a classification branch to classify the jigsaw permutation, and (2) a GAN branch to recover the image in the correct order. The classification branch is constrained by pseudo-labels generated according to the shuffled pieces. The GAN branch concentrates on the image semantic information, where the generator produces natural images to fool the discriminator, while the discriminator distinguishes whether a given image belongs to the synthesized or the real target domain. The two branches are connected by a flow-based warp module, which is applied to warp the features into the correct order according to the classification results. The proposed method can solve jigsaw puzzles more efficiently by utilizing both semantic information and boundary information simultaneously. Qualitative and quantitative comparisons against several representative jigsaw puzzle solvers demonstrate the superiority of our method.
In recent years, a vast amount of visual content has been generated and shared in many domains, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges, particularly in searching databases for similar content, i.e., content-based image retrieval (CBIR), a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has greatly facilitated the instance search process. In this survey, we review recent instance retrieval works developed on the basis of deep learning algorithms and techniques, organizing the survey by deep network architecture types, deep features, feature embedding methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, through which we identify milestone works, reveal connections among the various methods, present commonly used benchmarks, evaluation results, and common challenges, and propose promising future directions.
Despite significant recent advances in the field of face recognition [10,14,15,17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches. To train, we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method. The benefit of our approach is much greater representational efficiency: we achieve state-of-the-art face recognition performance using only 128 bytes per face. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves 95.12%. Our system cuts the error rate in comparison to the best published result [15] by 30% on both datasets. We also introduce the concept of harmonic embeddings, and a harmonic triplet loss, which describe different versions of face embeddings (produced by different networks) that are compatible to each other and allow for direct comparison between each other.
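The online triplet mining mentioned above is commonly implemented as semi-hard negative selection: for each anchor-positive pair, a negative is chosen that lies farther from the anchor than the positive but still violates the margin. The sketch below illustrates that selection rule only; the batch construction and the fallback behaviour are assumptions, not FaceNet's exact recipe.

```python
# Illustrative semi-hard negative selection for one (anchor, positive) pair.
import torch

def semi_hard_negative(anchor: torch.Tensor, positive: torch.Tensor,
                       negatives: torch.Tensor, margin: float = 0.2):
    """anchor, positive: (D,) embeddings; negatives: (N, D) candidate embeddings.
    Returns the index of a semi-hard negative, or None if the band is empty."""
    d_ap = torch.norm(anchor - positive)
    d_an = torch.norm(negatives - anchor, dim=1)
    band = (d_an > d_ap) & (d_an < d_ap + margin)        # farther than the positive, yet inside the margin
    if not band.any():
        return None                                      # a full miner would fall back to another strategy
    candidates = torch.nonzero(band).flatten()
    return candidates[torch.argmin(d_an[candidates])].item()
```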
The emerging field of explainable artificial intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not providing information about what they represent), global explanation techniques visualize the concepts a model has generally learned to encode. Both types of methods therefore only provide partial insights and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim at combining the principles behind both local and global XAI to obtain more informative explanations. However, those methods are often limited to specific model architectures or impose additional requirements on the training regime or on data and label availability, which effectively rules out post-hoc application to arbitrary pre-trained models. In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and "what" questions without additional constraints. We further introduce the principle of Relevance Maximization for finding representative examples based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into the model's representations and reasoning through concept atlases, concept composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.
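The two ingredients named above — distance-weighted sampling and a simple margin-based loss — can be sketched as follows. The dimensionality, the cutoff, and the constants alpha and beta are illustrative values rather than the paper's tuned settings, and the density expression is only a common formulation for unit-norm embeddings.

```python
# Hedged sketch: (1) negative-sampling weights inversely proportional to the density
# of pairwise distances on the unit sphere, and (2) a margin-based pair loss of the
# form (alpha + y_ij * (D_ij - beta))_+ .
import torch

def distance_weights(d: torch.Tensor, dim: int = 128, cutoff: float = 0.5) -> torch.Tensor:
    """d: (N,) anchor-negative distances of unit-norm embeddings (values in (0, 2))."""
    d = d.clamp(min=cutoff, max=1.99)                    # avoid degenerate weights at the extremes
    log_q = (dim - 2.0) * torch.log(d) + ((dim - 3.0) / 2.0) * torch.log(1.0 - 0.25 * d ** 2)
    log_w = -log_q                                       # inverse density, up to a constant
    w = torch.exp(log_w - log_w.max())                   # subtract the max for numerical stability
    return w / w.sum()                                   # normalized sampling probabilities

def margin_loss(d_pos: torch.Tensor, d_neg: torch.Tensor,
                alpha: float = 0.2, beta: float = 1.2) -> torch.Tensor:
    """d_pos / d_neg: distances of positive pairs and of the sampled negative pairs."""
    return (torch.relu(alpha + d_pos - beta) + torch.relu(alpha - d_neg + beta)).mean()
```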
Semantic image segmentation is an important prerequisite for context awareness and autonomous robotics in surgery. The state of the art has focused on conventional RGB video data acquired during minimally invasive surgery, but fully-semantic scene segmentation based on spectral imaging data acquired during open surgery has received almost no attention to date. To address this gap in the literature, we investigate the following research questions based on hyperspectral imaging (HSI) data of pigs acquired in an open surgery setting: (1) What is an adequate representation of HSI data for neural network-based, fully automated organ segmentation, especially with respect to the spatial granularity of the data (pixels vs. superpixels vs. patches vs. full images)? (2) Is there a benefit to using HSI data compared to other modalities, namely RGB data and processed HSI data (e.g., tissue parameters such as oxygenation), when performing semantic organ segmentation? According to a comprehensive validation study based on 506 HSI images of 20 pigs, annotated with a total of 19 classes, deep learning-based segmentation performance increases — consistently across modalities — with the spatial context of the input data. Unprocessed HSI data offer an advantage over RGB data or processed data from the camera provider, with the advantage increasing as the size of the input to the neural network decreases. Maximal performance (HSI applied to whole images) yielded a mean dice similarity coefficient (DSC) of 0.89 (standard deviation (SD) 0.04), which is within the range of inter-rater variability (DSC of 0.89 (SD 0.07)). We conclude that HSI could become a powerful image modality for fully-automatic surgical scene understanding, with many advantages over traditional imaging, including the ability to recover additional functional tissue information.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Although deep reinforcement learning (RL) has recently seen many successes, its methods remain sample-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to remedy this by taking advantage of the rich supervisory signal in unlabeled data for learning state representations. This paper introduces three different representation learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of its input. GRICA does this by minimizing the mutual information between each feature and the other features. Moreover, GRICA only requires an unordered collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to taking a state as input, it also needs the previous state and the action connecting them. This method learns a state representation by predicting the next state of the environment from the current state and action. The predictor is used together with a graph search algorithm. (iii) The third algorithm learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess the inputs to deep RL, while the reward predictor is used for reward shaping. This method requires only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
Block based motion estimation is integral to inter prediction processes performed in hybrid video codecs. Prevalent block matching based methods that are used to compute block motion vectors (MVs) rely on computationally intensive search procedures. They also suffer from the aperture problem, which can worsen as the block size is reduced. Moreover, the block matching criteria used in typical codecs do not account for the resulting levels of perceptual quality of the motion compensated pictures that are created upon decoding. Towards achieving the elusive goal of perceptually optimized motion estimation, we propose a search-free block motion estimation framework using a multi-stage convolutional neural network, which is able to conduct motion estimation on multiple block sizes simultaneously, using a triplet of frames as input. This composite block translation network (CBT-Net) is trained in a self-supervised manner on a large database that we created from publicly available uncompressed video content. We deploy the multi-scale structural similarity (MS-SSIM) loss function to optimize the perceptual quality of the motion compensated predicted frames. Our experimental results highlight the computational efficiency of our proposed model relative to conventional block matching based motion estimation algorithms, for comparable prediction errors. Further, when used to perform inter prediction in AV1, the MV predictions of the perceptually optimized model result in average Bjontegaard-delta rate (BD-rate) improvements of -1.70% and -1.52% with respect to the MS-SSIM and Video Multi-Method Assessment Fusion (VMAF) quality metrics, respectively as compared to the block matching based motion estimation system employed in the SVT-AV1 encoder.
Dam reservoirs play an important role in meeting sustainable development goals and global climate targets. However, particularly for small dam reservoirs, there is a lack of consistent data on their geographical locations. To address this data gap, a promising approach is automated dam reservoir extraction based on globally available remote sensing imagery. It can be considered a fine-grained variant of water body extraction, which involves extracting the water areas in an image and then separating dam reservoirs from natural water bodies. We propose a novel deep neural network (DNN)-based pipeline that decomposes dam reservoir extraction into water body segmentation and dam reservoir recognition. Water bodies are first separated from background land by a segmentation model, and each water body is then predicted to be either a dam reservoir or a natural water body by a classification model. For the former step, point-level metric learning with triplets across images is injected into the segmentation model to address the contour ambiguity between water areas and land regions. For the latter step, prior-guided metric learning with triplets from clusters is injected into the classification model to optimize the image embedding space at a fine-grained level according to reservoir clusters. To facilitate future research, we establish a benchmark dataset with earth imagery data and human-labeled reservoirs from river basins in West Africa and India. Extensive experiments were conducted on this benchmark for the water body segmentation task, the dam reservoir recognition task, and the joint dam reservoir extraction task. Superior performance has been observed in the respective tasks when comparing our method with state-of-the-art approaches.