Hypercomplex neural networks have proven to reduce the overall number of parameters while ensuring valuable performance by leveraging the properties of Clifford algebras. Recently, hypercomplex linear layers have been further improved by involving efficient parameterized Kronecker products. In this paper, we define the parameterization of hypercomplex convolutional layers and introduce the family of parameterized hypercomplex neural networks (PHNNs), which are lightweight and efficient large-scale models. Our method grasps the convolution rules and the filter organization directly from data, without requiring a rigidly predefined domain structure to follow. PHNNs are flexible to operate in any user-defined or tuned domain, from 1D to $n$D, regardless of whether the algebra rules are preset. Such malleability allows processing multidimensional inputs in their natural domain, without annexing further dimensions as is instead done in quaternion neural networks for 3D inputs such as color images. As a result, the proposed family of PHNNs operates with $1/n$ free parameters with respect to its analog in the real domain. We demonstrate the versatility of this approach to multiple domains of application by performing experiments on various image datasets as well as audio datasets, in which our method outperforms real- and quaternion-valued counterparts. Full code is available at: https://github.com/elegan23/hypernets.
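To make the parameter saving concrete, here is a minimal sketch (PyTorch; the class and tensor names are ours, not from the repository above) of a linear layer whose weight is a learned sum of Kronecker products - the construction that PHNNs extend from linear to convolutional layers:

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Hypercomplex-style linear layer with weight W = sum_i kron(A_i, F_i).
    The n small 'rule' matrices A_i are learned from data instead of being
    fixed by a predefined algebra (n=4 would mimic quaternions)."""
    def __init__(self, n, in_features, out_features):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        self.F = nn.Parameter(torch.randn(n, out_features // n, in_features // n) * 0.1)

    def forward(self, x):
        # Full (out, in) weight assembled on the fly: ~in*out/n free parameters
        W = sum(torch.kron(self.A[i], self.F[i]) for i in range(self.n))
        return x @ W.T

layer = PHMLinear(n=4, in_features=64, out_features=128)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 128])
```

With n=4, this layer stores 4·4·4 + 4·32·16 = 2,112 weights in place of the 8,192 of a dense real-valued layer, matching the claimed roughly $1/n$ parameter count.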
Spatial audio methods are gaining a growing interest due to the spread of immersive audio experiences and applications, such as virtual and augmented reality. For these purposes, 3D audio signals are often acquired through arrays of Ambisonics microphones, each comprising four capsules that decompose the sound field in spherical harmonics. In this paper, we propose a dual quaternion representation of the spatial sound field acquired through an array of two First Order Ambisonics (FOA) microphones. The audio signals are encapsulated in a dual quaternion that leverages quaternion algebra properties to exploit correlations among them. This augmented representation with 6 degrees of freedom (6DOF) involves a more accurate coverage of the sound field, resulting in a more precise sound localization and a more immersive audio experience. We evaluate our approach on a sound event localization and detection (SELD) benchmark. We show that our dual quaternion SELD model with temporal convolution blocks (DualQSELD-TCN) achieves better results with respect to real and quaternion-valued baselines thanks to our augmented representation of the sound field. Full code is available at: https://github.com/ispamm/DualQSELD-TCN.
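For readers unfamiliar with the algebra, the sketch below (NumPy; the function names are ours) shows the dual quaternion product built on the Hamilton product, $\hat{q}_1\hat{q}_2 = q_{r,1}q_{r,2} + \varepsilon\,(q_{r,1}q_{d,2} + q_{d,1}q_{r,2})$ with $\varepsilon^2 = 0$ - the operation that lets the two FOA signals interact inside a single representation:

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dual_quat_mul(qr1, qd1, qr2, qd2):
    """Dual quaternion product (qr1 + eps*qd1)(qr2 + eps*qd2), with eps^2 = 0."""
    real = hamilton(qr1, qr2)
    dual = hamilton(qr1, qd2) + hamilton(qd1, qr2)
    return real, dual
```

Each FOA microphone contributes four channels (one quaternion), so the real and dual parts naturally host the two microphones, and the cross terms in the dual part are what couple them.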
Traditionally, deep learning methods for breast cancer classification perform a single-view analysis. However, radiologists simultaneously analyze all four views that compose a mammography exam, owing to the correlations contained in mammography views, which present crucial information for identifying tumors. In light of this, some studies have started to propose multi-view methods. Nevertheless, in such existing architectures, mammogram views are processed as independent images by separate convolutional branches, thus losing correlations among them. To overcome such limitations, in this paper we propose a novel approach for multi-view breast cancer classification based on parameterized hypercomplex neural networks. Thanks to hypercomplex algebra properties, our networks are able to model, and thus leverage, existing correlations between the different views that comprise a mammogram, thus mimicking the reading process performed by clinicians. The proposed methods are able to handle the information of a patient altogether without breaking the multi-view nature of the exam. We define architectures designed to process two-view exams, namely PHResNets, and four-view exams, i.e., PHYSEnet and PHYBOnet. Through an extensive experimental evaluation conducted with publicly available datasets, we demonstrate that our proposed models clearly outperform real-valued counterparts and also state-of-the-art methods, proving that breast cancer classification benefits from the proposed multi-view architectures. We also assess the method's robustness beyond mammogram analysis by considering different benchmarks, as well as a finer-scaled task such as segmentation. Full code and pretrained models for complete reproducibility of our experiments are freely available at: https://github.com/ispamm/PHBreast.
The availability of the sheer volume of Copernicus Sentinel imagery has created new opportunities for large-scale land use/land cover (LULC) mapping using deep learning. Training on such large datasets, however, is a non-trivial task. In this work, we experiment with the BigEarthNet dataset for LULC image classification and benchmark different state-of-the-art models, including convolutional neural network, multi-layer perceptron, vision transformer, EfficientNet and Wide Residual Network (WRN) architectures. Our aim is to jointly optimize classification accuracy, training time and inference rate. We propose an EfficientNet-based framework for compound scaling of WRNs in terms of network depth, width and input data resolution, in order to efficiently train and test different model setups. We design a novel scaled WRN architecture enhanced with an efficient channel attention mechanism. Our proposed lightweight model has far fewer trained parameters, achieves a 4.5% higher average F-score classification accuracy across all 19 LULC classes, and trains twice as fast as the state-of-the-art ResNet50 model that we use as a baseline. We provide more than 50 trained models, together with our code for distributed training on multiple GPU nodes.
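The compound-scaling rule itself can be sketched in a few lines (the base coefficients below are illustrative placeholders, not the values tuned in the paper):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """EfficientNet-style compound scaling applied to a WRN: a single
    coefficient phi jointly scales network depth, width and input
    resolution. alpha/beta/gamma are assumed example values."""
    return alpha ** phi, beta ** phi, gamma ** phi

# e.g. scale a WRN-28-10 operating on 120x120 BigEarthNet patches
d, w, r = compound_scale(phi=2)
print(round(28 * d), round(10 * w), round(120 * r))
```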
We study complex-valued scaling as a type of symmetry natural to and unique of complex-valued measurements and representations. Deep Complex Networks (DCN) extend real-valued algebra to the complex domain without addressing complex-valued scaling. SurReal takes a restrictive manifold view of complex numbers, adopting a distance metric to achieve complex-scaling invariance while losing rich complex-valued information. We analyze complex-valued scaling as a co-domain transformation and design novel neural network layers invariant to this special transformation. We also propose novel complex-valued representations of RGB images, where complex-valued scaling indicates a hue shift or correlated changes across color channels. Benchmarked on MSTAR, CIFAR10, CIFAR100 and SVHN, our co-domain symmetric (CDS) classifiers deliver higher accuracy, better generalization, robustness to co-domain transformations, and lower model bias and variance than DCN and SurReal, with fewer parameters.
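To make the RGB example concrete, the snippet below (NumPy) uses an assumed, simplified opponent-color encoding - not necessarily the paper's exact representation - in which the two chroma components of a pixel form one complex number, so a unit-modulus complex scale rotates hue while leaving the chroma magnitude unchanged:

```python
import numpy as np

# Assumed chroma encoding: real part ~ red-green opponent channel,
# imaginary part ~ blue-yellow opponent channel.
r, g, b = 0.8, 0.3, 0.2
chroma = (r - g) + 1j * (b - (r + g) / 2)

# Complex-valued scaling z -> s*z acts on the co-domain: a unit-modulus s
# is a pure hue rotation; |s| != 1 rescales both opponent channels together.
s = np.exp(1j * np.pi / 3)          # hue shift by 60 degrees
shifted = s * chroma
print(abs(chroma), abs(shifted))    # equal magnitudes: hue shift only
```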
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Deep neural networks have been the driving force behind the success of classification tasks such as object and audio recognition. Many recently proposed architectures achieve impressive results and generalization, yet most of them appear disconnected from one another. In this work, we cast the study of deep classifiers under a unifying framework. In particular, we express state-of-the-art architectures (e.g., residual and non-local networks) in the form of different-degree polynomials of the input. Our framework provides insights into the inductive bias of each model and enables natural extensions building on their polynomial nature. The efficacy of the proposed models is evaluated on standard image and audio classification benchmarks. The expressivity of the proposed models is highlighted both in terms of increased model performance and of model compression. Lastly, the extensions allowed by this taxonomy show benefits in the presence of limited data and long-tailed data distributions. We expect this taxonomy to provide links between existing domain-specific architectures. The source code is available at https://github.com/grigorisg9gr/polynomials-for-aigmenting-nns.
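As a minimal instance of this polynomial view (the factorization and names below are our own illustration, not the paper's exact parameterization), a second-degree block combines a linear term with a Hadamard product of two linear projections of the input:

```python
import torch
import torch.nn as nn

class Degree2PolyBlock(nn.Module):
    """y = W1 x + (W2 x) * (W3 x): a second-degree polynomial of the input.
    Stacking such blocks, or re-injecting x multiplicatively, raises the
    overall polynomial degree - the lens through which residual and
    non-local architectures can be compared."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)
        self.w2 = nn.Linear(dim, dim, bias=False)
        self.w3 = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        return self.w1(x) + self.w2(x) * self.w3(x)

block = Degree2PolyBlock(16)
print(block(torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```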
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we propose one of the early works on incremental learning for ViT architectures, comparing functional, weight and attention regularization approaches, and propose a novel, effective asymmetric loss. Finally, we present a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. We then conclude with some future directions and closing remarks.
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
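As a minimal illustration of category (i) (a generic post-training sketch, not a specific method from the article), uniform affine quantization maps a float tensor onto a b-bit integer grid and back, exposing the rounding error a quantized network must tolerate:

```python
import numpy as np

def quantize_dequantize(x, bits=8):
    """Uniform affine quantization: map x onto a `bits`-bit integer grid,
    then dequantize. The difference from x is the quantization error."""
    qmax = 2 ** bits - 1
    scale = (x.max() - x.min()) / qmax
    zero_point = np.round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax)
    return scale * (q - zero_point)

w = np.random.randn(64).astype(np.float32)
w_q = quantize_dequantize(w, bits=4)
print(np.abs(w - w_q).max())  # worst-case rounding error at 4 bits
```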
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downward neurons of the network. Each of the downward neurons will use their copy of this signal as one of its many dendritic inputs, integrate them all and fire an output, if above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upward neuron, meaning that in practice the same activation is shared between all the downward neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation in the output of the upward neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings, before the linear combination. We implement this new model into a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate their FLOPs and weights change. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets of up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
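One way to realize "independent nonlinear filtering per dendrite" in a ReLU unit is sketched below (PyTorch; this is our own minimal rendering, not the authors' Keras implementation): the ReLU is applied per connection, before the weighted sum, instead of once at the upward neuron's output:

```python
import torch
import torch.nn as nn

class DendriticReLULinear(nn.Module):
    """Per-dendrite filtering: y_i = sum_j relu(w_ij * x_j) + b_i.
    The nonlinearity moves from the upward neuron's output onto each
    incoming connection, so downward neurons no longer share one
    activation of the same signal."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):                      # x: (batch, in_features)
        pre = x.unsqueeze(1) * self.weight     # (batch, out, in): one value per dendrite
        return torch.relu(pre).sum(dim=-1) + self.bias

layer = DendriticReLULinear(32, 8)
print(layer(torch.randn(4, 32)).shape)  # torch.Size([4, 8])
```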
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of the number of high-precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
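The Binary-Weight-Network approximation itself is compact enough to state in a few lines (a PyTorch sketch of the per-filter scheme the paper describes):

```python
import torch

def binarize_conv_weights(W):
    """Approximate a conv weight tensor W (out, in, kH, kW) as alpha * B,
    with B = sign(W) and alpha the per-filter mean absolute value, the
    scaling factor derived in the XNOR-Net paper."""
    alpha = W.abs().mean(dim=(1, 2, 3), keepdim=True)
    return alpha * torch.sign(W)

W = torch.randn(16, 3, 3, 3)
W_bin = binarize_conv_weights(W)
print(torch.mean((W - W_bin) ** 2))  # approximation error
```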
In this work, we design a fully complex-valued neural network for the task of iris recognition. Unlike the problem of general object recognition, where real-valued neural networks can be used to extract pertinent features, iris recognition depends on the extraction of both phase and magnitude information from the input iris texture in order to better represent its biometric content. This necessitates the extraction and processing of phase information that cannot be effectively handled by real-valued neural networks. In this regard, we design a fully complex-valued neural network that can better capture the multi-scale, multi-resolution and multi-orientation phase and amplitude features of the iris texture. We show a strong correspondence of the proposed complex-valued iris recognition network with Gabor wavelets that are used to generate the classical IrisCode; however, the proposed method enables a new capability of automatic complex-valued feature learning tailored for iris recognition. We conduct experiments on three benchmark datasets - ND-CrossSensor-2013, CASIA-Iris-Thousand and UBIRIS.v2 - and show the benefit of the proposed network for the task of iris recognition. We exploit visualization schemes to convey how the complex-valued network, compared to a standard real-valued network, extracts fundamentally different features from the iris texture.
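For context on the correspondence with Gabor wavelets, recall that the classical IrisCode keeps only the phase quadrant of a complex Gabor response; the NumPy sketch below shows this standard quantization, which discards exactly the magnitude information a complex-valued network can additionally retain:

```python
import numpy as np

def iriscode_bits(gabor_response):
    """Classical IrisCode phase quantization: keep only the quadrant of
    the complex Gabor response, i.e. the signs of its real and imaginary
    parts - two bits per location, magnitude discarded."""
    return np.stack([gabor_response.real >= 0,
                     gabor_response.imag >= 0], axis=-1).astype(np.uint8)

resp = np.exp(1j * np.random.uniform(-np.pi, np.pi, size=(8, 32)))
print(iriscode_bits(resp).shape)  # (8, 32, 2): two bits per location
```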
Transformer-based language models utilize the attention mechanism to obtain substantial performance improvements in almost all natural language processing (NLP) tasks. Similar attention structures are also extensively studied in several other areas. Although the attention mechanism significantly enhances model performance, its quadratic complexity prevents efficient processing of long sequences. Recent works have focused on eliminating this computational inefficiency and have shown that transformer-based models can still reach competitive results without the attention layer. A pioneering study proposed FNet, which replaces the attention layer with the Fourier transform (FT) in the transformer encoder architecture. FNet achieves performance competitive with the original transformer encoder model while accelerating the training process by removing the computational burden of the attention mechanism. However, the FNet model ignores essential properties of the FT from classical signal processing that could be leveraged to further increase model efficiency. We propose different methods to deploy the FT efficiently in transformer encoder models. Our proposed architectures have fewer model parameters, shorter training times, lower memory usage and some additional performance improvements. We demonstrate these improvements through extensive experiments on common benchmarks.
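For reference, the FNet mixing sublayer that these methods start from can be sketched in a few lines of PyTorch (a minimal rendering of the published FNet rule; the improvements this abstract proposes are not shown):

```python
import torch

def fnet_mixing(x):
    """FNet mixing sublayer: a 2D discrete Fourier transform over the
    sequence and hidden dimensions, keeping only the real part. It
    replaces self-attention, dropping the cost from quadratic to
    O(n log n) in the sequence length."""
    return torch.fft.fft2(x).real

x = torch.randn(2, 128, 64)        # (batch, seq_len, hidden)
print(fnet_mixing(x).shape)        # torch.Size([2, 128, 64])
```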
Deploying convolutional neural networks (CNNs) on mobile devices is difficult because memory and compute resources are limited. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of cheap linear transformations to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can serve as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, from which the lightweight C-GhostNet can then easily be built. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depthwise convolutions) in a building stage, we propose to exploit stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original block with fewer output channels to generate the intrinsic features, and the other part is generated by cheap operations that exploit the stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
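A minimal sketch of the C-Ghost idea follows (PyTorch; kernel sizes, normalization and activations are simplified relative to the released code at the URL above):

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """A small primary convolution produces the intrinsic feature maps,
    and a cheap depthwise convolution derives the remaining 'ghost'
    maps from them; the two sets are concatenated."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        intrinsic = out_ch // ratio
        self.primary = nn.Conv2d(in_ch, intrinsic, kernel_size=1, bias=False)
        self.cheap = nn.Conv2d(intrinsic, out_ch - intrinsic, kernel_size=3,
                               padding=1, groups=intrinsic, bias=False)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(16, 32)
print(m(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 32, 8, 8])
```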
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
Deep learning models operating in the complex domain are used due to their rich representation capacity. However, most of these models are either restricted to the first quadrant of the complex plane or project the complex-valued data into the real domain, causing a loss of information. This paper proposes that operating entirely in the complex domain increases the overall performance of complex-valued models. A novel, fully complex-valued learning scheme is proposed to train a Fully Complex-valued Convolutional Neural Network (FC-CNN) using a newly proposed complex-valued loss function and training strategy. Benchmarked on CIFAR-10, SVHN, and CIFAR-100, FC-CNN has a 4-10% gain compared to its real-valued counterpart, maintaining the model complexity. With fewer parameters, it achieves comparable performance to state-of-the-art complex-valued models on CIFAR-10 and SVHN. For the CIFAR-100 dataset, it achieves state-of-the-art performance with 25% fewer parameters. FC-CNN shows better training efficiency and much faster convergence than all the other models.
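At the layer level, "operating entirely in the complex domain" starts from a complex convolution; the generic sketch below builds one from two real convolutions via $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ (our own sketch, not the paper's FC-CNN code, which additionally includes the proposed complex-valued loss and training strategy):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution from two real convolutions, applying
    (a+ib)*(c+id) = (ac - bd) + i(ad + bc) channel-wise."""
    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)

    def forward(self, z):                       # z: complex-valued tensor
        xr, xi = z.real, z.imag
        real = self.conv_re(xr) - self.conv_im(xi)
        imag = self.conv_re(xi) + self.conv_im(xr)
        return torch.complex(real, imag)

layer = ComplexConv2d(3, 8, 3, padding=1)
z = torch.randn(1, 3, 32, 32, dtype=torch.cfloat)
print(layer(z).shape, layer(z).dtype)  # torch.Size([1, 8, 32, 32]) torch.complex64
```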
The deep learning literature is continuously updated with new architectures and training techniques. However, weight initialization has been overlooked by recent research, despite some intriguing findings regarding random weights. On the other hand, recent works have been approaching Network Science to understand the structure and dynamics of Artificial Neural Networks (ANNs) after training. In this work, we therefore analyze the centrality of neurons in randomly initialized networks. We show that a higher variance of neuronal strength may decrease performance, while a lower variance usually improves it. A new method is then proposed to rewire neuronal connections according to a preferential attachment (PA) rule based on their strength, which significantly reduces the strength variance of layers initialized by common methods. In this sense, the rewiring only reorganizes connections, while preserving the magnitude and distribution of the weights. Through an extensive statistical analysis in image classification, we show that performance improves in most cases, both during training and testing, when using both simple and complex architectures and learning schedules. Our results show that, beyond its magnitude, the organization of the weights is also relevant for better initialization of deep ANNs.
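The quantity the analysis revolves around, neuronal strength, is simple to compute; the NumPy sketch below uses the common convention that a neuron's strength is the sum of the absolute weights on its connections (our reading of the abstract, not the authors' code):

```python
import numpy as np

def neuron_strength_variance(W):
    """Strength of output neuron i in a dense layer with weights
    W (out, in): s_i = sum_j |W[i, j]|. The abstract links a lower
    variance of these strengths to better-behaved initializations."""
    strengths = np.abs(W).sum(axis=1)
    return strengths.var()

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, size=(256, 784))   # a common Gaussian init
print(neuron_strength_variance(W))
```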
As our ability to sense improves, we are experiencing a transition from data-poor problems, in which the central issue is a lack of relevant data, to data-rich problems, in which the central issue is to identify a few relevant features in a sea of observations. Motivated by applications in gravitational-wave astrophysics, we study the problem of predicting the presence of transient noise artifacts in a gravitational-wave detector from a rich collection of measurements from the detector and its environment. We argue that feature learning - in which the relevant features are optimized from the data - is critical for achieving high accuracy. The models we introduce reduce the error rate by over 60% compared to the previous state of the art, which used fixed, hand-crafted features. Feature learning is useful not only because it improves performance on prediction tasks; the results also provide valuable information about patterns associated with the phenomena of interest that would otherwise be undiscoverable. In our application, features found to be associated with transient noise provide diagnostic information about its origin and suggest mitigation strategies. Learning in high-dimensional settings is challenging. Through experiments with a variety of architectures, we identify two key factors in successful models: sparsity, for selecting relevant variables within the high-dimensional observations, and depth, which confers flexibility for handling complex interactions and robustness with respect to temporal variations. We illustrate their significance through systematic experiments on real detector data. Our results provide experimental corroboration of common assumptions in the machine learning community and have direct applicability to improving our ability to sense gravitational waves, as well as to many other problems with similarly high-dimensional, noisy, or partially irrelevant data.
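One standard way to obtain the sparsity ingredient identified above is an L1 penalty on the input-layer weights, which drives the weights of irrelevant channels toward zero; the PyTorch sketch below (dummy data, illustrative penalty weight) shows the idea, though the paper's architectures may realize sparsity differently:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1000, 64), nn.ReLU(), nn.Linear(64, 1))
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4                                   # illustrative penalty weight

x = torch.randn(32, 1000)                    # 1000 auxiliary channels (dummy data)
y = torch.randint(0, 2, (32, 1)).float()     # glitch present / absent

logits = model(x)
l1_input = model[0].weight.abs().sum()       # sparsify the input layer only
loss = criterion(logits, y) + lam * l1_input
loss.backward()
optimizer.step()
print(loss.item())
```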