Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, and so on. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks from the network-generated data. Among these mathematical tools, machine learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a set of adjustable and interdependent system parameters (e.g., routing configurations, modulation formats, symbol rates, coding schemes) that are enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and the compensation of nonlinear effects in optical fiber propagation. In this paper, we give an overview of the applications of ML in optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have appeared recently, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
Thanks to recent advances in processing speed, data acquisition, and storage, machine learning (ML) is penetrating every facet of our lives and fundamentally transforming research in many fields. Wireless communications is another success story, ubiquitous in our lives, from handheld devices to wearables, smart homes, and automobiles. While recent years have seen a flurry of research activity in exploiting ML tools for various wireless communication problems, the impact of these techniques on practical communication systems and standards is yet to be seen. In this paper, we review the main promises and challenges of ML in wireless communication systems, focusing mainly on the physical layer. We present some of the most striking recent accomplishments that ML techniques have achieved with respect to classical approaches, and point to promising research directions where ML is likely to make the biggest impact in the near future. We also highlight the important problem of designing physical layer techniques to enable distributed ML at the wireless network edge, which further emphasizes the need to understand and connect ML with the fundamental concepts of wireless communications.
Acoustic data provide scientific and engineering insights in fields ranging from biology and communications to ocean and Earth science. We survey the recent advances and transformative potential of machine learning (ML), including deep learning, in the field of acoustics. ML is a broad family of statistical techniques for automatically detecting and exploiting patterns in data. Relative to conventional acoustics and signal processing, ML is data-driven. Given sufficient training data, ML can discover complex relationships between features. With large amounts of training data, ML can discover models that describe complex acoustic phenomena such as human speech and reverberation. ML in acoustics is developing rapidly, with compelling results and significant future promise. We first introduce ML, then highlight ML developments in five acoustics research areas: source localization in speech processing, source localization in ocean acoustics, bioacoustics, seismic exploration, and environmental sounds in everyday scenes.
Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communication system as an autoencoder, we develop a fundamentally new way to think about communication system design as an end-to-end reconstruction task that seeks to jointly optimize the transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers, and present the concept of radio transformer networks as a means to incorporate expert domain knowledge into the machine learning model. Finally, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification, which achieves competitive accuracy with respect to traditional schemes relying on expert features. The paper concludes with a discussion of open challenges and areas for future investigation.
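As an illustration of the autoencoder interpretation described above, the following is a minimal sketch, not the paper's actual architecture: a transmitter network, an AWGN channel layer, and a receiver network trained end-to-end to reconstruct the transmitted message. The message count, channel dimension, SNR, and training setup are all illustrative assumptions.

```python
# Minimal channel-autoencoder sketch: encoder = transmitter, decoder = receiver,
# with an AWGN channel between them, trained end-to-end (dimensions assumed).
import torch
import torch.nn as nn

M, n = 16, 7          # 16 possible messages sent over 7 real channel uses (assumed)
snr_db = 7.0          # assumed training SNR

class ChannelAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(M, M), nn.ReLU(), nn.Linear(M, n))
        self.decoder = nn.Sequential(nn.Linear(n, M), nn.ReLU(), nn.Linear(M, M))

    def forward(self, one_hot):
        x = self.encoder(one_hot)
        # Average-power normalization, then AWGN between encoder and decoder
        x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
        noise_std = 10 ** (-snr_db / 20)
        y = x + noise_std * torch.randn_like(x)
        return self.decoder(y)

model = ChannelAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    msgs = torch.randint(0, M, (256,))                  # random message indices
    logits = model(nn.functional.one_hot(msgs, M).float())
    loss = loss_fn(logits, msgs)                        # reconstruct the sent message
    opt.zero_grad(); loss.backward(); opt.step()
```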
As wireless networks evolve towards high mobility and provide better support for connected vehicles, a number of new challenges arise due to the high dynamics of vehicular environments, motivating a rethinking of traditional wireless design methodologies. Future intelligent vehicles, which are at the heart of high-mobility networks, are increasingly equipped with multiple advanced onboard sensors and continuously generate large volumes of data. Machine learning, as an effective approach to artificial intelligence, can provide a rich set of tools to exploit such data for the benefit of the networks. In this article, we first identify the distinctive characteristics of high-mobility vehicular networks and motivate the use of machine learning to address the resulting challenges. After a brief introduction to the major concepts of machine learning, we discuss its applications in learning the dynamics of vehicular networks and in making informed decisions to optimize network performance. In particular, we discuss in greater detail the application of reinforcement learning to managing network resources as an alternative to the prevalent optimization approaches. Finally, some open issues worth further investigation are highlighted.
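To make the reinforcement-learning idea concrete, here is a toy tabular Q-learning loop for a channel-selection task. The states, rewards, and dynamics are synthetic stand-ins for a real vehicular environment, not anything from the article.

```python
# Toy tabular Q-learning for resource management: an agent picks one of K
# channels per step; the environment below is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 8, 4          # assumed discretized load levels / channels
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

state = 0
for step in range(10_000):
    action = rng.integers(n_actions) if rng.random() < eps else Q[state].argmax()
    # Hypothetical reward: higher when the chosen channel matches the (hidden)
    # best channel for the current load state
    reward = 1.0 if action == state % n_actions else -0.1
    next_state = rng.integers(n_states)   # stand-in for vehicular channel dynamics
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned action per state:", Q.argmax(axis=1))
```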
This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, such as Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under uncertainty in the network environment. Reinforcement learning has been used effectively to enable network entities to obtain the optimal policy, including, e.g., decisions or actions, given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy within a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been developed to overcome these shortcomings. In this survey, we first provide a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. These issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying deep reinforcement learning.
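A minimal sketch of the core technique above, a deep Q-network, where a neural network replaces the Q-table so that large state spaces become tractable. The "network environment" below is a synthetic placeholder, and all dimensions and hyperparameters are assumptions.

```python
# Minimal DQN sketch with experience replay over a hypothetical environment.
import random
from collections import deque
import torch
import torch.nn as nn

state_dim, n_actions = 10, 4
qnet = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, eps = 0.99, 0.1

def env_step(state, action):
    # Hypothetical dynamics: reward favors the action aligned with the state
    reward = float(action == int(state.abs().argmax()) % n_actions)
    return torch.randn(state_dim), reward

state = torch.randn(state_dim)
for step in range(5000):
    with torch.no_grad():
        action = random.randrange(n_actions) if random.random() < eps \
                 else int(qnet(state).argmax())
    next_state, reward = env_step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state
    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        with torch.no_grad():
            target = r + gamma * qnet(s2).max(dim=1).values  # bootstrapped target
        q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)     # Q of taken actions
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad(); loss.backward(); opt.step()
```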
Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.
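One application the review covers, quality of transmission (QoT) estimation, can be cast as binary classification over lightpath features. The sketch below uses synthetic features and labels purely for illustration; neither the feature set nor the data comes from the paper.

```python
# QoT estimation as binary classification over assumed lightpath features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Assumed features: path length (km), span count, launch power (dBm), bit rate (Gb/s)
X = np.column_stack([
    rng.uniform(50, 3000, 5000),
    rng.integers(1, 40, 5000),
    rng.uniform(-4, 4, 5000),
    rng.choice([100, 200, 400], 5000),
])
# Toy ground truth: long, high-rate paths are more likely to miss the BER target
y = (X[:, 0] / 3000 + X[:, 3] / 400 + rng.normal(0, 0.2, 5000) < 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("QoT-feasible classification accuracy:", clf.score(X_te, y_te))
```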
The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. It is one of the fastest-developing fields in the history of computing, with an estimated 50 billion devices by the end of 2020. On the one hand, IoT plays a crucial role in enhancing several real-life smart applications that can improve quality of life. On the other hand, the crosscutting nature of IoT systems and the multidisciplinary components involved in the deployment of such systems have introduced new security challenges. Implementing security measures such as encryption, authentication, access control, network security, and application security for IoT devices and their inherent vulnerabilities is ineffective. Therefore, existing security methods should be enhanced to effectively secure IoT systems. Machine learning and deep learning (ML/DL) have advanced considerably over the last few years, and machine intelligence has transitioned from a laboratory curiosity to practical machinery in several important applications. Consequently, ML/DL methods are important in transforming the security of IoT systems from merely facilitating secure communication between devices to security-based intelligence systems. The goal of this work is to provide a comprehensive survey of ML/DL methods that can be used to develop enhanced security methods for IoT systems. IoT security threats, both inherent and newly introduced, are presented, and various potential IoT system attack surfaces and the possible threats related to each surface are discussed. We then thoroughly review ML/DL methods for IoT security and present the opportunities, advantages, and shortcomings of each method. The opportunities and challenges involved in applying ML/DL to IoT security are discussed. These opportunities and challenges can serve as potential future research directions.
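One recurring ML-for-IoT-security pattern is unsupervised anomaly detection over per-device traffic features. The sketch below illustrates the idea with an isolation forest; the feature set and traffic statistics are invented for illustration, not drawn from the survey.

```python
# Unsupervised anomaly detection on assumed per-flow IoT traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Assumed features per flow: packets/s, mean packet size, distinct destination ports
normal = np.column_stack([rng.normal(20, 5, 2000),
                          rng.normal(300, 40, 2000),
                          rng.poisson(2, 2000)])
scan   = np.column_stack([rng.normal(400, 60, 20),   # port-scan-like bursts
                          rng.normal(60, 10, 20),
                          rng.poisson(200, 20)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print("flagged as anomalous:", (detector.predict(scan) == -1).sum(), "of", len(scan))
```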
The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations.
Currently, network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. In recent years, these packet-switched systems have been experiencing explosive traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from this tremendous traffic growth. Deep learning, with the recent breakthroughs in the machine learning/intelligence area, appears to be a viable approach for network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received significant research attention in a number of other domains such as computer vision, speech recognition, and robotics, its applications in network traffic control systems are relatively recent and have garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to network traffic control systems. We also discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case: deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with a conventional routing strategy. Furthermore, we discuss a number of open research issues that researchers may find useful in the future.
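A hedged sketch of the deep-learning-based intelligent routing idea: a neural network learns to map an observed traffic snapshot to a next-hop choice, supervised here by labels a conventional routing strategy would produce. The topology, features, and teacher rule are all synthetic assumptions, not the paper's benchmark setup.

```python
# Supervised "learn to route" sketch: predict a next hop from link utilizations.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n_links = 12
X = rng.uniform(0, 1, (4000, n_links))   # per-link utilization snapshots (assumed)
# Stand-in teacher policy: route via region 0/1/2 with the lowest average load
y = np.argmin([X[:, 0:4].mean(1), X[:, 4:8].mean(1), X[:, 8:12].mean(1)], axis=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X[:3000], y[:3000])
print("agreement with teacher routing:", model.score(X[3000:], y[3000:]))
```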
In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely Deep Learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.
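As a small illustration of DL over an IoT data stream, the sketch below trains an LSTM to predict the next sensor reading from a sliding window. The sine-plus-noise signal is a stand-in for a real sensor feed, and the window size and model dimensions are assumptions.

```python
# LSTM forecasting on a synthetic sensor stream (sliding-window supervision).
import torch
import torch.nn as nn

t = torch.arange(0, 200, 0.1)
signal = torch.sin(t) + 0.1 * torch.randn_like(t)   # stand-in sensor feed
window = 20
X = torch.stack([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]                                  # next reading after each window

class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x.unsqueeze(-1))          # (batch, window, 32)
        return self.head(out[:, -1]).squeeze(-1)     # predict from last step

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training MSE:", float(loss))
```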
Software-defined networking (SDN) represents a promising networking architecture that combines central management with network programmability. SDN separates the control plane from the data plane and moves network management to a central point, called the controller, which can be programmed and used as the brain of the network. Recently, the research community has shown an increasing tendency to benefit from recent advances in the field of artificial intelligence (AI) to provide learning capabilities and better decision making in SDN. In this study, we provide a detailed overview of the efforts to incorporate AI into SDN. Our study shows that research efforts have focused on three main sub-fields of AI: machine learning, meta-heuristics, and fuzzy inference systems. Accordingly, in this work we examine their different application areas and potential uses, as well as the improvements achieved by including AI-based techniques in the SDN paradigm.
Automatic Speech Recognition (ASR) has historically been a driving force behind many machine learning (ML) techniques, including the ubiquitously used hidden Markov model, discriminative learning, structured sequence learning, Bayesian learning, and adaptive learning. Moreover, ML can and occasionally does use ASR as a large-scale, realistic application to rigorously test the effectiveness of a given technique, and to inspire new problems arising from the inherently sequential and dynamic nature of speech. On the other hand, even though ASR is available commercially for some applications, it is largely an unsolved problem: for almost all applications, the performance of ASR is not on par with human performance. New insight from modern ML methodology shows great promise to advance the state-of-the-art in ASR technology. This article provides readers with an overview of modern ML techniques as utilized in current ASR research and systems and as relevant to future ASR research. The intent is to foster further cross-pollination between the ML and ASR communities than has occurred in the past. The article is organized according to the major ML paradigms that are either popular already or have potential for making significant contributions to ASR technology. The paradigms presented and elaborated in this overview include: generative and discriminative learning; supervised, unsupervised, semi-supervised, and active learning; adaptive and multi-task learning; and Bayesian learning. These learning paradigms are motivated and discussed in the context of ASR technology and applications. We finally present and analyze recent developments of deep learning and learning with sparse representations, focusing on their direct relevance to advancing ASR technology.
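Since the article singles out the hidden Markov model as a driving ML technique in ASR, here is a minimal forward-algorithm sketch that computes the likelihood of an observation sequence under a toy two-state HMM. All probabilities are illustrative, not from any ASR system.

```python
# Forward algorithm for a toy 2-state HMM over a 3-symbol alphabet.
import numpy as np

A = np.array([[0.7, 0.3],       # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],  # per-state emission probabilities
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])       # initial state distribution
obs = [0, 1, 2, 1]              # observed symbol indices

alpha = pi * B[:, obs[0]]       # initialize with first observation
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
print("P(observations):", alpha.sum())
```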
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
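As a concrete instance of the greedy sparse-recovery algorithms discussed (matching pursuit and its relatives, widely used in compressed sensing), here is a compact orthogonal matching pursuit sketch that recovers a k-sparse vector from m < n random linear measurements. The problem sizes are arbitrary assumptions.

```python
# Orthogonal matching pursuit (OMP): greedy sparse recovery from compressed
# measurements y = Phi @ x, where x is k-sparse.
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 128, 40, 5
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = Phi @ x                                   # compressed measurements

support, residual = [], y.copy()
for _ in range(k):
    # Pick the column most correlated with the current residual
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    # Least-squares fit on the selected support, then update the residual
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x - x_hat))
```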
We present an approach to designing end-to-end communication systems that leverages the generative modeling capabilities of deep-learning-based autoencoders. Deep neural networks (DNNs) are employed to build the system models, and variational inference is used to derive the objective functions for optimizing these models. Experimental validation shows that the proposed approach consistently yields models with better error-rate performance and denser constellation packing than previous work.
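A hedged sketch of the variational flavor described above, not the paper's exact model: the encoder output parameterizes a Gaussian over channel symbols, sampled with the reparameterization trick, and a KL term regularizes the latent code toward a unit Gaussian. Dimensions and the KL weight are illustrative assumptions.

```python
# Variational-style end-to-end transceiver sketch (assumed architecture).
import torch
import torch.nn as nn

M, n = 16, 4                       # messages and channel uses (assumed)
enc = nn.Linear(M, 2 * n)          # outputs mean and log-variance per channel use
dec = nn.Linear(n, M)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(2000):
    msgs = torch.randint(0, M, (256,))
    stats = enc(nn.functional.one_hot(msgs, M).float())
    mu, logvar = stats[:, :n], stats[:, n:]
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
    logits = dec(z)
    # Standard Gaussian KL term plus message-reconstruction loss
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
    loss = nn.functional.cross_entropy(logits, msgs) + 0.01 * kl
    opt.zero_grad(); loss.backward(); opt.step()
```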
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing the 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also present a research outlook consisting of a set of promising directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
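At the heart of joint radio-and-computational resource management is the binary offloading decision: execute a task locally, or transmit it to an MEC server. The back-of-the-envelope sketch below compares latency and device energy under a commonly used CMOS energy model; every parameter value is an assumption for illustration.

```python
# Toy binary computation-offloading decision: local execution vs. MEC server.
def offload_decision(task_bits, cycles_per_bit,
                     f_local=1e9, f_edge=10e9,     # CPU speeds in cycles/s (assumed)
                     uplink_rate=20e6,             # radio uplink in bits/s (assumed)
                     kappa=1e-27, tx_power=0.5):   # energy-model parameters (assumed)
    cycles = task_bits * cycles_per_bit
    local_latency = cycles / f_local
    local_energy = kappa * cycles * f_local ** 2   # classic CMOS energy model
    edge_latency = task_bits / uplink_rate + cycles / f_edge
    edge_energy = tx_power * (task_bits / uplink_rate)
    # Offload when it wins on both latency and device-side energy
    offload = edge_latency < local_latency and edge_energy < local_energy
    return offload, local_latency, edge_latency

# Example: a 5 Mb task at 1000 cycles/bit clearly favors offloading here
print(offload_decision(task_bits=5e6, cycles_per_bit=1000))
```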
Unmanned aerial vehicles (UAVs) have recently developed rapidly to facilitate a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems featuring synergic cooperation between computational and physical capabilities that can interact with humans through several new mechanisms. The main advantage of using UAVs in CPS applications is their exceptional features, including their mobility, dynamism, effortless deployment, adaptive altitude, agility, and adjustability, and their ability to effectively appraise real-world functions anywhere at any time. Furthermore, from a technological perspective, UAVs are predicted to be a vital element of the development of advanced CPSs. Therefore, in this survey, we aim to pinpoint the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects that span the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also examined to show how these practical technologies can facilitate CPS applications. We present state-of-the-art algorithms that address the design challenges with both quantitative and qualitative methods, and map these challenges onto important CPS applications in order to draw insightful conclusions about the challenges of each application. Finally, we summarize potential new directions and ideas that could shape future research in these areas.
Next-generation wireless networks are expected to support extremely high data rates and radically new applications, which require a new wireless radio technology paradigm. The challenge is that of assisting the radio in intelligent adaptive learning and decision making, so that the diverse requirements of next-generation wireless networks can be satisfied. Machine learning is one of the most promising artificial intelligence tools, conceived to support smart radio terminals. Future smart 5G mobile terminals are expected to autonomously access the most meritorious spectral bands with the aid of sophisticated spectral efficiency learning and inference, in order to control the transmission power, while relying on energy efficiency learning/inference and simultaneously adjusting the transmission protocols with the aid of quality of service learning/inference. Hence we briefly review the rudimentary concepts of machine learning and propose their employment in the compelling applications of 5G networks, including cognitive radios, massive MIMOs, femto/small cells, heterogeneous networks, smart grid, energy harvesting, device-to-device communications, and so on. Our goal is to assist the readers in refining the motivation, problem formulation, and methodology of powerful machine learning algorithms in the context of future networks in order to tap into hitherto unexplored applications and services.
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms that implement such priors. This paper reviews recent work in the areas of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and about the geometric connections between representation learning, density estimation, and manifold learning.
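Among the building blocks the review covers, the denoising autoencoder is easy to sketch: the network learns features by reconstructing clean inputs from corrupted ones. The data, layer sizes, and corruption rate below are random placeholders, purely for illustration.

```python
# Minimal denoising autoencoder: learn features by undoing masking noise.
import torch
import torch.nn as nn

d, h = 64, 16                       # input and hidden (representation) sizes
enc = nn.Sequential(nn.Linear(d, h), nn.ReLU())
dec = nn.Linear(h, d)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

data = torch.randn(1024, d)         # stand-in for real inputs
for epoch in range(200):
    corrupted = data * (torch.rand_like(data) > 0.3)   # mask ~30% of entries
    recon = dec(enc(corrupted))
    loss = nn.functional.mse_loss(recon, data)         # reconstruct the clean input
    opt.zero_grad(); loss.backward(); opt.step()

features = enc(data)                # learned representation for downstream tasks
```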