Artificial intelligence (AI) is a broad scientific discipline that enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning, and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving the performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are reviewed, including topics such as optical network planning and operation in both transport and access networks. Finally, the paper presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.
Software-defined networking (SDN) represents a promising networking architecture that combines central management with network programmability. SDN separates the control plane from the data plane and moves network management to a central point, called the controller, which can be programmed and used as the brain of the network. Recently, the research community has increasingly sought to benefit from the latest advances in artificial intelligence (AI) to provide learning capabilities and better decision-making in SDN. In this study, we detail the efforts to incorporate AI into SDN. Our study shows that research has focused mainly on three subfields of AI: machine learning, metaheuristics, and fuzzy inference systems. Accordingly, in this work we examine their different application areas and potential uses, as well as the improvements achieved by including AI-based techniques in the SDN paradigm.
Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
Machine learning (ML) techniques have recently received considerable attention in many different application domains. This paper presents a vision of what ML can do in power line communications (PLC). We first briefly describe the classical formulation of ML and distinguish deterministic problems from the statistical problems relevant to communications. We then discuss ML applications at each layer of PLC, namely characterization and modeling, physical layer algorithms, medium access control, and networking algorithms. Finally, other PLC applications that can benefit from the use of ML, such as grid diagnostics, are analyzed. Numerical examples are given to validate the ideas and motivate future research efforts in this stimulating signal/data processing field.
This paper presents a comprehensive literature review of applications of deep reinforcement learning in communications and networking. Modern networks, such as Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under uncertainty in the network environment. Reinforcement learning has been used effectively to enable network entities to obtain the optimal policy, including, for example, decisions or actions given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy within a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been developed to overcome this shortcoming. In this survey, we first provide a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. These issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying deep reinforcement learning.
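To illustrate the contrast the abstract draws between tabular reinforcement learning, which only scales to small state and action spaces, and its deep (function-approximation) counterpart, here is a minimal sketch; the environment dimensions, learning rates, and the simple linear approximator are illustrative assumptions, not the models used in the surveyed works.

```python
# Minimal sketch: tabular Q-learning vs. a tiny function-approximation (DQN-style) update.
import numpy as np

n_states, n_actions = 10, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1

# Tabular Q-learning: one value stored per (state, action) pair.
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    # Temporal-difference update toward the bootstrapped target.
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# DQN-style approximation: a tiny linear "network" maps a state feature vector to
# action values, so the policy generalizes across states instead of storing a table.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(n_actions, n_states))   # weights of the approximator

def q_values(state_vec):
    return W @ state_vec

def dqn_update(state_vec, a, r, next_vec, lr=0.01):
    state_vec, next_vec = np.asarray(state_vec), np.asarray(next_vec)
    target = r + gamma * q_values(next_vec).max()
    td_error = target - q_values(state_vec)[a]
    # Gradient step on the squared TD error with respect to the chosen action's weights.
    W[a] += lr * td_error * state_vec

def epsilon_greedy(state_vec):
    # Random exploration with probability epsilon, otherwise the greedy action.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state_vec)))
```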
As wireless networks evolve toward high mobility and better support for connected vehicles, many new challenges arise from the high dynamics of vehicular environments, motivating a rethinking of traditional wireless design methodologies. Future intelligent vehicles, which are at the heart of high-mobility networks, are increasingly equipped with multiple advanced onboard sensors and continuously generate large volumes of data. Machine learning, as an effective artificial intelligence approach, can provide a rich set of tools to exploit such data for the benefit of the networks. In this article, we first identify the distinctive characteristics of high-mobility vehicular networks and motivate the use of machine learning to address the resulting challenges. After a brief introduction to the major concepts of machine learning, we discuss its applications to learning the dynamics of vehicular networks and to making informed decisions that optimize network performance. In particular, we discuss in greater detail the application of reinforcement learning to managing network resources as an alternative to prevalent optimization approaches. Finally, some open issues worthy of further investigation are highlighted.
The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. It is one of the fastest-growing areas in the history of computing, with an estimated 50 billion devices by the end of 2020. On the one hand, IoT plays a crucial role in enhancing several real-life smart applications that can improve quality of life. On the other hand, the cross-cutting nature of IoT systems and the multidisciplinary components involved in the deployment of such systems introduce new security challenges. Implementing security measures such as encryption, authentication, access control, network security, and application security for IoT devices and their inherent vulnerabilities is ineffective. Therefore, existing security methods should be enhanced to effectively secure IoT systems. Machine learning and deep learning (ML/DL) have advanced remarkably over the past few years, and machine intelligence has moved from laboratory curiosity to practical machinery in several important applications. Consequently, ML/DL methods are important for transforming the security of IoT systems from merely facilitating secure communication between devices to security-based intelligence systems. The goal of this work is to provide a comprehensive survey of ML/DL methods that can be used to develop enhanced security methods for IoT systems. IoT security threats, whether inherent or newly introduced, are presented, and various potential IoT system attack surfaces and the possible threats associated with each surface are discussed. We then thoroughly review ML/DL methods for IoT security and present the opportunities, advantages, and shortcomings of each method. The opportunities and challenges involved in applying ML/DL to IoT security are discussed; these opportunities and challenges can serve as potential future research directions.
Acoustic data provide scientific and engineering insights in fields ranging from biology and communications to ocean and Earth science. We survey the recent advances and transformative potential of machine learning (ML), including deep learning, in the field of acoustics. ML is a broad family of statistical techniques for automatically detecting and utilizing patterns in data. Relative to conventional acoustics and signal processing, ML is data-driven. Given sufficient training data, ML can discover complex relationships between features. With large volumes of training data, ML can discover models that describe complex acoustic phenomena such as human speech and reverberation. ML in acoustics is developing rapidly, with compelling results and significant promise for the future. We first introduce ML, then highlight ML developments in five acoustics research areas: source localization in speech processing, source localization in ocean acoustics, bioacoustics, seismic exploration, and environmental sounds in everyday scenes.
Automatic Speech Recognition (ASR) has historically been a driving force behind many machine learning (ML) techniques, including the ubiquitously used hidden Markov model, discriminative learning, structured sequence learning, Bayesian learning, and adaptive learning. Moreover, ML can and occasionally does use ASR as a large-scale, realistic application to rigorously test the effectiveness of a given technique, and to inspire new problems arising from the inherently sequential and dynamic nature of speech. On the other hand, even though ASR is available commercially for some applications, it remains largely an unsolved problem: for almost all applications, the performance of ASR is not on par with human performance. New insight from modern ML methodology shows great promise to advance the state of the art in ASR technology. This article provides readers with an overview of modern ML techniques as utilized in current ASR research and systems, and as relevant to future ASR research. The intent is to foster further cross-pollination between the ML and ASR communities than has occurred in the past. The article is organized according to the major ML paradigms that are either already popular or have the potential to make significant contributions to ASR technology. The paradigms presented and elaborated in this overview include: generative and discriminative learning; supervised, unsupervised, semi-supervised, and active learning; adaptive and multi-task learning; and Bayesian learning. These learning paradigms are motivated and discussed in the context of ASR technology and applications. We finally present and analyze recent developments in deep learning and learning with sparse representations, focusing on their direct relevance to advancing ASR technology.
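As a concrete reminder of the generative-modeling paradigm behind classical ASR, the following is a minimal sketch of the hidden Markov model forward algorithm; the toy transition and emission matrices are hypothetical and unrelated to any system discussed in the article.

```python
# Minimal sketch of the HMM forward algorithm, the workhorse of classical generative ASR.
import numpy as np

pi = np.array([0.6, 0.4])                  # initial state distribution
A = np.array([[0.7, 0.3],                  # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],             # emission probabilities per state (3 symbols)
              [0.1, 0.3, 0.6]])

def forward_likelihood(obs):
    """Return P(obs) under the HMM by summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]              # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # recursion: propagate, then weight by emission
    return alpha.sum()                     # termination

print(forward_likelihood([0, 1, 2, 1]))    # likelihood of a short observation sequence
```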
The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations.
In the era of the Internet of Things (IoT), an enormous number of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life-improving technology. In this paper, we provide a thorough overview of using a class of advanced machine learning techniques, namely Deep Learning (DL), to facilitate analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.
Currently, network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems have been experiencing explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from this tremendous traffic growth. Deep learning, with the recent breakthroughs in the machine learning/intelligence area, appears to be a viable approach for network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received significant research attention in a number of other domains such as computer vision, speech recognition, and robotics, its applications in network traffic control systems are relatively recent and have garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to network traffic control systems. We also discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing, and demonstrate the effectiveness of the deep learning-based routing approach in contrast with a conventional routing strategy. Furthermore, we discuss a number of open research issues that researchers may find useful in the future. Index Terms: machine learning, machine intelligence, artificial neural network, deep learning, deep belief system, network traffic control, routing.
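As a rough illustration of what deep learning based routing can look like in its simplest supervised form, here is a hypothetical sketch in which a small neural network learns next-hop decisions from labels produced by a conventional (e.g., shortest-path) routing strategy; the feature layout, network sizes, and training scheme are assumptions for illustration only, not the architecture of the surveyed work.

```python
# Hypothetical sketch: a tiny MLP maps per-node traffic features to a next-hop choice.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_next_hops = 8, 16, 4

# Two-layer MLP parameters.
W1 = rng.normal(scale=0.1, size=(n_hidden, n_features))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_next_hops, n_hidden))
b2 = np.zeros(n_next_hops)

def forward(x):
    h = np.tanh(W1 @ x + b1)                     # hidden representation of the traffic state
    logits = W2 @ h + b2
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                        # softmax over candidate next hops

def train_step(x, y, lr=0.05):
    """One gradient step of cross-entropy loss toward the conventional-routing label y."""
    global W1, b1, W2, b2
    h, p = forward(x)
    grad_logits = p.copy(); grad_logits[y] -= 1.0   # dL/dlogits for one-hot target
    grad_h = W2.T @ grad_logits
    grad_pre = grad_h * (1 - h ** 2)                # tanh derivative
    W2 -= lr * np.outer(grad_logits, h); b2 -= lr * grad_logits
    W1 -= lr * np.outer(grad_pre, x);    b1 -= lr * grad_pre

# Toy usage: features could be queue lengths and link utilizations observed at a router.
x = rng.random(n_features)
train_step(x, y=2)
print(int(np.argmax(forward(x)[1])))             # predicted next-hop index
```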
We are honored to welcome you to the 2nd International Workshop on Advanced Analytics and Learning on Temporal Data (AALTD), held in Riva del Garda, Italy, on September 19th, 2016, co-located with The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2016). The aim of this workshop is to bring together researchers and experts in machine learning, data mining, pattern analysis and statistics to share their challenging issues and advance research on temporal data analysis. Analysis and learning from temporal data cover a wide scope of tasks including learning metrics, learning representations, unsupervised feature extraction, clustering and classification. This volume contains the conference program, an abstract of the invited keynotes and the set of regular papers accepted to be presented at the conference. Each of the submitted papers was reviewed by at least two independent reviewers, leading to the selection of eleven papers accepted for presentation and inclusion into the program and these proceedings. The contributions are listed in alphabetical order by surname. The keynote given by Marco Cuturi on "Regularized DTW Divergences for Time Series" focuses on the definition of alignment kernels for time series that can later be used at the core of standard machine learning algorithms. The one given by Tony Bagnall on "The Great Time Series Classification Bake Off" presents an important attempt to experimentally compare the performance of a wide range of time series classifiers, together with ensemble classifiers that aim at combining existing classifiers to improve classification quality. The accepted papers span innovative ideas on the analysis of temporal data, including promising new approaches and covering both practical and theoretical issues. We wish to thank the ECML PKDD council members for giving us the opportunity to hold the AALTD workshop within the framework of the ECML/PKDD Conference and the members of the local organizing committee for their support. The organizers of AALTD gratefully acknowledge the financial support of the Université de Rennes 2, MODES and Universidade da Coruña. Last but not least, we wish to thank the contributing authors for their high-quality work and all members of the Reviewing Committee for their invaluable assistance in the selection process. All of them have significantly contributed to the success of AALTD 2016. We sincerely hope that the workshop participants have a great and fruitful time at the conference.
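For readers unfamiliar with the alignment idea behind the keynote on regularized DTW divergences, the following is a minimal sketch of classic (unregularized) dynamic time warping between two univariate series; the regularized divergences discussed in the keynote build on this alignment idea but are not reproduced here.

```python
# Minimal sketch of classic dynamic time warping (DTW) by dynamic programming.
import numpy as np

def dtw_distance(x, y):
    """Cost of the best monotonic alignment between sequences x and y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0.0, 1.0, 2.0, 1.0], [0.0, 0.9, 2.1, 2.0, 1.1]))
```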
Unmanned aerial vehicles (UAVs) have recently evolved rapidly to facilitate a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems with synergistic cooperation between computational and physical potentials that can interact with humans through several new mechanisms. The main advantages of using UAVs in CPS applications lie in their unique features, including mobility, dynamism, ease of deployment, adaptive altitude, flexibility, adjustability, and the ability to effectively survey the real world anytime, anywhere. Furthermore, from a technical perspective, UAVs are expected to become an essential element in the development of advanced CPSs. Therefore, in this survey we aim to identify the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects covering the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also examined to show how these practical technologies can facilitate CPS applications. We present state-of-the-art algorithms that address the design challenges with quantitative and qualitative methods and connect these challenges with important CPS applications in order to draw insightful conclusions about the challenges of each application. Finally, we summarize potential new directions and ideas that may shape future research in these areas.
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing the 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also present a research outlook consisting of a set of promising directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
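As a toy illustration of the computation-offloading decisions that joint radio-and-computational resource management reasons about, here is a hypothetical sketch comparing local execution with offloading to an edge server; all parameter values, the energy model, and the cost weighting are illustrative assumptions rather than any specific scheme from the survey.

```python
# Toy sketch: offload a task to the edge server or execute it locally?
def offload_decision(task_bits, cpu_cycles,
                     f_local=1e9, f_edge=10e9,        # CPU frequencies (cycles/s)
                     uplink_rate=20e6,                # wireless uplink rate (bits/s)
                     k_local=1e-27, p_tx=0.5):        # illustrative energy-model parameters
    # Local execution: latency and dynamic CPU energy.
    t_local = cpu_cycles / f_local
    e_local = k_local * cpu_cycles * f_local ** 2
    # Offloading: transmit the input over the uplink, then compute at the edge server.
    t_offload = task_bits / uplink_rate + cpu_cycles / f_edge
    e_offload = p_tx * (task_bits / uplink_rate)      # device only pays for transmission
    # Weighted cost trading off latency against device energy.
    cost_local = 0.5 * t_local + 0.5 * e_local
    cost_offload = 0.5 * t_offload + 0.5 * e_offload
    return "offload" if cost_offload < cost_local else "local"

print(offload_decision(task_bits=2e6, cpu_cycles=5e8))
```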
The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring poses the threat of various cyber hazards, intrusions and attacks as a consequence of these networks' openness. Among the most significant research difficulties in the security of such networks is intrusion detection, whose target is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques in terms of computational intelligence have been utilized to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-of-the-art reviews investigating the performance and consequences of such techniques in solving wireless environment intrusion recognition issues as they gain entry into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial computational intelligence with multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed in this study, and the limitations are addressed as challenges to obtain a set of requirements for IDPS in establishing a collaborative-based wireless IDPS (Co-WIDPS) architectural design. It amalgamates fuzzy reinforcement learning knowledge management to create a technological platform that is considerably more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational intelligence based Co-WIDPSs.
With the rapid development of digital information, the amount of data generated by humans and machines is growing exponentially. Alongside this trend, machine learning algorithms have been continuously created and refined to discover new information and knowledge from diverse data sources. Learning algorithms that use hyperboxes as fundamental representational and building blocks are a branch of machine learning methods. These algorithms have enormous potential for high scalability and for the online adaptation of the predictors constructed in this way to dynamically changing environments and streaming data. This paper aims to provide a comprehensive survey of the literature on hyperbox-based machine learning models. In general, according to the architecture and characteristics of the resulting models, existing hyperbox-based learning algorithms can be grouped into three major categories: fuzzy min-max neural networks, hyperbox-based hybrid models, and other algorithms based on hyperbox representations. Within these groups, this paper briefly describes the structure of the models, the associated learning algorithms, and their advantages and shortcomings. The main applications of these hyperbox-based models to practical problems are also described. Finally, we discuss some open issues and identify potential future research directions in this field.
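To make the hyperbox idea concrete, here is a minimal sketch of a fuzzy min-max style membership function in which a hyperbox is defined by its minimum and maximum corner points; the specific decay form below is a simplified illustration rather than the exact formula of any surveyed model.

```python
# Minimal sketch of a hyperbox membership function in the fuzzy min-max spirit.
import numpy as np

def hyperbox_membership(x, v_min, w_max, gamma=4.0):
    """Membership of point x in the hyperbox [v_min, w_max], in [0, 1]."""
    x, v_min, w_max = map(np.asarray, (x, v_min, w_max))
    # Per-dimension violation: how far x lies below v_min or above w_max.
    below = np.maximum(0.0, v_min - x)
    above = np.maximum(0.0, x - w_max)
    per_dim = np.clip(1.0 - gamma * (below + above), 0.0, 1.0)
    return float(per_dim.min())            # limited by the worst-fitting dimension

# A point inside the box has membership 1; membership drops as it moves outside.
print(hyperbox_membership([0.3, 0.5], v_min=[0.2, 0.4], w_max=[0.6, 0.7]))   # 1.0
print(hyperbox_membership([0.7, 0.5], v_min=[0.2, 0.4], w_max=[0.6, 0.7]))   # < 1.0
```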
Owing to recent advances in processing speed and in data acquisition and storage, machine learning (ML) is permeating every facet of our lives and is fundamentally transforming research in many fields. Wireless communications is another success story, ubiquitous in our lives from handheld devices to wearables, smart homes, and automobiles. While recent years have seen a flurry of research activity in exploiting ML tools for various wireless communication problems, the impact of these techniques on practical communication systems and standards is yet to be seen. In this paper, we review the main promises and challenges of ML in wireless communication systems, focusing mainly on the physical layer. We present some of the most striking recent accomplishments that ML techniques have achieved with respect to classical approaches, and point to promising research directions where ML is likely to make the biggest impact in the near future. We also highlight the important problem of designing physical-layer techniques to enable distributed ML at the wireless network edge, which further underscores the need to understand and connect ML with the fundamental concepts of wireless communications.