The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. It is one of the fastest-growing fields in the history of computing, with an estimated 50 billion devices by the end of 2020. On the one hand, the IoT plays a crucial role in enhancing several real-life smart applications that can improve quality of life. On the other hand, the crosscutting nature of IoT systems and the multidisciplinary components involved in their deployment introduce new security challenges. Applying conventional security measures, such as encryption, authentication, access control, network security, and application security, to IoT devices with their inherent vulnerabilities is ineffective. Therefore, existing security methods should be enhanced to effectively secure IoT systems. Machine learning and deep learning (ML/DL) have advanced considerably over the last few years, and machine intelligence has transitioned from a laboratory curiosity to practical machinery in several important applications. Consequently, ML/DL methods are important for transforming the security of IoT systems from merely facilitating secure communication between devices to security-based intelligent systems. The goal of this work is to provide a comprehensive survey of ML/DL methods that can be used to develop enhanced security methods for IoT systems. IoT security threats, whether inherent or newly introduced, are presented, and various potential attack surfaces of IoT systems and the possible threats associated with each surface are discussed. We then thoroughly review ML/DL methods for IoT security and present the opportunities, advantages, and shortcomings of each method. The opportunities and challenges involved in applying ML/DL to IoT security are discussed and can serve as potential future research directions.
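As a hedged illustration of the kind of ML-based detection such surveys cover (not taken from the paper above), the following Python sketch trains a classifier to separate benign from malicious traffic using synthetic flow features; the feature names, value ranges, and attack profile are assumptions made purely for the example.

```python
# Illustrative sketch only: a supervised ML detector trained on synthetic
# IoT traffic-flow features. Real systems would use labeled flow records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic flows: columns = [packets/s, mean payload bytes, distinct dst ports]
normal = rng.normal(loc=[20, 300, 3], scale=[5, 60, 1], size=(500, 3))
attack = rng.normal(loc=[800, 60, 40], scale=[100, 20, 8], size=(500, 3))  # e.g., flooding/scanning
X = np.vstack([normal, attack])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "malicious"]))
```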
Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems have been experiencing explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and have garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future. Index Terms—Machine learning, machine intelligence, artificial neural network, deep learning, deep belief system, network traffic control, routing.
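As a hedged, simplified illustration of learning-based routing (not the deep-belief-based design evaluated in the paper), the sketch below trains a small neural network to imitate the next-hop choices of a baseline routing scheme from observed traffic patterns; the topology size, input encoding, and placeholder data are assumptions.

```python
# Hedged illustration only: a toy supervised next-hop predictor.
# Inputs are assumed per-node traffic observations; labels are the next hops
# chosen by a reference routing scheme (e.g., a conventional shortest-path baseline).
import torch
import torch.nn as nn

NUM_NODES, NUM_NEIGHBORS = 16, 4
x = torch.rand(2048, NUM_NODES)                      # observed traffic vector per sample (placeholder)
y = torch.randint(0, NUM_NEIGHBORS, (2048,))         # next hop given by the baseline (placeholder)

model = nn.Sequential(nn.Linear(NUM_NODES, 64), nn.ReLU(), nn.Linear(64, NUM_NEIGHBORS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                              # train to imitate the baseline routing decisions
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

next_hop = model(torch.rand(1, NUM_NODES)).argmax(dim=1)  # inference: pick the highest-scoring neighbor
```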
This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols, and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services.
The development of smart cities and their fast-paced deployment are resulting in the generation of large quantities of data at unprecedented rates. Unfortunately, most of the generated data is wasted without extracting potentially useful information and knowledge, owing to the lack of established mechanisms and standards. Moreover, the highly dynamic nature of smart cities calls for a new generation of machine learning approaches that are flexible and adaptable to cope with the dynamicity of data in order to perform analytics and learn from real-time data. In this article, we shed light on the challenges of exploiting big data in smart cities from a machine learning perspective. In particular, we present the phenomenon of wasted unlabeled data. We argue that semi-supervision is a must for smart cities to address this challenge. We also propose a three-level learning framework for smart cities that matches the hierarchical nature of the big data generated by smart cities, with the goal of providing different levels of knowledge abstraction. The proposed framework caters to the needs of smart city services. Fundamentally, the framework benefits from semi-supervised deep reinforcement learning, in which a small amount of data that has user feedback serves as labeled data while the much larger amount of data without such feedback serves as unlabeled data. This article also shows how reinforcement learning and its semi-supervised extension can handle the cognitive aspects of smart city services and improve their performance, through several use cases spanning different domains of smart cities. We also highlight several challenges as well as future research directions for incorporating machine learning and high-level intelligence into smart city services.
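A minimal sketch of the semi-supervised idea described above, assuming a toy setup rather than the paper's actual architecture: a scarce labeled set (user feedback) drives a supervised loss, while abundant unlabeled data contributes an autoencoder reconstruction loss to the same encoder.

```python
# Simplified sketch of semi-supervision (not the paper's exact model): combine a
# supervised loss on few labeled samples with a reconstruction loss on many unlabeled ones.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 10))
head = nn.Linear(4, 3)                        # e.g., 3 assumed service-quality classes from user feedback

x_labeled = torch.rand(32, 10)
y_labeled = torch.randint(0, 3, (32,))        # scarce user feedback acts as labels (placeholder)
x_unlabeled = torch.rand(512, 10)             # abundant sensor data without feedback (placeholder)

params = list(encoder.parameters()) + list(decoder.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(100):
    opt.zero_grad()
    sup = nn.functional.cross_entropy(head(encoder(x_labeled)), y_labeled)
    rec = nn.functional.mse_loss(decoder(encoder(x_unlabeled)), x_unlabeled)
    (sup + 0.5 * rec).backward()              # joint objective; the 0.5 weight is an arbitrary choice
    opt.step()
```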
Big data: A survey
Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community.
Cyber-Physical Systems (CPSs) represent systems where computations are tightly coupled with the physical world, meaning that physical data is the core component that drives computation. Industrial automation systems, wireless sensor networks, mobile robots and vehicular networks are just a sample of cyber-physical systems. Typically, CPSs have limited computation and storage capabilities due to their tiny size and being embedded into larger systems. With the emergence of cloud computing and the Internet-of-Things (IoT), there are several new opportunities for these CPSs to extend their capabilities by taking advantage of the cloud resources in different ways. In this survey paper, we present an overview of research efforts on the integration of cyber-physical systems with cloud computing and categorize them into three areas: (1) remote brain, (2) big data manipulation, and (3) virtualization. In particular, we focus on three major CPSs namely mobile robots, wireless sensor networks and vehicular networks.
Deep neural networks (DNNs) are currently widely used in many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey of recent advances toward the goal of enabling efficient processing of DNNs. Specifically, it provides an overview of DNNs, discusses various hardware platforms and architectures that support DNNs, and highlights key trends in reducing the computation cost of DNNs, either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It also summarizes various development resources that enable researchers and practitioners to quickly get started in this field, and highlights important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understanding the key design considerations for DNNs; being able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understanding the trade-offs between various hardware architectures and platforms; being able to evaluate the utility of various DNN design techniques for efficient processing; and understanding recent implementation trends and opportunities.
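One of the benchmarking quantities such comparisons rely on is the multiply-accumulate (MAC) count of a layer. A small worked example follows, using an assumed layer shape rather than any figure from the paper.

```python
# Back-of-the-envelope cost model commonly used when comparing DNN hardware:
# multiply-accumulate (MAC) and weight counts for one convolutional layer.
def conv_layer_cost(c_in, c_out, k, h_out, w_out):
    macs = k * k * c_in * c_out * h_out * w_out   # one MAC per weight per output pixel
    weights = k * k * c_in * c_out                # ignoring biases
    return macs, weights

# Assumed example: 3x3 conv, 64 -> 128 channels, 56x56 output feature map
macs, weights = conv_layer_cost(64, 128, 3, 56, 56)
print(f"{macs/1e6:.1f} M MACs, {weights/1e3:.1f} K weights")  # ~231.2 M MACs, ~73.7 K weights
```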
As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increment of the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade (2001-2011) based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
Software-defined networking (SDN) represents a promising networking architecture that combines central management with network programmability. SDN decouples the control plane from the data plane and moves network management to a central point, called the controller, which can be programmed and used as the brain of the network. Recently, the research community has shown an increasing tendency to benefit from recent advances in the artificial intelligence (AI) field to provide learning abilities and better decision making in SDN. In this study, we provide a detailed overview of the efforts to incorporate AI into SDN. Our study shows that research efforts have focused mainly on three main subfields of AI: machine learning, metaheuristics, and fuzzy inference systems. Accordingly, in this work we investigate their different application areas and potential uses, as well as the improvements achieved by including AI-based techniques in the SDN paradigm.
Unmanned aerial vehicles (UAVs) have recently developed rapidly to facilitate a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems featuring synergic cooperation between computational and physical capabilities that can interact with humans through several new mechanisms. The main advantages of using UAVs in CPS applications are their unique features, including their mobility, dynamism, ease of deployment, adaptive altitude, flexibility, adjustability, and effectiveness in evaluating real-world conditions anywhere at any time. Furthermore, from a technological perspective, UAVs are expected to be a vital element of advanced CPS developments. Therefore, in this survey, we aim to identify the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects that span the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also examined to show how these practical technologies can facilitate CPS applications. We present and propose state-of-the-art algorithms that address the design challenges with quantitative and qualitative methods, and map these challenges onto important CPS applications to draw insightful conclusions about the challenges of each application. Finally, we summarize potential new directions and ideas that could shape future research in these areas.
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also present a research outlook consisting of a set of promising directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
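To make the computation-offloading trade-off concrete, here is a hedged, textbook-style sketch (not a model taken from this survey) in which a device offloads a task only when doing so reduces both latency and device energy; all task parameters, the uplink rate, and the CPU energy constant are assumptions.

```python
# Toy offloading decision: compare local execution against edge execution.
def local_cost(cycles, f_local, k=1e-27):
    latency = cycles / f_local                    # seconds to execute locally
    energy = k * cycles * f_local ** 2            # simple dynamic CPU energy model
    return latency, energy

def offload_cost(bits, rate, cycles, f_edge, p_tx=0.5):
    t_up = bits / rate                            # uplink transmission time
    latency = t_up + cycles / f_edge              # transmit, then execute at the edge server
    energy = p_tx * t_up                          # device only spends energy transmitting
    return latency, energy

task_bits, task_cycles = 2e6, 1e9                 # assumed: 2 Mbit input, 1 Gcycle workload
t_loc, e_loc = local_cost(task_cycles, f_local=1e9)
t_off, e_off = offload_cost(task_bits, rate=20e6, cycles=task_cycles, f_edge=10e9)
print("offload" if (t_off < t_loc and e_off < e_loc) else "compute locally", t_loc, t_off)
```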
Artificial intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and the intelligence community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action. Developing an end-to-end artificial intelligence system requires parallel development of different pieces that must work together to provide capabilities that can be used by decision makers, warfighters, and analysts. These pieces include data collection, data conditioning, algorithms, computing, robust artificial intelligence, and human-machine teaming. While much of today's popular press surrounds advances in algorithms and computing, most modern AI systems leverage advances across many different fields. Further, while certain components may not be as visible to end users as others, our experience has shown that each of these interrelated components plays an important role in the success or failure of an AI system. This article is meant to highlight many of the technologies involved in an end-to-end AI system. Its goal is to provide readers with an overview of terminology, technical details, and recent highlights from academia, industry, and government. Where possible, we point to relevant resources that can be used for further reading and understanding.
This report describes 18 projects that explored how commercial cloud computing services could be used for scientific computing at national laboratories. The demonstrations ranged from deploying proprietary software in a cloud environment to leveraging established cloud-based analytics workflows for processing scientific datasets. In aggregate, the projects were highly successful, and they collectively suggest that cloud computing can be a valuable computational resource for scientific computing at national laboratories.
The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations.
Due to recent advances in digital technologies and the availability of credible data, an era of artificial intelligence has emerged, and deep learning has demonstrated its ability and effectiveness in solving complex learning problems. In particular, convolutional neural networks (CNNs) have demonstrated their effectiveness in image detection and recognition applications. However, they require intensive CPU operations and memory bandwidth, which prevents general-purpose CPUs from achieving the desired performance levels. Consequently, hardware accelerators using application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and graphics processing units (GPUs) have been employed to improve the throughput of CNNs. More precisely, FPGAs have recently been adopted to accelerate the implementation of deep learning networks, thanks to their ability to maximize parallelism as well as their energy efficiency. In this paper, we review existing techniques for accelerating deep learning networks on FPGAs. We highlight the key features employed by the various techniques to improve acceleration performance. In addition, we provide recommendations for enhancing the utilization of FPGAs for CNN acceleration. The techniques investigated in this paper represent recent trends in FPGA-based accelerators of deep learning networks. This review is thus expected to guide the future development of efficient hardware accelerators and to be useful to deep learning researchers.
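As a hedged illustration of why FPGAs suit CNN workloads, the plain nested-loop convolution below marks the loops that accelerator designs typically unroll into parallel MAC units or pipeline; the loop ordering shown is only one of many design choices such techniques explore.

```python
# Plain nested-loop convolution, written out to show where FPGA accelerators
# commonly exploit parallelism: channel/kernel loops are unrolled into parallel
# MAC units, while the output-pixel loops are pipelined.
import numpy as np

def conv2d(x, w):                 # x: (C_in, H, W), w: (C_out, C_in, K, K), stride 1, no padding
    c_in, h, wdt = x.shape
    c_out, _, k, _ = w.shape
    out = np.zeros((c_out, h - k + 1, wdt - k + 1))
    for co in range(c_out):                       # candidate for loop unrolling on an FPGA
        for ci in range(c_in):                    # candidate for loop unrolling on an FPGA
            for kh in range(k):
                for kw in range(k):
                    for oh in range(h - k + 1):   # pixel loops: pipelined in hardware
                        for ow in range(wdt - k + 1):
                            out[co, oh, ow] += w[co, ci, kh, kw] * x[ci, oh + kh, ow + kw]
    return out

y = conv2d(np.random.rand(3, 8, 8), np.random.rand(4, 3, 3, 3))   # small sanity-check call
```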
Since the proposal of a fast learning algorithm for deep belief networks in 2006, deep learning techniques have drawn ever-increasing research interest because of their inherent capability of overcoming the drawbacks of traditional algorithms that depend on hand-designed features. Deep learning approaches have also been found to be suitable for big data analysis, with successful applications to computer vision, pattern recognition, speech recognition, natural language processing, and recommendation systems. In this paper, we discuss some widely-used deep learning architectures and their practical applications. An up-to-date overview is provided on four deep learning architectures, namely, the autoencoder, convolutional neural network, deep belief network, and restricted Boltzmann machine. Different types of deep neural networks are surveyed and recent progress is summarized. Applications of deep learning techniques in some selected areas (speech recognition, pattern recognition and computer vision) are highlighted. A list of future research topics is finally given with clear justifications.
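As a minimal sketch of one of the four architectures named above, the following autoencoder compresses and reconstructs its input; the layer sizes and random placeholder data are assumptions for illustration only.

```python
# Minimal autoencoder sketch: learn to reconstruct the input through a bottleneck.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),    # encoder: compress a 28x28 input to a 64-d code
    nn.Linear(64, 784), nn.Sigmoid()  # decoder: reconstruct the input from the code
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.rand(256, 784)              # placeholder for real data such as flattened images

for epoch in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)   # reconstruction objective
    loss.backward()
    opt.step()
```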
This article presents a comprehensive literature review of applications of deep reinforcement learning in communications and networking. Modern networks, such as Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under the uncertainty of the network environment. Reinforcement learning has been used effectively to enable network entities to obtain optimal policies, including, for example, decisions or actions given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy within a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been developed to overcome this shortcoming. In this survey, we first give a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. These issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning to traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying deep reinforcement learning.
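A compact, hedged sketch of the deep reinforcement learning idea applied to dynamic network access (one of the issues listed above): a Q-network picks one of several channels and learns from a toy reward. Replay buffers and target networks, which practical DQN agents rely on, are omitted for brevity; the environment and reward rule are assumptions.

```python
# Compact DQN-style sketch for dynamic network access (illustrative only):
# the agent picks one of N channels; reward is 1 if the chosen channel is idle.
import random
import torch
import torch.nn as nn

N_CHANNELS = 4
q_net = nn.Sequential(nn.Linear(N_CHANNELS, 32), nn.ReLU(), nn.Linear(32, N_CHANNELS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1

state = torch.rand(N_CHANNELS)                        # assumed: sensed occupancy of each channel
for step in range(500):
    if random.random() < eps:                         # epsilon-greedy exploration
        action = random.randrange(N_CHANNELS)
    else:
        action = int(q_net(state).argmax())
    reward = 1.0 if state[action] < 0.5 else 0.0      # idle channel -> successful transmission
    next_state = torch.rand(N_CHANNELS)               # toy environment transition

    target = reward + gamma * q_net(next_state).max().detach()
    loss = (q_net(state)[action] - target) ** 2       # temporal-difference error
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = next_state
```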
Deep learning (DL) has achieved tremendous success in the recent past, driving state-of-the-art results in various fields such as image recognition and natural language processing. One reason for this success is the increasing size of DL models and the availability of vast amounts of training data. To keep improving the performance of DL, it is necessary to increase the scalability of DL systems. In this survey, we perform a broad and thorough investigation of the challenges, techniques, and tools for scalable DL on distributed infrastructures. This includes infrastructures for DL, methods for parallel DL training, multi-tenant resource scheduling, and the management of training and model data. Further, we analyze and compare 11 current open-source DL frameworks and tools and investigate which techniques are commonly implemented in practice. Finally, we highlight future research trends in DL systems that deserve further investigation.
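As a hedged sketch of synchronous data-parallel training, one of the parallelization methods such surveys cover, the code below simulates several workers that compute gradients on their own data shards and averages them before each update; real systems would run the workers on separate devices and use an all-reduce primitive instead of this in-process loop.

```python
# Sketch of synchronous data-parallel training: per-worker gradients on data
# shards are averaged (simulated all-reduce) to update one shared model.
import copy
import torch
import torch.nn as nn

model = nn.Linear(20, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
N_WORKERS = 4
x, y = torch.rand(256, 20), torch.rand(256, 1)                # placeholder dataset
shards = list(zip(x.chunk(N_WORKERS), y.chunk(N_WORKERS)))    # one data shard per worker

for step in range(100):
    grads = []
    for xs, ys in shards:                                     # in practice, these run in parallel
        replica = copy.deepcopy(model)                        # each worker holds a model replica
        loss = nn.functional.mse_loss(replica(xs), ys)
        loss.backward()
        grads.append([p.grad for p in replica.parameters()])
    opt.zero_grad()
    for p, *worker_grads in zip(model.parameters(), *grads):  # average gradients across workers
        p.grad = torch.stack(worker_grads).mean(dim=0)
    opt.step()
```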
Here, we review aspects of leading-edge research and innovation that leverage big data and machine learning (ML), two computer science fields that combine to produce machine intelligence. ML can accelerate the solution of intricate chemical problems and can even solve problems that would otherwise be intractable. However, the potential benefits of ML come at the cost of big data production; that is, in order to learn, the algorithms require large amounts of data of various origins, from materials properties to sensor data. In this review, we present a roadmap of future developments, with a focus on materials discovery and chemical sensing, both prominent research areas for ML in the context of big data, and set within the context of the Internet of Things (IoT). In addition to outlining recent developments, we elaborate on the conceptual and practical limitations of big data and ML applied to chemistry, outline the processes involved, discuss pitfalls, and review cases of success and failure.