In this tutorial paper, we look into the evolution and prospects of network architecture and propose a novel conceptual architecture for sixth generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). Holistic network virtualization consists of network slicing and digital twin, addressing service provision and service demand, respectively, to incorporate service-centric and user-centric networking. Pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between the digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence of 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
In this paper, we investigate joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability of DL, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further improve the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet significantly outperforms traditional algorithms in terms of symbol error rate.
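To make the unfolding idea above concrete, here is a minimal sketch of a classical AMP recursion written as a fixed number of "layers", where the per-layer soft-threshold values stand in for parameters that an unfolded network such as DL-mAMPnet would learn by backpropagation. It is an illustrative toy for generic sparse recovery under assumed data; it does not model the pilot-activity correlation, the large-scale fading coefficients, or the refinement module described in the abstract.

```python
import numpy as np

def soft_threshold(v, theta):
    # Element-wise soft-thresholding, the denoiser used in standard AMP.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_amp(y, A, thetas):
    # One AMP iteration per entry of `thetas`; in an unfolded network each
    # theta would be a trainable parameter fitted by backpropagation.
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for theta in thetas:                         # each loop body = one "layer"
        r = x + A.T @ z                          # pseudo-data (matched filter + state)
        x = soft_threshold(r, theta)             # denoising step
        onsager = (np.count_nonzero(x) / m) * z  # Onsager correction term
        z = y - A @ x + onsager                  # residual update
    return x

# Toy usage: recover a sparse vector from noisy compressed measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = unrolled_amp(y, A, thetas=[0.4, 0.3, 0.2, 0.1, 0.05])
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```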
Collaboration among industrial Internet of Things (IoT) devices and edge networks is essential to support computation-intensive deep neural network (DNN) inference services that require low delay and high accuracy. Sampling rate adaptation, which dynamically configures the sampling rates of industrial IoT devices according to network conditions, is key to minimizing the service delay. In this paper, we investigate the collaborative DNN inference problem in industrial IoT networks. To capture channel variation and task arrival randomness, we formulate the problem as a constrained Markov decision process (CMDP). Specifically, sampling rate adaptation, inference task offloading, and edge computing resource allocation are jointly considered to minimize the average service delay while guaranteeing the long-term accuracy requirements of different inference services. Since the CMDP cannot be directly solved by general reinforcement learning (RL) algorithms due to the intractable long-term constraints, we first transform the CMDP into an MDP by leveraging the Lyapunov optimization technique. Then, a deep RL-based algorithm is proposed to solve the MDP. To expedite the training process, an optimization subroutine is embedded in the proposed algorithm to directly obtain the optimal edge computing resource allocation. Extensive simulation results demonstrate that the proposed RL-based algorithm can significantly reduce the average service delay while preserving long-term inference accuracy with a high probability.
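The Lyapunov step can be pictured with a small sketch: the long-term accuracy constraint is tracked by a virtual queue, and the per-slot reward combines the delay penalty with the queue-weighted constraint violation (a drift-plus-penalty form). The variable names, the trade-off parameter V, and the toy numbers below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def lyapunov_reward(delay, accuracy, acc_min, queue, V=10.0):
    # `queue` is a virtual queue tracking accumulated violation of the
    # long-term constraint E[accuracy] >= acc_min; V trades off delay
    # minimization against constraint satisfaction (illustrative values).
    violation = acc_min - accuracy            # > 0 means this slot violates the constraint
    new_queue = max(queue + violation, 0.0)   # virtual queue update
    reward = -(V * delay + queue * violation) # drift-plus-penalty style reward to maximize
    return reward, new_queue

# Toy rollout: the agent's action would affect (delay, accuracy) each slot.
rng = np.random.default_rng(1)
queue, acc_min = 0.0, 0.9
for t in range(5):
    delay, accuracy = rng.uniform(5, 20), rng.uniform(0.85, 0.95)
    reward, queue = lyapunov_reward(delay, accuracy, acc_min, queue)
    print(f"slot {t}: reward={reward:.2f}, virtual queue={queue:.3f}")
```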
A large number of studies on Graph Outlier Detection (GOD) have emerged in recent years due to its wide applications, in which Unsupervised Node Outlier Detection (UNOD) on attributed networks is an important area. UNOD focuses on detecting two kinds of typical outliers in graphs: structural outliers and contextual outliers. Most existing works conduct experiments on datasets with injected outliers. However, we find that the most widely used outlier injection approach has a serious data leakage issue. By exploiting this data leakage alone, a simple approach can achieve state-of-the-art performance in detecting outliers. In addition, we observe that most existing algorithms suffer a performance drop under varied injection settings. The other major issue is achieving balanced detection performance between the two types of outliers, which has not been considered in existing studies. In this paper, we analyze the cause of the data leakage issue in depth, since the injection approach is a building block for advancing UNOD. Moreover, we devise a novel variance-based model to detect structural outliers, which significantly outperforms existing algorithms under different injection settings. On top of this, we propose a new framework, Variance-based Graph Outlier Detection (VGOD), which combines our variance-based model with an attribute reconstruction model to detect outliers in a balanced way. Finally, we conduct extensive experiments to demonstrate the effectiveness and efficiency of VGOD. The results on five real-world datasets validate that VGOD achieves not only the best performance in detecting outliers but also balanced detection performance between structural and contextual outliers. Our code is available at https://github.com/goldenNormal/vgod-github.
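As a rough illustration of a variance-based structural-outlier score (not the exact VGOD model), the sketch below scores each node by the variance of its neighbors' embeddings: a node wired into several communities has a spread-out neighborhood and hence a high score. The toy graph and embeddings are made up for illustration.

```python
import numpy as np

def variance_outlier_scores(adj, emb):
    # Score node i by the total variance of its neighbors' embedding vectors.
    n = adj.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        if len(nbrs) < 2:
            continue
        nbr_emb = emb[nbrs]                    # neighbors' embedding vectors
        scores[i] = nbr_emb.var(axis=0).sum()  # variance summed over dimensions
    return scores

# Toy graph: two tight communities plus one node (index 6) wired into both.
adj = np.zeros((7, 7), dtype=int)
for a, b in [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5),(6,0),(6,1),(6,3),(6,4)]:
    adj[a, b] = adj[b, a] = 1
emb = np.array([[0,0],[0.1,0],[0,0.1],[5,5],[5.1,5],[5,5.1],[2.5,2.5]], dtype=float)
print(variance_outlier_scores(adj, emb))  # node 6 gets the largest score
```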
In this paper, we design a resource management scheme to support stateful applications, which will be prevalent in 6G networks. Different from stateless applications, stateful applications require context data while executing computing tasks from user terminals (UTs). Using a multi-tier computing paradigm with servers deployed at the core network, gateways, and base stations to support stateful applications, we aim to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost of reconfiguring resource reservation. The coupling among different resources and the impact of UT mobility create challenges in resource management. To address these challenges, we develop digital twin (DT) empowered network planning with two elements, i.e., multi-resource reservation and resource reservation reconfiguration. First, DTs are designed for collecting UT status data, based on which UTs are grouped according to their mobility patterns. Second, an algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Last, a meta-learning-based approach is developed to reconfigure resource reservation to balance network resource usage and reconfiguration cost. Simulation results demonstrate that the proposed DT-empowered network planning outperforms benchmark frameworks by using fewer resources and incurring lower reconfiguration costs.
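A toy sketch of the planning idea follows: user terminals are grouped by a simple mobility feature, and each group's reservation is sized at a demand percentile. The grouping criterion, the percentile, and all data are illustrative assumptions; the paper's DT-based status collection, customized reservation algorithm, and meta-learning reconfiguration are not reproduced here.

```python
import numpy as np

def group_and_reserve(speeds, demands, n_groups=3, percentile=95):
    # Group UTs into equal-size bands of average speed, then size each
    # group's resource reservation at a percentile of its aggregate demand.
    order = np.argsort(speeds)
    groups = np.array_split(order, n_groups)
    reservations = []
    for g in groups:
        group_demand = demands[g].sum(axis=0)   # aggregate demand over time slots
        reservations.append(np.percentile(group_demand, percentile))
    return groups, np.array(reservations)

rng = np.random.default_rng(2)
speeds = rng.uniform(0, 30, size=60)            # average speed per UT (m/s)
demands = rng.poisson(5, size=(60, 288))        # per-UT demand over 288 time slots
groups, res = group_and_reserve(speeds, demands)
print("reserved capacity per group:", res)
```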
The network alignment task, which aims to identify corresponding nodes across different networks, is of great significance to many downstream applications. Unsupervised alignment methods, which require no labeled anchor links, have been attracting increasing attention. However, the topological consistency assumptions defined by existing methods are generally low-order and less accurate, because only edge-level topological patterns are considered, which is especially risky in an unsupervised setting. To shift the focus of the alignment process from low-order to higher-order topological consistency, in this paper we propose a fully unsupervised network alignment framework named HTC. The proposed higher-order topological consistency is formulated based on edge orbits and is incorporated into the information aggregation process of a graph convolutional network, so that alignment consistency is transformed into the similarity of node embeddings. Furthermore, the encoder is trained to be multi-orbit-aware and then refined to identify more trusted anchor links. By integrating consistencies of all different orders, node correspondence can be comprehensively evaluated. In addition to sound theoretical analysis, the superiority of the proposed method is also empirically demonstrated through extensive experimental evaluation. On three pairs of real-world datasets and two pairs of synthetic datasets, HTC consistently outperforms a wide variety of unsupervised and supervised methods with the least or comparable time consumption. It also exhibits robustness to structural noise thanks to its multi-orbit-aware training mechanism.
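The final matching step, aligning nodes by the similarity of their aggregated embeddings, can be sketched with a plain GCN-style propagation as below. The edge-orbit-based higher-order consistency and the multi-orbit-aware training that distinguish HTC are omitted; this is only an illustrative baseline on assumed toy inputs.

```python
import numpy as np

def gcn_embed(adj, feat):
    # One symmetric-normalized aggregation step, D^{-1/2}(A + I)D^{-1/2} X.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return (a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]) @ feat

def align_by_similarity(emb1, emb2):
    # Match each node of network 1 to its most similar node in network 2
    # by cosine similarity of embeddings.
    e1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    e2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    return (e1 @ e2.T).argmax(axis=1)

# Toy example: the second network is a permuted copy of the first.
rng = np.random.default_rng(3)
adj1 = (rng.random((8, 8)) < 0.4).astype(float)
adj1 = np.triu(adj1, 1); adj1 += adj1.T
feat1 = rng.random((8, 4))
perm = rng.permutation(8)
adj2, feat2 = adj1[np.ix_(perm, perm)], feat1[perm]
pred = align_by_similarity(gcn_embed(adj1, feat1), gcn_embed(adj2, feat2))
print("exact-copy alignment recovered:", np.array_equal(perm[pred], np.arange(8)))
```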
Supported by computing and communication technologies, the Metaverse is expected to bring users unprecedented service experiences. However, the growth in the number of Metaverse users places a heavy demand on network resources, especially for Metaverse services that are based on graphical extended reality and require rendering large numbers of virtual objects. To make efficient use of network resources and improve the quality of experience (QoE), we design an attention-aware network resource allocation scheme to achieve customized Metaverse services. The aim is to allocate more network resources to the virtual objects in which users are more interested. We first discuss several key techniques related to Metaverse services, including QoE analysis, eye tracking, and remote rendering. We then review existing datasets and propose the user-object-attention level (UOAL) dataset, which contains the ground-truth attention of 30 users on 96 objects across 1,000 images. A tutorial on how to use UOAL is provided. With the help of UOAL, we propose an attention-aware network resource allocation algorithm with two steps, i.e., attention prediction and QoE maximization. In particular, we outline the design of two types of attention prediction methods, i.e., interest-aware and time-aware prediction. Using the predicted user-object-attention values, network resources such as the rendering capacity of edge devices can be optimally allocated to maximize QoE. Finally, we present promising research directions related to Metaverse services.
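As a simple illustration of the QoE-maximization step: if QoE is modeled as an attention-weighted logarithmic utility (an assumption, not necessarily the paper's model), the optimal split of a fixed rendering capacity is simply proportional to the predicted attention values.

```python
import numpy as np

def allocate_rendering(attention, capacity):
    # For QoE = sum_i a_i * log(r_i) subject to sum_i r_i = capacity,
    # the maximizer is the proportional split r_i = capacity * a_i / sum(a).
    attention = np.asarray(attention, dtype=float)
    return capacity * attention / attention.sum()

predicted_attention = [0.5, 0.3, 0.15, 0.05]   # e.g., from interest-aware prediction
print(allocate_rendering(predicted_attention, capacity=100.0))
# -> [50. 30. 15.  5.]  units of rendering capacity per virtual object
```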
With the development of traffic prediction techniques, spatio-temporal prediction models have attracted increasing attention from both academia and industry. However, most existing studies focus on reducing the prediction error of the model while ignoring the error caused by the uneven spatial distribution of events within a region. In this paper, we study the region partitioning problem, namely the optimal grid size selection problem (OGSS), which aims to minimize the true error of a spatio-temporal prediction model by selecting the optimal grid size. To solve OGSS, we analyze the upper bound of the true error of the spatio-temporal prediction model and minimize the true error by minimizing its upper bound. Through in-depth analysis, we find that the upper bound of the true error first decreases and then increases as the number of model grids grows from one to its maximum allowable value. We then propose two algorithms, namely a ternary search method and an iterative method, to automatically find the optimal grid size. Finally, experiments verify that the prediction error follows the same trend as its upper bound, and that the upper bound of the true error first decreases and then increases with respect to the number of model grids. Moreover, in one case, selecting the optimal grid size improves the order dispatching results of a state-of-the-art prediction algorithm by up to 13.6%, which demonstrates the effectiveness of our method in tuning the region partition for spatio-temporal prediction models.
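Because the error upper bound first decreases and then increases in the number of grids, a ternary search over the grid count suffices. Below is a minimal sketch with a stand-in unimodal bound; the actual bound from the paper is not reproduced.

```python
def ternary_search_min(f, lo, hi):
    # Minimize a strictly unimodal integer function f on [lo, hi].
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            hi = m2          # the minimum cannot lie to the right of m2
        else:
            lo = m1          # the minimum cannot lie to the left of m1
    return min(range(lo, hi + 1), key=f)

# Toy unimodal "error upper bound" as a stand-in for the paper's bound.
upper_bound = lambda k: 1000.0 / k + 0.5 * k
best_k = ternary_search_min(upper_bound, 1, 500)
print(best_k, upper_bound(best_k))   # minimum near k = sqrt(2000) ~ 45
```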
This paper proposes semantic clustering based deduction learning by mimicking the learning and thinking process of the human brain. Humans can make judgments based on experience and cognition; as a result, no one would recognize an unknown animal as a car. Inspired by this observation, we propose to train deep learning models with a clustering prior that guides the model to learn the ability of semantic deduction and summarization from classification attributes, such as a cat belonging to animals while a car is related to vehicles. In particular, if an image is labeled as a cat, the model is trained to know that "this image is totally not an outlier of the animal class". The proposed approach realizes high-level clustering in the semantic space, enabling the model to deduce the relations among various classes during the learning process. In addition, this paper introduces a semantic-prior-based random search for opposite labels to ensure the smooth distribution of the clustering and the robustness of the classifier. The proposed method is supported both theoretically and empirically through extensive experiments. We compare the performance with state-of-the-art classifiers on popular benchmarks, and the generalization ability is verified by adding noisy labels to the datasets. Experimental results demonstrate the superiority of the proposed method.
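One way to picture the semantic-clustering prior is to attach a superclass target to every fine-grained label, so that a model can also be supervised with "this is an animal, not a vehicle". The sketch below only builds such auxiliary targets from a hypothetical class-to-superclass mapping; it is not the paper's loss function or its opposite-label random search.

```python
import numpy as np

# Hypothetical class-to-supercategory prior used purely for illustration.
SUPERCLASS = {"cat": "animal", "dog": "animal", "car": "vehicle", "truck": "vehicle"}
CLASSES = list(SUPERCLASS)
SUPERS = sorted(set(SUPERCLASS.values()))

def semantic_targets(labels):
    # Turn fine-grained labels into (class index, superclass index) pairs so a
    # model could be trained with an auxiliary semantic-deduction signal.
    y_cls = np.array([CLASSES.index(l) for l in labels])
    y_sup = np.array([SUPERS.index(SUPERCLASS[l]) for l in labels])
    return y_cls, y_sup

print(semantic_targets(["cat", "car", "dog"]))
# -> class ids [0, 2, 1], superclass ids [0, 1, 0]
```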
Cognitive diagnosis, the goal of which is to obtain students' proficiency on specific knowledge concepts, is a fundamental task in intelligent education systems. Previous works typically represent each student as a trainable knowledge proficiency vector, which fails to capture the relations among knowledge concepts and the basic profile (e.g., memory or comprehension) of students. In this paper, we propose a student representation method that explores the hierarchical relations between knowledge concepts and student embeddings. Specifically, since proficiency on parent knowledge concepts reflects the correlation among knowledge concepts, we obtain the first knowledge proficiency through a parent-child concept projection layer. In addition, a low-dimensional dense vector is adopted as the embedding of each student, and the second knowledge proficiency is obtained through a fully connected layer. We then combine the above two proficiency vectors to obtain the final representation of students. Experiments demonstrate the effectiveness of the proposed representation method.
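A loose sketch of combining the two proficiency estimates is given below, with made-up shapes and a simple averaging fusion: one estimate is projected from proficiency on parent concepts through a parent-child projection matrix, the other is produced from a dense student embedding by a fully connected layer. It is an illustrative assumption, not the paper's exact model.

```python
import numpy as np

def student_proficiency(parent_prof, proj, student_emb, W, b):
    # First estimate: parent-child concept projection of parent proficiencies.
    prof_from_parents = parent_prof @ proj
    # Second estimate: fully connected layer (with sigmoid) on the student embedding.
    prof_from_embed = 1.0 / (1.0 + np.exp(-(student_emb @ W + b)))
    # Simple fusion of the two proficiency vectors.
    return 0.5 * (prof_from_parents + prof_from_embed)

rng = np.random.default_rng(4)
n_parent, n_child, d = 4, 10, 8
parent_prof = rng.random(n_parent)                   # proficiency on parent concepts
proj = rng.random((n_parent, n_child))
proj /= proj.sum(axis=0)                             # column-normalized projection
student_emb = rng.standard_normal(d)                 # dense student embedding
W, b = rng.standard_normal((d, n_child)), np.zeros(n_child)
print(student_proficiency(parent_prof, proj, student_emb, W, b).shape)  # (10,)
```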