One of the open problems in machine learning is whether every set family of VC-dimension $d$ admits a sample compression scheme of size $O(d)$. In this paper, we study this question for balls in graphs. For balls of arbitrary radius $r$, we design proper sample compression schemes of size $2$ for trees, of size $3$ for cycles, of size $4$ for interval graphs, of size $6$ for trees of cycles, and of size $22$ for cube-free median graphs. For balls of a given radius, we design proper labeled sample compression schemes of size $2$ for trees and of size $4$ for interval graphs. We also design approximate sample compression schemes of size $2$ for balls of $\delta$-hyperbolic graphs.
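To fix ideas, here is a minimal sketch of the classical size-$2$ compression scheme for intervals on the real line (balls in a path graph are exactly intervals); this is an illustrative toy with hypothetical helper names, not one of the schemes designed in the paper:

```python
# Size-2 sample compression for intervals: compress a realizable labeled
# sample to at most two points, from which a consistent hypothesis is rebuilt.

def compress(sample):
    """Keep only the leftmost and rightmost positively labeled points."""
    positives = [x for x, label in sample if label == 1]
    if not positives:
        return []  # empty compression set encodes the empty interval
    return [min(positives), max(positives)]

def reconstruct(compressed):
    """Predict 1 exactly on the interval spanned by the kept points."""
    if not compressed:
        return lambda x: 0
    lo, hi = min(compressed), max(compressed)
    return lambda x: 1 if lo <= x <= hi else 0

sample = [(1, 0), (2, 1), (4, 1), (7, 1), (9, 0)]
h = reconstruct(compress(sample))
assert all(h(x) == label for x, label in sample)  # consistent on the sample
```

The scheme is proper because the reconstructed hypothesis is itself an interval, i.e., a member of the original concept class.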
Technology advancements in wireless communications and high-performance Extended Reality (XR) have empowered the development of the Metaverse. The demand for Metaverse applications, and hence for real-time digital twinning of real-world scenes, is increasing. Nevertheless, the replication of 2D physical-world images into 3D virtual-world scenes is computationally intensive and requires computation offloading. The disparity in transmitted scene dimension (2D as opposed to 3D) leads to asymmetric data sizes in uplink (UL) and downlink (DL). To ensure the reliability and low latency of the system, we consider an asynchronous joint UL-DL scenario in which, in the UL stage, the smaller-size physical-world scenes captured by multiple extended reality users (XUs) are uploaded to the Metaverse Console (MC) to be constructed and rendered. In the DL stage, the larger-size 3D virtual-world scenes need to be transmitted back to the XUs. The decisions pertaining to computation offloading and channel assignment are optimized in the UL stage, and the MC optimizes power allocation for users assigned a channel in the UL transmission stage. Several challenges arise from this setting: (i) an interactive multi-process chain, specifically an Asynchronous Markov Decision Process (AMDP), (ii) joint optimization across multiple processes, and (iii) high-dimensional objective functions, i.e., hybrid reward scenarios. To address these challenges, we design a novel multi-agent reinforcement learning algorithm structure, namely Asynchronous Actors Hybrid Critic (AAHC). Extensive experiments demonstrate that, compared to the proposed baselines, AAHC obtains better solutions with favorable training time.
Echo State Networks (ESN) are a type of Recurrent Neural Network that yields promising results in representing time series and nonlinear dynamic systems. Although they are equipped with a very efficient training procedure, Reservoir Computing strategies such as the ESN require high-order networks, i.e., a large number of internal units, resulting in a number of states that is orders of magnitude higher than the number of model inputs and outputs. This not only makes the computation of a time step more costly, but may also pose robustness issues when applying ESNs to problems such as Model Predictive Control (MPC) and other optimal control problems. One way to circumvent this is through Model Order Reduction strategies such as the Proper Orthogonal Decomposition (POD) and its variants (POD-DEIM), whereby we find an equivalent lower-order representation of an already-trained high-dimensional ESN. The objective of this work is to investigate and analyze the performance of POD methods in Echo State Networks, evaluating their effectiveness. To this end, we evaluate the Memory Capacity (MC) of the POD-reduced network in comparison to the original (full-order) ESN. We also perform experiments on two different numerical case studies: a NARMA10 difference equation and an oil platform containing two wells and one riser. The results show that there is little loss of performance when comparing the original ESN to its POD-reduced counterpart, and that the performance of a POD-reduced ESN tends to be superior to that of a normal ESN of the same size. We also attain speedups of around $80\%$ in comparison to the original ESN.
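As a rough illustration of the POD step (a synthetic rank-3 snapshot matrix stands in for the reservoir states of a trained ESN; the setup and names are assumptions, not the paper's code), one builds the reduced basis from an SVD of collected states and projects onto the leading modes:

```python
import numpy as np

# Minimal POD sketch: stack states as snapshot columns, take the SVD,
# and keep the modes that capture almost all of the snapshot energy.

rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((100, 3)))   # spatial modes
V0, _ = np.linalg.qr(rng.standard_normal((200, 3)))   # temporal coefficients
states = U0 @ np.diag([10.0, 5.0, 2.0]) @ V0.T        # 100 states x 200 steps

U, s, _ = np.linalg.svd(states, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1           # modes for 99.9% energy
Phi = U[:, :k]                                        # POD basis

reduced = Phi.T @ states                              # k-dim trajectory
err = np.linalg.norm(states - Phi @ reduced) / np.linalg.norm(states)
print(k)  # 3
```

A POD-DEIM variant would additionally approximate the nonlinear reservoir update in the reduced coordinates, which is what makes the reduced time step cheap.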
This work demonstrates the ability to produce readily interpretable statistical metrics for model fit, fixed-effect covariate coefficients, and prediction confidence. Importantly, it compares four suitable and commonly applied epistemic UQ approaches, namely BNN, SWAG, MC dropout, and ensemble approaches, in their ability to calculate these statistical metrics for the ARMED MEDL models. In our experiment on AD prognosis, not only do the UQ methods provide these benefits, but several of them also maintain the high performance of the original ARMED method, and some even provide a modest (though not statistically significant) performance improvement. The ensemble models, especially the ensemble method with 90% subsampling, performed well across all the metrics we tested, with (1) high performance comparable to the non-UQ ARMED model, (2) proper down-weighting of the confound probes, assigning them statistically insignificant p-values, and (3) relatively high calibration of the output prediction confidence. Based on these results, the ensemble approaches, especially with 90% subsampling, provided the best all-round performance for prediction and uncertainty estimation, and achieved our goals of providing statistical significance for model fit, statistically significant covariate coefficients, and confidence in prediction, while maintaining the baseline performance of MEDL using ARMED.
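A hedged sketch of the subsampled-ensemble idea (toy least-squares members on synthetic data, not the ARMED architecture): each member is fit on a random 90% subset, and the spread of member estimates serves as an uncertainty estimate.

```python
import random
import statistics

random.seed(0)
# Synthetic data: y = 2x plus small noise (illustrative, not the AD dataset).
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(50)]

def fit_slope(subset):
    """Least-squares slope through the origin on a data subset."""
    num = sum(x * y for x, y in subset)
    den = sum(x * x for x, _ in subset)
    return num / den

def ensemble(n_members=20, frac=0.9):
    """Fit each member on a random 90% subsample of the data."""
    k = int(len(data) * frac)
    return [fit_slope(random.sample(data, k)) for _ in range(n_members)]

slopes = ensemble()
mean_slope = statistics.mean(slopes)    # ensemble point estimate
uncertainty = statistics.stdev(slopes)  # member disagreement as uncertainty
print(round(mean_slope, 1))  # 2.0
```

The member spread can then feed into significance tests or calibration checks, as done for the covariate coefficients above.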
Recently, there has been a significant amount of interest in satellite telemetry anomaly detection (AD) using neural networks (NN). For AD purposes, current approaches focus on either forecasting or reconstruction of the time series, and they cannot measure the level of reliability or the probability of correct detection. Although Bayesian neural network (BNN)-based approaches are well known for time-series uncertainty estimation, they are computationally intractable. In this paper, we present a tractable approximation of the BNN based on the Monte Carlo (MC) dropout method for capturing the uncertainty in satellite telemetry time series without sacrificing accuracy. For time-series forecasting, we employ an NN consisting of several Long Short-Term Memory (LSTM) layers followed by various dense layers. We apply MC dropout inside each LSTM layer and before the dense layers for uncertainty estimation. With the proposed uncertainty region and a post-processing filter, we can effectively capture the anomaly points. Numerical results show that our proposed time-series AD approach outperforms existing methods in terms of both prediction accuracy and AD performance.
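A toy sketch of the MC-dropout idea (an assumed linear toy model with made-up weights, not the paper's LSTM architecture): keep dropout active at inference, run many stochastic forward passes, and flag observations that fall outside the resulting uncertainty band.

```python
import random
import statistics

random.seed(0)
WEIGHTS = [0.5, 0.5, 0.5, 0.5]  # hypothetical trained weights

def forward(window, p_drop=0.2):
    """One stochastic pass: each weight is dropped with probability p_drop
    (inverted dropout, so the expected output matches the deterministic one)."""
    return sum(0.0 if random.random() < p_drop else w / (1 - p_drop) * x
               for w, x in zip(WEIGHTS, window))

def predict_with_uncertainty(window, T=200):
    """Mean and std over T stochastic forward passes."""
    samples = [forward(window) for _ in range(T)]
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(window, observed, k=3.0):
    """Flag the observation if it lies outside mean +/- k * std."""
    mu, sigma = predict_with_uncertainty(window)
    return abs(observed - mu) > k * sigma

window = [1.0, 1.1, 0.9, 1.0]
mu, sigma = predict_with_uncertainty(window)
print(is_anomaly(window, observed=mu))        # in-band point: False
print(is_anomaly(window, observed=mu + 50))   # far outlier: True
```

In the paper's setting the band would come from dropout inside the LSTM and dense layers, followed by the post-processing filter.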
Only increasing accuracy without considering uncertainty may negatively impact Deep Neural Network (DNN) decision-making and decrease its reliability. This paper proposes five combined preprocessing and post-processing methods for time-series binary classification problems that simultaneously increase the accuracy and reliability of DNN outputs, applied to a 5G UAV security dataset. These techniques use DNN outputs as input parameters and process them in different ways. Two methods use a well-known Machine Learning (ML) algorithm as a complement, and the other three use only the confidence values that the DNN estimates. We compare seven different metrics, namely the Expected Calibration Error (ECE), Maximum Calibration Error (MCE), Mean Confidence (MC), Mean Accuracy (MA), Normalized Negative Log Likelihood (NLL), Brier Score Loss (BSL), and Reliability Score (RS), and the tradeoffs between them to evaluate the proposed hybrid algorithms. First, we show that the eXtreme Gradient Boosting (XGB) classifier might not be reliable for binary classification under the conditions this work presents. Second, we demonstrate that at least one of the proposed methods can achieve better results than classification in the DNN softmax layer. Finally, we show that the proposed methods may improve accuracy and reliability with better uncertainty calibration, based on the assumption that the RS captures the difference between the MC and MA metrics, and that this difference should be zero to increase reliability. For example, Method 3 achieves the best RS of 0.65, even compared to the XGB classifier, which achieves an RS of 7.22.
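For concreteness, here is a minimal sketch of one of the compared metrics, the Expected Calibration Error, under standard equal-width binning (the values are illustrative, not from the paper's dataset):

```python
# ECE: bin predictions by confidence, then average the |accuracy - confidence|
# gap per bin, weighted by the fraction of samples falling in that bin.

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)   # confidence in [0, 1]
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece

# Perfectly calibrated toy case: 0.8-confidence predictions, right 80% of the time.
confs = [0.8] * 10
hits = [1] * 8 + [0] * 2
print(round(expected_calibration_error(confs, hits), 6))  # 0.0
```

A large ECE signals overconfidence (or underconfidence), which is exactly what the RS-based reasoning above tries to drive toward zero.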
Importance sampling (IS) is a powerful Monte Carlo (MC) methodology for approximating integrals, for instance in the context of Bayesian inference. In IS, samples are simulated from a so-called proposal distribution, and the choice of this proposal is key to achieving high performance. In adaptive IS (AIS) methods, a set of proposals is iteratively improved. AIS is a relevant and timely methodology, although many limitations remain to be overcome, e.g., the curse of dimensionality in high-dimensional and multimodal problems. Moreover, the Hamiltonian Monte Carlo (HMC) algorithm has become increasingly popular in machine learning and statistics. HMC has several appealing features, such as its exploratory behavior, especially in high-dimensional targets, where other methods suffer. In this paper, we introduce the novel Hamiltonian Adaptive Importance Sampling (HAIS) method. HAIS implements a two-step adaptive process with parallel HMC chains that cooperate at each iteration. The proposed HAIS efficiently adapts a population of proposals, extracting the advantages of HMC. HAIS can be understood as a particular instance of the generic layered AIS family with an additional resampling step. HAIS achieves significant performance improvements in high-dimensional problems with respect to state-of-the-art algorithms. We discuss the statistical properties of HAIS and show its high performance in two challenging examples.
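For readers unfamiliar with IS, a minimal self-normalized importance-sampling sketch (static proposal; HAIS additionally adapts its proposals with cooperating parallel HMC chains) looks like:

```python
import math
import random

random.seed(0)
SIGMA_P = 2.0  # proposal std: wider than the target, so weights stay bounded

def target_pdf(x):      # target: N(0, 1) density
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def proposal_pdf(x):    # proposal: N(0, SIGMA_P^2) density
    return math.exp(-0.5 * (x / SIGMA_P) ** 2) / (SIGMA_P * math.sqrt(2 * math.pi))

def snis_estimate(f, n=200_000):
    """Self-normalized IS estimate of E_target[f(X)]."""
    xs = [random.gauss(0.0, SIGMA_P) for _ in range(n)]
    ws = [target_pdf(x) / proposal_pdf(x) for x in xs]
    return sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)

print(round(snis_estimate(lambda x: x * x), 1))  # ≈ 1.0 (E[X^2] under N(0,1))
```

Adaptive schemes such as HAIS replace the fixed proposal with a population of proposals whose locations are updated at each iteration, here via HMC dynamics.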
Federated learning (FL) has become a practical solution to the data-silo problem without compromising user privacy. One of its variants, vertical federated learning (VFL), has recently attracted attention because VFL matches enterprises' demand to leverage more valuable features to build better machine learning models while preserving user privacy. Current work in VFL concentrates on developing specific protection or attack mechanisms for particular VFL algorithms. In this work, we propose an evaluation framework that formulates the privacy-utility evaluation problem. We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms. These evaluations can help FL practitioners select appropriate protection mechanisms under specific requirements. Our evaluation results demonstrate that model inversion and most label inference attacks can be thwarted by existing protection mechanisms, while model completion (MC) attacks are hard to prevent and demand more advanced MC-targeted protection mechanisms. Based on our evaluation results, we offer concrete advice on improving the privacy-preserving capability of VFL systems.
A mesh-generation method that can produce optimal meshes in a single trial was developed using deep reinforcement learning (DRL). Unlike conventional methods, in which the user must either specify meshing parameters or optimize from scratch for each newly given geometry, the developed method employs DRL-based multi-condition (MC) optimization to define the meshing parameters for various geometries. The method involves the following steps: (1) developing a base algorithm for structured meshing of blade segments; (2) formulating an MC optimization problem to optimize the meshing parameters introduced when developing the base algorithm; and (3) developing a DRL-based mesh-generation algorithm by solving the MC optimization problem with DRL. As a result, the developed algorithm was able to successfully generate optimal meshes in a single trial for various blades.
Deep neural networks are becoming increasingly powerful and large, and always require more labeled training data. However, since annotating data is time-consuming, it is now necessary to develop systems that show good performance when learning from limited data. These data must be properly selected to obtain models that remain effective. To this end, the system must be able to determine which data should be annotated to obtain the best results. In this paper, we propose four estimators of the confidence of object detection predictions. The first two are based on Monte Carlo dropout, the third on descriptive statistics, and the last on the detector's posterior probabilities. In an active learning framework, the first three estimators yield a significant improvement in performance for detecting document physical pages and text lines compared with random image selection. We also show that the proposed estimator based on descriptive statistics can replace MC dropout, reducing the computational cost without compromising performance.