Building on the physical features of Raman amplification, we propose a three-step modeling scheme based on neural networks (NN) and linear regression. Compared with a purely NN-based approach, simulations demonstrate higher accuracy, lower data requirements, and lower computational complexity.
We experimentally validate a real-time machine learning framework capable of controlling the pump power values of a Raman amplifier to shape the signal power evolution in two dimensions (2D): frequency and fiber distance. In our setup, the power values of four first-order counter-propagating pumps are optimized to achieve a desired 2D power profile. The pump power optimization framework consists of a convolutional neural network (CNN) followed by a differential evolution (DE) technique, applied online to the amplifier setup to automatically reach the target 2D power profiles. Results for achievable 2D profiles show that the framework is able to keep the maximum absolute error (MAE) between the obtained and the target 2D profiles below 0.5 dB. Moreover, the framework is tested in a multi-objective design scenario whose goal is a 2D profile with a fixed gain level at the end of the span, jointly with minimum spectral excursion over the entire fiber length. In this case, the experimental results assert that, for 2D profiles targeting flat gain levels, the DE achieves a maximum gain deviation of less than 1 dB when the setup is not physically limited in its pump power values. Simulation results further demonstrate that, with sufficient pump power available, a better gain deviation (less than 0.6 dB) can be achieved for higher target gain levels.
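As a rough illustration of the DE stage described above, the following sketch tunes pump powers so that a surrogate forward model matches a target 2D profile. The `surrogate_profile` function and all numbers are illustrative placeholders, not the authors' CNN or amplifier response.

```python
# Minimal sketch of the DE stage: pump powers are tuned so that a surrogate
# forward model (standing in for the CNN + amplifier response) matches a
# target 2D power profile. `surrogate_profile` is a hypothetical placeholder.
import numpy as np
from scipy.optimize import differential_evolution

N_CH, N_DIST = 40, 50                      # frequency x distance grid
target = np.zeros((N_CH, N_DIST))          # desired 2D profile [dB]

def surrogate_profile(pump_powers_mw):
    """Placeholder for the predicted 2D signal power evolution."""
    gain = 0.01 * pump_powers_mw.sum()     # toy response, not the real physics
    return gain * np.ones((N_CH, N_DIST))

def objective(pump_powers_mw):
    profile = surrogate_profile(pump_powers_mw)
    return np.max(np.abs(profile - target))   # minimize the maximum absolute error

bounds = [(0.0, 300.0)] * 4                # four counter-propagating pumps [mW]
result = differential_evolution(objective, bounds, maxiter=50, tol=1e-3, seed=0)
print("optimized pump powers [mW]:", result.x, "MAE [dB]:", result.fun)
```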
The problem of Raman amplifier optimization is studied. A differentiable interpolation function of the Raman gain coefficient is obtained using machine learning (ML), which allows gradient-descent optimization of forward-propagating Raman pumps. The frequencies and powers of an arbitrary number of pumps in a forward-pumping configuration are then optimized for arbitrary data channel loads and span lengths. An experimentally trained ML model of the Raman amplifier is combined with the forward-propagation model to jointly optimize the frequencies and powers of the forward amplifier pumps together with the powers of the backward amplifier pumps. Joint forward and backward amplifier optimization is demonstrated for a 250 km unrepeatered span, achieving a gain flatness of <1 dB over more than 4 THz. The optimized amplifiers are validated using a numerical simulator.
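To make the gradient-descent idea concrete, the sketch below optimizes pump frequencies and powers through a differentiable gain model with PyTorch autograd. The `gain_model` function is a toy stand-in for the ML-interpolated Raman gain coefficient, and all values are illustrative assumptions.

```python
# Sketch of gradient-descent pump optimization through a differentiable gain
# model. `gain_model` is a hypothetical stand-in for the ML-interpolated Raman
# gain coefficient; the real objective would come from the propagation model.
import torch

pump_freq_thz = torch.tensor([206.0, 210.0, 214.0], requires_grad=True)
pump_power_dbm = torch.tensor([20.0, 19.0, 18.0], requires_grad=True)
signal_freq_thz = torch.linspace(186.0, 190.0, 40)   # data channel grid

def gain_model(f_pump, p_dbm, f_sig):
    """Toy differentiable gain surface (placeholder for the ML interpolation)."""
    p_lin = 10 ** (p_dbm / 10)                         # mW
    shift = f_pump.unsqueeze(1) - f_sig.unsqueeze(0)   # pump-signal detuning
    profile = torch.exp(-((shift - 13.2) / 5.0) ** 2)  # Gaussian-like gain lobe
    return (p_lin.unsqueeze(1) * profile).sum(dim=0) * 1e-2   # dB per channel

target_gain_db = torch.full((40,), 10.0)
opt = torch.optim.Adam([pump_freq_thz, pump_power_dbm], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = torch.mean((gain_model(pump_freq_thz, pump_power_dbm, signal_freq_thz)
                       - target_gain_db) ** 2)
    loss.backward()
    opt.step()

gain = gain_model(pump_freq_thz, pump_power_dbm, signal_freq_thz)
print("flatness (max-min) [dB]:", float(gain.max() - gain.min()))
```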
A digital-twin model of a multi-node WDM network is obtained from a single access point. The model is used to predict and optimize the launch power profile for each link in the network, achieving a margin improvement of up to 2.2 dB over unoptimized transmission.
In this paper, a new approach is presented that allows the development of low-complexity neural network (NN)-based equalizers for mitigating impairments in high-speed coherent optical transmission systems. We provide a comprehensive description and comparison of various deep model compression methods that have been applied to feed-forward and recurrent NN designs. Additionally, we evaluate the influence of these strategies on the performance of each NN equalizer. Quantization, weight clustering, pruning, and other cutting-edge strategies for model compression are considered. We also propose and evaluate a Bayesian-optimization-assisted compression, in which the compression hyperparameters are chosen to simultaneously reduce complexity and improve performance. Finally, the trade-off between the complexity of each compression approach and its performance is evaluated using both simulated and experimental data to complete the analysis. By utilizing the optimal compression approach, we show that it is possible to design an NN-based equalizer that achieves better performance than the conventional digital back-propagation (DBP) equalizer with only one step per span. This is accomplished by reducing the number of multipliers used in the NN equalizer after applying the weight clustering and pruning algorithms. Furthermore, we demonstrate that the NN-based equalizer can also achieve superior performance while still maintaining the same complexity as the full electronic chromatic dispersion compensation block. We conclude the analysis by highlighting open questions and existing challenges, as well as future research directions.
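For orientation, the snippet below illustrates two of the compression steps named above (magnitude pruning and weight clustering) on a single weight matrix. The matrix, sparsity level, and number of clusters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of two compression steps applied to one equalizer layer:
# magnitude pruning (zero out small weights) and weight clustering
# (replace surviving weights by a small set of shared centroid values).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128)).astype(np.float32)   # one equalizer layer

# 1) magnitude pruning: drop the 70% smallest-magnitude weights
threshold = np.quantile(np.abs(W), 0.70)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# 2) weight clustering: 16 shared values -> multiplications can be grouped
nonzero = W_pruned[mask].reshape(-1, 1)
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(nonzero)
W_clustered = W_pruned.copy()
W_clustered[mask] = kmeans.cluster_centers_[kmeans.labels_, 0]

print("surviving weights:", int(mask.sum()), "of", W.size,
      "| distinct multiplier values:", len(np.unique(W_clustered[mask])))
```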
Mathematical modeling of lithium-ion batteries (LiBs) is a primary challenge in advanced battery management. This paper proposes two new frameworks that integrate physics-based models with machine learning to achieve high-accuracy modeling of LiBs. The frameworks are characterized by informing the machine learning model with the state information of the physics-based model, thereby enabling a deep integration between physics and machine learning. Based on the frameworks, a series of hybrid models is constructed by combining an electrochemical model and an equivalent circuit model, respectively, with a feedforward neural network. The hybrid models are relatively parsimonious in structure and can provide considerable predictive accuracy over a broad range of C-rates, as shown by extensive simulations and experiments. The study is further extended to aging-aware hybrid modeling, leading to a hybrid model that is conscious of the state of health for prediction. Experiments show that this model achieves high predictive accuracy throughout a LiB's cycle life.
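A minimal sketch of the hybrid idea is given below: a toy equivalent-circuit model (ECM) supplies state information to a small feedforward network that learns the residual between the ECM voltage and the measurement. The ECM, drive cycle, and "measured" data are illustrative placeholders, not the paper's models or datasets.

```python
# Hybrid sketch: physics-model state -> feedforward NN -> residual correction.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ecm_step(soc, current_a, dt=1.0, capacity_ah=2.5, r0=0.02):
    """Toy first-order ECM: coulomb counting + ohmic drop + linear OCV."""
    soc = soc - current_a * dt / (3600 * capacity_ah)
    ocv = 3.0 + 1.2 * soc
    return soc, ocv - r0 * current_a

rng = np.random.default_rng(1)
current = rng.uniform(-2, 4, size=2000)                 # synthetic drive cycle [A]
soc, feats, v_meas = 0.9, [], []
for i_a in current:
    soc, v_ecm = ecm_step(soc, i_a)
    feats.append([soc, i_a, v_ecm])                     # physics state -> NN input
    v_meas.append(v_ecm + 0.05 * np.sin(8 * soc) + rng.normal(0, 0.002))

X, v_meas = np.array(feats), np.array(v_meas)
y = v_meas - X[:, 2]                                    # residual voltage target
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
v_hybrid = X[:, 2] + nn.predict(X)                      # ECM voltage + NN correction
print("hybrid RMSE [V]:", float(np.sqrt(np.mean((v_hybrid - v_meas) ** 2))))
```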
Through a study of multi-gas mixture datasets, we show that in multi-component spectral analysis, the number of functional or non-functional principal components required to retain the essential information is the same as the number of independent constituents in the mixture set. Due to the mutual independence among different gas molecules, a near one-to-one projection from the principal components to the mixture constituents can be established, leading to a significant simplification of spectral quantification. Further, with knowledge of the molar extinction coefficients of each constituent, a complete principal component set can be extracted from the coefficients directly, and few to no training samples are required for the learning model. Compared to other approaches, the proposed methods provide fast and accurate spectral quantification with a small memory footprint.
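The toy example below illustrates the two claims: for K independent constituents, K principal components carry essentially all the variance, and known extinction-coefficient spectra allow direct quantification with no training set. The spectra are synthetic placeholders, not the study's datasets.

```python
# Sketch: K constituents -> K informative principal components, and direct
# quantification from the molar extinction coefficient spectra (Beer-Lambert).
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(1000, 3000, 400)

def band(center, width):
    return np.exp(-((wavenumbers - center) / width) ** 2)

# molar extinction coefficient spectra of K = 3 constituents
E = np.stack([band(1300, 40), band(1900, 60), band(2600, 50)])      # (3, 400)

# simulate mixtures: absorbance = concentrations @ E (unit path length)
C_true = rng.uniform(0, 1, size=(200, 3))
A = C_true @ E + rng.normal(0, 1e-3, size=(200, 400))

# PCA view: effectively 3 components carry the variance
_, s, _ = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
print("explained variance of first 4 PCs:", np.round((s**2 / (s**2).sum())[:4], 4))

# quantification directly from the extinction coefficients (least squares)
C_est, *_ = np.linalg.lstsq(E.T, A.T, rcond=None)
print("max concentration error:", float(np.abs(C_est.T - C_true).max()))
```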
The treatment of cloud structure in numerical weather and climate models is often greatly simplified to make them computationally affordable. Here we propose to correct the European Centre for Medium-Range Weather Forecasts' 1D radiation scheme ecRad for 3D cloud effects using computationally cheap neural networks. The 3D cloud effects are learned as the difference between ecRad's fast 1D Tripleclouds solver, which neglects them, and its SPARTACUS (Speedy Algorithm for Radiative Transfer through Cloud Sides) solver, which includes them but is about five times more computationally expensive. With typical errors between 20 and 30% of the 3D signal, the neural networks improve accuracy at a runtime increase of about 1%. Thus, rather than emulating the entire SPARTACUS solver, we keep Tripleclouds unchanged for the cloud-free parts of the atmosphere and 3D-correct it elsewhere. The focus on the relatively small 3D correction instead of the entire signal allows a significant improvement of the predictions, assuming a similar signal-to-noise ratio for both.
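The snippet below sketches the correction strategy in miniature: learn the (3D minus 1D) flux difference with a small network and add it to the cheap solver output only in cloudy columns. The feature set and both "solvers" are synthetic placeholders, not ecRad's Tripleclouds or SPARTACUS.

```python
# Hedged sketch: learn only the small 3D correction, not the full signal.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_cols = 5000
features = rng.uniform(0, 1, size=(n_cols, 4))     # e.g. cloud fraction, LWP, sza, overlap
cloudy = features[:, 0] > 0.05                     # cloud-free columns need no correction

flux_1d = 300 - 150 * features[:, 0]                                   # stand-in for the 1D solver
flux_3d = flux_1d + cloudy * (20 * features[:, 0] * features[:, 2])    # stand-in for the 3D solver

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
nn.fit(features[cloudy], (flux_3d - flux_1d)[cloudy])    # train on the 3D correction only

flux_corrected = flux_1d.copy()
flux_corrected[cloudy] += nn.predict(features[cloudy])
err = np.sqrt(np.mean((flux_corrected - flux_3d) ** 2))
print("RMSE of corrected 1D solver vs 3D reference [W m^-2]:", round(float(err), 3))
```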
In this overview, a generic mathematical object (a mapping) is introduced, and its relation to model physics parameterizations is explained. Machine learning (ML) tools that can be used to emulate and/or approximate mappings are introduced. Applications of ML to emulating existing parameterizations, developing new parameterizations, ensuring physical constraints, and controlling the accuracy of the developed applications are described. Some ML approaches that allow developers to go beyond the standard parameterization paradigm are discussed.
The availability of Martian atmospheric data provided by several Martian missions has broadened the opportunity to investigate and study the conditions of the Martian ionosphere. As such, ionospheric models play a crucial part in improving our understanding of ionospheric behavior in response to different spatial, temporal, and space weather conditions. This work represents an initial attempt to construct an electron density prediction model of the Martian ionosphere using machine learning. The model targets the ionosphere at solar zenith angles ranging from 70 to 90 degrees, and as such only utilizes observations from the Mars Global Surveyor mission. The performance of different machine learning methods was compared in terms of root mean square error, coefficient of determination, and mean absolute error. The bagged regression trees method performed best out of all the evaluated methods. Furthermore, the optimized bagged regression trees model outperformed other Martian ionosphere models from the literature (MIRI and NeMars) in finding the peak electron density value and the peak density height in terms of root mean square error and mean absolute error.
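For reference, a bagged-regression-trees model of the kind compared above can be set up as in the sketch below. The input features and the synthetic target are placeholders; the study itself uses Mars Global Surveyor observations.

```python
# Illustrative bagged regression trees (the default base learner of
# BaggingRegressor is a decision tree) on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(70, 90, n),       # solar zenith angle [deg]
    rng.uniform(80, 200, n),      # altitude [km]
    rng.uniform(60, 200, n),      # solar flux proxy (e.g. F10.7)
])
y = 1e11 * np.cos(np.radians(X[:, 0])) * np.exp(-((X[:, 1] - 130) / 30) ** 2) \
    * (X[:, 2] / 120) + rng.normal(0, 1e9, n)          # toy electron density [m^-3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", float(np.sqrt(mean_squared_error(y_te, pred))),
      "MAE:", float(mean_absolute_error(y_te, pred)))
```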
In this work, we demonstrate the offline FPGA realization of both recurrent and feedforward neural network (NN)-based equalizers for nonlinearity compensation in coherent optical transmission systems. First, we present a realization pipeline showing the conversion of the models from Python libraries to the FPGA chip synthesis and implementation. Then, we review the main alternatives for the hardware implementation of nonlinear activation functions. The main results are divided into three parts: a performance comparison, an analysis of how activation functions are implemented, and a report on the complexity of the hardware. The performance in Q-factor is presented for a bidirectional long short-term memory coupled with convolutional NN (biLSTM + CNN) equalizer, a CNN equalizer, and standard 1-StpS digital back-propagation (DBP), for the simulated and experimental propagation of a single-channel dual-polarization (SC-DP) 16QAM signal at 34 GBd along 17×70 km of LEAF. The biLSTM+CNN equalizer provides a similar result to DBP and a 1.7 dB Q-factor gain compared with the chromatic dispersion compensation baseline in the experimental dataset. After that, we assess the Q-factor and the impact on hardware utilization when approximating the activation functions of the NN using Taylor series, piecewise linear, and look-up table (LUT) approximations. We also show how to mitigate the approximation errors with extra training and provide some insights into possible gradient problems in the LUT approximation. Finally, to evaluate the complexity of hardware implementation to achieve 400G throughput, fixed-point NN-based equalizers with approximated activation functions are developed and implemented in an FPGA.
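The LUT approximation trade-off mentioned above can be sketched in a few lines: sample tanh on a grid, store the values, and evaluate by nearest-entry or linear interpolation, mimicking a low-cost hardware path. The 64-entry table size is an illustrative choice, not the paper's configuration.

```python
# Small sketch of a look-up-table (LUT) activation approximation for tanh.
import numpy as np

lut_size = 64
x_grid = np.linspace(-4.0, 4.0, lut_size)        # input range covered by the LUT
lut = np.tanh(x_grid)                            # values stored in hardware memory

def tanh_lut(x, linear_interp=True):
    """Approximate tanh via the LUT; saturate outside the covered range."""
    x = np.clip(x, x_grid[0], x_grid[-1])
    if linear_interp:
        return np.interp(x, x_grid, lut)
    idx = np.round((x - x_grid[0]) / (x_grid[1] - x_grid[0])).astype(int)
    return lut[idx]

x = np.linspace(-6, 6, 10_001)
for interp in (False, True):
    err = np.max(np.abs(tanh_lut(x, interp) - np.tanh(x)))
    print(f"{'linear' if interp else 'nearest'} LUT, max abs error: {err:.4f}")
```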
The tomographic SAR technique has attracted remarkable interest for its ability to resolve in three dimensions along the elevation direction via a stack of SAR images collected from different cross-track angles. Compressed sensing (CS)-based algorithms have been introduced into TomoSAR owing to their super-resolution ability with limited samples. However, conventional CS-based methods suffer from several drawbacks, including weak noise resistance, high computational complexity, and complex parameter fine-tuning. Aiming at efficient TomoSAR imaging, this paper proposes a novel, efficient sparse unfolding network based on the analytic learned iterative shrinkage thresholding algorithm (ALISTA) architecture with adaptive thresholds, named the Adaptive Threshold ALISTA-based Sparse Imaging Network (ATASI-Net). The weight matrix in each layer of ATASI-Net is pre-computed as the solution of an off-line optimization problem, leaving only two scalar parameters to be learned from data, which significantly simplifies the training stage. In addition, an adaptive threshold is introduced for each azimuth-range pixel, enabling the threshold shrinkage to be not only layer-varied but also element-wise. Moreover, the final learned thresholds can be visualized and combined with SAR image semantics for mutual feedback. Finally, extensive experiments on simulated and real data are carried out to demonstrate the effectiveness and efficiency of the proposed method.
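One ALISTA-style unfolded layer can be sketched as below: the weight matrix is fixed (precomputed offline), and only a step size and a threshold are learned, with the threshold made element-wise in the spirit of the adaptive thresholding described above. Dimensions, the placeholder weight matrix, and the toy data are assumptions, not the ATASI-Net implementation.

```python
# Hedged sketch of one unfolded soft-thresholding layer (ALISTA-like update).
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class ALISTALayer(nn.Module):
    def __init__(self, W, n_elev):
        super().__init__()
        self.register_buffer("W", W)                        # precomputed, not trained
        self.gamma = nn.Parameter(torch.tensor(0.5))        # learned scalar step size
        self.theta = nn.Parameter(0.05 * torch.ones(n_elev))  # element-wise threshold

    def forward(self, x, y, A):
        residual = A @ x - y
        return soft_threshold(x - self.gamma * (self.W.t() @ residual), self.theta)

# toy TomoSAR-like setup: m acquisitions, n discretized elevation bins
m, n = 12, 128
A = torch.randn(m, n)
W = A / (A.norm() + 1e-8)        # placeholder for the analytically optimized matrix
layer = ALISTALayer(W, n_elev=n)
x0 = torch.zeros(n)
y = A @ torch.zeros(n).index_fill_(0, torch.tensor([20, 75]), 1.0)   # two scatterers
x1 = layer(x0, y, A)
print("nonzeros after one layer:", int((x1.abs() > 1e-3).sum()))
```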
We demonstrate transfer learning-assisted neural network models for optical matrix multipliers with scarce measurement data. Our approach uses <10% of experimental data needed for best performance and outperforms analytical models for a Mach-Zehnder interferometer mesh.
The optimal power flow (OPF) problem is formulated and solved for power system operation, especially for determining generation dispatch points in real time. For a large power system network with a large number of variables and constraints, finding the optimal solution for real-time OPF in a timely manner requires substantial computing power. This paper presents a new method to reduce the number of constraints in the original OPF problem using a graph neural network (GNN). GNNs are an innovative machine learning model that utilizes features from nodes, edges, and the network topology to maximize performance. In this paper, we propose a GNN model to predict which lines will be heavily loaded or congested given a load profile and generation capacities. Only these critical lines are monitored in the OPF problem, creating a reduced OPF (ROPF) problem. Significant savings in computation time are expected from the proposed ROPF model. A comprehensive analysis of the GNN model's predictions is also performed. It is concluded that the application of GNNs to ROPF is able to reduce computation time while retaining solution quality.
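The constraint-reduction idea can be sketched on a tiny DC example: a line receives a flow constraint only if a congestion predictor flags it as critical. The 3-bus system, its PTDF matrix, and the stub predictor below are illustrative placeholders standing in for the trained GNN and a realistic network.

```python
# Sketch of the reduced-OPF (ROPF) idea on a 3-bus DC example.
import numpy as np
from scipy.optimize import linprog

# PTDF of a 3-bus ring (equal reactances, slack at bus 3); rows: lines 1-2, 1-3, 2-3
PTDF = np.array([[ 1/3, -1/3],
                 [ 2/3,  1/3],
                 [ 1/3,  2/3]])
line_limits = np.array([0.4, 0.9, 0.9])       # p.u.
cost = np.array([10.0, 30.0])                 # generator marginal costs
demand = 1.0                                  # load at bus 3

def predicts_critical(load):
    """Stub standing in for the GNN: flag lines expected to be heavily loaded."""
    return np.array([True, False, False])     # here: only line 1-2 is monitored

monitored = predicts_critical(demand)
A_ub = np.vstack([PTDF[monitored], -PTDF[monitored]])          # |flow| <= limit
b_ub = np.concatenate([line_limits[monitored], line_limits[monitored]])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, 2)), b_eq=[demand], bounds=[(0, 1.2), (0, 1.2)])
print("dispatch:", res.x, "| monitored lines:", int(monitored.sum()), "of 3")
```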
A methodology is proposed that addresses a key caveat of line-of-sight emission spectroscopy: it cannot by itself provide spatially resolved temperature measurements in nonhomogeneous temperature fields. The aim of this research is to explore the use of data-driven models to measure temperature distributions in a spatially resolved manner using emission spectroscopy data. Two categories of data-driven methods are analyzed: (i) feature engineering with classical machine learning algorithms, and (ii) end-to-end convolutional neural networks (CNN). In total, combinations of fifteen feature groups and fifteen classical machine learning models, as well as eleven CNN models, are considered and their performance explored. The results indicate that the combination of feature engineering and machine learning provides better performance than the direct use of CNNs. Notably, feature engineering comprising physics-guided transformation, signal-representation-based feature extraction, and principal component analysis is found to be the most effective. Moreover, it is shown that when using the extracted features, the ensemble-based light blender learning model offers the best performance, with RMSE, RE, RRMSE, and R values of 64.3, 0.017, 0.025, and 0.994, respectively. The proposed method, based on feature engineering and the light blender model, is capable of measuring nonuniform temperature distributions from low-resolution spectra, even when the species concentration distribution in the gas mixture is unknown.
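The feature-engineering route can be outlined as in the sketch below: a physics-guided transform, PCA, and a stacked ("blender"-style) ensemble. The synthetic spectra, the single-temperature target (a simplification of the spatially resolved problem), and the stacking regressor standing in for the light blender learner are all assumptions for illustration.

```python
# Hedged sketch: physics-guided transform -> PCA -> stacked ensemble regressor.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_wl = 2000, 300
temperature = rng.uniform(800, 2000, n)                     # target [K]
spectra = np.exp(-1.0e4 / temperature)[:, None] \
          * rng.uniform(0.5, 1.5, (n, n_wl)) + rng.normal(0, 1e-4, (n, n_wl))

model = make_pipeline(
    FunctionTransformer(np.log1p),                          # physics-guided transform
    StandardScaler(),
    PCA(n_components=10),                                   # compact spectral features
    StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                    ("ridge", Ridge())],
        final_estimator=Ridge()),
)
X_tr, X_te, y_tr, y_te = train_test_split(spectra, temperature, test_size=0.2,
                                          random_state=0)
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print("RMSE [K]:", round(float(rmse), 1))
```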
Estimating the state of charge (SOC) of compound energy storage devices in the hybrid energy storage system (HESS) of electric vehicles (EVs) is vital to improving EV performance. The complex and variable charging and discharging currents of EVs make accurate SOC estimation a challenge. This paper proposes a novel deep learning-based SOC estimation method for a lithium-ion battery-supercapacitor HESS EV based on the nonlinear autoregressive with exogenous inputs neural network (NARXNN). The NARXNN is utilized to capture and overcome the complex nonlinear behaviors of lithium-ion batteries and supercapacitors in EVs. The results show that the proposed method improved the SOC estimation accuracy by 91.5% on average, with error values below 0.1%, and reduced time consumption by 11.4%, validating both the effectiveness and robustness of the proposed method.
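A NARX-style estimator in its series-parallel (open-loop) training form can be sketched as below: lagged exogenous inputs (current, voltage) and lagged SOC values are fed to a small regressor that predicts the next SOC. The data, lag order, and network size are illustrative assumptions, not the paper's configuration.

```python
# Sketch of open-loop NARX training on synthetic battery-like data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 5000
current = rng.uniform(-3, 3, T)                       # HESS current profile [A]
soc = np.empty(T); soc[0] = 0.8
for t in range(1, T):                                 # toy coulomb-counting "truth"
    soc[t] = np.clip(soc[t-1] - current[t] / 36000, 0, 1)
voltage = 3.2 + 0.9 * soc - 0.02 * current + rng.normal(0, 0.002, T)

lags = 3                                              # delay order for inputs and output
X, y = [], []
for t in range(lags, T):
    X.append(np.concatenate([current[t-lags:t], voltage[t-lags:t], soc[t-lags:t]]))
    y.append(soc[t])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
narx = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
narx.fit(X[:split], y[:split])
err = np.abs(narx.predict(X[split:]) - y[split:])
print("mean absolute SOC error [%]:", round(float(100 * err.mean()), 4))
```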
In this work, we introduce, justify, and demonstrate the Corrective Source Term Approach (CoSTA) - a novel approach to hybrid analysis and modeling (HAM). The objective of HAM is to combine physics-based modeling (PBM) and data-driven modeling (DDM) to create models that are generalizable, trustworthy, accurate, computationally efficient, and self-evolving. CoSTA achieves this by augmenting the governing equations of a PBM model with a corrective source term generated by a deep neural network. In a series of numerical experiments on one-dimensional heat diffusion, CoSTA is found to outperform comparable DDM and PBM models in terms of accuracy - often reducing predictive errors by several orders of magnitude - while also generalizing better than pure DDM. Due to its flexible yet solid theoretical foundation, CoSTA provides a modular framework for leveraging novel developments within both PBM and DDM. Its theoretical foundation also ensures that CoSTA can be used to model any system governed by (deterministic) partial differential equations. Moreover, CoSTA facilitates interpretation of the DNN-generated source term within the context of PBM, which leads to improved explainability of the DNN. These factors make CoSTA a potential door-opener for data-driven techniques to enter high-stakes applications previously reserved for pure PBM.
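On the 1D heat-diffusion test case mentioned above, the CoSTA idea can be sketched as a standard finite-difference step augmented by a corrective source term. In the snippet, the source term is a zero-returning stub where a trained DNN would sit; grid, time step, and initial condition are illustrative.

```python
# Minimal sketch of the CoSTA idea on 1D heat diffusion: PBM step + correction.
import numpy as np

nx, dx, dt, alpha = 50, 1.0 / 49, 1e-4, 1.0
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                       # initial temperature profile

def pbm_step(u):
    """Explicit FTCS step for u_t = alpha * u_xx with fixed boundaries."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_new

def corrective_source(u):
    """Stub for the DNN-generated source term (zero here; learned in CoSTA)."""
    return np.zeros_like(u)

for _ in range(1000):
    u = pbm_step(u) + dt * corrective_source(u)   # PBM step + learned correction

print("max temperature after 1000 steps:", round(float(u.max()), 4))
```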
Recurrent and feedforward neural network equalizers for nonlinearity compensation are implemented in an FPGA for the first time, with a complexity comparable to that of a dispersion equalizer. We demonstrate that the NN-based equalizers can outperform 1-step-per-span DBP.
Thanks to its low complexity and robustness, machine learning (ML) has attracted great research interest for physical layer design problems such as channel estimation. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and the channel data as output. In previous works, model training was mainly performed via centralized learning (CL), in which the whole training dataset is collected from the users at the base station (BS). This approach introduces a huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the users' local datasets without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (intelligent reflecting surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets for the two scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
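The federated training loop can be sketched in a FedAvg-like form: each user updates a local copy of the estimator on its own pilot/channel pairs, and only model weights are averaged at the BS. The tiny fully connected model and random data below are placeholders for the CNN and the MIMO channel datasets.

```python
# Hedged sketch of FedAvg-style training: local updates, weight averaging at the BS.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
n_users, local_steps, rounds = 8, 5, 20
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))  # stand-in for the CNN
loss_fn = nn.MSELoss()

# each user's local dataset: received pilots (input) -> channel (output)
local_data = [(torch.randn(256, 64), torch.randn(256, 64)) for _ in range(n_users)]

for _ in range(rounds):
    local_states = []
    for pilots, channel in local_data:
        local = copy.deepcopy(model)                       # local copy; data never leaves the user
        opt = torch.optim.SGD(local.parameters(), lr=0.01)
        for _ in range(local_steps):
            opt.zero_grad()
            loss_fn(local(pilots), channel).backward()
            opt.step()
        local_states.append(local.state_dict())
    # BS aggregates: plain average of the users' weights
    avg_state = {k: torch.stack([s[k] for s in local_states]).mean(dim=0)
                 for k in local_states[0]}
    model.load_state_dict(avg_state)

print("global training loss:",
      float(sum(loss_fn(model(p), c) for p, c in local_data) / n_users))
```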
Laser processing is a highly flexible non-contact manufacturing technique widely used in both academia and industry. Owing to the nonlinear interaction between light and matter, simulation methods are of great importance, as they help enhance processing quality through an understanding of the interrelationships between laser processing parameters. On the other hand, experimental optimization of processing parameters requires a systematic and time-consuming exploration of the available parameter space. An intelligent strategy is to employ machine learning (ML) techniques to capture the relationships between picosecond laser processing parameters, in order to find suitable parameter combinations for creating the desired cuts on industrial-grade alumina ceramic with deep, smooth, and defect-free patterns. Laser parameters such as beam amplitude and frequency, the scanner passing speed, and the vertical distance of the scanner from the sample surface are used to predict the depth, top width, and bottom width of the engraved channels using ML models. Owing to the complex correlations between the laser parameters, neural networks (NN) are shown to be the most efficient at predicting the outputs. Equipped with an ML model that captures the interconnection between laser parameters and the engraved channel dimensions, one can predict the input parameters required to achieve a target channel geometry. This strategy significantly reduces the cost and effort of experimental laser processing during the development phase without compromising accuracy or performance. The developed technique can be applied to a wide range of ceramic laser processing processes.
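The forward model and its inverse use can be outlined as below: an MLP maps laser parameters to channel dimensions, and a simple grid search over the parameter space then picks the setting closest to a target geometry. The response surface, parameter ranges, and target values are illustrative placeholders, not the experimental data.

```python
# Sketch: forward NN (parameters -> channel dimensions) and grid-search inversion.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 1500
# inputs: beam amplitude [%], pulse frequency [kHz], scan speed [mm/s], z-offset [mm]
X = np.column_stack([rng.uniform(20, 100, n), rng.uniform(100, 1000, n),
                     rng.uniform(50, 500, n), rng.uniform(-1, 1, n)])
# outputs: depth, top width, bottom width [um] (toy response surface)
Y = np.column_stack([2.0 * X[:, 0] - 0.1 * X[:, 2] + 5 * np.abs(X[:, 3]),
                     0.8 * X[:, 0] + 0.02 * X[:, 1],
                     0.5 * X[:, 0] - 0.05 * X[:, 2]]) + rng.normal(0, 1.0, (n, 3))

forward = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                     random_state=0)).fit(X, Y)

# inverse design: search the parameter grid for the target channel geometry
target = np.array([150.0, 60.0, 20.0])                 # desired depth / widths [um]
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(20, 100, 20), np.linspace(100, 1000, 20),
    np.linspace(50, 500, 20), np.linspace(-1, 1, 10))])
best = grid[np.argmin(np.linalg.norm(forward.predict(grid) - target, axis=1))]
print("suggested parameters [amp, freq, speed, z]:", np.round(best, 2))
```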