Day-ahead operations involve a complex and computationally intensive optimization process to determine generator commitment schedules and dispatch. The optimization is a mixed-integer linear program (MILP), also known as security-constrained unit commitment (SCUC). Independent system operators (ISOs) run SCUC daily and require state-of-the-art algorithms to speed up the process. Existing patterns in historical information can be leveraged to reduce the SCUC model, which can save substantial time. In this paper, machine learning (ML) based classification approaches, namely logistic regression, neural networks, random forest, and K-nearest neighbors, are studied for model reduction of SCUC. The ML is then aided by a feasibility layer (FL) and a post-processing technique to ensure high-quality solutions. The proposed approach is validated on several test systems, namely the IEEE 24-bus system, IEEE 73-bus system, IEEE 118-bus system, a 500-bus system, and the Polish 2383-bus system. Moreover, model reduction of a stochastic SCUC (SSCUC) is demonstrated on a modified IEEE 24-bus system with renewable generation. Simulation results demonstrate high training accuracy in identifying commitment schedules, while the FL and post-processing ensure that the ML predictions do not lead to infeasible solutions, with minimal loss in solution quality.
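The model-reduction idea above can be sketched end to end: train a classifier on historical load/commitment pairs, then fix only the binary variables it is confident about. The sketch below is a minimal stand-in (synthetic data, a hand-rolled single-feature logistic regression, made-up confidence thresholds), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: 200 days x 5 buses of nodal load, and an imaginary
# peaking unit that was committed whenever total load exceeded 500 MW.
loads = rng.uniform(50.0, 150.0, size=(200, 5))
total = loads.sum(axis=1)
y = (total > 500.0).astype(float)

# Logistic regression on the standardized total load, fit by gradient descent.
mu, sd = total.mean(), total.std()
X = np.column_stack([(total - mu) / sd, np.ones(len(total))])
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

def commitment_decision(load_profile, on_thr=0.9, off_thr=0.1):
    """Fix the unit's binary only when the classifier is confident;
    otherwise leave it free for the MILP solver."""
    z = (np.sum(load_profile) - mu) / sd
    p = 1.0 / (1.0 + np.exp(-(w[0] * z + w[1])))
    if p >= on_thr:
        return "fix_on"
    if p <= off_thr:
        return "fix_off"
    return "free"
```

A feasibility layer in the paper's sense would then repair any fixings that violate unit constraints; in this toy sketch the "free" band plays that conservative role.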
The OPF problem is formulated and solved for power system operation, especially for determining generation dispatch points in real time. For a large power system network with a large number of variables and constraints, finding the optimal solution of the real-time OPF in a timely manner requires substantial computing power. This paper presents a new method to reduce the number of constraints in the original OPF problem using a graph neural network (GNN). The GNN is an innovative machine learning model that leverages features from nodes, edges, and the network topology to maximize its performance. In this paper, we propose a GNN model to predict which lines will be heavily loaded or congested for a given load profile and generation capacity. Only these critical lines are monitored in the OPF problem, yielding a reduced OPF (ROPF) problem. Significant savings in computation time are expected from the proposed ROPF model. A comprehensive analysis of the GNN model's predictions is also presented. It is concluded that the application of the GNN to ROPF is able to reduce computation time while retaining solution quality.
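To make the line-classification step concrete, here is an untrained forward pass of a minimal message-passing GNN over a toy 4-bus network. The weights are random placeholders (a real model would learn them from labeled OPF solutions), so only the structure, not the scores, is meaningful:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 4-bus network: lines as (from, to) pairs plus a per-line feature
# (e.g. a normalized thermal limit).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
line_limit = np.array([1.0, 0.5, 0.8, 1.2])

# Node features, e.g. net injection and local generation capacity (synthetic).
h = rng.normal(size=(4, 2))

# Placeholder weights -- in practice these would be trained on OPF solutions
# labeled line-congested / not-congested.
W_self = rng.normal(size=(2, 8))
W_nei = rng.normal(size=(2, 8))
w_edge = rng.normal(size=(17,))

def message_pass(h):
    """One GNN layer: each node aggregates the mean of its neighbors."""
    agg = np.zeros_like(h)
    deg = np.zeros(len(h))
    for u, v in edges:
        agg[u] += h[v]
        agg[v] += h[u]
        deg[u] += 1.0
        deg[v] += 1.0
    agg /= deg[:, None]
    return np.maximum(0.0, h @ W_self + agg @ W_nei)  # ReLU

def line_scores(h):
    """Score each line from its endpoint embeddings and its own feature."""
    z = message_pass(h)
    feats = [np.concatenate([z[u], z[v], [lim]])
             for (u, v), lim in zip(edges, line_limit)]
    return 1.0 / (1.0 + np.exp(-np.array(feats) @ w_edge))

scores = line_scores(h)
critical = [e for e, s in zip(edges, scores) if s > 0.5]  # lines kept in ROPF
```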
Battery energy storage systems (BESS) can effectively mitigate the uncertainty of variable generation. Degradation is unpreventable and hard to model and predict for batteries such as the most popular lithium-ion battery (LiB). In this paper, we propose a data-driven method to predict battery degradation for a given scheduled battery operation profile. In particular, a neural network-based battery degradation (NNBD) model is proposed to quantify battery degradation, with the major battery degradation factors as inputs. By incorporating the proposed NNBD model into microgrid day-ahead scheduling (MDS), we can establish a battery degradation-based MDS (BDMDS) model that accurately accounts for the equivalent battery degradation cost, using the proposed cycle-based battery usage processing (CBUP) method for the NNBD model. Since the proposed NNBD model is highly nonlinear, BDMDS is hard to solve. To address this issue, a neural network and optimization decoupled heuristic (NNODH) algorithm is proposed in this paper to effectively solve this neural-network-embedded optimization problem. Simulation results demonstrate that the proposed NNODH algorithm is able to obtain the solution with the lowest total cost, including normal operating cost and battery degradation cost.
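A hedged sketch of the degradation pipeline: a simplified stand-in for the cycle-based usage processing extracts cycle depths from a scheduled SOC profile, and a tiny fixed-weight network stands in for the NNBD model. All weights and the cycle-counting rule are invented for illustration, not taken from the paper:

```python
import numpy as np

def cycle_depths(soc):
    """Depths of successive half-cycles: the SOC swings between local
    extrema (a simplified proxy for rainflow-style cycle counting)."""
    soc = np.asarray(soc, dtype=float)
    keep = [soc[0]]
    for x in soc[1:]:
        if x == keep[-1]:
            continue
        if len(keep) >= 2 and (keep[-1] - keep[-2]) * (x - keep[-1]) > 0:
            keep[-1] = x      # same direction: extend the current excursion
        else:
            keep.append(x)    # direction change: record a new extremum
    return np.abs(np.diff(keep))

# Tiny fixed-weight one-hidden-layer net: depth of discharge -> per-cycle
# capacity fade. The weights below are illustrative only.
w1, b1 = np.array([2.0, 4.0]), np.array([-0.5, -2.0])
w2, b2 = np.array([0.0008, 0.0025]), 0.0001

def nnbd_loss(dod):
    hid = np.maximum(0.0, w1 * dod + b1)
    return float(w2 @ hid + b2)   # fraction of capacity lost in this cycle

def degradation_cost(soc_profile, replacement_cost=200_000.0):
    """Equivalent dollar cost of the fade caused by a scheduled SOC profile."""
    return sum(nnbd_loss(d) for d in cycle_depths(soc_profile)) * replacement_cost
```

A decoupling heuristic in the NNODH spirit would alternate between scheduling with a fixed degradation-cost estimate and re-evaluating that cost with the network above.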
Electricity price is a key factor affecting the decisions of all market participants. Accurate electricity price forecasting is very important, and it is also very challenging since electricity price is highly volatile due to various factors. This paper proposes an integrated long-term recurrent convolutional network (ILRCN) model to predict electricity prices considering the majority of attributes contributing to the market price. The proposed ILRCN model combines the functionalities of a convolutional neural network and the long short-term memory (LSTM) algorithm with a proposed novel conditional error-correction term. The combined ILRCN model can identify both linear and nonlinear behavior within the input data. ERCOT wholesale market price data, along with load profiles, temperature, and other factors, are used to illustrate the proposed model. The performance of the proposed ILRCN electricity price forecasting model is validated using performance/evaluation metrics such as mean absolute error and accuracy. Case studies reveal that the proposed ILRCN model is accurate and efficient in electricity price forecasting compared with a support vector machine (SVM) model, a fully connected neural network model, an LSTM model, and an LRCN model.
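The error-correction idea can be illustrated in isolation. Below, the base model is deliberately naive (same hour on the previous day) so that the correction term, the mean of the base model's recent residuals, carries the example; the actual ILRCN couples its correction term with CNN and LSTM components:

```python
import numpy as np

def base_forecast(prices, hour):
    """Deliberately naive base model: same hour on the previous day."""
    return prices[hour - 24]

def corrected_forecast(prices, hour, window=3):
    """Add back the mean of the base model's residuals over the last
    `window` hours, conditioning the correction on recent behavior."""
    resid = [prices[h] - base_forecast(prices, h)
             for h in range(hour - window, hour)]
    return base_forecast(prices, hour) + float(np.mean(resid))
```

On a price series with a steady upward drift, the naive forecast lags by a constant offset that the residual term recovers almost exactly.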
Power flow analysis is used to evaluate the flow of electricity in a power system network. Power flow calculation determines the steady-state variables of the system, such as the voltage magnitude/phase angle of each bus and the active/reactive power flow on each branch. The DC power flow model is a popular linear power flow model widely used in the power industry. Although it is fast and robust, it may lead to inaccurate line flows for some critical transmission lines. This drawback can be partially addressed by data-driven methods that leverage historical grid profiles. In this paper, a neural network (NN) model is trained to predict power flow results using historical power system data. Although the training process may take time, once trained, estimating line flows is very fast. A comprehensive performance analysis between the proposed NN-based power flow model and the traditional DC power flow model is conducted. It can be concluded that the proposed NN-based power flow model can find solutions quickly and more accurately than the DC power flow model.
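For reference, the DC power flow model that the NN is benchmarked against takes only a few lines: assemble the nodal susceptance matrix, solve B·θ = P with a slack bus, and read off line flows. A 3-bus toy case:

```python
import numpy as np

# 3-bus example: lines as (from, to, reactance x in p.u.).
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n = 3
P = np.array([1.5, -0.5, -1.0])   # net injections (gen - load); sums to zero

# Assemble the nodal susceptance matrix B.
B = np.zeros((n, n))
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b
    B[t, t] += b
    B[f, t] -= b
    B[t, f] -= b

# Fix bus 0 as the slack (theta = 0) and solve the reduced system.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows: f_l = (theta_from - theta_to) / x_l.
flows = np.array([(theta[f] - theta[t]) / x for f, t, x in lines])
```

An NN-based surrogate would replace the linear solve with a learned mapping from injections to flows, trading the DC approximation's modeling error for the network's fit error.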
Security-constrained unit commitment (SCUC), used for day-ahead generation scheduling in power systems, is a mixed-integer linear programming problem that is computationally intensive. A good warm-start solution or a reduced SCUC model can save substantial time. In this work, a novel approach is proposed to effectively utilize machine learning (ML) to provide a good starting solution and/or reduce the problem size of SCUC. An ML model using the logistic regression algorithm is proposed and trained on historical nodal demand profiles and the respective commitment schedules. The ML outputs are processed and analyzed to assist SCUC. The proposed approach is validated on several standard test systems, namely the IEEE 24-bus system, IEEE 73-bus system, IEEE 118-bus system, the synthetic South Carolina 500-bus system, and the Polish 2383-bus system. Simulation results demonstrate that the predictions from the proposed machine learning model can provide good warm-start solutions and/or reduce the number of variables and constraints in SCUC with minimal loss in solution quality, while substantially reducing the computing time.
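The two uses of the classifier's outputs, warm-starting versus model reduction, differ only in how its probabilities are consumed. A minimal sketch with hypothetical probabilities (the unit/hour keys and thresholds below are illustrative):

```python
# Hypothetical classifier outputs: probability that unit g is on in hour t.
probs = {("G1", 0): 0.98, ("G1", 1): 0.95,
         ("G2", 0): 0.04, ("G2", 1): 0.55}

def warm_start(probs):
    """Round every probability into a solver start value (a hint only --
    the solver may still flip any of these)."""
    return {k: int(p >= 0.5) for k, p in probs.items()}

def reduce_model(probs, lo=0.1, hi=0.9):
    """Model reduction: fix near-certain binaries, keep the rest free."""
    fixed = {k: int(p >= hi) for k, p in probs.items() if p >= hi or p <= lo}
    free = [k for k, p in probs.items() if lo < p < hi]
    return fixed, free
```

The warm start never changes the feasible region, while fixing variables shrinks the MILP but risks cutting off the optimum, which is why the fixing thresholds are kept conservative.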
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT is strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
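The NAIVEATTACK variant, as described, amounts to stamping a trigger onto a fraction of the raw images (and relabeling them) before distillation starts. A sketch with an illustrative patch trigger and poison rate (the patch size, position, and rate are assumptions, not the paper's settings):

```python
import numpy as np

def add_trigger(img, size=3, value=1.0):
    """Stamp a small square trigger into the bottom-right corner."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

def poison(images, labels, target_class, rate=0.1, seed=0):
    """Trigger-and-relabel a random fraction of the raw training images
    before they enter the distillation procedure."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

DOORPING differs in that the trigger itself is re-optimized at every distillation iteration rather than fixed up front as here.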
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest, state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, with little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
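The mask-based dynamic weighting step can be sketched as masked average pooling of support features into a class center, followed by similarity-based re-weighting of query features. The shapes and the sigmoid gating below are illustrative choices standing in for the paper's module, not its exact design:

```python
import numpy as np

def class_center(support_feat, support_mask):
    """Masked average pooling: (H, W, C) features under an (H, W) binary
    mask collapse to a single (C,) class center."""
    w = support_mask[..., None]
    return (support_feat * w).sum(axis=(0, 1)) / max(w.sum(), 1e-6)

def reweight_query(query_feat, center):
    """Scale each query location by a sigmoid of its cosine similarity to
    the class center, so class-relevant regions are emphasized."""
    qn = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-6)
    cn = center / (np.linalg.norm(center) + 1e-6)
    sim = qn @ cn                 # (H, W) cosine similarities
    gate = 1.0 / (1.0 + np.exp(-sim))
    return query_feat * gate[..., None]
```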