This paper describes work submitted to the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022). The work addresses Subtask 1 of Shared Task 3, which aims to detect causality in a protest news corpus. The authors used several large language models with customized cross-entropy loss functions that exploit annotation information. The experiments showed that bert-base-uncased with a refined cross-entropy loss outperformed the others, achieving an F1 score of 0.8501 on the Causal News Corpus dataset.
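The abstract does not reproduce the exact loss formulation, but a class-weighted cross-entropy is a minimal sketch of the idea: the function name and weighting scheme below are illustrative, with weights that could in principle be derived from annotation information.

```python
import math

def weighted_cross_entropy(logits, target, class_weights):
    """Class-weighted cross-entropy for a single example.

    logits: raw scores per class; target: index of the true class;
    class_weights: per-class weight (e.g. from annotation agreement
    or class frequency). Uniform weights reduce to standard CE.
    """
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    log_prob = math.log(exps[target] / total)
    return -class_weights[target] * log_prob

loss = weighted_cross_entropy([2.0, 0.5], 0, [1.0, 1.0])
```

The loss is linear in the weight of the true class, so up-weighting rare or high-agreement classes directly scales their contribution to the gradient.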
This paper aims to study intrusion attacks and then develop a novel cyberattack detection framework for blockchain networks. Specifically, we first design and implement a blockchain network in our laboratory. This blockchain network serves two purposes: generating real traffic data (both normal and attack data) for our learning models, and supporting real-time experiments to evaluate the performance of our proposed intrusion detection framework. To the best of our knowledge, this is the first dataset synthesized in a laboratory for cyberattacks in a blockchain network. We then propose a novel collaborative learning model that allows efficient deployment in the blockchain network to detect attacks. The main idea of the proposed learning model is to enable blockchain nodes to actively collect data, learn knowledge from their own data, and then exchange that knowledge with other blockchain nodes in the network. In this way, we can not only leverage the knowledge of all nodes in the network but also avoid gathering all raw data at a centralized node for training, as conventional centralized learning solutions do. Such a framework also avoids the risks of exposing local data privacy as well as excessive network overhead/congestion. Both intensive simulations and real-time experiments clearly show that our proposed collaborative-learning-based intrusion detection framework can achieve an accuracy of up to 97.7% in detecting attacks.
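The knowledge-exchange step described above can be sketched as federated-style parameter averaging; this is a hypothetical minimal illustration, since the abstract does not specify the actual aggregation rule or model details.

```python
def local_update(weights, gradients, lr=0.1):
    """One local training step on a node's own traffic data
    (plain gradient descent, standing in for any local learner)."""
    return [w - lr * g for w, g in zip(weights, gradients)]

def aggregate(node_weights):
    """Exchange step: average model parameters across nodes, so
    knowledge is shared without moving any raw traffic data."""
    n = len(node_weights)
    return [sum(ws) / n for ws in zip(*node_weights)]

# Three nodes each train locally, then exchange and average.
nodes = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
shared = aggregate(nodes)
```

Only the parameter vectors cross the network, which is what keeps local data private and limits communication overhead relative to centralized training.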
Federated learning (FL) has recently become an effective approach for cyberattack detection systems, especially in Internet-of-Things (IoT) networks. By distributing the learning process across IoT gateways, FL can improve learning efficiency, reduce communication overhead, and enhance the privacy of intrusion detection systems. Challenges in implementing FL in such systems include the unavailability of labeled data and dissimilar data features across different IoT networks. In this paper, we propose a novel collaborative learning framework that leverages transfer learning (TL) to overcome these challenges. In particular, we develop a novel collaborative learning approach that enables a target network to effectively and quickly learn knowledge from a source network that possesses abundant labeled data. Importantly, state-of-the-art studies require the participating networks' datasets to have the same features, thus limiting the efficiency, flexibility, and scalability of intrusion detection systems. Our proposed framework, however, can address these problems by exchanging learned knowledge among various deep learning models even when their datasets have different features. Extensive experiments on recent real-world cybersecurity datasets show that the proposed framework can improve performance by more than 40% compared with state-of-the-art deep-learning-based approaches.
Group convolutions and cross-correlations, which are equivariant to the actions of group elements, are commonly used in mathematics to analyze or exploit symmetries inherent in a given problem setting. Here, we provide efficient quantum algorithms for performing linear group convolutions and cross-correlations on data stored as quantum states. Runtimes of our algorithms are logarithmic in the dimension of the group, thus offering an exponential speedup compared with classical algorithms when the input data is provided as a quantum state and the linear operations are well conditioned. Motivated by the rich literature on quantum algorithms for solving algebraic problems, our theoretical framework opens a path toward quantizing many algorithms in machine learning and numerical methods that employ group operations.
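For the cyclic group $\mathbb{Z}_n$, the classical linear group convolution that these quantum algorithms accelerate reduces to circular convolution. The following is a classical reference implementation for orientation only, not the quantum algorithm itself.

```python
def group_convolve(f, g, n):
    """Linear group convolution over the cyclic group Z_n:
    (f * g)(u) = sum_v f(v) g(v^{-1} u), where v^{-1} u = (u - v) mod n.
    Runs in O(n^2); the quantum algorithms target runtimes
    logarithmic in the group dimension."""
    return [sum(f[v] * g[(u - v) % n] for v in range(n)) for u in range(n)]

# Convolving with a delta at the identity returns g unchanged.
result = group_convolve([1, 0, 0], [5, 6, 7], 3)
```

A delta at group element 1 instead shifts `g` cyclically, which is exactly the equivariance property the abstract refers to.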
A recent line of work has analyzed the theoretical properties of deep neural networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue of the NTK has been related to the memorization capacity, the global convergence of gradient descent algorithms, and the generalization of deep networks. However, existing results either provide bounds in the two-layer setting or assume that the spectrum of the NTK matrices of multi-layer networks is bounded away from 0. In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep networks, both in the limiting case of infinite width and for finite widths. In the finite-width setting, the network architectures we consider are quite general: we require one wide layer with roughly order $n$ neurons, where $n$ is the number of data samples; the scaling of the remaining layer widths is arbitrary (up to logarithmic factors). To obtain our results, we analyze various quantities of independent interest: we give lower bounds on the smallest singular value of hidden feature matrices, and upper bounds on the Lipschitz constant of input-output feature maps.
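For reference, the empirical NTK Gram matrix whose smallest eigenvalue is being bounded is standardly defined as follows (generic notation, not necessarily the paper's):

```latex
\Theta(x, x') = \left\langle \nabla_\theta f(x; \theta),\, \nabla_\theta f(x'; \theta) \right\rangle,
\qquad
K_{ij} = \Theta(x_i, x_j), \quad i, j \in \{1, \dots, n\},
```

where $f(\cdot;\theta)$ is the network, $\{x_i\}_{i=1}^n$ are the data samples, and the bounds concern $\lambda_{\min}(K)$.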
Here, we demonstrate how machine learning enables the prediction of comonomer reactivity ratios based on the molecular structure of the monomers. We combined multi-task learning, multiple inputs, and a Graph Attention Network to build a model capable of predicting reactivity ratios from the monomers' chemical structures.
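The abstract does not reproduce the architecture details; for orientation, the standard single-head graph-attention scoring rule used by Graph Attention Networks can be sketched as follows (function names and the split attention vector are illustrative, and features are assumed already linearly transformed).

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x >= 0 else slope * x

def gat_attention(h_i, neighbors, a_src, a_dst):
    """Attention coefficients of node i over its neighbors (one head):
    e_ij = LeakyReLU(a_src . h_i + a_dst . h_j),
    alpha_ij = softmax over neighbors j of e_ij."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    scores = [leaky_relu(dot(a_src, h_i) + dot(a_dst, h_j))
              for h_j in neighbors]
    # Numerically stable softmax over the neighborhood.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

alphas = gat_attention([1.0], [[1.0], [0.0]], [1.0], [1.0])
```

Each monomer's atoms then aggregate neighbor features weighted by these coefficients, letting the model focus on the substructures most relevant to reactivity.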
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon, known as "Neural Collapse," was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove the occurrence of Neural Collapse for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our study to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
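The geometry in question is the simplex equiangular tight frame (ETF): in the balanced $K$-class case, the globally centered class means $\bar{\mu}_1, \dots, \bar{\mu}_K$ converge to equal-norm, maximally separated vectors. The standard characterization (stated here in generic notation, not the paper's) is:

```latex
\frac{\langle \bar{\mu}_i, \bar{\mu}_j \rangle}{\lVert \bar{\mu}_i \rVert \, \lVert \bar{\mu}_j \rVert}
= \begin{cases} 1, & i = j, \\[4pt] -\dfrac{1}{K-1}, & i \neq j, \end{cases}
```

with within-class features collapsing to their class means and the classifier rows aligning with those means.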
Machine Reading Comprehension has become one of the most advanced and popular research topics in Natural Language Processing in recent years. Classifying the answerability of questions is a significant sub-task in machine reading comprehension; however, it has received relatively little study. Retro-Reader is one of the studies that has addressed this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and of Retro-Reader in particular, cannot fully exploit the contextual semantic information of the passage. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on answerability classification for Vietnamese machine reading comprehension. Additionally, we hope this experiment will enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The improved, semantics-enhanced Retro-Reader encoder was first applied to the Vietnamese Machine Reading Comprehension task and obtained positive results.
Recognizing textual entailment (RTE) is a significant problem with a reasonably active research community. Proposed approaches to this problem are quite diverse, spanning many different directions. For Vietnamese, the RTE problem is relatively new, but it plays a vital role in natural language understanding systems. Currently, methods based on contextual word representation models have given outstanding results. However, Vietnamese is a semantically rich language. Therefore, in this paper, we present an experiment combining semantic word representations from the SRL task with the contextual representations of BERT-family models for the RTE problem. The experimental results support conclusions about the influence and role of semantic representation for natural language understanding in Vietnamese. They show that the semantics-aware contextual representation model performs about 1% better than the model without semantic representation. In addition, the effects across data domains are larger in Vietnamese than in English. This result also shows the positive influence of SRL on the RTE problem in Vietnamese.
To the best of our knowledge, this paper makes the first attempt to answer whether word segmentation is necessary for Vietnamese sentiment classification. To do this, we present five pre-trained monolingual S4-based language models for Vietnamese: one model without word segmentation, and four models using the RDRsegmenter, uitnlp, pyvi, or underthesea toolkits in the data pre-processing phase. Based on comprehensive experimental results on two corpora, the VLSP2016-SA corpus of technical article reviews from news and social media and the UIT-VSFC corpus of educational surveys, we make two suggestions. First, with traditional classifiers such as Naive Bayes or Support Vector Machines, word segmentation may not be necessary for Vietnamese sentiment classification corpora from the social domain. Second, word segmentation is necessary when it is applied before the BPE method and the result is fed into a deep learning model. In this regard, RDRsegmenter is the most stable word segmentation toolkit compared with uitnlp, pyvi, and underthesea.