We present LINGUIST, a method for generating annotated data for Intent Classification and Slot Tagging (IC+ST) by fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a flexible instruction prompt. In a 10-shot novel intent setting on the SNIPS dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and Example Extrapolation) by a wide margin, showing absolute improvements on the target intents of +1.9 points in IC recall and +2.5 points in ST F1 score. In the zero-shot cross-lingual setting on the mATIS++ dataset, LINGUIST outperforms a strong baseline of machine translation with slot alignment by +4.14 points absolute in ST F1 score across 6 languages, while matching its performance on IC. Finally, we validate our results on an internal large-scale multilingual dataset for conversational-agent IC+ST and show significant improvements over a baseline that uses Back-Translation, paraphrasing, and slot catalog resampling. To our knowledge, we are the first to demonstrate instruction fine-tuning of a large-scale seq2seq model to control the output of multilingual intent- and slot-annotated data generation.
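The abstract does not reproduce the exact prompt format, but a LINGUIST-style instruction prompt for generating slot-annotated utterances might look like the following sketch. The template wording, the numbered-bracket slot notation, and the `build_prompt` helper are illustrative assumptions, not the authors' actual format:

```python
# Hypothetical sketch of a LINGUIST-style instruction prompt for generating
# annotated IC+ST training data; the template wording and bracket-number
# slot notation are assumptions, not the paper's exact format.

def build_prompt(intent: str, slots: dict[str, str | None], language: str) -> str:
    """Compose an instruction prompt asking a seq2seq model to produce an
    utterance for `intent`, with slot values wrapped in numbered brackets
    so they can be recovered as ST labels afterwards."""
    slot_lines = []
    for i, (slot_name, value) in enumerate(slots.items(), start=1):
        # A '*' value asks the model to invent a plausible filler.
        filler = value if value is not None else "*"
        slot_lines.append(f"[{i}] {slot_name} = {filler}")
    return (
        f"Generate a {language} utterance for intent '{intent}'.\n"
        "Mark each slot value with its bracket number.\n"
        + "\n".join(slot_lines)
    )

prompt = build_prompt(
    intent="BookRestaurant",
    slots={"restaurant_name": None, "party_size": "four"},
    language="English",
)
# A model output such as "book a table at [1 Luigi's 1] for [2 four 2]"
# could then be parsed back into token-level slot tags.
```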
In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. In particular, we train a 20-billion-parameter multilingual seq2seq model called the Alexa Teacher Model (AlexaTM 20B) and show that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming the much larger 540B PaLM decoder model. AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for low-resource languages, across almost all language pairs (Arabic, English, French, German, Hindi, Italian, Japanese, and Telugu) on the Flores-101 dataset. We also show that, in the zero-shot setting, AlexaTM 20B outperforms GPT-3 (175B) on the SuperGLUE and SQuADv2 datasets and provides SOTA performance on multilingual tasks such as XNLI, XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case for seq2seq models as a powerful alternative to decoder-only models for large language model (LLM) training.
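The abstract describes pre-training on a mixture of denoising and CLM objectives; a minimal sketch of how such a per-example mixture could be formed is below. The 80/20 mixing ratio, the mask sentinel, and the helper names are assumptions for illustration, not the paper's recipe:

```python
import random

MASK = "<mask>"  # placeholder sentinel; the real tokenizer's sentinel may differ

def make_denoising_example(tokens: list[str], span_len: int = 3):
    """Drop a contiguous span from the input and ask the decoder to
    reconstruct the full sequence (a simplified denoising objective)."""
    start = random.randrange(max(1, len(tokens) - span_len))
    corrupted = tokens[:start] + [MASK] + tokens[start + span_len:]
    return corrupted, tokens  # (encoder input, decoder target)

def make_clm_example(tokens: list[str], prefix_frac: float = 0.5):
    """Feed a prefix to the encoder and train the decoder to continue it
    (a simplified causal-LM objective for a seq2seq model)."""
    cut = max(1, int(len(tokens) * prefix_frac))
    return tokens[:cut], tokens[cut:]

def sample_training_example(tokens: list[str], clm_prob: float = 0.2):
    # Mix the two objectives; the 20% CLM share is an illustrative guess.
    if random.random() < clm_prob:
        return make_clm_example(tokens)
    return make_denoising_example(tokens)
```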
We present results from a large-scale experiment on pre-training encoders with parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the Natural Language Understanding (NLU) component of a virtual assistant system. Although we train using 70% spoken-form data, our teacher models perform comparably to XLM-R and mT5 when evaluated on the written-form Cross-lingual Natural Language Inference (XNLI) corpus. We perform a second stage of pre-training on our teacher models using in-domain data from our system, improving error rates by 3.86% relative for intent classification and 7.01% relative for slot filling. We find that even a 170M-parameter model distilled from our Stage 2 teacher has a 2.88% better intent classification error rate and a 7.69% better slot-filling error rate than the 2.3B-parameter teacher trained only on public data (Stage 1), underscoring the importance of in-domain data for pre-training. When evaluated offline using labeled NLU data, our 17M-parameter Stage 2 distilled model outperforms both XLM-R Base (85M params) and DistillBERT (42M params) by 4.23% to 6.14%, respectively. Finally, we present results from a full virtual assistant experimentation platform, where we find that models trained with our pre-training and distillation pipeline outperform models distilled from 85M-parameter teachers by 3.74%-4.91% on an automatic measure of full-system user dissatisfaction.
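The abstract does not spell out the distillation objective; a standard soft-label distillation loss of the kind typically used in such teacher-student pipelines is sketched below. The temperature value is an illustrative assumption, and the paper's actual objective may combine this with hard-label and intermediate-layer terms:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label KD loss: KL divergence between the teacher's and the
    student's temperature-softened output distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * t * t
```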
Learning models are highly dependent on data to work effectively, and they perform better when trained on large datasets. A substantial body of research addresses the dataset-adequacy issue. One promising approach for solving it is data augmentation (DA). In DA, the number of training instances is increased by applying different transformations to the available instances to generate new, correct, and representative ones. DA increases the dataset's size and variability, which enhances model performance and prediction accuracy; it also mitigates the class-imbalance problem in classification learning techniques. Few studies have recently considered DA for the Arabic language, and these rely on traditional augmentation approaches, such as rule-based paraphrasing or noising-based techniques. In this paper, we propose a new Arabic DA method that employs a recent, powerful generative model, AraGPT-2, for the augmentation process. The generated sentences are evaluated in terms of context, semantics, diversity, and novelty using the Euclidean, cosine, Jaccard, and BLEU distances. Finally, the AraBERT transformer is used on sentiment classification tasks to evaluate the classification performance of the augmented Arabic dataset. The experiments were conducted on four Arabic sentiment datasets, namely AraSarcasm, ASTD, ATT, and MOVIE, which vary in size, number of labels, and class imbalance. The results show that the proposed methodology enhanced Arabic sentiment text classification on all datasets, increasing the F1 score by 4% on AraSarcasm, 6% on ASTD, 9% on ATT, and 13% on MOVIE.
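A minimal sketch of generating augmentation candidates with AraGPT-2 via the Hugging Face `transformers` library is shown below. The checkpoint name and sampling settings are assumptions; the paper's exact generation configuration may differ:

```python
# Minimal sketch: generate Arabic augmentation candidates with AraGPT-2.
# The checkpoint name and sampling hyperparameters are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="aubmindlab/aragpt2-base")

def augment(seed_text: str, n: int = 3) -> list[str]:
    """Continue a labeled seed sentence n times to create new candidates
    that inherit the seed's sentiment label (subject to later filtering
    by the distance metrics described above)."""
    outputs = generator(
        seed_text,
        max_new_tokens=30,
        num_return_sequences=n,
        do_sample=True,  # sampling increases the diversity of candidates
        top_p=0.95,
    )
    return [o["generated_text"] for o in outputs]
```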
In addition to its public health crisis, the COVID-19 pandemic has led to the shutdown and closure of workplaces, with an estimated total cost of more than $16 trillion. Given the long hours an average person spends in buildings and indoor environments, this research article proposes data-driven control strategies to design optimal indoor airflow that minimizes the exposure of occupants to viral pathogens in built environments. A general control framework is put forward for designing an optimal velocity field, and proximal policy optimization (PPO), a reinforcement learning algorithm, is employed to solve the control problem in a data-driven fashion. The same framework is used for optimal placement of disinfectants to neutralize the viral pathogens, as an alternative to the airflow design when the latter is practically infeasible or hard to implement. We show, via simulation experiments, that the control agent learns the optimal policy in both scenarios within a reasonable time. The proposed data-driven control framework will have significant societal and economic benefits by setting the foundation for an improved methodology in designing case-specific infection control guidelines that can be realized with affordable ventilation devices and disinfectants.
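The abstract names proximal policy optimization as the learning algorithm; a generic PPO training loop of the kind that could drive such an airflow controller is sketched below, using the off-the-shelf `stable-baselines3` implementation. The `AirflowEnv` environment, its observation/action spaces, and its toy reward are all hypothetical stand-ins for the paper's physics-based simulator:

```python
# Hypothetical sketch: PPO on a stand-in airflow-control environment.
# AirflowEnv and its dynamics are illustrative; the paper couples the
# agent to a physics-based indoor-airflow simulation instead.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class AirflowEnv(gym.Env):
    """Toy environment: the action sets inlet velocities, the observation
    is a coarse pathogen-concentration field, and the reward penalizes
    occupant exposure."""
    def __init__(self, n_cells: int = 16, n_inlets: int = 4):
        super().__init__()
        self.n_cells = n_cells
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(n_cells,))
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(n_inlets,))
        self.concentration = np.zeros(n_cells, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.concentration = self.np_random.random(self.n_cells).astype(np.float32)
        return self.concentration, {}

    def step(self, action):
        # Placeholder dynamics: stronger ventilation dilutes concentration.
        dilution = 0.1 * (1.0 + float(np.abs(action).mean()))
        self.concentration = np.clip(self.concentration - dilution,
                                     0.0, 1.0).astype(np.float32)
        reward = -float(self.concentration.mean())  # minimize exposure
        terminated = bool(self.concentration.max() < 0.01)
        return self.concentration, reward, terminated, False, {}

model = PPO("MlpPolicy", AirflowEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```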
The evaluation of abstractive summarization models typically uses test data that is distributed identically to the training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under the distribution shift caused by such noise is relatively under-studied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise across a range of datasets and model sizes. We then propose a lightweight method for detecting and removing such noise from the input during model inference, without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the drop, sometimes as much as 11 ROUGE-1 points.
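One simple instantiation of inference-time noise removal is to score input sentences by their likelihood under a pretrained language model and drop outliers; the sketch below shows that idea, and it is not necessarily the paper's exact detector:

```python
# Hedged sketch: flag noisy input sentences by their likelihood under a
# pretrained LM and drop outliers before summarization. One simple
# instantiation of inference-time noise removal, not the paper's method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def sentence_nll(sentence: str) -> float:
    """Mean negative log-likelihood per token; garbled extraction
    artifacts tend to score much worse than fluent prose."""
    ids = tok(sentence, return_tensors="pt").input_ids
    return float(lm(ids, labels=ids).loss)

def strip_noise(sentences: list[str], z_thresh: float = 2.0) -> list[str]:
    scores = [sentence_nll(s) for s in sentences]
    mean = sum(scores) / len(scores)
    std = (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5 or 1.0
    # Keep sentences whose NLL is not an extreme positive outlier.
    return [s for s, x in zip(sentences, scores) if (x - mean) / std < z_thresh]
```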
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit model performance. In this paper, we explore an alternative to supervised underwater image enhancement. Specifically, we propose a novel unsupervised framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and a statistically guided multi-colour space stretch, producing realistic underwater images. The resulting framework, which we call UDnet, is composed of a U-Net as a feature extractor and a PAdaIN to encode the uncertainty. To improve the visual quality of the images generated by UDnet, the statistically guided multi-colour space stretch module ensures visual consistency with the input image and provides an alternative to training against a ground-truth image. The proposed model does not need manual human annotation, can learn from a limited amount of data, and achieves state-of-the-art results on underwater images. We evaluated our framework on eight publicly available datasets. The results show that it yields competitive performance compared to other state-of-the-art approaches on both quantitative and qualitative metrics. Code is available at https://github.com/alzayats/UDnet .
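At the core of UDnet is probabilistic adaptive instance normalization; the sketch below shows plain AdaIN extended with sampled affine statistics, which conveys the idea. The Gaussian-perturbation scheme here is a simplification assumed for illustration, not the paper's exact PAdaIN formulation:

```python
import torch

def padain(content: torch.Tensor, style_mean: torch.Tensor,
           style_std: torch.Tensor, sigma: float = 0.1,
           eps: float = 1e-5) -> torch.Tensor:
    """AdaIN with stochastic statistics: normalize per-channel content
    features, then rescale with style statistics perturbed by Gaussian
    noise so the decoder sees a distribution of plausible enhancements.
    (Simplified stand-in for UDnet's PAdaIN.)
    content: (N, C, H, W); style_mean, style_std: (N, C)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content - c_mean) / c_std
    # Sample the affine parameters instead of using them deterministically.
    mu = (style_mean + sigma * torch.randn_like(style_mean))[:, :, None, None]
    sd = (style_std + sigma * torch.randn_like(style_std))[:, :, None, None]
    return normalized * sd + mu
```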
Successful point cloud registration relies on accurate correspondences established upon powerful descriptors. However, existing neural descriptors either leverage a rotation-variant backbone, whose performance declines under large rotations, or encode local geometry, which is less distinctive. To address this issue, we introduce RIGA to learn descriptors that are Rotation-Invariant by design and Globally-Aware. From the Point Pair Features (PPFs) of sparse local regions, rotation-invariant local geometry is encoded into geometric descriptors. Global awareness of both 3D structures and geometric context is then incorporated, likewise in a rotation-invariant fashion. More specifically, the 3D structure of the whole frame is first represented by our global PPF signatures, from which structural descriptors are learned to help the geometric descriptors sense the 3D world beyond local regions. The geometric context of the whole scene is then globally aggregated into the descriptors. Finally, the descriptors of sparse regions are interpolated to dense point descriptors, from which correspondences are extracted for registration. To validate our method, we conduct extensive experiments on both object- and scene-level data. Under large rotations, RIGA surpasses the state-of-the-art methods by a margin of 8 degrees in terms of the Relative Rotation Error on ModelNet40 and improves Feature Matching Recall by at least 5 percentage points on 3DLoMatch.
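The point pair feature on which RIGA builds is a standard four-dimensional, rotation-invariant quantity; a NumPy sketch of its usual definition follows (this is the classic PPF formulation, assumed here to match the one the paper builds on):

```python
import numpy as np

def angle(v1: np.ndarray, v2: np.ndarray) -> float:
    """Unsigned angle between two vectors, robust to numerical drift."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def ppf(p1, n1, p2, n2) -> np.ndarray:
    """Classic 4D point pair feature of points p1, p2 with normals n1, n2:
    (angle(n1, d), angle(n2, d), angle(n1, n2), ||d||), all invariant to a
    rigid rotation applied to the pair."""
    d = p2 - p1
    return np.array([angle(n1, d), angle(n2, d), angle(n1, n2),
                     np.linalg.norm(d)])
```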
Recently, the problem of traffic accident risk forecasting has been attracting the attention of the intelligent transportation systems community because of its significant impact on traffic safety. This problem is commonly addressed in the literature with data-driven approaches that model the influence of spatial and temporal events, which have been shown to be crucial for the traffic accident risk forecasting problem. To achieve this, most approaches build separate architectures to capture spatio-temporal correlation features, which makes them inefficient on large traffic accident datasets. In this work, we therefore propose a novel unified framework, a contextual vision transformer, that can be trained end-to-end and that effectively reasons about both the spatial and temporal aspects of the problem while providing accurate traffic accident risk predictions. We evaluate and compare the performance of our proposed method against baseline approaches from the literature on two large-scale traffic accident datasets from two different geographical locations. The results show a significant improvement of roughly 2% in RMSE score compared to the previous state-of-the-art (SOTA) works in the literature. Moreover, our proposed approach outperforms the SOTA techniques on both datasets while requiring 23x less computation.
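As a rough illustration of the model family involved, the sketch below embeds a spatio-temporal accident-risk grid into per-cell tokens and runs them through a standard transformer encoder. This is a generic vision-transformer-style skeleton assumed for illustration, not the paper's contextual architecture:

```python
import torch
import torch.nn as nn

class SpatioTemporalRiskTransformer(nn.Module):
    """Generic sketch: flatten a (time, height, width) accident-risk grid
    into per-cell tokens and let a transformer encoder attend across
    space before regressing next-step risk per cell."""
    def __init__(self, t_steps: int = 8, grid: int = 20, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(t_steps, d_model)   # each cell's history -> token
        self.pos = nn.Parameter(torch.zeros(grid * grid, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)          # next-step risk per cell

    def forward(self, x):                          # x: (batch, t_steps, H, W)
        b, t, h, w = x.shape
        tokens = x.reshape(b, t, h * w).transpose(1, 2)  # (b, cells, t)
        z = self.encoder(self.embed(tokens) + self.pos)
        return self.head(z).reshape(b, h, w)

model = SpatioTemporalRiskTransformer()
pred = model(torch.randn(2, 8, 20, 20))  # -> (2, 20, 20) risk map
```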
Predicting traffic incident duration is a hard problem due to the stochastic nature of spatio-temporal events, the lack of information at the start of a reported traffic disruption, and the lack of advanced methods in transportation engineering for deriving insights from past accidents. This paper proposes a new fusion framework for predicting incident duration from limited information by integrating machine learning with traffic flow/speed and incident descriptions as features, encoded via several deep learning methods (an ANN autoencoder and a character-level LSTM-ANN sentiment classifier). The paper builds a cross-disciplinary modeling approach spanning transportation and data science. The approach improves incident duration prediction accuracy over the best-performing ML models applied to baseline incident reports. Results show that our proposed method can improve accuracy by 60% compared to standard linear or support vector regression models, with a further 7% improvement over a hybrid deep-learning auto-encoded GBDT model, which appears to outperform all other models. The application area is the city of San Francisco, which is rich in traffic incident logs (Countrywide Traffic Accident Dataset) and past historical traffic congestion information (5-minute precision measurements from the Caltrans Performance Measurement System).
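A stripped-down version of the fusion idea, concatenating a text encoding of the incident description with numeric traffic features and feeding the result to a gradient-boosted regressor, might look like this. The character-n-gram hashing vectorizer is a brevity-driven stand-in for the paper's LSTM/autoencoder encoders, and all data below is toy data:

```python
# Stripped-down fusion sketch: character-level text features for the
# incident description + numeric traffic features -> GBDT regressor.
# HashingVectorizer stands in for the paper's LSTM/autoencoder encoders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_extraction.text import HashingVectorizer

text_enc = HashingVectorizer(analyzer="char", ngram_range=(2, 4), n_features=256)

def fuse(descriptions: list[str], traffic: np.ndarray) -> np.ndarray:
    """Concatenate char-n-gram text features with flow/speed features."""
    text_feats = text_enc.transform(descriptions).toarray()
    return np.hstack([text_feats, traffic])

# Toy data: two incidents with (flow, speed) readings and duration targets.
X = fuse(["stalled truck blocking lane 2", "minor collision, shoulder"],
         np.array([[820.0, 34.5], [610.0, 52.0]]))
y = np.array([95.0, 30.0])  # incident durations in minutes

model = GradientBoostingRegressor().fit(X, y)
print(model.predict(X))
```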