The Nesterov's accelerated quasi-Newton (L)NAQ method has been shown to accelerate the conventional (L)BFGS quasi-Newton method using Nesterov's accelerated gradient in several neural network (NN) applications. However, the computation of two gradients per iteration increases the computational cost. The momentum accelerated quasi-Newton (MoQ) method showed that Nesterov's accelerated gradient can be approximated as a linear combination of past gradients. This abstract extends the MoQ approximation to limited-memory NAQ and evaluates its performance on a function approximation problem.
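For concreteness, the linear-combination claim admits a compact first-order form. Writing E for the loss, w_k for the weights, v_k = w_k - w_{k-1} for the momentum term, and μ for the momentum coefficient (our notation, a plausible reconstruction rather than the paper's exact statement), a Taylor expansion followed by a secant approximation of the Hessian-vector product gives:

```latex
\nabla E(\mathbf{w}_k + \mu \mathbf{v}_k)
  \approx \nabla E(\mathbf{w}_k) + \mu \nabla^2 E(\mathbf{w}_k)\,\mathbf{v}_k
  \approx (1+\mu)\,\nabla E(\mathbf{w}_k) - \mu\,\nabla E(\mathbf{w}_{k-1}).
```

Under this approximation only one fresh gradient is needed per iteration; the second term reuses the stored previous gradient, removing the extra gradient evaluation that (L)NAQ requires.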
Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the limitations of relying solely on their parameters to encode a wealth of world knowledge. This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge probing experiments of 10 models and 4 augmentation methods on PopQA, our new open-domain QA dataset with 14k questions. We find that LMs struggle with less popular factual knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the tail. We then show that retrieval-augmented LMs largely outperform orders of magnitude larger LMs, while unassisted LMs remain competitive in questions about high-popularity entities. Based on those findings, we devise a simple, yet effective, method for powerful and efficient retrieval-augmented LMs, which retrieves non-parametric memories only when necessary. Experimental results show that this significantly improves models' performance while reducing the inference costs.
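A minimal sketch of the retrieve-only-when-necessary idea, assuming entity popularity is measured by something like Wikipedia page views; `popularity`, `retrieve`, and `lm_answer` are hypothetical toy stand-ins, not the paper's actual components:

```python
from typing import List, Optional

def popularity(entity: str) -> float:
    """Toy popularity lookup (stand-in for, e.g., page-view counts)."""
    return {"Barack Obama": 1e6}.get(entity, 100.0)

def retrieve(query: str, k: int = 5) -> List[str]:
    """Toy retriever returning placeholder passages."""
    return [f"passage {i} about: {query}" for i in range(k)]

def lm_answer(question: str, context: Optional[List[str]] = None) -> str:
    """Placeholder LM call; a real system would prompt an actual model."""
    prompt = "\n".join((context or []) + [question])
    return f"<answer to: {prompt[:40]}...>"

def adaptive_answer(question: str, entity: str,
                    pop_threshold: float = 1e4) -> str:
    """Consult non-parametric memory only for long-tail entities."""
    if popularity(entity) >= pop_threshold:
        # Popular entity: parametric memory tends to suffice,
        # so skip retrieval and save inference cost.
        return lm_answer(question)
    # Rare entity: augment the LM with retrieved passages.
    return lm_answer(question, context=retrieve(question))
```

The threshold is the only moving part: above it the cheaper unassisted LM is trusted, below it the system pays for retrieval, matching the finding that assistance helps mainly in the tail.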
Previous studies have explored generating accurately lip-synced talking faces for arbitrary targets given audio conditions. However, most of them deform or generate the whole facial area, leading to non-realistic results. In this work, we delve into the formulation of altering only the mouth shapes of the target person. This requires masking a large percentage of the original image and seamlessly inpainting it with the aid of audio and reference frames. To this end, we propose the Audio-Visual Context-Aware Transformer (AV-CAT) framework, which produces accurate lip-sync with photo-realistic quality by predicting the masked mouth shapes. Our key insight is to exploit desired contextual information provided in audio and visual modalities thoroughly with delicately designed Transformers. Specifically, we propose a convolution-Transformer hybrid backbone and design an attention-based fusion strategy for filling the masked parts. It uniformly attends to the textural information on the unmasked regions and the reference frame. Then the semantic audio information is involved in enhancing the self-attention computation. Additionally, a refinement network with audio injection improves both image and lip-sync quality. Extensive experiments validate that our model can generate high-fidelity lip-synced results for arbitrary subjects.
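The fusion strategy can be pictured with a small block of cross-attention layers. The sketch below is our illustration of the abstract's description, not the paper's actual architecture; all module names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Sketch of attention-based audio-visual fusion in the spirit of AV-CAT.
    Masked-mouth tokens attend uniformly to unmasked-region and
    reference-frame tokens, then audio tokens are injected."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.visual_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, masked, unmasked, reference, audio):
        # Fill masked parts from textural cues in the unmasked region
        # and the reference frame, attended to uniformly.
        ctx = torch.cat([unmasked, reference], dim=1)  # (B, Lu+Lr, dim)
        vis, _ = self.visual_attn(masked, ctx, ctx)
        x = masked + vis
        # Involve semantic audio information in the attention computation.
        aud, _ = self.audio_attn(x, audio, audio)
        x = x + aud
        return x + self.ff(x)
```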
While the NLP community is generally aware of resource disparities among languages, we lack research that quantifies the extent and types of such disparity. Prior surveys estimating the availability of resources based on the number of datasets can be misleading as dataset quality varies: many datasets are automatically induced or translated from English data. To provide a more comprehensive picture of language resources, we examine the characteristics of 156 publicly available NLP datasets. We manually annotate how they are created, including input text and label sources and tools used to build them, and what they study, tasks they address and motivations for their creation. After quantifying the qualitative NLP resource gap across languages, we discuss how to improve data collection in low-resource languages. We survey language-proficient NLP researchers and crowd workers per language, finding that their estimated availability correlates with dataset availability. Through crowdsourcing experiments, we identify strategies for collecting high-quality multilingual data on the Mechanical Turk platform. We conclude by making macro and micro-level suggestions to the NLP community and individual researchers for future multilingual data development.
We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware retrieval system using multi-task instruction tuning, which can follow human-written instructions to find the best documents for a given query. We introduce the first large-scale collection of approximately 40 retrieval datasets with instructions, BERRI, and present TART, a multi-task retrieval system trained on BERRI with instructions. TART shows strong capabilities to adapt to a new retrieval task via instructions and advances the state of the art on two zero-shot retrieval benchmarks, BEIR and LOTTE, outperforming models up to three times larger. We further introduce a new evaluation setup, X^2-Retrieval, to better reflect real-world scenarios in which diverse domains and tasks are pooled and a system needs to find documents aligned with users' intents. In this setup, TART significantly outperforms competitive baselines, further demonstrating the effectiveness of guiding retrieval with instructions.
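The core mechanism — prepending a natural-language intent to the query before encoding — can be sketched as follows. The toy deterministic `encode` stands in for TART's trained dual encoder (hypothetical; the real system uses a multi-task instruction-tuned transformer):

```python
import numpy as np
from typing import List

def encode(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic text encoder (placeholder for a dual encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def search(instruction: str, query: str, docs: List[str], k: int = 3):
    """Task-aware retrieval: the instruction is prepended to the query, so
    the same query string can serve different intents."""
    q = encode(instruction + " [SEP] " + query)
    ranked = sorted(docs, key=lambda d: -float(q @ encode(d)))
    return ranked[:k]

corpus = [
    "MoQ approximates Nesterov's gradient with past gradients.",
    "A recipe blog post about sourdough starters.",
]
hits = search("Retrieve a scientific paper abstract answering the question",
              "how do quasi-Newton methods use momentum?", corpus, k=1)
```

With a trained encoder, changing only the instruction (e.g. asking for code rather than an abstract) would re-rank the same corpus toward a different intent.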
This paper explores the task of temporal video grounding (TVG): given an untrimmed video and a query sentence, the goal is to identify and localize the temporal boundaries of the action instance described in the video by the provided natural-language query. Recent works address this task by directly encoding the query with large pre-trained language models (PLMs). However, it is difficult to isolate the effect of the improved language representations, since these works also propose improvements to the visual inputs. Moreover, these PLMs substantially increase the computational cost of training TVG models. This paper therefore studies the impact of PLMs on the TVG task and evaluates the applicability of parameter-efficient training alternatives from NLP based on adapters. We combine popular PLMs with a selection of existing approaches and test different adapters to reduce the impact of the additional parameters. Our results on three challenging datasets show that TVG models benefit greatly from PLMs when they are fine-tuned for the task, and that adapters are an effective alternative to full fine-tuning, even though they were not tailored to our task. Concretely, adapters help to save computational cost, allowing PLM integration into larger TVG models, and deliver results comparable to state-of-the-art models. Finally, by benchmarking different types of adapters in TVG, our results shed light on which adapter works best for each studied case.
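For readers unfamiliar with adapters, the standard bottleneck design (Houlsby et al.-style) is small enough to sketch in full; this is a generic illustration, not necessarily the paper's exact configuration:

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: the only new trainable parameters when the
    PLM itself is kept frozen."""

    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, dim)    # project back up
        self.act = nn.ReLU()

    def forward(self, x):
        # The residual connection preserves the frozen PLM's representation
        # when the adapter is initialized near zero.
        return x + self.up(self.act(self.down(x)))
```

One such module is typically inserted after each transformer sublayer, and during training only the adapter (and task head) parameters receive gradients, which is where the computational savings come from.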
The symbolic AI community is increasingly trying to embrace machine learning in neuro-symbolic architectures, yet is still struggling due to cultural barriers. To break the barrier, this rather opinionated personal memo attempts to explain and rectify the conventions in statistics, machine learning, and deep learning from the viewpoint of outsiders. It provides a step-by-step protocol for designing a machine learning system that satisfies a minimum theoretical guarantee necessary for being taken seriously by the symbolic AI community, i.e., it discusses "under which conditions we can stop worrying and accept statistical machine learning." Some highlights: Most textbooks are written for those who plan to specialize in stat/ML/DL and are expected to accept the jargon; this memo is for experienced symbolic researchers who have heard a lot of buzz but are still unsure and skeptical. Information on stat/ML/DL is currently too scattered or noisy to invest in, so this memo prioritizes compactness and pays special attention to concepts that resonate well with the symbolic paradigm. I hope this memo saves time. It prioritizes general mathematical modeling and does not discuss any specific function approximators, such as neural networks (NNs), SVMs, or decision trees. It is open to corrections. Consider this memo something similar to a blog post, taking the form of a paper on arXiv.
We introduce RealTime QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version). RealTime QA asks about the current world, and QA systems need to answer questions about novel events or information. It therefore challenges the static, conventional assumptions of QA datasets and pursues instantaneous applications. We build strong baseline models upon large language models, including GPT-3 and T5. Our benchmark is an ongoing effort, and this preliminary report presents real-time evaluation results over the past month. Our experimental results show that GPT-3 can often properly update its generation results based on newly retrieved documents, highlighting the importance of up-to-date information retrieval. Nonetheless, we find that GPT-3 tends to return outdated answers when retrieved documents do not provide sufficient information to find an answer. This suggests an important avenue for future research: can an open-domain QA system identify such unanswerable cases and communicate with the user, or even with the retrieval module, to modify the retrieval results? We hope RealTime QA will spur instantaneous applications of question answering and beyond.
We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages. In this task, we adapted two large-scale cross-lingual open-retrieval QA datasets covering 14 typologically diverse languages, and newly annotated open-retrieval QA data in 2 underrepresented languages: Tagalog and Tamil. Four teams submitted their systems. The best system, leveraging iteratively mined diverse negative examples and larger pretrained models, achieves 32.2 F1, outperforming our baseline by 4.5 points. The second-best system uses entity-aware contextualized representations for document retrieval and achieves significant improvements in Tamil (20.8 F1), whereas most of the other systems yield nearly zero scores.
Convolutional neural networks (CNNs) suffer from several robustness problems. For example, the prediction of a CNN can be changed by adding a small amount of noise to the input, and a CNN's performance degrades when the input distribution is shifted by a transformation never seen during training (e.g., a blur effect). There are methods that replace pixel values with binary embeddings to address adversarial perturbations, successfully improving robustness. In this work, we propose Pixel to Binary Embedding (P2BE) to improve the robustness of CNNs. P2BE is a learnable binary embedding method, in contrast to previous hand-coded binary embedding methods. P2BE outperforms other binary embedding methods in robustness against adversarial perturbations and against visual corruptions not shown during training.
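A minimal sketch of a learnable per-pixel embedding in the spirit of P2BE follows; this is our reading of the abstract, not the authors' exact design, and a true binary variant would additionally binarize the codes (e.g., with a straight-through sign), omitted here for simplicity:

```python
import torch
import torch.nn as nn

class PixelToBinaryEmbedding(nn.Module):
    """Each 8-bit intensity level is mapped to a learned d-dimensional code
    that replaces the raw pixel value before the CNN backbone."""

    def __init__(self, levels: int = 256, dim: int = 8):
        super().__init__()
        self.table = nn.Embedding(levels, dim)  # one learnable code per level

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, C, H, W) floats in [0, 1]; quantize to integer indices.
        idx = (img.clamp(0, 1) * 255).long()    # (B, C, H, W)
        emb = self.table(idx)                   # (B, C, H, W, dim)
        b, c, h, w, d = emb.shape
        # Fold the embedding dimension into channels for a standard CNN input.
        return emb.permute(0, 1, 4, 2, 3).reshape(b, c * d, h, w)
```

Because the lookup table is trained jointly with the downstream CNN, the embedding can learn codes under which small input perturbations move representations less than raw pixel values would, which is the intuition behind the robustness gains.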