With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future directions of ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques for ICL, including training strategies, prompting strategies, and more. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work encourages more research on uncovering how ICL works and on improving ICL.
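To make the ICL setup concrete, the following is a minimal sketch of few-shot prompt construction; the sentiment-classification template and examples are illustrative assumptions, not drawn from the survey.

```python
# A minimal sketch of in-context learning: the model receives a few
# demonstrations in its context and predicts the label for a new query
# without any parameter updates. Template and examples are illustrative.

def build_icl_prompt(demonstrations, query):
    """Concatenate (input, label) demonstrations with the test query."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("A dull, lifeless film.", "negative"),
]
prompt = build_icl_prompt(demos, "An absolute delight to watch.")
print(prompt)  # feed this string to any LLM completion API
```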
Frame Semantic Role Labeling (FSRL) identifies arguments and labels them with frame semantic roles defined in FrameNet. Previous research tends to divide FSRL into argument identification and role classification. Such methods usually model role classification as naive multi-class classification and treat arguments individually, which neglects label semantics and interactions between arguments, and thus hinders the performance and generalization of models. In this paper, we propose a query-based framework named ArGument Extractor with Definitions in FrameNet (AGED) to mitigate these problems. Definitions of frames and frame elements (FEs) in FrameNet can be used to query arguments in text. Encoding text-definition pairs can guide models in learning label semantics and strengthening argument interactions. Experiments show that AGED outperforms the previous state-of-the-art by up to 1.3 F1-score on two FrameNet datasets and demonstrates strong generalization in zero-shot and few-shot scenarios. Our code and technical appendix are available at https://github.com/PKUnlp-icler/AGED.
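As a rough illustration of the query-based formulation (the `FE_DEFINITIONS` mapping and the pairing template below are hypothetical, not the authors' released code), each FE definition can be encoded together with the sentence so that a span head can mark the tokens filling that FE:

```python
# Hedged sketch: each frame-element definition acts as a query paired
# with the sentence, so label semantics are encoded jointly with text.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

FE_DEFINITIONS = {  # toy stand-ins for FrameNet definitions
    "Buyer": "The person who exchanges money for goods.",
    "Goods": "The items that change ownership in the transaction.",
}

sentence = "Mary bought a used car."

for fe, definition in FE_DEFINITIONS.items():
    # Text-definition pair: definition first, sentence second.
    enc = tokenizer(definition, sentence, return_tensors="pt")
    # A span/BIO head over `enc` would then mark the tokens that fill
    # this FE; here we only show the pairing that drives the model.
    print(fe, enc["input_ids"].shape)
```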
In this paper, we provide a detailed description of our system for the CAMRP-2022 evaluation. We propose a two-stage method for Chinese AMR parsing with alignment generation, consisting of a concept-prediction stage and a relation-prediction stage. Our model achieves alignment F1 scores of 0.7756 and 0.7074 on the CAMR 2.0 test set and the blind test set of CAMRP-2022, respectively. We also analyze the results and the limitations of our current method, such as error propagation and class-imbalance problems. Code and trained models will be released at https://github.com/pkunlp-icler/two-stage-camrp for reproduction.
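A schematic of the two-stage pipeline might look as follows; the function bodies are placeholders, not the released implementation, and only illustrate how errors from the concept stage propagate into the relation stage:

```python
# Schematic only: stage 1 tags the sentence with AMR concepts; stage 2
# predicts relations between the predicted concepts, which also yields
# the word-to-concept alignment.

def predict_concepts(tokens):
    """Stage 1: sequence labeling from tokens to AMR concepts (stub)."""
    return [(i, tok) for i, tok in enumerate(tokens)]  # placeholder

def predict_relations(concepts):
    """Stage 2: classify the relation for each concept pair (stub)."""
    return [(h, ":arg0", d) for (h, _), (d, _) in zip(concepts, concepts[1:])]

tokens = "他 买 了 一辆 车".split()
concepts = predict_concepts(tokens)      # errors here propagate into
relations = predict_relations(concepts)  # stage 2, the limitation noted above
print(concepts, relations)
```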
Named entity recognition (NER) is the task of locating and classifying entities in text. However, the unlabeled-entity problem in NER datasets seriously hinders improvements in NER performance. This paper proposes SCL-RAI to address this problem. First, we decrease the distance between span representations with the same label while increasing it for those with different labels via span-based contrastive learning, which relieves the ambiguity among entities and improves the model's robustness to unlabeled entities. Then we propose retrieval-augmented inference to mitigate the decision-boundary-shifting problem. Our method significantly outperforms the previous SOTA method by 4.21% and 8.64% F1-score on two real-world datasets.
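A minimal sketch of the span-based contrastive objective is shown below; the supervised-contrastive form and the temperature value are assumptions rather than the exact SCL-RAI loss:

```python
# Span representations with the same label are pulled together, while
# those with different labels are pushed apart.

import torch
import torch.nn.functional as F

def span_contrastive_loss(spans, labels, tau=0.1):
    """spans: (N, d) span representations; labels: (N,) label ids."""
    spans = F.normalize(spans, dim=-1)
    sim = spans @ spans.T / tau                        # pairwise similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-label pairs
    eye = torch.eye(len(spans), dtype=torch.bool)
    mask = mask & ~eye                                 # exclude self-pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(eye, -1e9), dim=1, keepdim=True)
    # average log-likelihood of the positives for each anchor
    pos_counts = mask.sum(1).clamp(min=1)
    loss = -(log_prob * mask).sum(1) / pos_counts
    return loss[mask.any(1)].mean()

spans = torch.randn(8, 32)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(span_contrastive_loss(spans, labels))
```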
Frame semantic parsing is a fundamental NLP task consisting of three subtasks: frame identification, argument identification, and role classification. Most previous studies tend to neglect the relations between different subtasks and arguments, and pay little attention to the ontological frame knowledge defined in FrameNet. In this paper, we propose a Knowledge-guided Incremental semantic parser with Double-graph (KID). We first introduce the Frame Knowledge Graph (FKG), a heterogeneous graph over frames and FEs (frame elements) built on frame knowledge, from which we can derive knowledge-enhanced representations of frames and FEs. We further propose the Frame Semantic Graph (FSG) to represent the frame semantic structures extracted from text with a graph structure. In this way, we can turn frame semantic parsing into an incremental graph-construction problem, strengthening the interactions between subtasks and the relations between arguments. Our experiments show that KID outperforms the previous state-of-the-art method by up to 1.7 F1-score on two FrameNet datasets. Our code is available at https://github.com/pkunlp-icler/kid.
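As a toy illustration of the FKG idea (the nodes, edges, and relation names below are invented examples, not the actual FrameNet graph used by KID), frames and FEs become typed nodes in a heterogeneous graph:

```python
# Frames and frame elements become typed nodes, with edges for frame-FE
# membership and ontological frame-to-frame relations.

import networkx as nx

fkg = nx.MultiDiGraph()
# frame nodes and their frame elements (FEs)
fkg.add_node("Commerce_buy", type="frame")
fkg.add_node("Buyer", type="FE")
fkg.add_node("Goods", type="FE")
fkg.add_edge("Commerce_buy", "Buyer", relation="has_FE")
fkg.add_edge("Commerce_buy", "Goods", relation="has_FE")
# ontological frame-to-frame relation
fkg.add_node("Getting", type="frame")
fkg.add_edge("Commerce_buy", "Getting", relation="inherits_from")

# A graph encoder over this structure would produce the knowledge-enhanced
# frame/FE representations the paper describes.
print(fkg.nodes(data=True), list(fkg.edges(data=True)))
```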
Pre-trained language models (PLMs) have achieved great success in various natural language processing (NLP) tasks under the pre-training and fine-tuning paradigm. With large numbers of parameters, PLMs are computation-intensive and resource-hungry. Hence, model pruning has been introduced to compress large-scale PLMs. However, most prior approaches only consider task-specific knowledge for downstream tasks but ignore the essential task-agnostic knowledge during pruning, which can cause catastrophic forgetting and lead to poor generalization. To maintain both task-agnostic and task-specific knowledge in the pruned model, we propose ContrAstive Pruning (CAP) under the pre-training and fine-tuning paradigm. Designed as a general framework, CAP is compatible with both structured and unstructured pruning. Unified in contrastive learning, CAP enables the pruned model to learn task-agnostic knowledge from the pre-trained model and task-specific knowledge from the fine-tuned model. Moreover, to better retain the performance of the pruned model, the snapshots (i.e., the intermediate models at each pruning iteration) also serve as effective supervision for pruning. Our extensive experiments show that adopting CAP consistently yields significant improvements, especially in extremely high-sparsity scenarios. With only 3% of the model parameters retained (i.e., 97% sparsity), CAP achieves 99.2% and 96.3% of the original BERT performance on the QQP and MNLI tasks. In addition, our probing experiments show that models pruned by CAP tend to achieve better generalization.
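A hedged sketch of the contrastive objective follows; the InfoNCE form, in-batch negatives, and equal loss weights are assumptions, not CAP's exact formulation:

```python
# The pruned model's sentence representation is pulled toward both the
# frozen pre-trained model (task-agnostic knowledge) and the frozen
# fine-tuned model (task-specific knowledge); pruning snapshots could
# supply additional positives in the same way.

import torch
import torch.nn.functional as F

def info_nce(query, positives, tau=0.1):
    """Contrast each query against its positive over in-batch negatives."""
    q = F.normalize(query, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = q @ p.T / tau                 # (B, B), diagonal = positives
    targets = torch.arange(len(q))
    return F.cross_entropy(logits, targets)

B, d = 16, 128
h_pruned = torch.randn(B, d, requires_grad=True)   # pruned model outputs
h_pretrained = torch.randn(B, d)                   # frozen pre-trained model
h_finetuned = torch.randn(B, d)                    # frozen fine-tuned model

loss = info_nce(h_pruned, h_pretrained) + info_nce(h_pruned, h_finetuned)
loss.backward()  # combined with the task loss during iterative pruning
print(loss.item())
```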
Aspect Sentiment Triplet Extraction (ASTE) aims to recognize targets, their sentiment polarities, and the opinions explaining the sentiments in a sentence. ASTE can naturally be divided into three atomic subtasks: target detection, opinion detection, and sentiment classification. We argue that a proper subtask combination, compositional feature extraction for target-opinion pairs, and interaction between subtasks are the keys to success. Due to defective subtask formulation, sub-optimal feature representation, or missing subtask interaction, however, previous methods may miss genuine sentiment triplets or derive non-existent ones in "one-to-many" or "many-to-one" situations. In this paper, we divide ASTE into target-opinion joint detection and sentiment classification subtasks, which accords with human cognition, and accordingly employ a sequence encoder and a table encoder to handle them. The table encoder extracts sentiment at the token-pair level, so that the compositional features between targets and opinions can be easily captured. To establish explicit interaction between the subtasks, we utilize the table representation to guide sequence encoding and inject sequence features into the table encoder. Experiments show that our model outperforms state-of-the-art methods on six popular ASTE datasets.
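The token-pair table can be illustrated with a small sketch; the concatenation scheme and label set below are assumptions, not the paper's exact architecture:

```python
# Cell (i, j) represents the pair of token i (target candidate) and
# token j (opinion candidate), so compositional target-opinion features
# live in a single cell.

import torch
import torch.nn as nn

n, d = 6, 32                       # sequence length, hidden size
seq = torch.randn(1, n, d)         # output of the sequence encoder

# build an n x n table by pairing every token with every other token
rows = seq.unsqueeze(2).expand(-1, n, n, -1)   # token i features
cols = seq.unsqueeze(1).expand(-1, n, n, -1)   # token j features
table = torch.cat([rows, cols], dim=-1)        # (1, n, n, 2d)

# a table encoder scores each cell, e.g., with sentiment labels
# {none, positive, negative, neutral} at the token-pair level
classifier = nn.Linear(2 * d, 4)
cell_logits = classifier(table)
print(cell_logits.shape)  # torch.Size([1, 6, 6, 4])
```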
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to its use of discrete tokens and its need for fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to its use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io.
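A minimal sketch of the masked-token-modeling objective appears below; the vocabulary size, embedding dimension, and the omission of the transformer stack and text conditioning are simplifying assumptions:

```python
# Random image-token positions are replaced by a MASK id, and the model
# is trained to predict the original tokens at the masked positions.

import torch
import torch.nn as nn

V, MASK = 8192, 8192                    # discrete image-token vocab + mask id
tokens = torch.randint(0, V, (1, 256))  # 16x16 grid of VQ tokens

mask = torch.rand(tokens.shape) < 0.5   # randomly mask positions
inputs = tokens.masked_fill(mask, MASK)

embed = nn.Embedding(V + 1, 64)         # +1 for the MASK token
head = nn.Linear(64, V)
logits = head(embed(inputs))            # (1, 256, V); the real model adds
                                        # transformer layers + text conditioning
loss = nn.functional.cross_entropy(
    logits[mask], tokens[mask])         # loss only on masked positions
loss.backward()
print(loss.item())
```

At inference, parallel decoding fills many masked positions per step instead of one token at a time, which is the source of the sampling-efficiency gain over autoregressive models.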
An unbiased scene graph generation (SGG) algorithm, referred to as Skew Class-balanced Re-weighting (SCR), is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, at the cost of drastically dropping recall scores, i.e., losing majority predicate performance; they have not yet correctly analyzed the trade-off between majority and minority predicate performance on the limited SGG datasets. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then re-weights the biased predicates more heavily, achieving a better trade-off between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 & V6 demonstrate the performance and generality of SCR with traditional SGG models.
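A rough sketch of the re-weighting idea follows; the inverse-frequency weighting used here is a simplified stand-in for the skew-based coefficient estimator in the paper:

```python
# Predicates whose predictions are skewed by the long-tailed distribution
# receive larger loss weights; here the weight is derived from class
# frequency as a simplified proxy for the paper's skew statistic.

import torch
import torch.nn.functional as F

def scr_style_loss(logits, targets, class_freq):
    """Weight each predicate class inversely to its normalized frequency."""
    freq = class_freq / class_freq.sum()
    weights = 1.0 / freq.clamp(min=1e-8)
    weights = weights / weights.mean()     # keep the loss scale stable
    return F.cross_entropy(logits, targets, weight=weights)

C = 5                                      # number of predicate classes
logits = torch.randn(32, C, requires_grad=True)
targets = torch.randint(0, C, (32,))
class_freq = torch.tensor([1000., 300., 50., 10., 2.])  # long-tailed counts
loss = scr_style_loss(logits, targets, class_freq)
loss.backward()
print(loss.item())
```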