Automatic diagnosis has attracted increasing attention but remains challenging due to multi-step reasoning. Recent works usually address it with reinforcement learning methods; however, these methods show low efficiency and require task-specific reward functions. Considering that the conversation between doctor and patient allows the doctor to probe for symptoms and make a diagnosis, the diagnosis process can naturally be seen as the generation of a sequence consisting of symptoms and diagnoses. Inspired by this, we reformulate automatic diagnosis as a symptom Sequence Generation (SG) task and propose a simple but effective automatic diagnosis model based on the Transformer (Diaformer). We first design a symptom attention framework to learn the generation of symptom inquiries and the disease diagnosis. To alleviate the discrepancy between sequential generation and the disorder of implicit symptoms, we further design three orderless training mechanisms. Experiments on three public datasets show that our model outperforms baselines on disease diagnosis by 1%, 6% and 11.5% with the highest training efficiency. Detailed analysis of symptom inquiry prediction demonstrates the potential of applying symptom sequence generation to automatic diagnosis.
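To make the sequence-generation framing concrete, here is a minimal sketch of decoding a mixed symptom-and-diagnosis sequence, with a toy scoring stub standing in for a trained Transformer; all token names and the heuristic are invented for illustration and do not reflect the model's actual architecture or training mechanisms.

```python
# Toy vocabulary of symptom and disease tokens; all names are invented.
SYMPTOMS = ["fever", "cough", "headache", "fatigue"]
DISEASES = ["flu", "migraine"]
VOCAB = SYMPTOMS + DISEASES

def next_token_scores(history):
    """Stand-in for a trained Transformer decoder: one score per token."""
    return {tok: (1.0 if tok not in history else 0.0)
                 + (0.5 if tok in DISEASES and len(history) >= 3 else 0.0)
            for tok in VOCAB}

def generate_diagnosis(explicit_symptoms, max_turns=10):
    """Greedily extend the sequence; a disease token ends the episode."""
    history = list(explicit_symptoms)
    for _ in range(max_turns):
        scores = next_token_scores(history)
        token = max(scores, key=scores.get)
        history.append(token)
        if token in DISEASES:  # emitting a disease token ends the sequence
            return history, token
    return history, None

sequence, diagnosis = generate_diagnosis(["fever"])
print(sequence, "->", diagnosis)
```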
A diagnosis-oriented dialogue system queries the patient's health condition and makes predictions about possible diseases through continuous interaction with the patient. A few studies use reinforcement learning (RL) to learn the optimal policy from the joint action space of symptoms and diseases. However, existing RL (or non-RL) methods cannot achieve sufficiently good prediction accuracy, which remains far from the upper limit. To address this problem, we propose a decoupled automatic diagnosis framework, DxFormer, which divides the diagnosis process into two steps: symptom inquiry and disease diagnosis, where the transition from symptom inquiry to disease diagnosis is explicitly determined by a stopping criterion. In DxFormer, we treat each symptom as a token and formalize symptom inquiry and disease diagnosis as a language generation model and a sequence classification model, respectively. We use an inverted version of the Transformer, i.e., a decoder-encoder structure, to learn the representation of symptoms by jointly optimizing the REINFORCE reward and the cross-entropy loss. Extensive experiments on three public real-world datasets prove that our proposed model can effectively learn doctors' clinical experience and achieves state-of-the-art results in terms of symptom recall and diagnostic accuracy.
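A minimal sketch of the decoupled inquire-then-diagnose loop with an explicit stopping criterion might look as follows; the models are stubbed with toy heuristics and every name is illustrative, not DxFormer's actual decoder-encoder components.

```python
def ask_patient(symptom):
    """Simulated patient whose implicit symptoms are fixed for the demo."""
    return symptom in {"cough", "fever"}

def inquire_next_symptom(confirmed):
    """Stands in for the generative inquiry model (decoder)."""
    for s in ["cough", "fever", "chills"]:
        if s not in confirmed:
            return s
    return None  # nothing left to ask

def stop_inquiry(confirmed, max_turns=5):
    """Explicit stopping criterion switching from inquiry to diagnosis."""
    return inquire_next_symptom(confirmed) is None or len(confirmed) >= max_turns

def diagnose(confirmed):
    """Stands in for the sequence classification model (encoder)."""
    return "flu" if confirmed.get("fever") else "common cold"

confirmed = {"cough": True}  # explicit, self-reported symptom
while not stop_inquiry(confirmed):
    s = inquire_next_symptom(confirmed)
    confirmed[s] = ask_patient(s)  # True/False answer from the dialogue
print("diagnosis:", diagnose(confirmed))
```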
Medical automatic diagnosis systems aim to imitate human doctors in the real diagnostic process. This task is formulated as a sequential decision-making problem with symptom inquiry and disease diagnosis. In recent years, many researchers have used reinforcement learning methods to handle this task. However, recent works have neglected to distinguish symptom inquiry actions from disease diagnosis actions, mixing them into one action space, which leads to the unsatisfactory performance of reinforcement learning methods on this task. Moreover, there is a lack of a public evaluation dataset containing various diseases and their corresponding information. To address these issues, we first propose a novel method for medical automatic diagnosis that formulates symptom inquiry and disease diagnosis as a reinforcement learning task and a classification task, respectively. We also propose a robust and adaptive method to align the two tasks using distribution entropy as the medium. We then create a new dataset extracted from the MedlinePlus knowledge base, which contains more diseases and more complete symptom information, and whose simulated patients are more realistic. Experimental evaluation results show that our method achieves higher medical diagnosis accuracy with fewer inquiry turns and outperforms three state-of-the-art methods on different datasets.
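The entropy-based hand-off between the two tasks can be illustrated concretely: when the entropy of the predicted disease distribution falls below a threshold, the agent stops inquiring and commits to a diagnosis. The sketch below assumes a fixed threshold for simplicity; the posteriors and the threshold value are invented.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete disease distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical disease posteriors after successive inquiry turns.
turn_posteriors = [
    [0.30, 0.25, 0.25, 0.20],  # near-uniform: keep inquiring
    [0.60, 0.20, 0.15, 0.05],
    [0.90, 0.05, 0.03, 0.02],  # confident: switch to diagnosis
]

THRESHOLD = 0.5  # assumed fixed value; the paper aligns the tasks adaptively
for turn, probs in enumerate(turn_posteriors, 1):
    h = entropy(probs)
    print(f"turn {turn}: H = {h:.2f} -> {'diagnose' if h < THRESHOLD else 'inquire'}")
```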
For those seeking healthcare advice online, AI-based dialogue agents capable of interacting with patients to perform automatic disease diagnosis are a viable option. This application requires efficient inquiry of relevant disease symptoms in order to make accurate diagnosis recommendations. It can be formulated as a problem of sequential feature (symptom) selection and classification, for which reinforcement learning (RL) approaches have been proposed as a natural solution. They perform well when the feature space is small, that is, when the number of symptoms and diagnosable disease categories is limited, but they frequently fail on assignments with a large number of features. To address this challenge, we propose a Multi-Model-Fused Actor-Critic (MMF-AC) RL framework consisting of a generative actor network and a diagnostic critic network. The actor incorporates a Variational AutoEncoder (VAE) to model the uncertainty induced by partial observations of features, thereby facilitating appropriate inquiries. In the critic network, a supervised diagnosis model for disease predictions is involved to precisely estimate the state-value function. Furthermore, inspired by the medical concept of differential diagnosis, we combine the generative and diagnostic models to create a novel reward shaping mechanism that addresses the sparse reward problem in large search spaces. We conduct extensive experiments on both synthetic and real-world datasets for empirical evaluation. The results demonstrate that our approach outperforms state-of-the-art methods in terms of diagnostic accuracy and interaction efficiency while scaling more effectively to large search spaces. Moreover, our method is adaptable to both categorical and continuous features, making it well suited to online applications.
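The reward-shaping idea can be illustrated with generic potential-based shaping, assuming the potential of a state is the diagnostic model's confidence given the symptoms observed so far; this is a simplified stand-in, not the paper's exact mechanism combining the generative and diagnostic models.

```python
GAMMA = 0.99  # discount factor

def phi(confidence):
    """Assumed potential: the diagnostic model's top-disease confidence."""
    return confidence

def shaped_reward(sparse_r, conf_before, conf_after):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s)."""
    return sparse_r + GAMMA * phi(conf_after) - phi(conf_before)

# A query that raises diagnostic confidence earns a positive shaped reward
# even while the sparse environment reward is still zero.
print(shaped_reward(0.0, 0.40, 0.55))  # ~0.1445
```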
In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance patient experience. In this article, we propose two frameworks to support automatic medical consultation, namely doctor-patient dialogue understanding and task-oriented interaction. We create a new large medical dialogue dataset with multi-level fine-grained annotations and establish five independent tasks, including named entity recognition, dialogue act classification, symptom label inference, medical report generation and diagnosis-oriented dialogue policy. We report a set of benchmark results for each task, which shows the usability of the dataset and sets a baseline for future studies. Both code and data are available at https://github.com/lemuria-wchen/imcs21.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its super expressive power, researchers are investigating ways to deploy transformers to reinforcement learning (RL), and transformer-based models have manifested their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, which models agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which is able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals about future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
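The trajectory-optimization view can be made concrete with a small sketch that flattens a trajectory into an interleaved token sequence in the style of Decision Transformer (return-to-go, state, action), which a sequence model would then fit by behavior cloning; the layout here is a simplified illustration, not any specific paper's exact format.

```python
def to_token_sequence(trajectory):
    """trajectory: list of (reward, state, action) tuples."""
    rewards = [r for r, _, _ in trajectory]
    tokens = []
    for t, (_, state, action) in enumerate(trajectory):
        return_to_go = sum(rewards[t:])  # undiscounted, for simplicity
        tokens.extend([("R", return_to_go), ("s", state), ("a", action)])
    return tokens

traj = [(0.0, "s0", "left"), (0.0, "s1", "right"), (1.0, "s2", "stay")]
print(to_token_sequence(traj))
# A causal Transformer is then trained to predict each "a" token from the
# preceding tokens, i.e., behavior cloning over whole trajectories.
```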
Medication recommendation is a crucial task for intelligent healthcare systems. Previous studies mainly recommend medications using electronic health records (EHRs). However, some details of the interactions between doctors and patients may be ignored or omitted in EHRs, although they are essential for automatic medication recommendation. Therefore, we make the first attempt to recommend medications based on the dialogues between doctors and patients. In this work, we construct DialMed, the first high-quality dataset for the task of medical dialogue-based medication recommendation. It contains 11,996 medical dialogues related to 16 common diseases from 3 departments and 70 corresponding common medications. Furthermore, we propose a Dialogue structure and Disease knowledge aware Network (DDN), where a QA dialogue graph mechanism is designed to model the dialogue structure and a knowledge graph is used to introduce external disease knowledge. Extensive experimental results demonstrate that the proposed method is a promising solution for recommending medications from medical dialogues. The dataset and code are available at https://github.com/f-window/dialmed.
Medical dialogue systems (MDSs) aim to assist doctors and patients with a range of professional medical services, i.e., diagnosis, consultation and treatment. However, one-stop MDS is still unexplored because: (1) no dataset has such large-scale dialogues containing multiple medical services and fine-grained medical labels (i.e., intents, slots, values); (2) no model has addressed an MDS based on multiple-service dialogues in a unified framework. In this work, we first build a Multiple-domain Multiple-service medical dialogue (M^2-MedDialog) dataset, which contains 1,557 conversations between doctors and patients, covering 276 types of diseases, 2,468 medical entities and 3 specialties of medical services. To the best of our knowledge, it is the only medical dialogue dataset that includes both multiple medical services and fine-grained medical labels. We then formulate one-stop MDS as a sequence-to-sequence generation problem, unifying MDS with causal language modeling and conditional causal language modeling, respectively. Specifically, we employ several pretrained models (i.e., BERT-WWM, BERT-MED, GPT2 and MT5) and their variants to obtain benchmarks on the M^2-MedDialog dataset. We also propose pseudo-labeling and natural perturbation methods to expand the M^2-MedDialog dataset and enhance the state-of-the-art pretrained models. We demonstrate the results achieved so far through extensive experiments on M^2-MedDialog. We release the dataset, the code and the evaluation scripts to facilitate future research in this direction.
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to data-to-text generation and text-to-text generation deep learning methods, as well as new applications of NLG technology. This survey aims to (a) give the latest synthesis of the core tasks in NLG and the architectures adopted in the field; (b) detail meticulously the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight some future emphases and relatively recent research issues that arise due to the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text and computational creativity.
Code summarization helps developers understand programs and reduces the time spent inferring program functionality during software maintenance. Recent efforts resort to deep learning techniques, such as sequence-to-sequence models, to generate accurate code summaries, among which Transformer-based approaches have achieved promising performance. However, effectively integrating code structure information into the Transformer is under-explored in this task domain. In this paper, we propose a novel approach named SG-Trans to incorporate code structural properties into the Transformer. Specifically, we inject local symbolic information (e.g., code tokens and statements) and global syntactic structure (e.g., the data flow graph) into the self-attention module of the Transformer. To further capture the hierarchical characteristics of code, the local information and global structure are designed to be distributed across the attention heads of the lower layers and higher layers of the Transformer, respectively. Extensive evaluation shows that SG-Trans outperforms state-of-the-art approaches. Compared with the best-performing baseline, SG-Trans still improves by 1.4% and 2.0% in terms of METEOR score, a metric widely used to measure generation quality, on the two benchmark datasets, respectively.
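One common way to inject such structural bias into self-attention is to mask each head so it only attends within a structural unit. The sketch below builds a statement-level mask in this spirit; the tokens and the masking rule are invented for illustration and simplify SG-Trans's actual head-wise design.

```python
import numpy as np

tokens     = ["x", "=", "a", "+", "b", "print", "(", "x", ")"]
statements = [0,   0,   0,   0,   0,   1,       1,   1,   1]  # statement id per token

n = len(tokens)
local_mask = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        local_mask[i, j] = statements[i] == statements[j]

# In self-attention, scores are set to -inf where the mask is False before
# the softmax, so a "local" head only sees tokens of the same statement.
scores = np.random.randn(n, n)
scores[~local_mask] = -np.inf

print([tokens[j] for j in range(n) if local_mask[0, j]])  # tokens visible to token 0
```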
Persona-based dialogue systems aim to generate consistent responses based on historical context and a predefined persona. Unlike conventional dialogue generation, persona-based dialogue needs to consider both the dialogue context and the persona, posing a challenge for coherent training. Specifically, this requires a delicate weight balance between context and persona. To achieve that, in this paper we propose an effective framework with Persona-Adaptive Attention (PAA), which adaptively integrates the weights from the persona and context information via our designed attention. In addition, a dynamic masking mechanism is applied to the PAA to not only drop redundant information in the context and persona but also serve as a regularizer to avoid overfitting. Experimental results demonstrate the superiority of the proposed PAA framework over strong baselines in both automatic and human evaluation. Moreover, the proposed PAA approach performs equivalently well in a low-resource regime, achieving results similar to larger models trained in the full-data setting with only 20% to 30% of the data. To fully exploit the effectiveness of our design, we also devised several variants that handle the weighted information in different ways, showing the necessity and sufficiency of our weighting and masking designs.
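A generic gated fusion of persona-side and context-side attention outputs conveys the flavor of adaptively balancing the two signals; the sketch below uses a single sigmoid gate over random vectors and is not the paper's exact PAA formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
h_persona = rng.normal(size=d)  # attention output over the persona
h_context = rng.normal(size=d)  # attention output over the dialogue context

# A learned projection (random here) maps both views to a scalar gate.
w_gate = rng.normal(size=2 * d)
gate = 1.0 / (1.0 + np.exp(-(w_gate @ np.concatenate([h_persona, h_context]))))

fused = gate * h_persona + (1.0 - gate) * h_context  # adaptive weighting
print(f"persona weight = {gate:.2f}, context weight = {1 - gate:.2f}")
```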
Generalizing dialogue state tracking (DST) to new data is especially challenging due to the strong reliance on abundant and fine-grained supervision during training. Sample sparsity, distributional shift, and the occurrence of new concepts and topics frequently lead to severe degradation during inference. In this paper, we propose a training strategy for building extractive DST models without the need for fine-grained manual span labels. Two novel input-level dropout methods mitigate the negative impact of sample sparsity. We propose a new model architecture with a unified encoder that supports value as well as slot independence by leveraging the attention mechanism. We combine the strengths of triple-copy-strategy DST and value matching to benefit from complementary predictions without violating the principle of ontology independence. Our experiments demonstrate that an extractive DST model can be trained without manual span labels. Our architecture and training strategies improve robustness towards sample sparsity and new concepts and topics, leading to state-of-the-art performance on a range of benchmarks. We further highlight our model's ability to learn effectively from non-dialogue data.
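Input-level dropout is straightforward to picture: during training, input tokens are randomly masked so the model cannot over-rely on any particular surface form. A minimal sketch, assuming a simple uniform replacement with an [UNK] token (the paper proposes two more targeted variants):

```python
import random

def token_dropout(tokens, p=0.1, unk="[UNK]"):
    """Randomly replace input tokens during training (inference uses none)."""
    return [unk if random.random() < p else t for t in tokens]

random.seed(0)
print(token_dropout("i would like a cheap hotel in the north".split(), p=0.2))
```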
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never before been reviewed in a comprehensive manner. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Medical dialogue generation is an important yet challenging task. Most previous works rely on the attention mechanism and large-scale pretrained language models. However, these methods often fail to acquire pivotal information from the long dialogue history needed to yield an accurate and informative response, because medical entities usually scatter across multiple utterances with complex relations between them. To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components, i.e., a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relations between entities in the utterances and encodes it with a graph attention network. The recall-enhanced generator then strengthens the usage of this pivotal information by generating a summary of the dialogue before producing the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU scores and medical entity F1 measures.
Conversational text-to-SQL is designed to translate multi-turn natural language questions into their corresponding SQL queries. Most state-of-the-art conversational text-to-SQL methods are incompatible with generative pre-trained language models (PLMs), such as T5. In this paper, we present a two-stage unified MultI-task Generation frAmework (MIGA) that leverages PLMs' ability to tackle conversational text-to-SQL. In the pre-training stage, MIGA first decomposes the main task into several related sub-tasks and then unifies them into the same sequence-to-sequence (Seq2Seq) paradigm with task-specific natural language prompts to boost the main task from multi-task training. Later in the fine-tuning stage, we propose four SQL perturbations to alleviate the error propagation problem. MIGA tends to achieve state-of-the-art performance on two benchmarks (SparC and CoSQL). We also provide extensive analyses and discussions to shed light on some new perspectives for conversational text-to-SQL.
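The multi-task unification step can be pictured with a small formatting helper that casts each sub-task as text-to-text under a task-specific prompt; the prefixes and sub-tasks below are invented for illustration and are not MIGA's actual prompt set.

```python
# Every sub-task becomes (prompted source text -> target text), so a single
# Seq2Seq PLM such as T5 can be trained on all of them jointly.
def to_seq2seq(task_prefix, dialogue_history, question):
    return f"{task_prefix}: {' | '.join(dialogue_history)} || {question}"

history = ["Show all singers.", "SELECT name FROM singer"]
print(to_seq2seq("generate sql", history, "Only those older than 30?"))
print(to_seq2seq("predict related columns", history, "Only those older than 30?"))
```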
There has been rapidly growing interest in the machine learning research literature in automatic symptom detection (ASD) and automatic diagnosis (AD) systems, which aim to help doctors in telemedicine services. These systems are designed to interact with patients, collect evidence about their symptoms and relevant antecedents, and possibly make predictions about the underlying diseases. Doctors would review the interactions, including the evidence and the predictions, and collect additional information from patients if necessary, before deciding on next steps. Despite recent progress in this area, an important piece of doctors' interactions with patients is missing from the design of these systems, namely the differential diagnosis. Its absence is largely due to the lack of datasets that include such information for models to train on. In this work, we present a large-scale synthetic dataset of roughly 1.3 million patients that includes a differential diagnosis, as well as the ground-truth pathology, symptoms and antecedents for each patient. Unlike existing datasets, which only contain binary symptoms and antecedents, this dataset also contains categorical and multi-choice symptoms and antecedents useful for efficient data collection. Moreover, some symptoms are organized in a hierarchy, making it possible to design systems that can interact with patients in a logical way. As a proof of concept, we extend two existing AD and ASD systems to incorporate the differential diagnosis, and provide empirical evidence that using differentials as training signals is essential for the efficiency of such systems. The dataset is available at https://figshare.com/articles/dataset/ddxplus_dataset/20043374.
The lack of external knowledge makes it difficult for empathetic dialogue systems to perceive implicit emotions and learn emotional interactions from limited dialogue history. To address the above problems, we propose leveraging external knowledge, including commonsense knowledge and emotional lexical knowledge, to explicitly understand and express emotions in empathetic dialogue generation. We first enrich the dialogue history by jointly interacting with external knowledge and construct an emotional context graph. Then we learn emotional context representations from the knowledge-enriched emotional context graph and distill an emotional signal, which is the prerequisite for predicting the emotion expressed in the response. Finally, to generate the empathetic response, we propose an emotional cross-attention mechanism to learn the emotional dependencies from the emotional context graph. Extensive experiments conducted on a benchmark dataset verify the effectiveness of the proposed method. In addition, we find that the performance of our method can be further improved by integrating it with orthogonal work on pretrained models.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing the generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches and evaluation methods in this area. Finally, we discuss the challenges the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Automatic radiology report generation is critical in clinics, as it can relieve experienced radiologists of their heavy workload and remind inexperienced radiologists of misdiagnoses or missed diagnoses. Existing methods mainly formulate radiology report generation as an image captioning task and adopt the encoder-decoder framework. However, in the medical domain, such purely data-driven approaches suffer from the following problems: 1) visual and textual bias; 2) lack of expert knowledge. In this paper, we propose a knowledge-enhanced radiology report generation approach that introduces two types of medical knowledge: 1) general knowledge, which is input-independent and provides broad knowledge for report generation; 2) specific knowledge, which is input-dependent and provides fine-grained knowledge for report generation. To fully exploit both general and specific knowledge, we also propose a knowledge-enhanced multi-head attention mechanism. By merging the visual features of the radiology image with general and specific knowledge, the proposed model can improve the quality of the generated reports. Experimental results on two publicly available datasets, IU-Xray and MIMIC-CXR, show that the proposed knowledge-enhanced approach outperforms state-of-the-art image-captioning-based methods. Ablation studies also demonstrate that both general and specific knowledge can help improve the performance of radiology report generation.
Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data -- examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement.
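The core contrastive idea — pushing each token of a negative example below an alternative token sampled from the model's own top-k predictions — can be sketched as follows; this is a simplified rendition and omits details of the paper's exact CRINGE formulation (e.g., its iterative training loop).

```python
import torch
import torch.nn.functional as F

def contrastive_negative_loss(logits, neg_tokens, k=5):
    """Simplified contrastive loss over one negative sequence.

    logits: (seq_len, vocab) model outputs; neg_tokens: (seq_len,) token ids
    from a negative example. Positive sequences would still get the usual
    cross-entropy loss alongside this term.
    """
    topk = logits.topk(k, dim=-1)
    probs = F.softmax(topk.values, dim=-1)
    # Sample an alternative token from the model's own top-k predictions.
    sampled = topk.indices.gather(-1, torch.multinomial(probs, 1))  # (seq_len, 1)
    pos_scores = logits.gather(-1, sampled)                          # alternative
    neg_scores = logits.gather(-1, neg_tokens.unsqueeze(-1))         # offending token
    pair = torch.cat([pos_scores, neg_scores], dim=-1)
    # Binary contrast: classify the sampled alternative (index 0) as correct,
    # which pushes the negative token's score down.
    return F.cross_entropy(pair, torch.zeros(len(neg_tokens), dtype=torch.long))

logits = torch.randn(4, 100)         # toy model outputs
neg = torch.tensor([3, 17, 42, 99])  # tokens from a negative example
print(contrastive_negative_loss(logits, neg).item())
```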