Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, and reorganizing sentence structures throughout a document. Most recent research has focused on understanding and classifying the types of edits in the iterative revision process of human-written text rather than on building accurate and robust systems for iterative text revision. In this work, we aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans (where-to-edit) with their corresponding edit intents and then instructing a revision model to revise the detected edit spans. Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to model the process of iterative text refinement more accurately, as evidenced by empirical results and human evaluations. Our system significantly outperforms previous baselines on our text revision tasks and other standard text revision tasks, including grammatical error correction, text simplification, sentence fusion, and style transfer. Through extensive qualitative and quantitative analysis, we draw vital connections between edit intentions and writing quality, enabling better computational modeling of iterative text revisions.
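The detect-then-revise loop described above can be sketched with a toy stand-in for the learned models: a rule-based detector proposes (span, intent) pairs, and a reviser rewrites only the flagged spans. The rules and intent labels here are illustrative assumptions, not the system's actual components.

```python
# Minimal sketch of a detect-then-revise loop. The detector and the
# span-level reviser are toy stand-ins for the paper's learned models.

def detect_edit_spans(text):
    """Toy where-to-edit detector: flag known error spans with an edit intent."""
    rules = {"teh": ("the", "FLUENCY"), "very very": ("very", "CLARITY")}
    return [(span, intent, fix)
            for span, (fix, intent) in rules.items() if span in text]

def revise(text, max_iterations=3):
    """Iteratively detect editable spans and revise them until none remain."""
    for _ in range(max_iterations):
        spans = detect_edit_spans(text)
        if not spans:
            break
        for span, intent, fix in spans:  # intent could condition the reviser
            text = text.replace(span, fix)
    return text

print(revise("teh model is very very good"))  # -> "the model is very good"
```

Iterating until the detector finds nothing mirrors the paper's framing of revision as a multi-pass refinement process rather than a single rewrite.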
Revision is an essential part of the human writing process. It is often strategic, adaptive, and, more importantly, iterative in nature. Despite the success of large language models on text revision tasks, they are limited to non-iterative, one-shot revisions. Studying and evaluating the capability of large language models to perform continuous revisions and to collaborate with human writers is a key step toward building effective writing assistants. In this work, we present a human-in-the-loop iterative text revision system, Read, Revise, Repeat (R3), which aims to achieve high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating the human-machine interaction. In R3, a text revision model provides text editing suggestions to human writers, who can accept or reject the suggested edits. The accepted edits are then incorporated into the model's input for the next iteration of document revision. Writers can thus revise a document by interacting with the system and simply accepting or rejecting its suggested edits, until the text revision model stops making further revisions or a predefined maximum number of revisions is reached. Empirical experiments show that R3 can generate revisions at early revision depths with acceptance rates comparable to those of human writers, and that human-machine interaction can yield higher-quality revisions with fewer iterations and edits. The collected human-model interaction dataset and system code are available at \url{https://github.com/vipulrraheja/iterater}. Our system demonstration is available at \url{https://youtu.be/lk08tipeoae}.
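The accept/reject interaction loop described above can be rendered as a short sketch. The suggestion function and the simulated writer below are illustrative stand-ins, not the real R3 model or interface.

```python
# Toy rendering of the R3 loop: the model proposes edits, a (simulated)
# writer accepts or rejects each one, accepted edits are applied, and the
# loop repeats until the model has nothing to suggest or the maximum
# revision depth is reached.

def suggest_edits(text):
    """Toy stand-in for the revision model's edit suggestions."""
    suggestions = []
    if "  " in text:
        suggestions.append(("  ", " "))          # collapse double spaces
    if "utilize" in text:
        suggestions.append(("utilize", "use"))   # prefer the simpler word
    return suggestions

def r3_loop(text, accept, max_depth=5):
    depth = 0
    while depth < max_depth:
        suggestions = suggest_edits(text)
        if not suggestions:
            break                                # model stops revising
        for old, new in suggestions:
            if accept(old, new):                 # writer's accept/reject
                text = text.replace(old, new)
        depth += 1
    return text, depth

accept_all = lambda old, new: True
result, depth = r3_loop("We  utilize a model.", accept_all)
print(result)  # -> "We use a model."
```

The `accept` callback is where the human sits in the loop; swapping in a different policy changes how many iterations the revision takes.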
In the robotics and computer vision communities, extensive studies have been conducted on surveillance tasks, including human detection, tracking, and motion recognition with a camera. Deep learning algorithms are also widely utilized in these tasks, as in other computer vision tasks. However, existing public datasets are insufficient for developing learning-based methods that handle surveillance in outdoor and extreme situations such as harsh weather and low-illuminance conditions. Therefore, we introduce a new large-scale outdoor surveillance dataset named the eXtremely large-scale Multi-modAl Sensor dataset (X-MAS), containing more than 500,000 image pairs and first-person-view data annotated by well-trained annotators. Each pair contains multi-modal data (e.g., an IR image, an RGB image, a thermal image, a depth image, and a LiDAR scan). To the best of our knowledge, this is the first large-scale first-person-view outdoor multi-modal dataset focusing on surveillance tasks. We present an overview of the proposed dataset with statistics and describe methods of exploiting it with deep learning-based algorithms. The latest information on the dataset and our study is available at https://github.com/lge-robot-navi, and the dataset will be available for download through a server.
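A multi-modal pair as described above can be modeled as a single record holding the five modalities. The field names and shapes below are hypothetical illustrations; they are not the dataset's actual schema.

```python
# Hypothetical sketch of one X-MAS multi-modal pair; field names and array
# shapes are illustrative assumptions, not the real dataset layout.
from dataclasses import dataclass
import numpy as np

@dataclass
class MultiModalPair:
    ir: np.ndarray        # infrared image, HxW
    rgb: np.ndarray       # color image, HxWx3
    thermal: np.ndarray   # thermal image, HxW
    depth: np.ndarray     # depth map, HxW
    lidar: np.ndarray     # LiDAR scan, Nx3 points (x, y, z)

    def modalities(self):
        return ["ir", "rgb", "thermal", "depth", "lidar"]

h, w = 4, 6
pair = MultiModalPair(
    ir=np.zeros((h, w)), rgb=np.zeros((h, w, 3)),
    thermal=np.zeros((h, w)), depth=np.zeros((h, w)),
    lidar=np.zeros((10, 3)),
)
print(len(pair.modalities()))  # -> 5
```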
Cone-beam computed tomography (CBCT) provides 3D volumetric imaging of a target with low radiation dose and cost compared with conventional computed tomography, and it is widely used in the detection of paranasal sinus disease. However, it lacks the sensitivity to detect soft-tissue lesions owing to reconstruction constraints. Consequently, only physicians with expertise in CBCT reading can distinguish between inherent artifacts or noise and disease, restricting the use of this imaging modality. The development of artificial intelligence (AI)-based computer-aided diagnosis methods for CBCT to overcome the shortage of experienced physicians has attracted substantial attention. However, no advanced AI-based diagnosis addressing the intrinsic noise in CBCT has been devised, discouraging the practical use of AI solutions for CBCT. To address this issue, we propose an AI-based computer-aided diagnosis method using CBCT with a denoising module. This module is applied before diagnosis to reconstruct the internal ground-truth full-dose scan corresponding to an input CBCT image, thereby improving diagnostic performance. The external validation results for the unified diagnosis of sinus fungal ball, chronic rhinosinusitis, and normal cases show that the proposed method improves the micro-average AUC, macro-average AUC, and accuracy by 7.4, 5.6, and 9.6 percentage points (from 86.2, 87.0, and 73.4% to 93.6, 92.6, and 83.0%), respectively, compared with a baseline, while improving human diagnostic accuracy by 11.3 percentage points (from 71.7 to 83.0%), demonstrating technical differentiation and clinical effectiveness. This pioneering study of AI-based diagnosis using CBCT indicates that denoising can improve diagnostic performance and reader interpretability in images of the sinonasal area, thereby providing a new approach and direction for radiographic image reconstruction in the development of AI-based diagnostic solutions.
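The "denoise before diagnose" pipeline above can be sketched with toy components: a mean filter stands in for the learned denoising module, and an intensity-threshold rule stands in for the diagnosis model. Both are illustrative assumptions, not the paper's networks.

```python
# Minimal sketch of denoise-then-diagnose: a noise spike triggers a false
# "abnormal" call, which disappears once the image is denoised first.
import numpy as np

def denoise(image, k=3):
    """Toy denoiser: k-by-k mean filter with edge padding."""
    kernel = np.ones((k, k)) / (k * k)
    padded = np.pad(image, k // 2, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    return out

def diagnose(image, threshold=1.0):
    """Toy classifier: 'abnormal' if any pixel exceeds the threshold."""
    return "abnormal" if image.max() > threshold else "normal"

def diagnose_with_denoising(image):
    return diagnose(denoise(image))

noisy = np.full((8, 8), 0.2)
noisy[4, 4] = 5.0  # a single bright noise spike
print(diagnose(noisy), diagnose_with_denoising(noisy))  # -> abnormal normal
```

The spike alone flips the raw diagnosis, which is exactly the kind of noise-driven error the denoising module is meant to suppress.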
Accurately extracting driving events is key to maximizing computational efficiency and anomaly detection performance in tire friction noise-based anomaly detection. This study proposes a concise and highly practical method for improving the precision of event extraction, which is hindered by extraneous noise such as wind noise that is difficult to characterize clearly owing to its randomness. The core of the proposed method is to identify the road friction sound corresponding to the frequency band of interest and to remove components with the opposite characteristics using several frequency filters. Our method maximizes the precision of driving event extraction while improving anomaly detection performance by an average of 8.506%. We therefore conclude that our method is a practical solution for road surface anomaly detection in outdoor edge computing environments.
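The core idea of keeping the frequency band of interest and suppressing out-of-band noise can be sketched with a simple FFT band-pass filter. The band edges and signal frequencies below are arbitrary placeholders, not the paper's values.

```python
# Illustrative band-pass filter: keep only the band where road-friction
# sound lives and suppress out-of-band noise such as wind.
import numpy as np

def bandpass_fft(signal, fs, low_hz, high_hz):
    """Zero out FFT bins outside [low_hz, high_hz] and invert."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 1000                                   # sampling rate (Hz)
t = np.arange(fs) / fs                      # 1 second of audio
friction = np.sin(2 * np.pi * 100 * t)      # in-band component (100 Hz)
wind = 0.8 * np.sin(2 * np.pi * 5 * t)      # low-frequency noise (5 Hz)
filtered = bandpass_fft(friction + wind, fs, low_hz=50, high_hz=200)
# The 5 Hz noise is removed; the 100 Hz tone passes through unchanged.
```

A real deployment would use a cascade of designed filters rather than a hard FFT mask, but the selection-by-frequency principle is the same.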
Automated segmentation and volumetry of brain magnetic resonance imaging (MRI) scans are essential for the diagnosis of Parkinson's disease (PD) and Parkinson-plus syndromes (P-plus). To improve diagnostic performance, we adopted deep learning (DL) models for brain segmentation and compared their performance with that of the gold-standard non-DL method. We collected brain MRI scans of healthy controls (n = 105) and patients with PD (n = 105), multiple system atrophy (n = 132), and progressive supranuclear palsy (n = 69), acquired through 2020. Using the gold-standard non-DL model, FreeSurfer (FS), we segmented six brain structures: the midbrain, pons, caudate, putamen, pallidum, and third ventricle, and treated the results as annotation data for the representative DL models V-Net and UNETR. Dice scores and the area under the curve (AUC) for differentiating normal, PD, and P-plus cases were calculated. The segmentation times of V-Net and UNETR for the six brain structures per patient were 3.48 ± 0.17 s and 48.14 ± 0.97 s, respectively, at least 300 times faster than FS (15,735 ± 1.07 s). The Dice scores of both DL models were sufficiently high (> 0.85), and their AUCs for disease classification were superior to those of FS. For classifying normal vs. P-plus and PD vs. multiple system atrophy (cerebellar type), both the DL models and FS showed AUCs above 0.8. DL significantly reduced the analysis time without compromising the performance of brain segmentation and differential diagnosis. Our findings may facilitate the adoption of DL-based brain MRI segmentation in clinical settings and advance brain research.
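The Dice score used above to compare the DL segmentations against FreeSurfer is 2·|A ∩ B| / (|A| + |B|); a minimal version for binary masks:

```python
# Dice score for binary segmentation masks: 2 * |A ∩ B| / (|A| + |B|).
import numpy as np

def dice_score(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 0, 0]])
target = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_score(pred, target))  # -> 0.8
```

A score above 0.85 for every structure, as reported for both V-Net and UNETR, means the DL masks overlap the FreeSurfer annotations almost everywhere.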
Image translation based on generative adversarial networks (GAN-IT) is a promising method for the precise localization of abnormal regions in chest X-ray images (AL-CXR). However, heterogeneous unpaired datasets undermine the ability of existing methods to extract key features and distinguish normal from abnormal cases, resulting in inaccurate and unstable AL-CXR. To address this problem, we propose an improved two-stage GAN-IT involving registration and data augmentation. In the first stage, we introduce an invertible, learning-based registration technique that virtually and reasonably converts unpaired data into paired data for learning registration maps. This novel approach achieves high registration performance. In the second stage, we apply data augmentation that swaps the left and right lung regions of the uniformly registered frames to diversify abnormality locations, further improving performance by alleviating the imbalance in the data distribution of left- and right-lung lesions. Our method is designed to be applied to existing GAN-IT models, allowing existing architectures to benefit from key features for translation. By showing that AL-CXR performance improves uniformly when the proposed method is applied, we believe that GAN-IT for AL-CXR can be deployed in clinical settings even when learning data are scarce.
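The second-stage augmentation can be sketched in a few lines: once images are registered to a common frame, a horizontal flip moves a lesion to the opposite lung, rebalancing the left/right lesion distribution. The toy array below stands in for a registered chest X-ray (a real pipeline would flip the paired label or mask as well).

```python
# Sketch of the left/right-swap augmentation on a registered image.
import numpy as np

def swap_left_right(image):
    """Horizontal flip: column j maps to column W-1-j."""
    return image[:, ::-1]

cxr = np.zeros((4, 4))
cxr[1, 0] = 1.0               # a "lesion" on the left edge
flipped = swap_left_right(cxr)
print(flipped[1, 3])          # -> 1.0 (lesion now on the right edge)
```

The swap is only meaningful after stage-one registration; without a shared coordinate frame, flipping would not correspond to a consistent anatomical exchange.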
In contrast to standard cameras, event cameras interpret the world in an entirely different way: as a collection of asynchronous events. Despite the unique data output of event cameras, many event feature detection and tracking algorithms have made significant progress by taking a detour through frame-based data representations. This paper questions the need for such a detour and proposes a novel, event-data-friendly method that achieves simultaneous feature detection and tracking, called event Clustering-based Detection and Tracking (eCDT). Our method employs a novel clustering approach, named k-NN Classifier-based Spatiotemporal Clustering of Applications with Noise (KCSCAN), to cluster adjacent polarity events and retrieve event trajectories. Thanks to a head and tail descriptor matching process, event clusters that reappear with a different polarity are continually tracked, elongating the feature tracks. Owing to our clustering approach in spatiotemporal space, our method solves feature detection and feature tracking simultaneously and automatically. Moreover, eCDT can extract feature tracks at any frequency with an adjustable temporal window, which does not corrupt the high temporal resolution of the original event data. Compared with the state-of-the-art method, our approach achieves 30% longer feature tracking ages while maintaining an approximately equal low error.
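The central idea of grouping events in (x, y, t) space can be illustrated with a toy clusterer. Plain DBSCAN-style neighborhood growing stands in here for KCSCAN, which is k-NN-classifier based; the data and threshold are invented for illustration.

```python
# Toy spatiotemporal clustering of events in (x, y, t): events within `eps`
# (Euclidean distance) of a cluster member join that cluster.
import numpy as np

def cluster_events(events, eps=1.5):
    labels = [-1] * len(events)
    current = 0
    for i in range(len(events)):
        if labels[i] != -1:
            continue
        labels[i] = current
        frontier = [i]
        while frontier:                     # grow the cluster greedily
            j = frontier.pop()
            for k in range(len(events)):
                if labels[k] == -1 and np.linalg.norm(events[j] - events[k]) <= eps:
                    labels[k] = current
                    frontier.append(k)
        current += 1
    return labels

# Two well-separated event streaks in (x, y, t).
events = np.array([[0, 0, 0], [0.5, 0, 0.5], [1, 0, 1],
                   [10, 10, 0], [10.5, 10, 0.5]])
print(cluster_events(events))  # -> [0, 0, 0, 1, 1]
```

Because clustering happens directly in the spatiotemporal event volume, each resulting cluster already doubles as a feature track, with no intermediate frame construction.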
Peripherally inserted central catheters (PICCs) have been widely used as one of the representative central venous lines (CVCs) owing to their long-term intravascular access and low infection rate. However, PICC tip mispositioning is frequent, increasing the risk of complications such as puncture, embolism, and arrhythmia. To detect the tip automatically and precisely, various attempts have been made using the latest deep learning (DL) technologies. Even with these approaches, however, it remains practically difficult to determine the tip location, because the multiple fragments phenomenon (MFP) occurs while predicting and extracting the PICC line, a step required before the tip can be predicted. This study aimed to develop a system, generally applicable to existing models, that restores the PICC line more accurately by removing the multiple fragments from the model output, thereby precisely localizing the actual tip position for detecting its misposition. To this end, we propose a multi-stage DL-based framework for post-processing the PICC line extraction results of existing techniques. Performance was compared in terms of root-mean-square error (RMSE) and MFP incidence rate, according to whether the proposed framework (MFCN) was applied to each of five conventional models. In internal validation, applying the MFCN to the existing single models improved the MFP by an average of 45%, and the RMSE improved by more than 63%, from an average of 26.85 mm (17.16 to 35.80 mm) to 9.72 mm (9.37 to 10.98 mm). In external validation, applying the MFCN decreased the incidence of MFP by an average of 32% and the RMSE by an average of 65%. Therefore, by applying the proposed MFCN, we observed a significant and consistent improvement in the detection of the PICC tip location compared with existing models.
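The post-processing idea can be sketched with toy fragments: when line extraction yields several disconnected pieces (the MFP), discard all but the largest before locating the tip. Simple size-based selection stands in here for the multi-stage MFCN, and the pixel coordinates are invented for illustration.

```python
# Toy multiple-fragment removal before tip localization, plus the RMSE
# metric used for evaluation. Fragment selection by size is a stand-in
# for the paper's multi-stage correction framework.
import numpy as np

def remove_fragments(fragments):
    """Keep only the fragment with the most pixels."""
    return max(fragments, key=len)

def tip_position(fragment):
    """Take the lowest point (largest row index) as the catheter tip."""
    return fragment[np.argmax(fragment[:, 0])]

# Three predicted fragments as (row, col) pixel lists; the last is largest.
fragments = [np.array([[0, 5], [1, 5]]),
             np.array([[3, 6]]),
             np.array([[5, 7], [6, 7], [7, 8], [8, 8]])]
main_line = remove_fragments(fragments)
tip = tip_position(main_line)
print(tip.tolist())  # -> [8, 8]

# RMSE between the predicted and ground-truth tip positions:
gt_tip = np.array([9.0, 8.0])
rmse = float(np.sqrt(np.mean((tip - gt_tip) ** 2)))
```

Without the fragment removal, a stray fragment far down the image could be mistaken for the line's end, which is exactly the failure mode the MFCN targets.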
Inductive transfer learning aims to learn a target task from a small amount of training data by leveraging a model pre-trained on a source task. Most strategies involving large-scale deep learning models adopt initialization with the pre-trained model followed by fine-tuning on the target task. However, when using over-parameterized models, we can often prune the model without sacrificing accuracy on the source task. This motivates us to adopt model pruning for transfer learning with deep learning models. In this paper, we propose PAC-Net, a simple yet effective approach for pruning-based transfer learning. PAC-Net consists of three steps: Prune, Allocate, and Calibrate (PAC). The main idea behind these steps is to identify the weights essential for the source task, fine-tune on the source task by updating those essential weights, and then calibrate on the target task by updating the remaining redundant weights. In an extensive set of inductive transfer learning experiments, we show that our method achieves state-of-the-art performance by a large margin.
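The Prune-Allocate-Calibrate split can be shown on a single weight vector: large-magnitude weights are reserved for the source task, and only the pruned slots are updated for the target task. The values and threshold below are invented for illustration, and the actual fine-tuning updates are elided.

```python
# Minimal sketch of the PAC idea: essential weights carry the source task,
# redundant (pruned) weights are repurposed for the target task.
import numpy as np

weights = np.array([0.9, -0.05, 0.7, 0.01, -0.8, 0.02])

# Prune: mark small-magnitude weights as redundant.
essential = np.abs(weights) >= 0.5   # [True, False, True, False, True, False]

# Allocate: source-task fine-tuning would update only the essential weights
# (left as-is here for brevity); redundant weights are zeroed out.
source_weights = weights * essential

# Calibrate: target-task training updates only the redundant slots, leaving
# the source-task solution intact.
target_update = np.array([0.0, 0.3, 0.0, -0.2, 0.0, 0.1])
target_weights = source_weights + target_update * ~essential

print(target_weights.tolist())  # -> [0.9, 0.3, 0.7, -0.2, -0.8, 0.1]
```

Because the essential weights are never touched during calibration, the same network retains the source-task solution while acquiring the target task in its spare capacity.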