Significant progress has been made in learning image classification neural networks under long-tail data distributions using robust training algorithms such as data re-sampling, re-weighting, and margin adjustment. Those methods, however, ignore the impact of data imbalance on feature normalization. The dominance of majority classes (head classes) in estimating statistics and affine parameters causes internal covariate shifts within less-frequent categories to be overlooked. To alleviate this challenge, we propose a compound batch normalization method based on a Gaussian mixture, which models the feature space more comprehensively and reduces the dominance of head classes. In addition, a moving-average-based expectation maximization (EM) algorithm is employed to estimate the statistical parameters of the multiple Gaussian distributions. However, the EM algorithm is sensitive to initialization and can easily become stuck in local minima where the multiple Gaussian components keep focusing on majority classes. To tackle this issue, we develop a dual-path learning framework that employs class-aware split feature normalization to diversify the estimated Gaussian distributions, allowing the Gaussian components to fit the training samples of less-frequent classes more comprehensively. Extensive experiments on commonly used datasets demonstrate that the proposed method outperforms existing methods on long-tailed image classification.
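The two mechanisms the abstract names — normalizing each feature with mixture-component statistics instead of one global mean/variance, and updating those statistics with a moving-average EM step — can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and update rules, not the authors' implementation; all function names are hypothetical.

```python
import numpy as np

def compound_bn(x, means, variances, weights, eps=1e-5):
    """Normalize a batch of features with a K-component diagonal Gaussian mixture.

    x: (N, D) features; means/variances: (K, D); weights: (K,).
    Each sample is normalized by a responsibility-weighted combination of the
    K component statistics rather than a single head-class-dominated estimate.
    """
    # E-step: responsibilities from per-component diagonal-Gaussian log-likelihoods.
    log_prob = -0.5 * (((x[:, None, :] - means[None]) ** 2) / (variances[None] + eps)
                       + np.log(variances[None] + eps)).sum(-1)
    log_prob += np.log(weights + eps)[None]
    r = np.exp(log_prob - log_prob.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)                      # (N, K), rows sum to 1

    # Per-sample compound statistics from the soft assignments.
    mu = r @ means                                    # (N, D)
    var = r @ variances                               # (N, D)
    return (x - mu) / np.sqrt(var + eps), r

def em_moving_average(x, means, variances, weights, momentum=0.1, eps=1e-5):
    """One moving-average M-step blending batch estimates into running statistics."""
    _, r = compound_bn(x, means, variances, weights, eps)
    nk = r.sum(0) + eps                               # soft counts per component
    new_means = (r.T @ x) / nk[:, None]
    new_vars = (r.T @ (x ** 2)) / nk[:, None] - new_means ** 2
    means = (1 - momentum) * means + momentum * new_means
    variances = (1 - momentum) * variances + momentum * np.maximum(new_vars, eps)
    weights = (1 - momentum) * weights + momentum * nk / nk.sum()
    return means, variances, weights
```

With a single effective component this reduces to ordinary batch normalization; the mixture only changes behavior once the components diversify, which is what the paper's class-aware split normalization is designed to force.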
Pre-trained language models allow us to handle downstream tasks via fine-tuning, which helps a model achieve fairly high accuracy on various Natural Language Processing (NLP) tasks. Such easily downloaded language models from various websites have empowered public users as well as major institutions, giving momentum to their real-life applications. However, it was recently proven that models become extremely vulnerable when they are backdoor-attacked with trigger-inserted poisoned datasets by malicious users. The attackers then redistribute the victim models to the public to attract other users to use them, and the models tend to misclassify when certain triggers are detected within the training sample. In this paper, we introduce a novel improved textual backdoor defense method, named MSDT, that outperforms the current existing defensive algorithms on specific datasets. The experimental results illustrate that our method can be effective and constructive in terms of defending against backdoor attacks in the text domain. Code is available at https://github.com/jcroh0508/MSDT.
Most existing methods for coping with noisy labels usually assume that the class distribution is well balanced, and thus lack the ability to handle the practical case where training samples are imbalanced. To this end, this paper makes an early effort to tackle the image classification task under both a long-tailed distribution and label noise. In this setting, existing noise-learning methods fail to work properly, because distinguishing noisy samples from clean samples of the tail classes is challenging. To solve this problem, we propose a new learning paradigm that screens out noisy samples based on the matching between inferences on weakly and strongly augmented data, and introduce a leave-noise-out regularization to eliminate the effect of the recognized noisy samples. Furthermore, we incorporate a novel prediction penalty based on the online prior distribution to avoid bias toward the head classes. Compared with existing long-tailed classification methods, this mechanism is superior in capturing class fitness in real time. Exhaustive experiments demonstrate that the proposed method outperforms state-of-the-art algorithms in solving the distribution-imbalance problem of long-tailed classification under label noise.
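The screening step described above — keeping a sample as clean only when the model's inferences on its weakly and strongly augmented views agree — can be sketched in a few lines. This is a hedged NumPy illustration of the general weak/strong agreement idea, with hypothetical thresholds and names; the paper's actual criterion and regularization are more involved.

```python
import numpy as np

def screen_noisy(p_weak, p_strong, labels, tau=0.7):
    """Flag probably-noisy samples via weak/strong augmentation agreement.

    p_weak, p_strong: (N, C) softmax outputs of the same model on weakly and
    strongly augmented views of each sample; labels: (N,) possibly-noisy labels.
    A sample is kept as clean only when both views are confident, agree with
    each other, and agree with the given label; everything else is screened
    out so a leave-noise-out style regularization can suppress its effect.
    """
    pred_w = p_weak.argmax(1)
    pred_s = p_strong.argmax(1)
    confident = (p_weak.max(1) > tau) & (p_strong.max(1) > tau)
    clean = confident & (pred_w == pred_s) & (pred_w == labels)
    return clean  # boolean mask: True = treated as clean
```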
In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance patient experience. In this article, we propose two frameworks to support automatic medical consultation, namely doctor-patient dialogue understanding and task-oriented interaction. We create a new large medical dialogue dataset with multi-level fine-grained annotations and establish five independent tasks, including named entity recognition, dialogue act classification, symptom label inference, medical report generation and diagnosis-oriented dialogue policy. We report a set of benchmark results for each task, which shows the usability of the dataset and sets a baseline for future studies. Both code and data are available from https://github.com/lemuria-wchen/imcs21.
Improving the resolution of magnetic resonance (MR) image data is critical for computer-aided diagnosis and brain function analysis. Higher resolution helps capture more detailed content, but typically induces a lower signal-to-noise ratio and a longer scanning time. To this end, MR image super-resolution has become a topic of broad recent interest. Existing works establish extensive deep models with conventional architectures based on convolutional neural networks (CNNs). In this work, to further advance this research field, we make an early effort to build a Transformer-based MR image super-resolution framework, with careful designs for exploring valuable domain prior knowledge. Specifically, we consider a two-fold domain prior, comprising the high-frequency structure prior and the inter-modality context prior, and establish a novel Transformer architecture, termed Cross-modality High-Frequency Transformer (Cohf-T), to introduce such priors into super-resolving low-resolution (LR) MR images. Experiments on two datasets indicate that Cohf-T achieves new state-of-the-art performance.
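The high-frequency structure prior mentioned above is commonly obtained as the residual between an image and a low-pass filtered copy of itself. The sketch below is a minimal, assumption-laden NumPy version using a box blur as the low-pass filter; it only illustrates the kind of signal such a prior carries, not Cohf-T's actual extraction module.

```python
import numpy as np

def high_frequency_prior(img, k=3):
    """Extract a high-frequency structure map as image minus a local mean.

    img: 2-D array. A k x k box blur (edge-padded) serves as the low-pass
    filter; the residual keeps the edges and fine anatomical detail that
    super-resolution most needs to recover.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    H, W = img.shape
    low = np.zeros((H, W), dtype=float)
    # Accumulate the k*k shifted copies, then average: a plain box blur.
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + H, dx:dx + W]
    low /= k * k
    return img - low
```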
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled datasets of CFPs in the ophthalmology community, large-scale datasets for screening only have labels of disease categories, and datasets with annotations of fundus structures are usually small in size. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition device. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis on an original challenge -- Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations of glaucoma classification, optic disc/cup segmentation, as well as fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of multi-device and multi-quality data, some methods with strong generalization ability are provided in the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Recently, extracting data-driven governing laws of dynamical systems through deep learning frameworks has gained much attention in various fields. Moreover, a growing amount of research effort tends to transfer from deterministic dynamical systems to stochastic dynamical systems, especially those driven by non-Gaussian multiplicative noise. However, many log-likelihood-based algorithms that work well for Gaussian cases cannot be directly extended to non-Gaussian scenarios, where high errors and low convergence may arise. In this work, we overcome some of these challenges and identify stochastic dynamical systems driven by $\alpha$-stable Lévy noise from only random pairwise data. Our innovations include: (1) designing a deep learning approach to learn both the drift and diffusion coefficients for Lévy-induced noise with $\alpha$ across all values; (2) learning complex multiplicative noise without restrictions on small noise intensity; and (3) providing an end-to-end complete framework for stochastic system identification under a general input-data assumption, that is, an $\alpha$-stable random variable. Finally, numerical experiments and comparisons with the nonlocal Kramers-Moyal formulas with the moment generating function confirm the effectiveness of our method.
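Identification from pairwise data rests on conditional moments of the increments: the drift, for instance, is the first conditional moment of $(X_{t+\Delta t} - X_t)$ given $X_t$, divided by $\Delta t$. The sketch below estimates the drift of a 1-D system from such pairs via simple state-space binning. This is a hedged Gaussian-style illustration only; for $\alpha$-stable Lévy noise the paper works with nonlocal Kramers-Moyal counterparts of these moments, and all names here are hypothetical.

```python
import numpy as np

def estimate_drift(x0, x1, dt, bins=20):
    """Estimate the drift of dX = b(X) dt + noise from pairwise samples.

    x0, x1: (N,) arrays of states at times t and t + dt; uses the first
    conditional moment b(x) ~ E[X_{t+dt} - X_t | X_t = x] / dt, evaluated
    on a histogram binning of the observed state space.
    """
    edges = np.linspace(x0.min(), x0.max(), bins + 1)
    idx = np.clip(np.digitize(x0, edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    drift = np.full(bins, np.nan)
    for b in range(bins):
        mask = idx == b
        if mask.any():
            drift[b] = (x1[mask] - x0[mask]).mean() / dt
    return centers, drift
```

On data simulated from a linear drift $b(x) = -x$ with small Gaussian noise, the binned estimate recovers the drift well; the deep learning approach in the paper replaces this nonparametric moment estimate with learned drift and diffusion networks.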
Current weakly supervised semantic segmentation (WSSS) frameworks usually contain a separated mask-refinement model and a main semantic-region-mining model. These approaches contain redundant feature extraction backbones and biased learning objectives, making them computationally complex yet sub-optimal for solving the WSSS task. To address this problem, this paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model. Through the shared feature extraction backbone, our model is able to facilitate knowledge sharing between the two components while preserving a low computational complexity. To encourage high-quality knowledge interaction, we propose a novel alternative self-dual teaching (ASDT) mechanism. Unlike the conventional distillation strategy, the knowledge of the two teacher branches in our model is alternatively distilled to the student branch by a Pulse Width Modulation (PWM), which generates a PW-wave-like selection signal to guide the knowledge distillation process. In this way, the student branch can help prevent the model from falling into local-minimum solutions caused by the imperfect knowledge provided by either teacher branch. Comprehensive experiments on PASCAL VOC 2012 and COCO-Stuff 10K demonstrate the effectiveness of the proposed alternative self-dual teaching mechanism as well as the new state-of-the-art performance of our approach.
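The PW-wave selection idea above — alternating which teacher branch distills into the student according to a square-wave schedule over training steps — can be sketched as follows. This is a minimal NumPy illustration under assumed period/duty-cycle parameters and an L2 distillation loss; the paper's actual signal generation and losses are not specified at this level of detail, and all names are hypothetical.

```python
import numpy as np

def pwm_select(step, period=100, duty=0.5):
    """PW-wave teacher selection signal.

    Within each period of training steps, the first `duty` fraction selects
    teacher 0 (e.g. the classification branch) and the remainder selects
    teacher 1 (e.g. the mask-refinement branch).
    """
    return 0 if (step % period) < duty * period else 1

def distill_loss(student, teacher_cls, teacher_mask, step, period=100, duty=0.5):
    """Distill the student from whichever teacher the PW wave currently selects."""
    teacher = teacher_cls if pwm_select(step, period, duty) == 0 else teacher_mask
    return ((student - teacher) ** 2).mean()  # simple L2 distillation surrogate
```

Alternating the target keeps the student from locking onto the imperfect knowledge of a single teacher, which is the local-minimum failure mode the abstract describes.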
Few-shot learning (FSL) aims to learn models that generalize to novel classes with limited training samples. Recent works advance FSL towards a scenario where unlabeled examples are also available, and propose semi-supervised FSL methods. Another line of methods also cares about the performance of the base classes in addition to the novel ones, and thus establishes the incremental FSL scenario. In this paper, we generalize the above two under a more realistic yet complex setting, named Semi-Supervised Incremental Few-Shot Learning (S2I-FSL). To tackle the task, we propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating the ambiguity between base and novel classes caused by unreliable pseudo labels, and (2) a model-adaptation mechanism to learn discriminative features for the novel classes while preserving base knowledge, using few labeled and all the unlabeled data. Extensive experiments on standard FSL, semi-supervised FSL, incremental FSL, and the first constructed S2I-FSL benchmarks demonstrate the effectiveness of our proposed method.
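A common building block in semi-supervised FSL is to pseudo-label the unlabeled pool against class prototypes computed from the few labeled shots, then discard unreliable assignments — the very pseudo-labels whose unreliability the meta-training above is designed to mitigate. Below is a minimal NumPy sketch of that block under an assumed margin-based filter; function names and the threshold are hypothetical, not the paper's algorithm.

```python
import numpy as np

def prototypes(feats, labels, n_classes):
    """Mean feature vector per class from the few labeled shots."""
    return np.stack([feats[labels == c].mean(0) for c in range(n_classes)])

def pseudo_label(protos, unlabeled, margin=0.2):
    """Assign unlabeled features to the nearest prototype; drop ambiguous ones.

    A pseudo-label is kept only when the gap between the best and second-best
    prototype distance exceeds `margin` -- a crude filter against exactly the
    unreliable pseudo-labels that blur base/novel class boundaries.
    """
    d = np.linalg.norm(unlabeled[:, None] - protos[None], axis=-1)  # (N, C)
    ordered = np.sort(d, axis=1)
    keep = (ordered[:, 1] - ordered[:, 0]) > margin
    return d.argmin(1), keep
```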
Machine learning has begun to play a central role in many applications. Many of these applications typically also involve datasets that are distributed across multiple computing devices/machines, due either to design constraints (e.g., multi-agent systems) or to computational/privacy reasons (e.g., learning on smartphone data). Such applications often require the learning tasks to be carried out in a decentralized fashion, in which there is no central server directly connected to all nodes. In real-world decentralized settings, nodes are prone to undetected failures due to malfunctioning equipment, cyberattacks, etc., which are likely to crash non-robust learning algorithms. The focus of this paper is on the robustification of decentralized learning in the presence of nodes that have undergone Byzantine failures. The Byzantine failure model allows faulty nodes to arbitrarily deviate from their intended behaviors, thereby ensuring the design of the most robust algorithms. However, in contrast to distributed learning, the study of Byzantine resilience within decentralized learning is still in its infancy. In particular, existing Byzantine-resilient decentralized learning methods either do not scale well to large-scale machine learning models, or they lack statistical convergence guarantees that help characterize their generalization errors. In this paper, a scalable, Byzantine-resilient decentralized machine learning framework termed Byzantine-resilient decentralized gradient descent (BRIDGE) is introduced. Algorithmic and statistical convergence guarantees for strongly convex problems and a class of nonconvex problems are also provided. In addition, large-scale decentralized learning experiments are used to establish that the BRIDGE framework is scalable and delivers competitive results for Byzantine-resilient convex and nonconvex learning.
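A standard screening primitive for Byzantine resilience of this kind is the coordinate-wise trimmed mean: each node sorts the parameter values it receives from its neighbors per coordinate and discards the extremes before averaging, so a bounded number of arbitrarily corrupted vectors cannot drag the aggregate away. The NumPy sketch below illustrates that primitive and one decentralized update built on it; it is a hedged illustration of the general screening idea under assumed names, not BRIDGE's exact specification.

```python
import numpy as np

def trimmed_mean(params, b):
    """Coordinate-wise trimmed mean over received parameter vectors.

    params: (M, D) vectors from M sources; b: assumed upper bound on the
    number of Byzantine sources. The b largest and b smallest values of
    every coordinate are discarded before averaging.
    """
    s = np.sort(params, axis=0)
    return s[b:params.shape[0] - b].mean(0)

def bridge_style_step(own, neighbor_params, grad, lr=0.1, b=1):
    """One decentralized update: screen neighbors' models, then take a gradient step."""
    stacked = np.vstack([own[None], neighbor_params])
    return trimmed_mean(stacked, b) - lr * grad
```

Because the screening is per coordinate, it scales linearly in the model dimension, which is one route to the large-model scalability the abstract emphasizes.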