The lack of efficient segmentation methods and fully-labeled datasets limits the comprehensive assessment of optical coherence tomography angiography (OCTA) microstructures such as the retinal vessel network (RVN) and the foveal avascular zone (FAZ), which are of great value in evaluating ophthalmic and systemic diseases. Here, we introduce an OCTA microstructure segmentation network (OMSN) that combines an encoder-decoder architecture featuring multi-scale skip connections with the split-attention-based residual network ResNeSt, paying specific attention to OCTA microstructural features while facilitating better model convergence and feature representation. The proposed OMSN achieves excellent single- and multi-task performance for RVN and/or FAZ segmentation; notably, the multi-task models outperform their single-task counterparts on the same dataset. On this basis, a fully annotated retinal OCTA segmentation (FAROS) dataset is constructed semi-automatically, filling the gap left by the absence of a pixel-level, fully-labeled OCTA dataset. The OMSN multi-task model retrained with FAROS further confirms its outstanding accuracy for simultaneous RVN and FAZ segmentation.
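To make the multi-scale skip-connection idea concrete, here is a minimal PyTorch sketch in which every decoder stage fuses features from all encoder scales. The channel widths, depths, and the plain convolution blocks standing in for ResNeSt split-attention blocks are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of an encoder-decoder with multi-scale skip connections.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class MultiScaleSkipUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c = in_ch
        for w in widths:
            self.encoders.append(conv_block(c, w))
            c = w
        # Each decoder stage receives the upsampled deeper feature plus
        # *all* encoder features resized to its scale (multi-scale skips).
        skip_ch = sum(widths)
        self.decoders = nn.ModuleList(
            conv_block(widths[i + 1] + skip_ch, widths[i])
            for i in range(len(widths) - 2, -1, -1))
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        feats = []
        for enc in self.encoders:
            x = enc(x)
            feats.append(x)
            x = F.max_pool2d(x, 2)
        x = feats[-1]
        for i, dec in enumerate(self.decoders):
            target = feats[len(feats) - 2 - i]
            x = F.interpolate(x, size=target.shape[-2:], mode="bilinear",
                              align_corners=False)
            skips = [F.interpolate(f, size=target.shape[-2:], mode="bilinear",
                                   align_corners=False) for f in feats]
            x = dec(torch.cat([x] + skips, dim=1))
        return self.head(x)  # two output channels could serve RVN and FAZ masks

# e.g. MultiScaleSkipUNet()(torch.randn(1, 1, 64, 64)).shape -> (1, 2, 64, 64)
```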
Dense prediction tasks such as segmentation and detection of pathological entities hold crucial clinical value in the digital pathology workflow. However, obtaining dense annotations on large cohorts is usually tedious and expensive. Contrastive learning (CL) is thus often employed to leverage large volumes of unlabeled data to pre-train the backbone network. To boost CL for dense prediction, some studies have proposed variants of dense matching objectives during pre-training. However, our analysis shows that employing existing dense matching strategies on histopathology images enforces invariance among incorrect pairs of dense features and is thus imprecise. To address this, we propose a precise location-based matching mechanism that utilizes the overlap between geometric transformations to precisely match regions in two augmentations. Extensive experiments on two pre-training datasets (TCGA-BRCA, NCT-CRC-HE) and three downstream datasets (GlaS, CRAG, BCSS) highlight the superiority of our method in semantic and instance segmentation tasks. Our method outperforms previous dense matching methods by up to 7.2% in average precision for detection and 5.6% in average precision for instance segmentation. Additionally, by using our matching mechanism in three popular contrastive learning frameworks, MoCo-v2, VICRegL, and ConCL, the average precision in detection improves by 0.7% to 5.2% and the average precision in segmentation improves by 0.7% to 4.0%, demonstrating its generalizability.
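The core of such location-based matching is mapping the feature-grid cells of both views back to absolute image coordinates and pairing only cells that fall inside the overlap of the two crops. Below is a hedged sketch; the box format, grid size, and nearest-cell pairing rule are assumptions for illustration, not the paper's exact mechanism.

```python
# Location-based dense matching: keep view-1 cells inside the crop overlap
# and pair each with the nearest view-2 cell in absolute coordinates.
import torch

def dense_match(box1, box2, grid=7):
    """box = (x0, y0, x1, y1) of each crop in original-image coordinates.
    Returns index pairs (idx1, idx2) into the flattened grid*grid feature maps."""
    def centers(box):
        x0, y0, x1, y1 = box
        xs = x0 + (torch.arange(grid) + 0.5) / grid * (x1 - x0)
        ys = y0 + (torch.arange(grid) + 0.5) / grid * (y1 - y0)
        return torch.stack(torch.meshgrid(ys, xs, indexing="ij"), -1).reshape(-1, 2)

    c1, c2 = centers(box1), centers(box2)
    # overlap rectangle of the two crops in absolute coordinates
    ox0, oy0 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ox1, oy1 = min(box1[2], box2[2]), min(box1[3], box2[3])
    if ox0 >= ox1 or oy0 >= oy1:  # disjoint crops: nothing to match
        return torch.empty(0, dtype=torch.long), torch.empty(0, dtype=torch.long)
    keep = ((c1[:, 1] >= ox0) & (c1[:, 1] <= ox1) &
            (c1[:, 0] >= oy0) & (c1[:, 0] <= oy1))
    idx1 = keep.nonzero(as_tuple=True)[0]
    idx2 = torch.cdist(c1[idx1], c2).argmin(dim=1)  # nearest view-2 cell
    return idx1, idx2

# The matched pairs (idx1[i], idx2[i]) would then feed a dense InfoNCE loss.
```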
The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them. While existing benchmarks measure the correlation with human judgements of faithfulness on model-generated summaries, they are insufficient for diagnosing whether metrics are: 1) consistent, i.e., decrease as errors are introduced into a summary, 2) effective on human-written texts, and 3) sensitive to different error types (as summaries can contain multiple errors). To address these needs, we present a benchmark of unfaithful minimal pairs (BUMP), a dataset of 889 human-written, minimally different summary pairs, where a single error (from an ontology of 7 types) is introduced into a summary from the CNN/DailyMail dataset to produce an unfaithful summary. We find that BUMP complements existing benchmarks in a number of ways: 1) the summaries in BUMP are harder to discriminate and less probable under SOTA summarization models, 2) BUMP enables measuring the consistency of metrics and reveals that the most discriminative metrics tend not to be the most consistent, and 3) BUMP enables measuring metrics' performance on individual error types and highlights areas of weakness for future work.
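A metric's consistency on minimal pairs can be scored as the fraction of pairs where the faithful summary receives the higher score than its minimally edited unfaithful counterpart. The sketch below illustrates this with a toy token-overlap metric; the pair format and the stand-in metric are assumptions for illustration, not BUMP's official evaluation code.

```python
# Consistency on minimal pairs: how often does the metric rank the faithful
# summary above the single-error unfaithful one?
def consistency(metric, pairs):
    """pairs: iterable of (source, faithful_summary, unfaithful_summary).
    metric(source, summary) -> float, higher = more faithful."""
    wins = sum(metric(src, good) > metric(src, bad) for src, good, bad in pairs)
    return wins / len(pairs)

# Toy stand-in metric: fraction of summary tokens that appear in the source.
def overlap_metric(source, summary):
    src, summ = set(source.lower().split()), summary.lower().split()
    return sum(tok in src for tok in summ) / max(len(summ), 1)

pairs = [(
    "The cat sat on the mat in the sun.",
    "A cat sat on a mat.",
    "A dog sat on a mat.",   # single entity error, BUMP-style
)]
print(consistency(overlap_metric, pairs))  # 1.0 on this toy pair
```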
To address the non-negativity dropout problem of quaternion models, a novel quasi non-negative quaternion matrix factorization (QNQMF) model is presented for color image processing. To implement QNQMF, a quaternion projected gradient algorithm and a quaternion alternating direction method of multipliers are proposed by formulating QNQMF as a non-convex constrained quaternion optimization problem. Some properties of the proposed algorithms are studied. Numerical experiments on color image reconstruction show that the algorithms operating in the quaternion domain perform better than their counterparts applied separately to the red, green, and blue channels. Furthermore, we apply the proposed algorithms to color face recognition. Numerical results indicate that, on the same data, the face-recognition accuracy of the quaternion model exceeds that obtained on the red, green, and blue channels of color images, as well as on single-channel gray-level images, when large facial expressions and shooting-angle variations are present.
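To illustrate the projected-gradient pattern, here is a real-valued analogue: alternating gradient steps on the two factors, each followed by projection onto the non-negative orthant. The quaternion arithmetic of QNQMF is replaced by ordinary real matrices purely for illustration, and all step sizes are assumptions.

```python
# Real-valued analogue of projected-gradient matrix factorization:
# minimize 0.5 * ||W @ H - V||_F^2 subject to W, H >= 0.
import numpy as np

def projected_gradient_mf(V, rank=8, steps=200, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(steps):
        R = W @ H - V                                # residual
        W = np.maximum(W - lr * (R @ H.T), 0.0)      # step on W, project >= 0
        R = W @ H - V                                # refresh residual
        H = np.maximum(H - lr * (W.T @ R), 0.0)      # step on H, project >= 0
    return W, H

V = np.random.default_rng(1).random((32, 32))
W, H = projected_gradient_mf(V)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative error
```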
Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). Nevertheless, altering certain edges or nodes can unexpectedly change the graph characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which utilizes augmentations in a latent space learned from a Variational Graph Auto-Encoder that reconstructs the graph's topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we further propose an upper bound on the expected contrastive loss to improve the efficiency of our learning algorithm. Thus, graph semantics can be preserved within the augmentations in an intelligent way, without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance compared with competing methods, and ablation studies demonstrate the effectiveness of each module in iGCL.
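The sketch below shows the explicit-sampling variant of latent-space augmentation that iGCL builds on: a VGAE-style encoder yields a Gaussian per node, and two reparameterized samples serve as the two contrastive views. Note the paper replaces this explicit sampling with an upper bound on the expected loss; the one-layer encoder and loss details here are illustrative assumptions.

```python
# Latent-space augmentations: two samples from the VGAE posterior per node.
import torch
import torch.nn.functional as F

def latent_views(A_hat, X, W_shared, W_mu, W_logvar):
    H = torch.relu(A_hat @ X @ W_shared)    # one GCN-style propagation step
    mu, logvar = H @ W_mu, H @ W_logvar
    std = torch.exp(0.5 * logvar)
    z1 = mu + std * torch.randn_like(std)   # augmentation 1 (latent sample)
    z2 = mu + std * torch.randn_like(std)   # augmentation 2 (latent sample)
    return z1, z2

def contrastive_loss(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                # positives on the diagonal
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

n, d, h, k = 6, 8, 16, 4
A = torch.eye(n)                            # placeholder normalized adjacency
X = torch.randn(n, d)
Ws = torch.randn(d, h, requires_grad=True)
Wm = torch.randn(h, k, requires_grad=True)
Wv = torch.randn(h, k, requires_grad=True)
contrastive_loss(*latent_views(A, X, Ws, Wm, Wv)).backward()
```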
Rank aggregation with pairwise comparisons has shown promising results in elections, sports competitions, recommendation, and information retrieval. However, in contrast to the extensive body of work on its computational and statistical properties, little attention has been paid to the security issues of such algorithms. Driven by huge profits, a potential adversary has strong motivation and incentive to manipulate the ranking list, while the intrinsic vulnerability of rank aggregation methods has not been well studied in the literature. To fully understand the possible risks, in this paper we focus on purposeful adversaries who aim to designate the aggregated result by modifying the pairwise data. From a dynamical-systems point of view, the attack behavior with a target ranking list is a fixed point of the composition of the adversary and the victim. To perform the targeted attack, we formulate the interaction between the adversary and the victim as a game-theoretic framework consisting of two continuous operators, and we establish the existence of a Nash equilibrium. Two procedures against HodgeRank and RankCentrality are then constructed to produce the modification of the original data. Furthermore, we prove that the victim will produce the target ranking list once the adversary masters complete information. Notably, the proposed methods allow the adversary to hold only incomplete information or imperfect feedback and still perform the purposeful attack. A series of toy simulations and several real-world data experiments demonstrate the effectiveness of the proposed targeted attack strategies. The experimental results show that the proposed methods can achieve the attacker's goal: the leading candidate of the perturbed ranking list is the one specified by the adversary.
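For intuition about the victim side, the sketch below implements HodgeRank as a least-squares problem over pairwise skill gaps, together with a naive complete-information perturbation that rewrites each comparison to the gap implied by the adversary's target scores. Everything beyond HodgeRank's least-squares form is an illustrative assumption, not the paper's fixed-point or game-theoretic procedure.

```python
# HodgeRank as least squares on the comparison graph, plus a naive attack.
import numpy as np

def hodgerank(pairs, n):
    """pairs: list of (i, j, y_ij) with y_ij the observed skill gap s_i - s_j."""
    A = np.zeros((len(pairs), n))
    y = np.zeros(len(pairs))
    for k, (i, j, yij) in enumerate(pairs):
        A[k, i], A[k, j], y[k] = 1.0, -1.0, yij
    s, *_ = np.linalg.lstsq(A, y, rcond=None)
    return s - s.mean()                      # scores identifiable up to a shift

pairs = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.5)]
print(np.argsort(-hodgerank(pairs, 3)))      # honest ranking: 0, 1, 2

# Complete-information attack: rewrite each comparison to the gap implied
# by the adversary's target scores, so the victim recovers the target list.
target = np.array([-1.0, 0.0, 1.0])          # adversary wants item 2 on top
attacked = [(i, j, target[i] - target[j]) for i, j, _ in pairs]
print(np.argsort(-hodgerank(attacked, 3)))   # now ranks 2, 1, 0
```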
Night-time scene parsing (NTSP) is essential to many vision applications, especially autonomous driving. Most existing methods are designed for day-time scene parsing: they rely on modeling spatial contextual cues from pixel intensities under even illumination. Hence, these methods perform poorly in night-time scenes, as such spatial contextual cues are buried in the over-/under-exposed regions of night-time images. In this paper, we first conduct an image-frequency-based statistical experiment to interpret the discrepancy between day-time and night-time scenes. We find that image frequency distributions differ significantly between day-time and night-time scenes, and that understanding such frequency distributions is critical to the NTSP problem. Based on this, we propose to exploit image frequency distributions for night-time scene parsing. First, we propose a Learnable Frequency Encoder (LFE) to model the relations among different frequency coefficients and to measure all frequency components dynamically. Second, we propose a Spatial Frequency Fusion (SFF) module that fuses spatial and frequency information to guide the extraction of spatial contextual features. Extensive experiments show that our method performs favorably against state-of-the-art methods on the NightCity, NightCity+, and BDD100K-night datasets. In addition, we demonstrate that our method can be applied to existing day-time scene parsing methods and improve their performance on night-time scenes.
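A minimal way to realize learnable frequency reweighting is to rescale FFT coefficients with learnable per-frequency weights and fuse the result back with the spatial features, as sketched below. The actual LFE and SFF designs in the paper are more involved; the shapes and the simple concatenation-based fusion here are assumptions.

```python
# Learnable per-frequency reweighting plus spatial-frequency fusion.
import torch
import torch.nn as nn

class FrequencyReweight(nn.Module):
    def __init__(self, h, w):
        super().__init__()
        # one learnable weight per rfft2 frequency bin
        self.weight = nn.Parameter(torch.ones(h, w // 2 + 1))

    def forward(self, x):                     # x: (B, C, H, W)
        freq = torch.fft.rfft2(x, norm="ortho")
        freq = freq * self.weight             # dynamically rescale components
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")

class SpatialFrequencyFusion(nn.Module):
    def __init__(self, ch, h, w):
        super().__init__()
        self.freq = FrequencyReweight(h, w)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, x):                     # fuse spatial + frequency branches
        return self.fuse(torch.cat([x, self.freq(x)], dim=1))

x = torch.randn(2, 16, 32, 32)
print(SpatialFrequencyFusion(16, 32, 32)(x).shape)  # (2, 16, 32, 32)
```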
In this work, the SVEIDR model and its variants (the aged and vaccination models) are introduced to encode the effect of social contact for different age groups and vaccination statuses. We then implement physics-informed neural networks on both simulated and real-world data. The paper presents the resulting transmission and prediction analysis for COVID-19 learned by the neural networks.
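The physics-informed training pattern is sketched below with a plain SIR system standing in for the SVEIDR equations, since the paper's exact compartments and coupling terms are not reproduced here: a small network maps time to compartment values, and the loss penalizes the residual of the governing ODEs via autograd. The rates and network sizes are assumptions.

```python
# PINN pattern: penalize the ODE residual of a compartmental model.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 3))        # outputs (S, I, R)
beta, gamma = 0.3, 0.1                       # assumed transmission/recovery rates

def ode_residual(t):
    t = t.requires_grad_(True)
    S, I, R = net(t).unbind(dim=1)
    dS, dI, dR = [torch.autograd.grad(c.sum(), t, create_graph=True)[0].squeeze(1)
                  for c in (S, I, R)]
    rS = dS + beta * S * I                   # dS/dt = -beta * S * I
    rI = dI - beta * S * I + gamma * I       # dI/dt =  beta * S * I - gamma * I
    rR = dR - gamma * I                      # dR/dt =  gamma * I
    return (rS ** 2 + rI ** 2 + rR ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                         # plus a data-fit term in practice
    opt.zero_grad()
    loss = ode_residual(torch.rand(64, 1))   # collocation points in [0, 1]
    loss.backward()
    opt.step()
```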
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, the original FAT suffers from catastrophic overfitting, i.e., the robust accuracy against adversarial attacks suddenly and dramatically decreases. Although several FAT variants spare no effort to prevent this overfitting, they sacrifice much computational cost. In this paper, we explore the difference between the training processes of SAT and FAT and observe that the attack success rate of adversarial examples (AEs) in FAT gradually degrades in the late training stage, resulting in overfitting; these AEs are generated by the fast gradient sign method (FGSM) with a zero or random initialization. Based on this observation, and after investigating several initialization strategies, we propose a prior-guided FGSM initialization method to avoid overfitting, which improves the quality of the AEs throughout training. The initialization is formed by leveraging historically generated AEs without additional computational cost. We further provide a theoretical analysis of the proposed initialization method. We also propose a simple yet effective regularizer based on the prior-guided initialization, requiring that the currently generated perturbation not deviate too much from the prior-guided one; the regularizer adopts both historical and current adversarial perturbations to guide model learning. Evaluations on four datasets demonstrate that the proposed method prevents catastrophic overfitting and outperforms state-of-the-art FAT methods. The code is released at https://github.com/jiaxiaojunqaq/fgsm-pgi.
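A hedged sketch of the prior-guided idea follows: keep each batch's previous perturbation, use it to initialize FGSM instead of a zero or random start, and regularize the current adversarial prediction toward the one under the prior. The buffer handling, the squared-error regularizer on logits, and all hyperparameters are assumptions, not the paper's exact formulation.

```python
# FGSM with a prior perturbation as initialization, plus a prior regularizer.
import torch
import torch.nn.functional as F

def fgsm_with_prior(model, x, y, prior, eps=8 / 255, alpha=10 / 255):
    delta = prior.clone().detach().requires_grad_(True)  # prior init, not zero/random
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    return torch.clamp(delta + alpha * grad.sign(), -eps, eps).detach()

def train_step(model, opt, x, y, prior, lam=1.0):
    delta = fgsm_with_prior(model, x, y, prior)
    logits_adv = model(x + delta)
    with torch.no_grad():                    # prediction under the prior
        logits_prior = model(x + prior)
    # keep the current adversarial prediction close to the prior-guided one
    loss = F.cross_entropy(logits_adv, y) + lam * F.mse_loss(logits_adv, logits_prior)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return delta                             # becomes this batch's prior next epoch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
prior = torch.zeros_like(x)                  # first epoch: zero prior
prior = train_step(model, opt, x, y, prior)
```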
Histopathology whole slide images (WSIs) play a very important role in clinical studies and serve as the gold standard for many cancer diagnoses. However, generating automatic tools for processing WSIs is challenging due to their enormous size. Currently, conventional methods address this by relying on a multiple instance learning (MIL) strategy to process a WSI at the patch level. Although effective, such methods are computationally expensive: tiling a WSI into patches takes time, and the spatial relations between the tiles are not explored. To tackle these limitations, we propose a locally supervised learning framework that processes the entire slide, exploring all the local and global information it contains. The framework divides a pre-trained network into several modules and optimizes each module locally using an auxiliary model. We also introduce a random feature reconstruction unit (RFR) to preserve discriminative features during training, improving the performance of our method by 1% to 3%. Extensive experiments on three publicly available WSI datasets, TCGA-NSCLC, TCGA-RCC, and LKS, highlight the superiority of our method on different classification tasks. Our method outperforms state-of-the-art MIL methods in accuracy while being 7 to 10 times faster. Additionally, when divided into eight modules, our method requires only 20% of the total GPU memory needed by end-to-end training. Our code is available at https://github.com/cvlab-stonybrook/local_learning_wsi.
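The module-wise training scheme can be sketched as follows: the backbone is split into modules, each followed by a small auxiliary classifier; every module is updated from its own local loss, and activations are detached between modules so no end-to-end graph (or its memory) is retained. The RFR unit and WSI-specific encoders from the paper are omitted, and all sizes are illustrative assumptions.

```python
# Locally supervised training: per-module losses with detached activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

modules = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
])
aux_heads = nn.ModuleList([
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, 10))
    for c in (16, 32, 64)
])
opts = [torch.optim.Adam(list(m.parameters()) + list(h.parameters()), lr=1e-3)
        for m, h in zip(modules, aux_heads)]

def local_train_step(x, y):
    for module, head, opt in zip(modules, aux_heads, opts):
        x = module(x)
        loss = F.cross_entropy(head(x), y)   # local supervision for this module
        opt.zero_grad()
        loss.backward()
        opt.step()
        x = x.detach()                       # cut the graph: no end-to-end backprop
    return loss.item()

print(local_train_step(torch.rand(2, 3, 64, 64), torch.randint(0, 10, (2,))))
```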