Rapid on-site evaluation (ROSE) can significantly accelerate the diagnosis of pancreatic cancer through timely analysis of fast-stained cytopathological images. Computer-aided diagnosis (CAD) could potentially address the shortage of pathologists for ROSE. However, cancerous patterns vary greatly between samples, making the CAD task extremely challenging. Moreover, ROSE images exhibit complex perturbations in color distribution, brightness, and contrast due to varying staining quality and different acquisition device types. To address these challenges, we propose a shuffle instances-based vision transformer (SI-ViT) approach, which reduces the perturbations and enhances the modeling across instances. With the reassembled shuffled instances and their bag-level soft labels, the approach uses a regression head to make the model focus on the cells rather than on the various perturbations. Combined with a classification head, the model can effectively identify the general distribution patterns across different instances. The results show significant improvements in classification accuracy with more accurate attention regions, indicating that the diverse patterns of ROSE images are effectively extracted and the complex perturbations are substantially reduced. This also suggests the great potential of SI-ViT for analyzing cytopathological images. The code and experimental results are available at https://github.com/sagizty/mil-si.
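The shuffle-instance idea can be illustrated with a minimal sketch (names and data layout are hypothetical, not the authors' code): patches from several images are pooled and shuffled into new bags, and each bag's regression target is the fraction of positive instances it contains, i.e., a bag-level soft label.

```python
import random

def shuffle_into_bags(instances, bag_size, seed=0):
    """Pool instance (patch, label) pairs from all images, shuffle them,
    and reassemble fixed-size bags. Each bag gets a soft label equal to
    the fraction of positive instances it contains."""
    rng = random.Random(seed)
    pool = list(instances)
    rng.shuffle(pool)
    bags = []
    for i in range(0, len(pool) - bag_size + 1, bag_size):
        bag = pool[i:i + bag_size]
        soft_label = sum(lbl for _, lbl in bag) / bag_size
        bags.append(([patch for patch, _ in bag], soft_label))
    return bags

# Toy example: 8 patches, half positive.
instances = [(f"patch{i}", i % 2) for i in range(8)]
bags = shuffle_into_bags(instances, bag_size=4)
for patches, y in bags:
    print(len(patches), y)
```

Because the soft label depends only on how many positives land in a bag, the regression head is pushed to count cancerous cells rather than to memorize per-image staining artifacts.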
Federated learning (FL) can improve data privacy and efficiency in magnetic resonance (MR) image reconstruction by enabling multiple institutions to collaborate without aggregating local data. However, the domain shift caused by different MR imaging protocols can substantially degrade the performance of FL models. Recent FL techniques tend to address this by enhancing the generalization of the global model, but they ignore the domain-specific features, which may contain important information about the device properties and be useful for local reconstruction. In this paper, we propose a specificity-preserving FL algorithm for MR image reconstruction (FedMRI). The core idea is to divide the MR reconstruction model into two parts: a globally shared encoder to obtain a generalized representation at the global level, and a client-specific decoder to preserve the domain-specific properties of each client, which is important for collaborative reconstruction when clients have unique distributions. Furthermore, to boost the convergence of the globally shared encoder when a domain shift exists, a weighted contrastive regularization is introduced to directly correct any deviation between the client and the server during optimization. Extensive experiments demonstrate that the reconstruction results of our FedMRI are closest to the ground truth for multi-institutional data, and that it outperforms state-of-the-art FL methods.
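The split-aggregation idea can be sketched in a few lines (a toy stand-in with plain lists for tensors, not the authors' implementation): only the shared encoder is averaged across clients, while each client-specific decoder stays local.

```python
def fed_avg_encoder(client_models, client_weights):
    """Average only the globally shared encoder parameters across clients;
    client-specific decoder parameters are left untouched locally.
    Parameters are plain dicts mapping name -> list of floats."""
    total = sum(client_weights)
    names = client_models[0]["encoder"].keys()
    global_encoder = {}
    for name in names:
        params = [m["encoder"][name] for m in client_models]
        global_encoder[name] = [
            sum(w * p[i] for w, p in zip(client_weights, params)) / total
            for i in range(len(params[0]))
        ]
    # Broadcast the averaged encoder back; decoders remain client-specific.
    for m in client_models:
        m["encoder"] = {k: list(v) for k, v in global_encoder.items()}
    return global_encoder

clients = [
    {"encoder": {"w": [1.0, 2.0]}, "decoder": {"w": [10.0]}},
    {"encoder": {"w": [3.0, 4.0]}, "decoder": {"w": [20.0]}},
]
enc = fed_avg_encoder(clients, client_weights=[1.0, 1.0])
print(enc["w"])  # averaged encoder weights; decoders stay different
```

The design choice is the point: what is averaged generalizes across sites, and what is kept local preserves each scanner's domain-specific properties.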
Super-resolving a magnetic resonance (MR) image of a target contrast under the guidance of a corresponding auxiliary contrast, which provides additional anatomical information, is a new solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate the different contrasts directly, ignoring their relationships in different clues, e.g., in the high-intensity and low-intensity regions. In this study, we propose a separable attention network (comprising high-intensity priority attention and low-intensity separation attention), named SANet. Our SANet can explore the regions of high and low intensity in the "forward" and "reverse" directions with the help of the auxiliary contrast, while learning clearer anatomical structure and edge information for the SR of the target-contrast MR image. SANet provides three appealing benefits: (1) It is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the high-intensity and low-intensity regions, diverting more attention to refining any uncertain details between these regions and correcting the fine areas in the reconstructed results. (2) A multi-stage integration module is proposed to learn the responses of multi-contrast fusion at multiple stages, obtain the dependencies between the fused representations, and boost their representation ability. (3) Extensive experiments against various state-of-the-art multi-contrast SR methods on the fastMRI and clinical in vivo datasets demonstrate the superiority of our model.
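The separable idea can be illustrated with a toy sketch (thresholding and gains are hypothetical simplifications of the learned attention branches): the auxiliary contrast splits pixel locations into complementary high- and low-intensity masks, and each region is then refined by its own branch.

```python
def separable_intensity_masks(aux, threshold):
    """Split pixel locations into high- and low-intensity regions using the
    auxiliary contrast, so each region can be refined by its own attention
    branch ('forward' for high intensity, 'reverse' for low)."""
    high = [[1.0 if v >= threshold else 0.0 for v in row] for row in aux]
    low = [[1.0 - h for h in row] for row in high]
    return high, low

def apply_masks(target, high, low, high_gain, low_gain):
    """Toy stand-in for the two attention branches: scale each region
    with its own gain, then recombine into one output."""
    return [
        [t * (high_gain * h + low_gain * l)
         for t, h, l in zip(trow, hrow, lrow)]
        for trow, hrow, lrow in zip(target, high, low)
    ]

aux = [[0.9, 0.1], [0.2, 0.8]]      # auxiliary contrast (2x2 toy image)
target = [[1.0, 1.0], [1.0, 1.0]]   # target contrast to super-resolve
high, low = separable_intensity_masks(aux, threshold=0.5)
out = apply_masks(target, high, low, high_gain=2.0, low_gain=0.5)
print(out)
```

Because the two masks are complementary, every pixel is handled by exactly one branch, mirroring the "forward"/"reverse" split described above.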
A core problem in magnetic resonance imaging (MRI) is the trade-off between acceleration and image quality. Image reconstruction and super-resolution are two important techniques in MRI. Current methods are designed to perform these tasks separately, ignoring the correlations between them. In this work, we propose an end-to-end task transformer network (T²Net) for joint MRI reconstruction and super-resolution, which allows representation sharing and feature transfer between the two tasks to obtain higher-quality images, free of aliasing and motion artifacts, from highly undersampled and degraded MRI data. Our framework combines reconstruction and super-resolution, divided into two sub-branches whose features serve as queries and keys. Specifically, we encourage joint feature learning between the two tasks, thereby transferring accurate task information. We first use two separate CNN branches to extract task-specific features. Then, a task transformer module is designed to embed and synthesize the correlations between the two tasks. Experimental results show that our multi-task model significantly outperforms advanced sequential methods, both quantitatively and qualitatively.
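The query/key exchange between the two branches can be sketched as plain scaled dot-product attention where queries come from one task's features and keys/values from the other's (a generic sketch of cross-attention, not the authors' exact module):

```python
import math

def cross_task_attention(q_feats, kv_feats):
    """Scaled dot-product attention where queries come from one task branch
    (e.g., super-resolution) and keys/values from the other (reconstruction),
    transferring features between the two tasks."""
    d = len(q_feats[0])
    out = []
    for q in q_feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in kv_feats]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[i] for w, v in zip(weights, kv_feats))
                    for i in range(d)])
    return out

q = [[1.0, 0.0]]                    # one query token from branch A
kv = [[1.0, 0.0], [0.0, 1.0]]       # two key/value tokens from branch B
print(cross_task_attention(q, kv))
```

The query token attends most to the matching key, so features flow between branches in proportion to their similarity.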
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
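One common way to encode 3D coordinates into token features, which the implicit-alignment step could build on, is a multi-frequency sinusoidal embedding; the sketch below is a generic illustration, not CMT's exact encoding:

```python
import math

def encode_3d_point(point, num_freqs=4):
    """Map an (x, y, z) coordinate to a feature vector of sinusoids at
    multiple frequencies, so image and point-cloud tokens can be aligned
    implicitly in a shared positional space."""
    feats = []
    for coord in point:
        for k in range(num_freqs):
            freq = 2.0 ** k
            feats.append(math.sin(freq * coord))
            feats.append(math.cos(freq * coord))
    return feats

vec = encode_3d_point((0.5, -1.0, 2.0))
print(len(vec))  # 3 coords * 4 freqs * 2 (sin, cos) = 24
```

Adding such position features to both modalities' tokens lets a transformer relate them without an explicit view transformation.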
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
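The information-gain criterion used to pick the top property features is standard and easy to sketch (a textbook computation on discrete features, not MGTAB's pipeline code):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Information gain of a discrete feature with respect to the labels:
    H(labels) minus the weighted entropy of each feature-value subset."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# A feature that perfectly separates bots from humans has maximal gain;
# one that is independent of the label has zero gain.
labels = [0, 0, 1, 1]
perfect = ["a", "a", "b", "b"]
useless = ["a", "b", "a", "b"]
print(information_gain(perfect, labels), information_gain(useless, labels))
```

Ranking all candidate user properties by this score and keeping the top 20 is the feature-selection step the abstract describes.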
Learning feature interactions is the key to success for large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. In order to reduce the high cost of human effort in feature engineering, researchers propose several deep neural network (DNN)-based approaches to learn the feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise interactions and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called polynomial interaction network (PIN), which learns higher-order vector-wise interactions recursively. By integrating a subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on the network architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate its performance on three real-world datasets. Our experimental results demonstrate the effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.
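The recursive vector-wise interaction can be sketched as repeated Hadamard products with the input embeddings, raising the interaction order by one per layer (a simplified reading of the PIN recursion, without the learned weights and subspace crossing):

```python
def pin_layer(x_prev, x0):
    """One simplified polynomial interaction layer: element-wise (Hadamard)
    product of the previous layer's field embeddings with the raw input
    embeddings, producing interactions one order higher."""
    return [[a * b for a, b in zip(row_prev, row0)]
            for row_prev, row0 in zip(x_prev, x0)]

def pin(x0, num_layers):
    """Stack layers recursively and collect every order's output."""
    x = x0
    outputs = []
    for _ in range(num_layers):
        x = pin_layer(x, x0)
        outputs.append(x)
    return outputs

x0 = [[2.0, 3.0], [1.0, -1.0]]  # two fields, embedding dimension 2
outs = pin(x0, num_layers=2)
print(outs[0])  # order-2 interactions: x0 ∘ x0
print(outs[1])  # order-3 interactions: x0 ∘ x0 ∘ x0
```

Because the product is taken per embedding dimension across whole field vectors, the interactions stay vector-wise; bit-wise mixing would require additional cross-dimension weights.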
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
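The formulaic-knowledge-bank idea can be sketched as a simple retrieval step (the entries and matching rule here are illustrative, not KnowSQL data): domain terms in the question are grounded in formulas the parser can expand, instead of annotating more examples.

```python
# Hypothetical formulaic knowledge bank: domain term -> formula.
FORMULA_BANK = {
    "gross margin": "(revenue - cost) / revenue",
    "year-over-year growth": "(value_t - value_t1) / value_t1",
}

def retrieve_formulas(question, bank):
    """Retrieve every formulaic-knowledge entry whose trigger phrase
    appears in the question, so the parser can ground domain terms in
    formulas rather than needing extra annotated data."""
    return {term: formula for term, formula in bank.items()
            if term in question.lower()}

hits = retrieve_formulas("What is the gross margin of each product?",
                         FORMULA_BANK)
print(hits)
```

A downstream parser can then substitute the retrieved formula into the SQL it generates, which is the role ReGrouP plays during parsing.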
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem in visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
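The photometric error that supervises both stages is simply the mean absolute intensity difference between a reference frame and the frame re-synthesized from the predicted ego-motion and depth; a minimal sketch (the warping step is omitted, and the toy 2x2 images are illustrative):

```python
def photometric_error(reference, reconstructed):
    """Mean absolute intensity difference between the current frame and the
    frame re-synthesized from predicted ego-motion and depth; minimizing it
    supervises the visual encoder without any labels."""
    n = sum(len(row) for row in reference)
    return sum(abs(a - b)
               for ref_row, rec_row in zip(reference, reconstructed)
               for a, b in zip(ref_row, rec_row)) / n

ref = [[0.2, 0.4], [0.6, 0.8]]  # current frame (toy 2x2 intensities)
rec = [[0.2, 0.5], [0.6, 0.6]]  # frame re-synthesized from predictions
print(photometric_error(ref, rec))
```

Because the loss is computed purely from raw video frames, no calibration or labels are needed, which is what makes uncalibrated YouTube footage usable for pre-training.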
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to hold online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but remain low-resource; (2) generating meaningful and relevant interview dialogs requires thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
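The disentanglement can be sketched as two separate modules (class names, matching rule, and templates here are hypothetical illustrations, not the paper's architecture): a small knowledge selector that needs grounded interview dialogs, and a generator whose parameters can be trained on abundant ungrounded dialogs and resume data.

```python
class KnowledgeSelector:
    """Small grounded module: picks the resume facts relevant to the job
    description. Only this part needs scarce real interview dialogs."""
    def select(self, resume_facts, job_keywords):
        return [f for f in resume_facts
                if any(k in f for k in job_keywords)]

class DialogGenerator:
    """Larger module trainable on ungrounded dialogs and resume data, so
    most parameters need no interview dialogs at all."""
    def generate(self, selected_facts):
        if not selected_facts:
            return "Tell me about yourself."
        return "Can you elaborate on: " + "; ".join(selected_facts) + "?"

selector, generator = KnowledgeSelector(), DialogGenerator()
facts = ["5 years of Python", "led a data team", "fluent in French"]
chosen = selector.select(facts, job_keywords=["Python", "data"])
print(generator.generate(chosen))
```

Keeping the interview-dialog-dependent parameters confined to the selector is what lets the overall system be trained with very few real interview dialogs.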