Assessing the microsatellite stability status of a patient's colorectal cancer is crucial for personalizing the treatment regime. Recently, convolutional neural networks (CNNs) combined with transfer learning were proposed to circumvent traditional laboratory testing and determine microsatellite status directly from hematoxylin-and-eosin-stained biopsy whole-slide images (WSI). However, the high resolution of a WSI practically prevents direct classification of the entire slide. Current approaches bypass the WSI's high resolution by first classifying small patches extracted from the WSI and then aggregating the patch-level classification logits to deduce the patient-level status. Such approaches limit the capacity to capture important information that resides in the high-resolution WSI data. We introduce an effective approach that leverages the high-resolution WSI information through momentum contrastive learning of patch embeddings, together with training a patient-level classifier on groups of those embeddings. Our approach achieves up to 7.4% better accuracy than the straightforward patch-level classification and patient-level aggregation approach (AUC, $0.91 \pm 0.01$ vs. $0.85 \pm 0.04$, p-value $< 0.01$). Our code can be found at https://github.com/TechnionComputationalMRILab/colorectal_cancer_ai.
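The abstract describes a two-stage recipe: self-supervised momentum contrastive learning of patch embeddings, then a patient-level classifier trained on groups of those embeddings. Below is a minimal PyTorch sketch of that recipe, not the authors' implementation (the linked repository contains the real one): the tiny encoder, the mean-pooling aggregator, and all names are illustrative assumptions, and in-batch negatives stand in for a full MoCo-style memory queue.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(dim: int) -> nn.Module:
    """Toy patch encoder; a real system would use a CNN backbone."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, dim),
    )

class MomentumEncoderPair(nn.Module):
    """Query encoder trained by backprop; key encoder updated as an
    exponential moving average (EMA) of the query encoder's weights."""
    def __init__(self, dim: int = 128, momentum: float = 0.999):
        super().__init__()
        self.momentum = momentum
        self.query = make_encoder(dim)
        self.key = make_encoder(dim)
        self.key.load_state_dict(self.query.state_dict())
        for p in self.key.parameters():
            p.requires_grad = False

    @torch.no_grad()
    def update_key(self):
        for q, k in zip(self.query.parameters(), self.key.parameters()):
            k.mul_(self.momentum).add_(q, alpha=1.0 - self.momentum)

def contrastive_loss(q: torch.Tensor, k: torch.Tensor, t: float = 0.07):
    """InfoNCE with in-batch negatives: row i of `q` should match row i
    of `k` and repel every other row."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / t
    return F.cross_entropy(logits, torch.arange(q.size(0), device=q.device))

class PatientClassifier(nn.Module):
    """Patient-level head over a *group* of patch embeddings."""
    def __init__(self, dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (n_patches, dim); mean-pool the group.
        return self.head(patch_embeddings.mean(dim=0, keepdim=True))

# One self-supervised step on two augmented views of the same patches,
# then patient-level classification over frozen patch embeddings.
pair = MomentumEncoderPair()
view_a, view_b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
loss = contrastive_loss(pair.query(view_a), pair.key(view_b))
loss.backward()
pair.update_key()

clf = PatientClassifier()
with torch.no_grad():
    embeddings = pair.query(torch.randn(100, 3, 64, 64))  # 100 patches
logits = clf(embeddings)  # one patient-level prediction
```

Because the contrastive stage needs no microsatellite labels, it can exploit every patch of every slide; only the small patient-level head is trained on the labeled cohort.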
In this paper, we present a modified Xception architecture, the NEXcepTion network. Our network has significantly better performance than the original Xception, achieving top-1 accuracy of 81.5% on the ImageNet validation dataset (an improvement of 2.5%) as well as a 28% higher throughput. Another variant of our model, NEXcepTion-TP, reaches 81.8% top-1 accuracy, similar to ConvNeXt (82.1%), while having a 27% higher throughput. Our model is the result of applying improved training procedures and new design decisions combined with an application of Neural Architecture Search (NAS) on a smaller dataset. These findings call for revisiting older architectures and reassessing their potential when combined with the latest enhancements.
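The abstract does not spell out NEXcepTion's block design; the NAS-derived changes are in the paper itself. As background only, here is a minimal PyTorch sketch of the depthwise-separable convolution that Xception, and hence any Xception derivative, is built from; the class name and layer choices are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise-separable convolution, Xception's core primitive:
    a per-channel spatial (depthwise) convolution followed by a 1x1
    (pointwise) convolution that mixes information across channels."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn(self.pointwise(self.depthwise(x)))

x = torch.randn(1, 64, 56, 56)
print(SeparableConv2d(64, 128)(x).shape)  # torch.Size([1, 128, 56, 56])
```

Factoring a standard convolution this way cuts parameters and FLOPs sharply, which is why throughput-oriented redesigns like the one above start from this primitive.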
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
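Since the weights and code are publicly released, the published checkpoints can be loaded with the Hugging Face `transformers` library. A minimal usage sketch follows; it assumes the smaller `bigscience/bloom-560m` checkpoint for illustration, since the full 176B model does not fit on a single GPU.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The full model is published as "bigscience/bloom"; the 560M variant is
# assumed here so the example runs on modest hardware.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "A short poem about open science:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```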