Surgical robot automation has attracted increasing research interest over the past decade, given its huge potential to benefit surgeons, nurses, and patients. Recently, the learning paradigm of embodied AI has demonstrated a promising ability to learn good control policies for various complex tasks, and embodied AI simulators play an essential role in facilitating such research. However, existing open-source simulators for surgical robots still do not sufficiently support human interaction through physical input devices, which limits effective investigation of how human demonstrations affect policy learning. In this paper, we study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning. Specifically, we establish our platform based on our previously released SurRoL simulator, with several new features co-developed to allow high-quality human interaction via an input device. With these, we further propose to collect human demonstrations and imitate the action patterns to achieve more effective policy learning. We showcase the improvement of our simulation environment with the designed new features and tasks, and validate state-of-the-art reinforcement learning algorithms using the interactive environment. Promising results are obtained, with which we hope to pave the way for future research on surgical embodied intelligence. Our platform is released and will be continuously updated at the website: https://med-air.github.io/SurRoL/
Computer-assisted minimally invasive surgery has great potential to benefit the modern operating theater. Video data streamed from the endoscope provides rich information to support context awareness in next-generation intelligent surgical systems. To achieve accurate perception and automatic manipulation during surgical procedures, learning-based techniques are a promising approach, having enabled advanced image analysis and scene understanding in recent years. However, learning such models relies heavily on large-scale, high-quality, multi-task labeled data. This is currently a bottleneck for the topic, as available public datasets in the CAI field remain extremely limited. In this paper, we present and release the first integrated dataset (named AutoLaparo) with multiple image-based perception tasks to facilitate learning-based automation in hysterectomy surgery. Our AutoLaparo dataset is developed from full-length videos of entire hysterectomy procedures. Specifically, three different yet highly correlated tasks are formulated in the dataset, including surgical workflow recognition, laparoscope motion prediction, and instrument and key anatomy segmentation. In addition, we provide experimental results of state-of-the-art models as reference benchmarks for further model development and evaluation on this dataset. The dataset is available at https://autolaparo.github.io.
Reconstruction of soft tissue in robotic surgery from endoscopic stereo videos is important for many applications, such as intra-operative navigation and image-guided robotic surgery automation. Previous works on this task mainly rely on SLAM-based approaches, which struggle to handle complex surgical scenes. Inspired by recent progress in neural rendering, we present a novel framework for deformable tissue reconstruction from binocular captures in robotic surgery under the single-viewpoint setting. Our framework adopts dynamic neural radiance fields to represent deformable surgical scenes in MLPs and optimizes shapes and deformations in a learning-based manner. In addition to non-rigid deformation, tool occlusion and poor 3D clues from a single viewpoint are also particular challenges for soft tissue reconstruction. To overcome these difficulties, we propose a series of strategies: tool-mask-guided ray casting, stereo depth-cueing ray marching, and stereo depth-supervised optimization. With experiments on da Vinci robotic surgery videos, our method significantly outperforms current state-of-the-art reconstruction methods in handling various complex non-rigid deformations. To our knowledge, this is among the first works to leverage neural rendering for 3D reconstruction of surgical scenes, with remarkable potential. Code is available at: https://github.com/med-air/endonerf.
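The tool-mask-guided ray casting idea can be illustrated with a minimal sketch: rays are only cast through pixels where no surgical tool occludes the tissue, so occluded regions do not corrupt the radiance field. The mask format and uniform sampling below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def sample_rays(h, w, tool_mask, n_rays, seed=None):
    """Sample pixel coordinates for ray casting, skipping tool pixels.

    tool_mask: (h, w) boolean array, True where a tool occludes tissue.
    Returns an (n_rays, 2) array of (row, col) pixel coordinates.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.where(~tool_mask)                 # candidate tissue pixels
    idx = rng.choice(len(ys), size=n_rays, replace=False)
    return np.stack([ys[idx], xs[idx]], axis=1)

# toy example: an 8x8 frame with a tool occupying the left half
mask = np.zeros((8, 8), dtype=bool)
mask[:, :4] = True
rays = sample_rays(8, 8, mask, n_rays=10, seed=0)
assert np.all(rays[:, 1] >= 4)   # no sampled ray hits a tool pixel
```

In practice such a sampler would feed the selected pixels into ray generation for the NeRF training loop; here the point is simply that masked pixels never contribute supervision.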
The demand for competent robot-assisted surgeons is progressively expanding, as robot-assisted surgery has become increasingly popular due to its clinical advantages. To meet this demand and provide better surgical education, we develop a novel robotic surgery education system by integrating an artificial intelligence surgical module with augmented reality visualization. The artificial intelligence module incorporates reinforcement learning to learn from expert demonstrations and then generates 3D guidance trajectories, providing surgical context awareness of the complete surgical procedure. The trajectory information is further visualized in the stereo viewer of the dVRK, together with other information such as text hints, so that the user can perceive the 3D guidance and learn the procedure. The proposed system is evaluated through preliminary trials on the surgical education task of peg transfer, which demonstrates its feasibility and potential as a next-generation robot-assisted surgery education solution.
Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based ones, have been developed and applied widely to RS data analysis. The successful application of AI covers almost all aspects of Earth observation (EO) missions, from low-level vision tasks like super-resolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation. While AI techniques enable researchers to observe and understand the Earth more accurately, the vulnerability and uncertainty of AI models deserve further attention, considering that many geoscience and RS tasks are highly safety-critical. This paper reviews the current development of AI security in the geoscience and RS field, covering the following five important aspects: adversarial attack, backdoor attack, federated learning, uncertainty, and explainability. Moreover, the potential opportunities and trends are discussed to provide insights for future research. To the best of the authors' knowledge, this paper is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community. Available code and datasets are also listed in the paper to move this vibrant field of research forward.
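Of the five aspects surveyed above, the adversarial attack is the easiest to make concrete. The following is a generic sketch of the classic one-step FGSM attack on a logistic-regression classifier, not any specific method from the review; the toy weights and epsilon are illustrative assumptions.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM on a logistic-regression classifier.

    Perturbs input x by eps in the direction of the sign of the loss
    gradient, which typically increases the classification loss.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid prediction
    grad_x = (p - y) * w               # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# toy example: a correctly classified point is pushed toward the boundary
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = fgsm_attack(x, y, w, b, eps=0.6)
loss = lambda v: -np.log(1.0 / (1.0 + np.exp(-(v @ w + b))))
assert loss(x_adv) > loss(x)           # the attack increases the loss
```

The same sign-of-gradient principle carries over to deep models used on remote sensing imagery, which is why the robustness of such models is a safety concern.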
Cross-modality magnetic resonance (MR) image synthesis aims to produce missing modalities from existing ones. Currently, several methods based on deep neural networks have been developed using both source- and target-modalities in a supervised learning manner. However, it remains challenging to obtain a large amount of completely paired multi-modal training data, which inhibits the effectiveness of existing methods. In this paper, we propose a novel Self-supervised Learning-based Multi-scale Transformer Network (SLMT-Net) for cross-modality MR image synthesis, consisting of two stages, i.e., a pre-training stage and a fine-tuning stage. During the pre-training stage, we propose an Edge-preserving Masked AutoEncoder (Edge-MAE), which preserves the contextual and edge information by simultaneously conducting image reconstruction and edge generation. Besides, a patch-wise loss is proposed to treat input patches differently according to their reconstruction difficulty, by measuring the difference between the reconstructed image and the ground truth. In this way, our Edge-MAE can fully leverage a large amount of unpaired multi-modal data to learn effective feature representations. During the fine-tuning stage, we present a Multi-scale Transformer U-Net (MT-UNet) to synthesize the target-modality images, in which a Dual-scale Selective Fusion (DSF) module is proposed to fully integrate multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Moreover, we use the pre-trained encoder as a feature consistency module to measure the difference between high-level features of the synthesized image and those of the ground truth. Experimental results show the effectiveness of the proposed SLMT-Net, and our model can reliably synthesize high-quality images when the training set is partially unpaired. Our code will be publicly available at https://github.com/lyhkevin/SLMT-Net.
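The patch-wise loss idea, weighting patches by how hard they are to reconstruct, can be sketched minimally as follows. The softmax weighting and the `gamma` temperature are assumptions for illustration, not the exact SLMT-Net formulation.

```python
import numpy as np

def patchwise_loss(recon, target, patch=4, gamma=1.0):
    """Patch-wise weighted reconstruction loss (illustrative sketch).

    Patches with larger per-patch error receive larger weights, so the
    model focuses on regions that are difficult to reconstruct.
    """
    h, w = target.shape
    err = (recon - target) ** 2
    # mean squared error within each non-overlapping patch
    per_patch = err.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    weights = np.exp(gamma * per_patch)
    weights /= weights.sum()               # softmax over patches
    return float((weights * per_patch).sum())

# toy example: one badly reconstructed patch dominates the weighted loss
target = np.zeros((8, 8))
recon = np.zeros((8, 8))
recon[:4, :4] = 1.0
assert patchwise_loss(recon, target) > ((recon - target) ** 2).mean()
```

Compared with a plain mean squared error, the weighting prevents many easy (near-zero-error) patches from washing out the signal from the few difficult ones.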
We study a novel and important communication pattern in large-scale model-parallel deep learning (DL), which we call cross-mesh resharding. This pattern emerges when the two paradigms of model parallelism - intra-operator and inter-operator parallelism - are combined to support large models on large clusters. In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh, on which the tensor may be distributed with the same or different layouts. We formalize this as a many-to-many multicast communication problem, and show that existing approaches either are sub-optimal or do not generalize to different network topologies or tensor layouts, which result from different model architectures and parallelism strategies. We then propose two contributions to address cross-mesh resharding: an efficient broadcast-based communication system, and an "overlapping-friendly" pipeline schedule. On microbenchmarks, our overall system outperforms existing ones by up to 10x across various tensor and mesh layouts. On end-to-end training of two large models, GPT-3 and U-Transformer, we improve throughput by 10% and 50%, respectively.
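Why cross-mesh resharding is a many-to-many problem can be seen in a small sketch: when a contiguously sharded tensor moves between meshes of different sizes, each destination shard may overlap several source shards. The 1-D equal-block layout below is a simplifying assumption, not the paper's general layout model.

```python
def resharding_transfers(n, n_src, n_dst):
    """Compute (src, dst, start, stop) transfers needed to reshard a 1-D
    tensor of length n from n_src to n_dst equal contiguous shards.

    Each transfer copies elements [start, stop) from a source device to
    a destination device; overlapping index ranges define the plan.
    """
    src_blk, dst_blk = n // n_src, n // n_dst
    transfers = []
    for d in range(n_dst):
        lo, hi = d * dst_blk, (d + 1) * dst_blk
        for s in range(n_src):
            s_lo, s_hi = s * src_blk, (s + 1) * src_blk
            start, stop = max(lo, s_lo), min(hi, s_hi)
            if start < stop:                     # non-empty overlap
                transfers.append((s, d, start, stop))
    return transfers

plan = resharding_transfers(12, n_src=3, n_dst=4)
# destination shard 1 needs data from two different source shards
assert (0, 1, 3, 4) in plan and (1, 1, 4, 6) in plan
```

A real system must then decide how to realize this plan as collectives (e.g., broadcasts) and how to overlap the transfers with pipeline computation, which is where the paper's two contributions come in.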
This article presents the scientific outcomes of the 2022 Landslide4Sense (L4S) competition organized by the Institute of Advanced Research in Artificial Intelligence (IARAI). The aim of the competition is to automatically detect landslides based on large-scale, multi-source satellite imagery collected globally. The 2022 L4S sought to foster interdisciplinary research on recent developments in deep learning (DL) models for the semantic segmentation task using satellite imagery. In the past few years, DL-based models have achieved great performance on image interpretation, owing to the development of convolutional neural networks (CNNs). The main objective of this article is to present the details and the best-performing algorithms featured in the competition. The winning solutions are elaborated, built on state-of-the-art models such as the Swin Transformer, SegFormer, and U-Net. Advanced machine learning techniques and strategies such as hard-example mining, self-training, and mix-up data augmentation are also covered. Moreover, we describe the L4S benchmark dataset to facilitate further comparisons and report the results of the accuracy assessment online. The data is accessible on the Future Development Leaderboard for future evaluation at https://www.iarai.ac.at/landslide4sense/challenge/, and researchers are invited to submit more prediction results, evaluate the accuracy of their methods, compare them with those of other users, and, ideally, improve the landslide detection results reported in this article.
Synthesis of high-resolution remote sensing images from text descriptions has great potential in many practical application scenarios. Although deep neural networks have achieved great success in many important remote sensing tasks, generating realistic remote sensing images from text descriptions remains very difficult. To address this challenge, we propose a novel text-to-image modern Hopfield network (Txt2Img-MHN). The main idea of Txt2Img-MHN is to conduct hierarchical prototype learning on both text and image embeddings with modern Hopfield layers. Instead of directly learning concrete but highly diverse text-image joint feature representations, Txt2Img-MHN aims to learn the most representative prototypes from text-image embeddings, achieving a coarse-to-fine learning strategy. These learned prototypes can then be utilized to represent more complex semantics in the text-to-image generation task. To better evaluate the realism and semantic consistency of the generated images, we conduct zero-shot classification on real remote sensing data using a classification model trained on the synthesized images. Despite its simplicity, we find that the overall accuracy of the zero-shot classification can serve as a good metric for evaluating the ability to generate images from text. Extensive experiments on the benchmark remote sensing text-image dataset demonstrate that the proposed Txt2Img-MHN can generate more realistic remote sensing images than existing methods. Code and pre-trained models are available online (https://github.com/yonghaoxu/txt2img-mhn).
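The retrieval behavior of a modern Hopfield layer, which underlies the prototype learning above, can be sketched in a few lines: a query state is pulled toward a softmax-weighted combination of stored prototype patterns. This is the generic continuous Hopfield update, not Txt2Img-MHN's exact layer; the `beta` temperature and toy prototypes are assumptions.

```python
import numpy as np

def hopfield_retrieve(query, prototypes, beta=4.0):
    """One update step of a modern (continuous) Hopfield layer.

    prototypes: (k, d) array of stored patterns. With large beta the
    softmax is nearly one-hot and the nearest prototype is retrieved.
    """
    scores = beta * prototypes @ query            # similarity to each prototype
    attn = np.exp(scores - scores.max())          # numerically stable softmax
    attn /= attn.sum()
    return attn @ prototypes                      # retrieved pattern

protos = np.array([[1.0, 0.0], [0.0, 1.0]])
out = hopfield_retrieve(np.array([0.9, 0.1]), protos)
assert np.argmax(out) == 0      # the query retrieves the nearest prototype
```

In a prototype-learning setting, the stored patterns themselves are trained, so the layer learns a compact set of representative embeddings rather than memorizing every training example.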
Solving partial differential equations (PDEs) is an important research tool in physics, biology, and chemistry. As approximate alternatives to numerical methods, PINNs have received extensive attention and played an important role in many fields. However, PINNs use fully connected networks as their model, which limits their fitting ability and their extrapolation ability in both time and space. In this paper, we propose PhyGNNet, a method for solving partial differential equations based on graph neural networks, which consists of encoder, processor, and decoder blocks. In particular, we divide the computational domain into regular grids, define partial differential operators on the grids, and then construct a PDE loss to optimize the network, yielding the PhyGNNet model. Furthermore, we conduct comparative experiments on the Burgers equation and the heat equation to verify our method, and the results show that our method has better fitting and extrapolation ability in both the time and space domains compared with PINN.
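A grid-based PDE loss of this kind can be illustrated with a finite-difference sketch for the 1-D heat equation u_t = u_xx. The discretization below (forward difference in time, central difference in space) is an assumption chosen for illustration, not necessarily the operators PhyGNNet defines on its grids.

```python
import numpy as np

def heat_residual(u, dt, dx):
    """Residual of the 1-D heat equation u_t = u_xx on a regular grid.

    u: (n_t, n_x) array of field values. The residual is near zero when
    u approximately satisfies the PDE; its mean square serves as the
    PDE loss driving network optimization.
    """
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt                       # forward in time
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2  # central in space
    return u_t - u_xx

def pde_loss(u, dt, dx):
    return float((heat_residual(u, dt, dx) ** 2).mean())

# sanity check: the exact solution u(t, x) = exp(-t) * sin(x) gives a
# small residual, so the PDE loss is near zero on this field
x = np.linspace(0.0, np.pi, 64)
t = np.linspace(0.0, 1.0, 64)
u = np.exp(-t)[:, None] * np.sin(x)[None, :]
assert pde_loss(u, t[1] - t[0], x[1] - x[0]) < 1e-3
```

In a PINN or PhyGNNet-style setup, `u` would be the network's prediction at the grid nodes and this loss would be minimized by gradient descent, with the key difference that a graph network aggregates neighboring node features much like the stencil above.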