Unit commitment (UC) is an essential tool for transmission system operators to find the most economical and feasible generation schedules and dispatch signals. Constraint screening has been receiving attention because it holds the promise of removing a large number of inactive or redundant constraints from the UC problem, so that the solution of large-scale UC problems can be accelerated by considering the reduced optimization problem. The standard constraint screening approach relies on optimizing over loads and generations to find binding line-flow constraints, yet the screening is conservative, with a large percentage of constraints still retained in the UC problem. In this paper, we propose a novel machine learning (ML) model to predict the most economical cost given the load inputs. This ML model brings the cost perspective of UC decisions into the optimization-based constraint screening model and can screen out a higher proportion of operational constraints. We verify the proposed method's performance in both sample-aware and sample-agnostic settings, and illustrate that the proposed scheme can further reduce the computation time for a variety of UC problem setups.
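As a rough, hypothetical illustration of the cost-driven screening idea described above, the sketch below checks whether a single line-flow constraint can ever bind once an ML-predicted cost upper bound c_hat is added to the screening problem; the PTDF row, bounds, and solver choice (cvxpy) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of cost-aware constraint screening for one line (not the authors' code).
import cvxpy as cp
import numpy as np

def line_is_redundant(ptdf_row, f_max, c, g_min, g_max, d, c_hat):
    """True if the line-flow limit can never bind inside the cost-restricted region."""
    g = cp.Variable(len(c))                        # nodal generator dispatch
    constraints = [
        g >= g_min, g <= g_max,                    # generation limits
        cp.sum(g) == float(np.sum(d)),             # system power balance with fixed load d
        c @ g <= c_hat,                            # ML-predicted optimal-cost upper bound
    ]
    flow = ptdf_row @ (g - d)                      # DC line flow from nodal injections
    worst = cp.Problem(cp.Maximize(flow), constraints).solve()
    return worst is not None and worst <= f_max    # (repeat with -flow for the reverse limit)
```

Without the `c @ g <= c_hat` cut this reduces to the standard screening problem; the cut shrinks the feasible region, which is what allows a larger share of constraints to be certified redundant.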
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
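For orientation only, here is a tiny sketch of how MONAI pieces compose with plain PyTorch; the keys, target spacing, and channel/class counts are placeholder assumptions rather than recommended settings.

```python
# Minimal illustrative MONAI pipeline: dictionary transforms, a network, and a loss.
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, Spacingd, ScaleIntensityd
from monai.networks.nets import SegResNet
from monai.losses import DiceLoss

transforms = Compose([
    LoadImaged(keys=["image", "label"]),                         # medical formats + metadata
    EnsureChannelFirstd(keys=["image", "label"]),
    Spacingd(keys=["image", "label"], pixdim=(1.0, 1.0, 1.0)),   # resample to 1 mm isotropic
    ScaleIntensityd(keys=["image"]),
])

model = SegResNet(spatial_dims=3, in_channels=1, out_channels=2)  # a standard PyTorch nn.Module
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
```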
The Head and Neck Tumor Segmentation Challenge (HECKTOR) 2022 offers researchers a platform to compare their solutions for segmentation of tumors and lymph nodes from 3D CT and PET images. In this work, we describe our solution to the HECKTOR 2022 segmentation task. We resample all images to a common resolution, crop around the head and neck region, and train SegResNet semantic segmentation networks from MONAI. We use 5-fold cross-validation to select the best model checkpoints. The final submission is an ensemble of 15 models from 3 runs. Our solution (team name NVAUTO) achieves 1st place on the HECKTOR22 challenge leaderboard with an aggregated Dice score of 0.78802.
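A minimal sketch, assuming MONAI's SegResNet with CT and PET stacked as two input channels and three output classes, of how such a multi-checkpoint ensemble can be averaged at inference time; this is an illustration, not the team's actual code.

```python
# Illustrative checkpoint ensembling by averaging softmax probabilities (assumed shapes/paths).
import torch
from monai.networks.nets import SegResNet

def ensemble_predict(ckpt_paths, image):            # image: (1, 2, D, H, W) CT+PET tensor
    probs = None
    for path in ckpt_paths:
        model = SegResNet(spatial_dims=3, in_channels=2, out_channels=3)
        model.load_state_dict(torch.load(path, map_location="cpu"))
        model.eval()
        with torch.no_grad():
            p = torch.softmax(model(image), dim=1)
        probs = p if probs is None else probs + p
    return (probs / len(ckpt_paths)).argmax(dim=1)  # voxel-wise class labels
```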
The Intracranial Hemorrhage Segmentation Challenge (INSTANCE 2022) offers researchers a platform to compare their solutions for segmentation of hemorrhagic stroke regions from 3D CTs. In this work, we describe our solution to INSTANCE 2022. We use a 2D segmentation network, SegResNet from MONAI, operating on slices without resampling. The final submission is an ensemble of 18 models. Our solution (team name NVAUTO) achieves the top place in terms of the Dice metric (0.721) and rank 2 overall.
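The sketch below is again only an illustration (not the team's code) of applying a 2D segmentation network slice by slice to a 3D CT volume without resampling.

```python
# Illustrative slice-wise inference of a 2D network over a 3D volume.
import torch

def segment_slicewise(model_2d, volume):            # volume: (C, D, H, W) tensor
    model_2d.eval()
    preds = []
    with torch.no_grad():
        for z in range(volume.shape[1]):
            logits = model_2d(volume[:, z].unsqueeze(0))       # one (1, C, H, W) axial slice
            preds.append(logits.softmax(dim=1).argmax(dim=1))  # (1, H, W) labels
    return torch.stack(preds, dim=1)                # (1, D, H, W) label volume
```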
The Ischemic Stroke Lesion Segmentation Challenge (ISLES 2022) offers researchers a platform to compare their solutions for segmentation of ischemic stroke regions from 3D MRI. In this work, we describe our solution to the ISLES 2022 segmentation task. We resample all images to a common resolution, use two input MRI modalities (DWI and ADC), and train SegResNet semantic segmentation networks from MONAI. The final submission is an ensemble of 15 models (from 3 runs of 5-fold cross-validation). Our solution (team name NVAUTO) achieves the top place in terms of the Dice metric (0.824) and overall rank 2 (based on the combined metric ranking).
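A small illustrative sketch of assembling the two-modality input with MONAI dictionary transforms; the keys and target spacing are assumptions.

```python
# Illustrative two-channel (DWI + ADC) input assembly with MONAI transforms.
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, Spacingd, ConcatItemsd

transforms = Compose([
    LoadImaged(keys=["dwi", "adc"]),
    EnsureChannelFirstd(keys=["dwi", "adc"]),
    Spacingd(keys=["dwi", "adc"], pixdim=(1.0, 1.0, 1.0)),   # common resolution
    ConcatItemsd(keys=["dwi", "adc"], name="image"),         # stack modalities as channels
])
```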
Decoding images from brain activity has been a long-standing challenge. Owing to the development of deep learning, tools are now available to tackle this problem. Image decoding aims to map neural spike trains to the space of low-level visual features and high-level semantic information. Recently, there have been a few studies on decoding from spike trains; however, these studies pay less attention to the fundamentals of neuroscience, and few of them incorporate receptive fields into visual image reconstruction. In this paper, we propose a deep learning neural network architecture with biological properties to reconstruct visual images from spike trains. To the best of our knowledge, we implement a method that integrates the receptive field property matrix into the loss function. Our model is an end-to-end decoder from neural spike trains to images. We not only merge Gabor filters into the autoencoder used to generate images, but also propose a loss function with receptive field properties. We evaluate our decoder on two datasets containing macaque primary visual cortex neural spikes and salamander retinal ganglion cell (RGC) spikes. Our results show that our method can effectively combine receptive field features to reconstruct images, providing a new visual reconstruction approach based on neural information.
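As a loose, hypothetical sketch of the idea of folding receptive-field information into the loss (the exact formulation is not reproduced here), a reconstruction loss could be weighted by a receptive field property matrix as follows; `rf_matrix` and the weighting scheme are assumptions for illustration.

```python
# Hypothetical receptive-field-weighted reconstruction loss (illustration only).
import torch

def rf_weighted_loss(recon, target, rf_matrix):
    """recon, target: (B, 1, H, W) images; rf_matrix: (1, 1, H, W) aggregated RF weight map."""
    pixel_err = (recon - target) ** 2
    return (rf_matrix * pixel_err).mean() + pixel_err.mean()   # RF-emphasised term + plain MSE
```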
Semantic segmentation of 3D medical images is a challenging task due to the high variability of object shapes and patterns (e.g., organs or tumors). Given the recent success of deep learning in medical image segmentation, Neural Architecture Search (NAS) has been introduced to find high-performance 3D segmentation network architectures. However, because of the massive computational requirements of 3D data and the discrete optimization nature of architecture search, previous NAS methods require long search times or necessitate continuous relaxation, and often lead to sub-optimal network architectures. While one-shot NAS can potentially address these drawbacks, its application to the segmentation domain has not been well studied in the expansive multi-scale, multi-path search space. To enable one-shot NAS for medical image segmentation, our method, named HyperSegNAS, introduces a HyperNet that assists super-net training by incorporating architecture topology information. This HyperNet can be removed once the super-net is trained and introduces no overhead during architecture search. We show that HyperSegNAS yields better-performing and more intuitive architectures compared to previous state-of-the-art (SOTA) segmentation networks; moreover, it can quickly and accurately find good architecture candidates under different computational constraints. Our method is evaluated on public datasets from the Medical Segmentation Decathlon (MSD) challenge and achieves SOTA performance.
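A heavily simplified, hypothetical sketch of the one-shot ingredient: a super-net cell whose candidate operators are gated by a sampled architecture vector, with a small hyper-network conditioning the features on that vector; the operators, gating, and conditioning here are illustrative and not HyperSegNAS's actual design.

```python
# Hypothetical one-shot super-net cell with architecture-conditioned features (illustration only).
import torch
import torch.nn as nn

class SuperCell(nn.Module):
    def __init__(self, ch, n_ops=3):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv3d(ch, ch, 3, padding=1),
            nn.Conv3d(ch, ch, 5, padding=2),
            nn.Identity(),
        ])
        self.hyper = nn.Linear(n_ops, ch)           # maps architecture encoding to channel scales

    def forward(self, x, arch):                     # arch: (n_ops,) gate vector for this cell
        mixed = sum(a * op(x) for a, op in zip(arch, self.ops))
        scale = torch.sigmoid(self.hyper(arch)).view(1, -1, 1, 1, 1)
        return mixed * scale                        # features conditioned on the sampled topology
```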
In the past decade, convolutional neural networks (ConvNets) have dominated the field of medical image analysis. However, their performance can still be limited by the inability to model long-range spatial relations between voxels in an image. Numerous vision Transformers have recently been proposed to address this shortcoming of ConvNets, demonstrating state-of-the-art performance in many medical imaging applications. Transformers are strong candidates for image registration because their self-attention mechanism enables a more precise comprehension of the spatial correspondence between moving and fixed images. In this paper, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. We also introduce three variants of TransMorph, with two diffeomorphic variants that ensure topology-preserving deformations and a Bayesian variant that produces well-calibrated registration uncertainty estimates. The proposed models are extensively validated against a variety of existing registration methods and Transformer architectures using volumetric medical images from two applications: inter-patient brain MRI registration and phantom-to-CT registration. Qualitative and quantitative results demonstrate that TransMorph and its variants lead to substantial improvements over baseline methods, demonstrating the effectiveness of Transformers for medical image registration.
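As a minimal sketch of the warping step shared by learning-based registration models of this kind (shown in 2D for brevity, and not the TransMorph implementation), a predicted displacement field can be applied to the moving image with a spatial transformer:

```python
# Illustrative spatial-transformer warp of a moving image by a displacement field.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """moving: (B, C, H, W); flow: (B, 2, H, W) displacement in pixels, (x, y) order."""
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=moving.dtype),
                            torch.arange(W, dtype=moving.dtype), indexing="ij")
    new_x = xs + flow[:, 0]                          # where each output pixel samples from
    new_y = ys + flow[:, 1]
    grid = torch.stack((2 * new_x / (W - 1) - 1,     # normalise to [-1, 1] for grid_sample
                        2 * new_y / (H - 1) - 1), dim=-1)
    return F.grid_sample(moving, grid, align_corners=True)
```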
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
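For intuition only, the loop below simulates index-based adaptive allocation with exponential rewards; `arm_index` is a simple Bayesian stand-in with an exploration bonus, not the paper's modified Gittins index, and the arm means, prior, and horizon are arbitrary assumptions.

```python
# Illustrative index-based allocation for a 3-armed experiment with exponential rewards.
import numpy as np

rng = np.random.default_rng(0)
true_means = [1.0, 1.5, 2.0]                  # mean rewards of the three arms (made up)
pulls = np.zeros(3)                           # allocations per arm
totals = np.zeros(3)                          # summed rewards per arm

def arm_index(n, s, a0=1.0, b0=1.0, horizon=300):
    post_mean = (b0 + s) / (a0 + n)           # approx. posterior mean reward (Gamma prior on the rate)
    return post_mean * (1 + np.sqrt(2 * np.log(horizon) / max(n, 1)))   # mean + exploration bonus

for t in range(300):
    k = int(np.argmax([arm_index(pulls[i], totals[i]) for i in range(3)]))
    reward = rng.exponential(true_means[k])   # allocate the next participant to arm k
    pulls[k] += 1
    totals[k] += reward
```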
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task. The Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
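A minimal, hypothetical sketch of the BYOL-style core (an online branch predicting the target branch's representation, with the target updated as an EMA of the online weights); the backbone, its assumed `out_dim` attribute, and the projector size are placeholders, and the predictor MLP and difficulty-ranking head are omitted for brevity.

```python
# Hypothetical BYOL-style online/target branches with an EMA target update (illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self, backbone, dim=256):
        super().__init__()
        self.backbone = backbone                    # assumed to expose `out_dim`
        self.projector = nn.Linear(backbone.out_dim, dim)

    def forward(self, x):
        return self.projector(self.backbone(x))

def byol_loss(online_pred, target_repr):
    # negative cosine similarity; the target branch receives no gradient
    return 2 - 2 * F.cosine_similarity(online_pred, target_repr.detach(), dim=-1).mean()

@torch.no_grad()
def ema_update(target, online, m=0.996):
    for pt, po in zip(target.parameters(), online.parameters()):
        pt.data.mul_(m).add_(po.data, alpha=1 - m)
```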