The liver is one of the most critical metabolic organs in vertebrates because of its vital functions in the human body, such as detoxifying the blood of waste products and drugs. Liver disease caused by liver tumors is one of the most common causes of death worldwide, so detecting liver tumors at an early stage of tumor development is a key part of medical treatment. Many imaging modalities can serve as aiding tools for detecting liver tumors. Computed tomography (CT) is the most commonly used imaging modality for soft-tissue organs such as the liver, because it is a non-invasive modality that can be acquired relatively quickly. This paper proposes an efficient automatic liver segmentation framework to detect and segment the liver from abdominal CT scans using the 3D CNN DeepMedic network model. Many studies first segment the liver region precisely and then use the segmented liver region as the input to the tumor segmentation method, since this reduces the error rate caused by mis-segmenting other abdominal organs as tumors. The proposed 3D CNN DeepMedic model has two input pathways instead of the single pathway of the original 3D CNN model. The network is fed multiple versions of the abdominal CT, which helps improve segmentation quality. The proposed model achieves accuracy, sensitivity, specificity, and Dice similarity scores of 94.36%, 94.57%, 91.86%, and 93.14%, respectively. Experimental results demonstrate the applicability of the method.
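The two-pathway design can be illustrated with a minimal PyTorch sketch (all layer widths, patch sizes, and the fusion step are placeholder assumptions, not the exact DeepMedic configuration): one pathway sees the CT patch at full resolution, the other a downsampled version covering wider context, and their features are concatenated before a 1x1x1 segmentation head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathway3DCNN(nn.Module):
    """Minimal DeepMedic-style sketch: two 3D conv pathways over a full-resolution
    and a downsampled CT patch, fused before a 1x1x1 segmentation head."""

    def __init__(self, in_ch=1, n_classes=2, width=30):
        super().__init__()
        def pathway():
            return nn.Sequential(
                nn.Conv3d(in_ch, width, 3), nn.ReLU(inplace=True),
                nn.Conv3d(width, width, 3), nn.ReLU(inplace=True),
                nn.Conv3d(width, width, 3), nn.ReLU(inplace=True),
            )
        self.normal = pathway()    # full-resolution local patch
        self.context = pathway()   # downsampled patch with a wider field of view
        self.head = nn.Sequential(
            nn.Conv3d(2 * width, width, 1), nn.ReLU(inplace=True),
            nn.Conv3d(width, n_classes, 1),
        )

    def forward(self, patch_hi, patch_lo):
        f_hi = self.normal(patch_hi)
        f_lo = self.context(patch_lo)
        # upsample the context features onto the local pathway's grid before fusion
        f_lo = F.interpolate(f_lo, size=f_hi.shape[2:], mode="trilinear",
                             align_corners=False)
        return self.head(torch.cat([f_hi, f_lo], dim=1))

# toy usage: a 25^3 high-resolution patch and a matching downsampled context patch
model = DualPathway3DCNN()
hi = torch.randn(1, 1, 25, 25, 25)
lo = torch.randn(1, 1, 25, 25, 25)
logits = model(hi, lo)   # (1, n_classes, 19, 19, 19) after three valid 3x3x3 convs
```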
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on the processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows hybrid language models to generate text 1.6$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 1.3B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
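As a rough illustration of the ideas described here (not the paper's parameterization, and without FlashConv's FFT-based convolution or state-passing algorithm), the sketch below implements a naive diagonal SSM scan and an H3-like gated block in PyTorch: a one-token shift on the key projection supports recalling the previous token, and the multiplicative interaction with the query supports comparing tokens across the sequence. All dimensions and initializations are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDiagSSM(nn.Module):
    """Naive sequential scan of a diagonal state space model:
    x_t = a * x_{t-1} + b * u_t,  y_t = sum(c * x_t).
    A practical implementation would use an FFT-based convolution instead."""
    def __init__(self, d_model, d_state=16):
        super().__init__()
        self.log_a = nn.Parameter(torch.randn(d_model, d_state))
        self.b = nn.Parameter(0.1 * torch.randn(d_model, d_state))
        self.c = nn.Parameter(0.1 * torch.randn(d_model, d_state))

    def forward(self, u):                         # u: (batch, seq, d_model)
        a = torch.exp(-torch.exp(self.log_a))     # per-channel decay in (0, 1)
        x = u.new_zeros(u.size(0), u.size(2), self.b.size(1))
        ys = []
        for t in range(u.size(1)):
            x = a * x + self.b * u[:, t, :, None]
            ys.append((x * self.c).sum(-1))
        return torch.stack(ys, dim=1)             # (batch, seq, d_model)

class H3LikeBlock(nn.Module):
    """Gated structure in the spirit of H3: shift the key projection by one token
    ("recall"), run a diagonal SSM over k * v, and gate with q ("compare")."""
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.ssm = SimpleDiagSSM(d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                         # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        k_shift = F.pad(k, (0, 0, 1, 0))[:, :-1]  # previous token's key, zero at t=0
        return self.out(q * self.ssm(k_shift * v))

block = H3LikeBlock(d_model=64)
y = block(torch.randn(2, 16, 64))                 # (2, 16, 64)
```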
As machine learning (ML) systems get adopted in more critical areas, it has become increasingly crucial to address the bias that could occur in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble. We report on lessons learned that can help practitioners better select fairness algorithms for their models.
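The abstract does not name the three pre-processing algorithms, so the hedged sketch below uses two simple stand-ins (reweighing-style sample weights and suppression of the protected attribute) to show how an ensemble of differently pre-processed models might be combined by majority vote. The helper names and the voting rule are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(y, s):
    """Reweighing-style weights: make label y and protected attribute s
    statistically independent in the weighted training distribution."""
    w = np.ones(len(y), dtype=float)
    for sv in np.unique(s):
        for yv in np.unique(y):
            mask = (s == sv) & (y == yv)
            expected = (s == sv).mean() * (y == yv).mean()
            observed = mask.mean()
            if observed > 0:
                w[mask] = expected / observed
    return w

def drop_protected(X, protected_col):
    """Suppression-style preprocessing: simply remove the protected column."""
    return np.delete(X, protected_col, axis=1)

def fairness_ensemble(X, y, s, protected_col):
    """Train one model per pre-processing strategy; each entry stores the model
    and the transform it expects at prediction time."""
    models = []
    m1 = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=reweigh(y, s))
    models.append(("reweighing", m1, lambda Z: Z))
    m2 = LogisticRegression(max_iter=1000).fit(drop_protected(X, protected_col), y)
    models.append(("suppression", m2, lambda Z: drop_protected(Z, protected_col)))
    return models

def ensemble_predict(models, X):
    # majority vote over the binary predictions; with two members, ties favor 1,
    # and in practice more pre-processors would be added
    votes = np.stack([m.predict(t(X)) for _, m, t in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```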
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, downlink channel state information (CSI) needs to be sent from users back to the base station (BS), which causes prohibitive feedback overhead. In this paper, we propose a lightweight and adaptive deep learning-based CSI feedback scheme by capitalizing on deep equilibrium models. Different from existing deep learning-based approaches that stack multiple explicit layers, we propose an implicit equilibrium block to mimic the process of an infinite-depth neural network. In particular, the implicit equilibrium block is defined by a fixed-point iteration and the trainable parameters in each iteration are shared, which results in a lightweight model. Furthermore, the number of forward iterations can be adjusted according to the users' computational capability, achieving an online accuracy-efficiency trade-off. Simulation results show that the proposed method achieves performance comparable to existing benchmarks with much-reduced complexity and permits an accuracy-efficiency trade-off at runtime.
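A minimal sketch of the weight-tied equilibrium idea, assuming PyTorch and placeholder dimensions: the same block is applied repeatedly toward a fixed point, so increasing the number of forward iterations spends more computation for a better reconstruction. A full deep equilibrium model would use a root-finding solver and implicit differentiation rather than plain unrolling, and the encoder/decoder here are deliberately simplified.

```python
import torch
import torch.nn as nn

class EquilibriumBlock(nn.Module):
    """Weight-tied block iterated toward a fixed point z* = f(z*, x).
    One set of parameters is reused for every iteration, keeping the model light."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, n_iters=10):
        # n_iters can be chosen at inference time to trade accuracy for compute
        z = torch.zeros_like(x)
        for _ in range(n_iters):
            z = self.f(torch.cat([z, x], dim=-1))
        return z

class DEQCsiFeedback(nn.Module):
    """Encoder compresses the CSI into a short codeword (the feedback); the decoder
    refines a reconstruction through the shared equilibrium block."""
    def __init__(self, csi_dim=2048, code_dim=128):
        super().__init__()
        self.encoder = nn.Linear(csi_dim, code_dim)
        self.expand = nn.Linear(code_dim, csi_dim)
        self.deq = EquilibriumBlock(csi_dim)

    def forward(self, h, n_iters=10):
        codeword = self.encoder(h)          # fed back from the user to the BS
        x = self.expand(codeword)
        return x + self.deq(x, n_iters)     # refined CSI reconstruction

model = DEQCsiFeedback()
h = torch.randn(4, 2048)                    # flattened (real/imag) channel matrix
fast = model(h, n_iters=3)                  # low-complexity user
accurate = model(h, n_iters=20)             # more compute, better reconstruction
```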
Audio Spectrogram Transformer models rule the field of Audio Tagging, outperforming the previously dominant Convolutional Neural Networks (CNNs). Their superiority is based on the ability to scale up and exploit large-scale datasets such as AudioSet. However, Transformers are demanding in terms of model size and computational requirements compared to CNNs. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex transformers. The proposed training schema and the efficient CNN design based on MobileNetV3 result in models that outperform previous solutions in terms of parameter efficiency, computational efficiency, and prediction performance. We provide models of different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of .483 mAP on AudioSet. Source Code available at: https://github.com/fschmid56/EfficientAT
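A hedged sketch of an offline distillation objective for multi-label audio tagging, where the teacher's logits are pre-computed once and stored: the student's loss blends binary cross-entropy on the ground-truth tags with a soft term pulling the student toward the teacher's probabilities. The weighting and the temperature-free form are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def audio_tagging_kd_loss(student_logits, teacher_logits, labels, alpha=0.1):
    """Multi-label distillation sketch: BCE on the ground-truth tags plus BCE
    toward the offline teacher's sigmoid probabilities (alpha is a placeholder)."""
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    soft = F.binary_cross_entropy_with_logits(student_logits,
                                              torch.sigmoid(teacher_logits))
    return alpha * hard + (1.0 - alpha) * soft

# toy usage with 527 AudioSet-style classes
student_logits = torch.randn(8, 527, requires_grad=True)
teacher_logits = torch.randn(8, 527)        # loaded from the offline teacher
labels = (torch.rand(8, 527) > 0.99).float()
loss = audio_tagging_kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```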
Recently, the problem of traffic accident risk forecasting has been attracting the attention of the intelligent transportation systems community due to its significant impact on traffic clearance. This problem is commonly addressed in the literature with data-driven approaches that model the influence of spatial and temporal events, as these have been shown to be crucial for traffic accident risk forecasting. To achieve this, most approaches build separate architectures to capture spatio-temporal correlation features, which makes them inefficient on large traffic accident datasets. In this work, we therefore propose a novel unified framework, namely a contextual vision transformer, which can be trained end-to-end and effectively models both the spatial and temporal aspects of the problem while providing accurate traffic accident risk predictions. We evaluate and compare the performance of our proposed method against baseline approaches from the literature on two large-scale traffic accident datasets from two different geographical locations. The results show a significant improvement of roughly 2% in RMSE score over previous state-of-the-art (SOTA) works in the literature. Moreover, our proposed method outperforms the SOTA techniques on both datasets while requiring about 23 times fewer computational resources.
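One way to picture such a unified spatio-temporal transformer is the hedged sketch below: each (region, time-step) cell of a risk grid becomes a token, a learned position embedding covers both space and time, and a standard encoder produces per-region risk estimates. The grid size, feature set, and prediction head are placeholder assumptions, not the paper's contextual vision transformer.

```python
import torch
import torch.nn as nn

class AccidentRiskTransformer(nn.Module):
    """Hedged sketch: tokens are (region, time-step) cells of a spatio-temporal
    grid; a transformer encoder mixes information across space and time before
    a per-region risk head predicts the next interval."""
    def __init__(self, n_regions=64, n_steps=24, feat_dim=8, d_model=128):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_steps * n_regions, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, 1)
        self.n_regions, self.n_steps = n_regions, n_steps

    def forward(self, x):                      # x: (batch, n_steps, n_regions, feat_dim)
        b = x.size(0)
        tokens = self.embed(x).reshape(b, self.n_steps * self.n_regions, -1)
        h = self.encoder(tokens + self.pos)
        # predict next-interval risk for every region from its latest token
        latest = h.reshape(b, self.n_steps, self.n_regions, -1)[:, -1]
        return self.head(latest).squeeze(-1)   # (batch, n_regions)

model = AccidentRiskTransformer()
risk = model(torch.randn(2, 24, 64, 8))        # (2, 64) risk scores
```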
As collaborative robots enter industrial shop floors, logistics, and manufacturing, fast and flexible evaluation of human-robot interaction becomes increasingly important. The availability of consumer headsets for virtual and augmented reality has lowered the barrier of entry for virtual environments. In this paper, we explore the use of such environments for simulating different aspects of robots in user studies and present the first findings from our own research. Finally, we suggest directions for applying and using simulation in human-robot interaction.
Predicting traffic incident duration is a hard problem because of the stochastic nature of spatio-temporal event occurrences, the scarcity of information at the start of a reported traffic disruption, and the lack of advanced methods in transportation engineering for extracting insights from past incidents. This paper proposes a new fusion framework for predicting incident duration from limited information by integrating machine learning with traffic flow/speed and incident descriptions as features, encoded via multiple deep learning methods (an ANN autoencoder and a character-level LSTM-ANN sentiment classifier). The paper constructs a cross-disciplinary modeling approach spanning transportation and data science. The approach improves incident duration prediction accuracy over the best-performing ML models applied to baseline incident reports. Results show that our proposed method can improve accuracy by 60% compared to standard linear or support vector regression models, with a further 7% improvement over a hybrid deep-learning auto-encoded GBDT model, which appears to outperform all other models. The application area is the city of San Francisco, which is rich in traffic incident logs (the Countrywide Traffic Accident Dataset) and historical traffic congestion information (5-minute precision measurements from the Caltrans Performance Measurement System).
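The fusion idea can be sketched as follows (a hypothetical illustration, not the paper's implementation): an autoencoder compresses the raw flow/speed series, a character-level LSTM encodes the incident description, and both learned embeddings are concatenated with the baseline report features before fitting a GBDT regressor. The 288-step input assumes one day of 5-minute measurements; all names, sizes, and the pre-training of the encoders are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingRegressor

class TrafficAutoencoder(nn.Module):
    """Compress a raw flow/speed series into a small bottleneck used as features."""
    def __init__(self, n_in=288, n_code=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_code))
        self.dec = nn.Sequential(nn.Linear(n_code, 64), nn.ReLU(), nn.Linear(64, n_in))

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code            # reconstruction for pre-training, code as feature

class CharLSTMEncoder(nn.Module):
    """Character-level LSTM over the incident description; the final hidden state
    stands in for the sentiment/severity signal extracted from the report text."""
    def __init__(self, vocab=128, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, char_ids):               # (batch, seq_len) of character codes
        _, (h, _) = self.lstm(self.emb(char_ids))
        return h[-1]                            # (batch, hidden)

def fused_features(report_feats, traffic_series, descriptions, ae, text_enc):
    """Concatenate baseline report features with learned traffic and text embeddings."""
    with torch.no_grad():
        _, traffic_code = ae(traffic_series)
        text_code = text_enc(descriptions)
    return np.hstack([report_feats, traffic_code.numpy(), text_code.numpy()])

# downstream regressor: a GBDT over the fused feature matrix
# X_fused = fused_features(...); GradientBoostingRegressor().fit(X_fused, durations)
```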
We consider the problem of learning optimal threshold policies for control problems. A threshold policy makes a control decision by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by the other elements of the system state. By exploiting the monotone property of threshold policies, we prove that their policy gradient has a surprisingly simple expression. We use this simple expression to construct an actor-critic algorithm for learning optimal threshold policies. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms because of its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy of an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
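A minimal sketch of a stochastic threshold policy, assuming the acting probability is a sigmoid of the gap between the thresholded state element and a learned, state-dependent threshold. The REINFORCE-style surrogate shown is generic; it does not reproduce the paper's simplified gradient expression derived from monotonicity, and the network sizes and temperature are placeholders.

```python
import torch
import torch.nn as nn

class ThresholdPolicy(nn.Module):
    """Stochastic relaxation of a threshold policy: act (a=1) with probability
    sigmoid((x - theta(context)) / temperature), where x is the thresholded
    state element and theta is a small network of the remaining state."""
    def __init__(self, context_dim, temperature=0.1):
        super().__init__()
        self.theta = nn.Sequential(nn.Linear(context_dim, 32), nn.Tanh(),
                                   nn.Linear(32, 1))
        self.temperature = temperature

    def forward(self, x, context):
        # probability of crossing the learned, state-dependent threshold
        return torch.sigmoid((x - self.theta(context).squeeze(-1)) / self.temperature)

policy = ThresholdPolicy(context_dim=4)
x = torch.randn(8)                      # thresholded element of the state
context = torch.randn(8, 4)             # remaining state elements
p = policy(x, context)
action = torch.bernoulli(p)
# REINFORCE-style surrogate: log-prob of the sampled action times an advantage
logp = torch.where(action == 1, torch.log(p + 1e-8), torch.log(1 - p + 1e-8))
advantage = torch.randn(8)              # placeholder for a critic's advantage estimate
loss = -(logp * advantage).mean()
loss.backward()                         # gradients flow into the threshold network
```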