This paper summarizes and evaluates various approaches, methods, and techniques for pursuing fairness in artificial intelligence (AI) systems. It examines the merits and shortcomings of these measures and proposes practical guidelines for defining, measuring, and preventing bias in AI. In particular, it cautions against some simple yet common approaches to evaluating bias in AI systems and offers more sophisticated and effective alternatives. The paper also addresses widespread controversy and confusion in the field by providing a common language for the different stakeholders of high-impact AI systems. It describes the various trade-offs involved in AI fairness and offers practical advice for balancing them. It provides techniques for evaluating the costs and benefits of fairness targets and defines the role of human judgment in setting those targets. The paper offers discussion and guidelines for AI practitioners, organizational leaders, and policymakers, along with links to additional material for more technical audiences. Numerous real-world examples are included to clarify the concepts, challenges, and recommendations from a practical perspective.
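To ground the warning about simplistic bias assessment, the following is a minimal sketch (not from the paper; the metrics, data, and group encoding are illustrative) of two common group-fairness checks, demographic parity and equalized odds, computed from binary predictions:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Max difference in TPR/FPR between two groups."""
    gaps = []
    for label in (0, 1):  # condition on true label: FPR (0) and TPR (1)
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Illustrative synthetic data only.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.4 + 0.1 * group).astype(int)
print(demographic_parity_gap(y_pred, group))
print(equalized_odds_gap(y_true, y_pred, group))
```

A single gap statistic of this kind can hide base-rate differences between groups and conflicts between metrics, which is the sort of limitation the paper's more sophisticated alternatives are meant to address.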
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/.
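To make the multi-task readout idea concrete, here is a minimal sketch, with an assumed toy architecture rather than the benchmark's baseline models, of a shared encoder feeding both a global brain-region head and a pixel-level segmentation head:

```python
import torch
import torch.nn as nn

class MultiTaskReadout(nn.Module):
    """Shared encoder with a region-classification head and a segmentation head."""
    def __init__(self, n_regions=4, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Global readout: which brain region the patch comes from.
        self.region_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_regions))
        # Dense readout: per-pixel microstructure labels.
        self.seg_head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        h = self.encoder(x)
        return self.region_head(h), self.seg_head(h)

model = MultiTaskReadout()
x = torch.randn(2, 1, 128, 128)               # grayscale X-ray patches
region_logits, seg_logits = model(x)
print(region_logits.shape, seg_logits.shape)  # (2, 4), (2, 4, 128, 128)
```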
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm for enhancing a low-resolution (LR) input image or video by introducing an additional high-resolution (HR) reference image. Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images. However, performing local transfer is difficult because of two gaps between input and reference images: the transformation gap (e.g., scale and rotation) and the resolution gap (e.g., HR and LR). To tackle these challenges, we propose C2-Matching in this work, which performs explicit, robust matching across transformation and resolution. 1) To bridge the transformation gap, we propose a contrastive correspondence network, which learns transformation-robust correspondences using augmented views of the input image. 2) To address the resolution gap, we adopt teacher-student correlation distillation, which distills knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR matching. 3) Finally, we design a dynamic aggregation module to address the potential misalignment between input images and reference images. In addition, to faithfully evaluate the performance of reference-based image super-resolution under a realistic setting, we contribute the Webly-Referenced SR (WR-SR) dataset, mimicking the practical usage scenario. We also extend C2-Matching to the reference-based video super-resolution task, where an image taken in a similar scene serves as the HR reference image. Extensive experiments demonstrate that our proposed C2-Matching significantly outperforms state-of-the-art methods on the standard CUFED5 benchmark and also boosts the performance of video SR when the C2-Matching component is incorporated into video SR pipelines.
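A minimal sketch of the contrastive-correspondence ingredient, using a generic InfoNCE-style loss over patch features from two augmented views (the feature dimensions and temperature are assumptions, not the authors' C2-Matching code):

```python
import torch
import torch.nn.functional as F

def contrastive_correspondence_loss(feat_a, feat_b, temperature=0.07):
    """InfoNCE over corresponding patch features from two augmented views.

    feat_a, feat_b: (N, D) features; row i of each view depicts the same patch.
    """
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature   # similarity of every patch pair
    targets = torch.arange(a.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

feat_a = torch.randn(256, 128)  # patch features from the original view
feat_b = torch.randn(256, 128)  # features from a scaled/rotated view
print(contrastive_correspondence_loss(feat_a, feat_b))
```

Minimizing this loss encourages patch features to be invariant to the applied augmentations (e.g., scale and rotation), which is the transformation robustness the correspondence network targets.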
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that aid medical professionals in diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty of acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results on two lung ultrasound datasets and demonstrate that our model achieves performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
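A minimal sketch of the small-data classification stage (synthetic stand-in features in place of the pooled 3D sparse-coding activations; the detector and sparse coder are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder features standing in for sparse-coding activations extracted
# from detector-cropped lung regions (15 positive, 32 negative videos).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.5, 1.0, (15, 64)),   # pneumothorax clips
               rng.normal(0.0, 1.0, (32, 64))])  # normal clips
y = np.array([1] * 15 + [0] * 32)

# With so few examples, a heavily regularized linear model and leave-one-out
# evaluation keep the hypothesis space small, loosely mirroring the paper's
# use of SME knowledge to constrain the model.
clf = LogisticRegression(C=0.1, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.2f}")
```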
Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures the priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks spanning nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for the novel kidney class, unseen in training, using approximately 40% to 60% of the labels otherwise required with heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers improvements of 22.6% and 10.2% in Dice score for kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies.
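One step of the batch-mode prioritisation loop might look like the following minimal sketch, where a simple scoring function and a fake evaluator stand in for the controller network and the segmentation task predictor:

```python
import numpy as np

def one_acquisition_step(pool, controller_score, train_and_eval, batch_size=8):
    """Score the unlabelled pool, annotate the top batch, and reward the
    controller with the task-performance gain (the MDP reward)."""
    scores = np.array([controller_score(x) for x in pool])
    chosen = np.argsort(scores)[-batch_size:]   # highest-priority images
    dice_before = train_and_eval(acquired=[])
    dice_after = train_and_eval(acquired=list(chosen))
    reward = dice_after - dice_before           # positive gain is rewarded
    return chosen, reward

# Toy stand-ins: a random pool, a heuristic scorer, and a fake evaluator.
rng = np.random.default_rng(0)
pool = [rng.random(16) for _ in range(100)]
chosen, reward = one_acquisition_step(
    pool,
    controller_score=lambda x: x.std(),                           # uncertainty proxy
    train_and_eval=lambda acquired: 0.70 + 0.002 * len(acquired), # fake Dice
)
print(chosen, reward)
```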
Data-driven modeling approaches such as jump tables are promising techniques for modeling populations of resistive random-access memory (ReRAM) or other emerging memory devices in hardware neural network simulations. As these tables rely on data interpolation, this work explores open questions about their fidelity in relation to the stochastic device behavior they model. We study how various jump table device models impact the attained network performance estimates, a concept we define as modeling bias. Two methods of jump table device modeling, binning and Optuna-optimized binning, are explored using synthetic data with known distributions for benchmarking purposes, as well as experimental data obtained from TiOx ReRAM devices. Results on a multi-layer perceptron trained on MNIST show that device models based on binning can behave unpredictably, particularly when the device dataset contains few points, sometimes over-promising and sometimes under-promising target network accuracy. This paper also proposes device-level metrics that exhibit trends similar to the network-level modeling bias metric. The proposed approach opens the possibility of future investigations into statistical device models with better performance, as well as experimentally verified modeling bias in different in-memory computing and neural network architectures.
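A minimal sketch of jump-table construction by binning (synthetic conductance-update data; the bin count and noise model are assumptions, not the paper's measurement protocol):

```python
import numpy as np

def build_jump_table(g, dg, n_bins=20):
    """Bin device conductance and store the empirical update ('jump')
    distribution observed in each bin."""
    edges = np.linspace(g.min(), g.max(), n_bins + 1)
    table = [dg[(g >= lo) & (g < hi)] for lo, hi in zip(edges[:-1], edges[1:])]
    return edges, table

def sample_jump(g_now, edges, table, rng):
    """Model a potentiation pulse by resampling a measured jump from the bin
    containing the current conductance."""
    i = np.clip(np.searchsorted(edges, g_now) - 1, 0, len(table) - 1)
    jumps = table[i]
    return rng.choice(jumps) if len(jumps) else 0.0

rng = np.random.default_rng(0)
g = rng.uniform(10e-6, 100e-6, 5000)                         # conductances (S)
dg = 2e-6 * (1 - g / 100e-6) + rng.normal(0, 5e-7, g.size)   # noisy updates
edges, table = build_jump_table(g, dg)
print(sample_jump(50e-6, edges, table, rng))
```

When a bin holds only a handful of measured jumps, the resampled distribution can misrepresent the true device stochasticity, which is one source of the modeling bias the paper quantifies.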
Fake videos represent an important misinformation threat. While existing forensic networks have demonstrated strong performance on image forgeries, recent results reported on the Adobe VideoSham dataset show that these networks fail to identify fake content in videos. In this paper, we propose a new network that is able to detect and localize a wide variety of video forgeries and manipulations. To overcome the challenges that existing networks face when analyzing videos, our network utilizes forensic embeddings to capture traces left by manipulation, context embeddings to exploit the conditional dependence of forensic traces on local scene content, and spatial attention provided by a deep, transformer-based attention mechanism. We create several new video forgery datasets and use these, along with publicly available data, to experimentally evaluate our network's performance. The results show that our proposed network is able to identify a diverse set of video forgeries, including those not encountered during training. Furthermore, our results reinforce recent findings that image forensic networks largely fail to identify fake content in videos.
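A minimal sketch of how forensic and context embeddings might be fused under transformer-based attention (layer sizes and the fusion scheme are illustrative assumptions, not the proposed network):

```python
import torch
import torch.nn as nn

class ForgeryLocalizer(nn.Module):
    """Concatenate per-patch forensic and context embeddings, let a
    transformer encoder attend across patches, and score each patch."""
    def __init__(self, d_forensic=128, d_context=128, d_model=256):
        super().__init__()
        self.fuse = nn.Linear(d_forensic + d_context, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.attend = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)   # per-patch manipulation logit

    def forward(self, forensic_emb, context_emb):
        h = self.fuse(torch.cat([forensic_emb, context_emb], dim=-1))
        return self.score(self.attend(h)).squeeze(-1)

model = ForgeryLocalizer()
forensic = torch.randn(2, 196, 128)     # e.g. a 14x14 grid of patch embeddings
context = torch.randn(2, 196, 128)
print(model(forensic, context).shape)   # (2, 196) patch-level logits
```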
Three-dimensional (3D) freehand ultrasound (US) reconstruction without a tracker can be advantageous over its two-dimensional or tracked counterparts in many clinical applications. In this paper, we propose to estimate the 3D spatial transformation between US frames from both past and future 2D images, using feed-forward and recurrent neural networks (RNNs). With the temporally available frames, a further multi-task learning algorithm is proposed to utilise a large number of auxiliary transformation-predicting tasks between them. Using more than 40,000 US frames acquired from 228 scans on 38 forearms of 19 volunteers, the hold-out test performance is quantified by frame prediction accuracy, volume reconstruction overlap, accumulated tracking error and final drift, based on ground truth from an optical tracker. The results show the importance of modelling the temporally and spatially correlated input frames as well as output transformations, with further improvement owing to additional past and/or future frames. The best-performing model was associated with predicting the transformation between moderately spaced frames, with an interval of less than ten frames at 20 frames per second (fps). Little benefit was observed from adding frames more than one second away from the predicted transformation, with or without LSTM-based RNNs. Interestingly, with the proposed approach, an explicit within-sequence loss that encourages consistency in composing transformations or minimises accumulated error may no longer be required. The implementation code and volunteer data will be made publicly available, ensuring reproducibility and enabling further research.
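To make the role of accumulated error concrete, here is a minimal sketch of composing predicted frame-to-frame transformations into frame-to-reference poses (homogeneous 4x4 matrices; random small rigid transforms stand in for the network predictions):

```python
import numpy as np

def accumulate_poses(inter_frame_transforms):
    """Compose predicted frame-to-frame 4x4 transforms into poses relative to
    the first frame; errors in each step accumulate as tracking drift."""
    pose = np.eye(4)
    poses = [pose]
    for T in inter_frame_transforms:
        pose = pose @ T
        poses.append(pose)
    return poses

def small_rigid_transform(rng):
    """Small rotation about z plus a small translation, as a stand-in for a
    network-predicted inter-frame transform."""
    a = rng.normal(0, 0.01)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = rng.normal(0, 0.1, 3)   # mm-scale translation
    return T

rng = np.random.default_rng(0)
poses = accumulate_poses([small_rigid_transform(rng) for _ in range(99)])
print(poses[-1][:3, 3])   # final-frame position, where final drift is measured
```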
We propose a novel machine-learning-based approach to solving bilevel programs that involve a large number of independent followers, which as a special case includes two-stage stochastic programming. We propose an optimization model that explicitly considers a sampled subset of followers and exploits a machine learning model to estimate the objective values of unsampled followers. Unlike existing approaches, we embed machine learning model training into the optimization problem, which allows us to employ general follower features that cannot be represented using leader decisions. We prove bounds on the optimality gap of the generated leader decision as measured by the original objective function, which considers the full follower set. We then develop follower sampling algorithms to tighten the bounds, and a representation learning approach for follower features that can serve as inputs to the embedded machine learning model. Using synthetic instances of a cycling network design problem, we compare the computational performance of our approach against baseline methods. Our approach provides more accurate predictions of follower objective values and, more importantly, generates leader decisions of higher quality. Finally, we conduct a real-world case study of cycling infrastructure planning, in which we apply our approach to solve a network design problem with over one million followers. Our approach exhibits favorable performance compared with current practice in cycling network expansion.
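A minimal sketch of the estimation idea, with an ordinary ridge regression fitted outside the optimization as a stand-in for the embedded, jointly trained model, showing how exactly evaluated sampled followers extrapolate to the full follower set:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_followers, n_sampled = 100_000, 500

# Follower features (e.g. learned representations) and true objective values
# under a fixed leader decision; both synthetic for illustration.
features = rng.normal(size=(n_followers, 16))
true_obj = features @ rng.normal(size=16) + rng.normal(0, 0.1, n_followers)

# Evaluate only a sampled subset exactly, fit a model on it, and use the
# model to estimate the objectives of all unsampled followers.
sample = rng.choice(n_followers, n_sampled, replace=False)
model = Ridge(alpha=1.0).fit(features[sample], true_obj[sample])
unsampled = np.setdiff1d(np.arange(n_followers), sample)
estimate = true_obj[sample].sum() + model.predict(features[unsampled]).sum()
print(f"relative error: {abs(estimate - true_obj.sum()) / abs(true_obj.sum()):.4f}")
```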
The oblique random survival forest (RSF) is an ensemble supervised learning method for right-censored outcomes. Trees in an oblique RSF are grown using linear combinations of predictor variables to create branches, whereas standard RSFs use a single predictor. Oblique RSF ensembles often have higher prediction accuracy than standard RSF ensembles. However, assessing all possible linear combinations of predictor variables induces significant computational overhead that limits applications to large-scale datasets. In addition, few methods have been developed for interpreting oblique RSF ensembles, and they remain more difficult to interpret than their axis-based counterparts. We introduce a method to increase the computational efficiency of the oblique RSF and a method to estimate the importance of individual predictor variables with the oblique RSF. Our strategy to reduce computational overhead makes use of Newton-Raphson scoring, a classical optimization technique that we apply to the Cox partial likelihood function within each non-leaf node of decision trees. We estimate the importance of individual predictors for the oblique RSF by negating each coefficient used for a given predictor in linear combinations, and then computing the resulting reduction in accuracy. In benchmarking experiments, we find our implementation of the oblique RSF to be approximately 450 times faster, with better (lower) Brier scores, than existing software for oblique RSFs. In simulation studies, we find that this 'negation importance' discriminates between relevant and irrelevant predictors more reliably than permutation importance, Shapley additive explanations, and a previously introduced technique for measuring variable importance in oblique RSFs based on analysis of variance. The methods introduced in the current study are available in the aorsf R package.
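A minimal sketch of negation importance for a single oblique split (a toy classification surrogate, not the aorsf implementation, which aggregates out-of-bag survival accuracy across the forest):

```python
import numpy as np

def negation_importance(X, y, beta, threshold, predictor):
    """Importance of one predictor in an oblique split: accuracy with the
    fitted coefficients minus accuracy after negating that coefficient."""
    def accuracy(b):
        side = (X @ b > threshold).astype(int)   # which branch each row takes
        return (side == y).mean()
    beta_neg = beta.copy()
    beta_neg[predictor] *= -1.0                  # negate one coefficient
    return accuracy(beta) - accuracy(beta_neg)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
beta = np.array([1.5, 0.1, 0.0])                 # predictor 0 drives the split
y = (X @ beta + rng.normal(0, 0.5, 500) > 0).astype(int)
for j in range(3):
    print(j, round(negation_importance(X, y, beta, 0.0, j), 3))
```

Negating the coefficient of an influential predictor sends many observations down the wrong branch and degrades accuracy, while negating a near-zero coefficient changes almost nothing, which is what lets the drop serve as an importance score.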