Histopathological cancer diagnostics has become more complex, and the increasing number of biopsies is a challenge for most pathology laboratories. Thus, the development of automatic methods for the assessment of histopathological cancer sections would be of value. In this study, we used 624 whole slide images (WSIs) of breast cancer from a Norwegian cohort. We propose a cascaded convolutional neural network design, called H2G-Net, for semantic segmentation of gigapixel histopathological images. The design involves a detection stage using a patch-wise method, and a refinement stage using a convolutional autoencoder. To validate the design, we conducted an ablation study to assess the impact of selected components in the pipeline on tumour segmentation. Guiding the segmentation using hierarchical sampling and deep heatmap refinement proved beneficial when segmenting histopathological images. We found a significant improvement when using a refinement network to post-process the generated tumour segmentation heatmaps. The overall best design achieved a Dice score of 0.933 on an independent test set of 90 WSIs. The design outperformed single-resolution approaches, such as cluster-guided, patch-wise high-resolution classification using MobileNetV2 (0.872) and a low-resolution U-Net (0.874). In addition, segmentation of a representative ×400 WSI took ~58 seconds using only the CPU. The findings demonstrate the potential of using a refinement network to improve patch-wise predictions. The solution is efficient and does not require overlapping patch inference or merging. Furthermore, we show that deep neural networks can be trained using a random sampling scheme that balances across multiple different labels simultaneously, without the need to store patches on disk. Future work should involve more efficient patch generation and sampling, as well as improved clustering.
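The detection stage described above can be illustrated with a minimal sketch: a gigapixel image is tiled into patches, a patch classifier scores each tile, and the scores form a low-resolution heatmap that a refinement network would then post-process. The classifier here is a trivial placeholder, not the MobileNetV2 model used in the study, and all parameter values are illustrative.

```python
import numpy as np

def patchwise_heatmap(wsi, patch=256, predict=None):
    """Tile a WSI-like array and build a low-resolution tumour heatmap.

    `predict` stands in for the trained patch classifier; by default it
    is a toy intensity heuristic, purely for illustration.
    """
    if predict is None:
        predict = lambda tile: float(tile.mean() > 0.5)  # placeholder classifier
    h, w = wsi.shape[:2]
    gh, gw = h // patch, w // patch
    heat = np.zeros((gh, gw), dtype=np.float32)
    for i in range(gh):
        for j in range(gw):
            tile = wsi[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            heat[i, j] = predict(tile)
    return heat  # a refinement network would post-process this heatmap

demo = np.random.rand(1024, 1024)
print(patchwise_heatmap(demo).shape)  # one heatmap cell per 256x256 patch
```

Because each patch is scored independently and written into a fixed grid, no overlapping inference or merging step is needed, which is what keeps the pipeline fast on CPU.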
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
The rise of artificial intelligence in human contexts places new demands on systems to be transparent and explainable. We examine some anthropomorphic ideas and principles relevant to such accountability in order to develop a theoretical framework for thinking about digital systems in complex human contexts and the problem of explaining their behaviour. Structurally, complex systems are made of modular and hierarchical components, which we model abstractly with new notions of modes and mode transitions. A mode is an independent component of a system with its own objectives, monitoring data, and algorithms. The behaviour of a mode, including its transitions to other modes, is determined by belief functions that interpret the mode's monitoring data in the light of its objectives and algorithms. We show how these belief functions can help explain system behaviour by visualising their evaluation in higher-dimensional geometric spaces. These ideas are formalised using abstract and concrete simplicial complexes.
In this paper, we use a range of modelling techniques to investigate whether abstract phones could emerge from exposure to continuous speech sounds. In effect, the study represents an attempt to test a tenet of usage-based linguistics, in which linguistic abstractions emerge from language use. Our task focuses on the simplest of such hypothesised abstractions. We test two opposing principles regarding the development of language knowledge in linguistically untrained language users: Memory-Based Learning (MBL) and Error-Correction Learning (ECL). A process of generalisation underlies the abstractions that linguists operate with, and we probed whether MBL and ECL could give rise to a type of language knowledge that resembles linguistic abstractions. Each model was presented with a large amount of pre-processed speech produced by one speaker. We assessed the consistency or stability of what these simple models have learned and their ability to give rise to abstract categories. The two types of models fare differently with regard to these tests. We show that ECL models can learn abstractions, and that at least part of the phone inventory and its grouping into traditional types can be reliably identified from the input.
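The error-correction principle can be illustrated with a delta-rule (Rescorla-Wagner style) update, in which cue weights are adjusted in proportion to the prediction error. This is a deliberate simplification: the study's models operate on pre-processed speech, and the cues, outcome, and learning rate below are illustrative.

```python
import numpy as np

def ecl_update(weights, cues, outcome, lr=0.1):
    """One error-correction learning step: move the weights in the
    direction that reduces the prediction error for this event."""
    prediction = weights @ cues
    error = outcome - prediction
    return weights + lr * error * cues

# Toy example: two acoustic cues; only cue 0 ever co-occurs with the
# (hypothetical) phone category, so its weight converges towards 1.
w = np.zeros(2)
for _ in range(100):
    w = ecl_update(w, cues=np.array([1.0, 0.0]), outcome=1.0)
print(np.round(w, 2))  # cue 0 strongly weighted, cue 1 untouched
```

The contrast with memory-based learning is that MBL stores exemplars and generalises at retrieval time, whereas ECL compresses experience into weights during learning, which is one route by which category-like abstractions can arise.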
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this interleaved two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, and present detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
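The re-indexing step that Temporal Value Modeling presupposes can be sketched as follows: a sequence of structured objects (dicts of key-value pairs) is pivoted into one value sequence per key. The function name and example data are illustrative, not from the paper.

```python
from collections import defaultdict

def key_conditioned_sequences(objects, keys=None):
    """Pivot a sequence of structured objects into per-key value
    sequences, the layout over which TVM-style encoders would operate.
    Missing attributes are marked with None."""
    keys = keys or sorted({k for obj in objects for k in obj})
    seqs = defaultdict(list)
    for obj in objects:
        for k in keys:
            seqs[k].append(obj.get(k))  # None where the key is absent
    return dict(seqs)

# Hypothetical event log: each event is one structured object.
events = [{"status": "open", "user": "a"},
          {"status": "open"},
          {"status": "closed", "user": "b"}]
print(key_conditioned_sequences(events))
```

Each per-key sequence is then encoded independently, and a second stage (Key Aggregation) attends across the resulting key representations; this is why the approach scales to longer sequences than a flattened representation, whose length grows with keys × time steps.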
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods that better aid clinicians in their decision-making and improve patient outcomes, reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
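The degradation step described above, Gaussian windowing in the spectral domain, can be sketched with a Fourier-domain simulation: narrowing the spectrum of an A-scan broadens its axial point-spread function. The window width `sigma_frac` is an illustrative parameter, not a value from the study.

```python
import numpy as np

def narrow_bandwidth(ascan, sigma_frac=0.15):
    """Simulate a narrower source spectrum by applying a Gaussian window
    to the spectral (Fourier) representation of an A-scan. A narrower
    spectrum corresponds to a lower axial resolution."""
    n = ascan.size
    spectrum = np.fft.fftshift(np.fft.fft(ascan))
    k = np.arange(n) - n // 2
    window = np.exp(-0.5 * (k / (sigma_frac * n)) ** 2)  # unit gain at DC
    return np.real(np.fft.ifft(np.fft.ifftshift(spectrum * window)))

impulse = np.zeros(256)
impulse[128] = 1.0                 # an ideal point reflector
blurred = narrow_bandwidth(impulse)
print(blurred.max())               # peak reduced: energy spread axially
```

A super-resolution network such as the altered SRGAN is then trained on pairs of such degraded A-scans and their full-bandwidth originals to learn the inverse mapping.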
Real-life tools for decision-making in many critical domains are based on ranking results. With the increasing awareness of algorithmic fairness, recent works have presented measures for fairness in ranking. Many of those definitions consider the representation of different ``protected groups'' in the top-$k$ ranked items, for any reasonable $k$. Given the protected groups, confirming algorithmic fairness is a simple task. However, the groups' definitions may be unknown in advance. In this paper, we study the problem of detecting groups with biased representation in the top-$k$ ranked items, eliminating the need to pre-define protected groups. The number of possible groups can be exponential, making the problem hard. We propose efficient search algorithms for two different fairness measures: global representation bounds, and proportional representation. We then propose a method to explain the bias in the representation of groups using the notion of Shapley values. We conclude with an experimental study, showing the scalability of our approach and demonstrating the usefulness of the proposed algorithms.
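For a single candidate group, a proportional-representation check is straightforward; the hard part the paper addresses is searching the exponential space of candidate groups. The sketch below shows only the per-group check, with a simplified criterion and an illustrative slack parameter `alpha`, not the paper's exact measure.

```python
def proportionally_represented(ranking, group, k, alpha=0.8):
    """Simplified check: a group's share of the top-k should be at
    least alpha times its share of the full ranking. alpha and the
    exact form of the test are illustrative assumptions."""
    topk_share = sum(1 for item in ranking[:k] if item in group) / k
    overall_share = sum(1 for item in ranking if item in group) / len(ranking)
    return topk_share >= alpha * overall_share

# Hypothetical ranking: items from group "b" make up half the list
# but only a quarter of the top-4, so the check fails.
ranking = ["a1", "b1", "a2", "a3", "b2", "b3", "b4", "a4"]
group_b = {"b1", "b2", "b3", "b4"}
print(proportionally_represented(ranking, group_b, k=4))
```

Because this check is cheap per group, the cost of bias detection without pre-defined groups is dominated by enumerating candidate groups, which motivates the efficient search algorithms and the Shapley-value-based explanations of the detected bias.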
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging, as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, the mechanisms of its main activity processes, magnetic storms, and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions will need to be developed to meet this Sparse Data challenge.