Chest computed tomography (CT) imaging adds valuable insight to the diagnosis and management of pulmonary infectious diseases such as tuberculosis (TB). However, due to cost and resource limitations, often only X-ray images are available for initial diagnosis or follow-up comparison imaging during treatment. Due to their projective nature, X-ray images can be harder for clinicians to interpret. The lack of publicly available paired X-ray and CT image datasets makes training 3D reconstruction models challenging. In addition, chest X-ray radiology may rely on different devices and modalities with varying image quality, and the underlying population disease spectrum may add diversity to the inputs. We propose shape induction, that is, learning the shape of 3D CT from X-rays without CT supervision, as a novel technique for incorporating realistic X-ray distributions during the training of a reconstruction model. Our experiments show that this process improves both the perceptual quality of the generated CT and the accuracy of downstream classification of pulmonary infectious diseases.
translated by Google Translate
Active object recognition is central to everyday tasks such as reading and driving. The current inability to incorporate time hampers attempts to model such tasks. People exhibit a flexible trade-off between speed and accuracy, and this trade-off is a crucial human skill. Deep neural networks have emerged as promising candidates for predicting peak human object recognition performance and neural activity. However, modeling the temporal dimension, i.e., the speed-accuracy trade-off (SAT), is essential for them to serve as useful computational models of how humans recognize objects. To this end, we here present the first large-scale (148 observers, 4 neural networks, 8 tasks) dataset of the speed-accuracy trade-off (SAT) in recognizing ImageNet images. In each human trial, a beep signals the desired response time; it sounds at a fixed delay after the image is shown, and the observer's response counts only if it occurs near the beep. In a series of blocks, we test many beep delays, i.e., response times. We observe that human accuracy increases with response time, and proceed to compare its characteristics with the behavior of several dynamic neural networks capable of inference-time adaptive computation. Using FLOPs as an analog of response time, we compare the networks with humans on curve-fitting error, category-wise correlation, and curve steepness, and conclude that cascaded dynamic neural networks are a promising model of human response times in object recognition tasks.
Recent advances in deep learning and computer vision have alleviated many bottlenecks, allowing algorithms to work label-free and with better performance. Specifically, transformers provide a global perspective of an image, which convolutional neural networks (CNNs) lack by design. Here we present Cross-Architecture Self-Supervision (CASS), a novel self-supervised learning method that leverages transformers and CNNs simultaneously, while also being computationally accessible via readily available cloud services. Compared with existing state-of-the-art self-supervised learning methods, we empirically show that CASS-trained CNNs and transformers gain, on average, 8.5% with 100% labeled data, 11.5% with 10% labeled data, and 1.5% with 1% labeled data, across three diverse datasets. Notably, one of the datasets used comprises histopathology slides of an autoimmune disease, an underrepresented topic in medical imaging with minimal data. Furthermore, our findings show that CASS is twice as efficient as other state-of-the-art methods in terms of training time.
Existing approaches to unsupervised object discovery (UOD) do not scale to large datasets without approximations that compromise their performance. We propose a novel formulation of UOD as a ranking problem, amenable to the arsenal of distributed methods available for eigenvalue problems and link analysis. Using self-supervised features, we also demonstrate the first effective fully unsupervised pipeline for UOD. Extensive experiments on COCO and OpenImages show that, in the single-object discovery setting where a single prominent object is sought in each image, the proposed LOD (Large-scale Object Discovery) method is on par with or better than the state of the art on medium-scale datasets (up to 120K images), and over 37% better than the only other algorithm capable of scaling to 1.7M images. In the multi-object discovery setting where multiple objects are sought in each image, the proposed LOD achieves better average precision (AP) than all other methods on datasets ranging from 20K to 1.7M images. Using self-supervised features, we also show that the method achieves state-of-the-art UOD performance on OpenImages. Our code is publicly available at https://github.com/huyvvo/lod.
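The link-analysis view of ranking can be sketched with a toy example. The following hypothetical routine (plain NumPy power iteration, not the paper's distributed implementation; the similarity matrix and function names are illustrative) ranks candidate regions by the principal eigenvector of their similarity graph:

```python
import numpy as np

def rank_regions(similarity, num_iters=100, tol=1e-9):
    """Rank graph nodes (candidate object regions) by the principal
    eigenvector of a symmetric non-negative similarity matrix,
    computed with simple power iteration."""
    n = similarity.shape[0]
    v = np.full(n, 1.0 / n)
    for _ in range(num_iters):
        v_next = similarity @ v
        v_next /= np.linalg.norm(v_next)
        if np.linalg.norm(v_next - v) < tol:
            v = v_next
            break
        v = v_next
    return np.argsort(-v)  # indices from most to least salient

# Toy graph: regions 0-2 are strongly linked, region 4 is isolated noise.
S = np.array([[1.0, 0.9, 0.8, 0.1],
              [0.9, 1.0, 0.7, 0.1],
              [0.8, 0.7, 1.0, 0.1],
              [0.1, 0.1, 0.1, 1.0]])
ranking = rank_regions(S)
print(ranking[0])  # the most salient region
```

The same eigenvector computation distributes naturally, since power iteration only needs repeated sparse matrix-vector products, which is what makes the link-analysis formulation attractive at the scale of millions of images.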
Objective: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. Method: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation, and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN we perform a detailed quantitative and qualitative analysis with the help of specialists. Conclusion: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. Significance: This study shows the potential of using semi-supervised GAN-based classification to improve bladder tissue classification when annotations are limited in multi-domain data.
The receptive field (RF), which determines the region of a time series to be ``seen'' and used, is critical to improving performance in time series classification (TSC). However, the variation of signal scales across and within time series data makes it challenging to decide on proper RF sizes for TSC. In this paper, we propose a dynamic sparse network (DSN) with sparse connections for TSC, which can learn to cover various RF sizes without cumbersome hyper-parameter tuning. The kernels in each sparse layer are sparse and can be explored under constrained regions by dynamic sparse training, which makes it possible to reduce the resource cost. The experimental results show that the proposed DSN model can achieve state-of-the-art performance on both univariate and multivariate TSC datasets with less than 50\% of the computational cost of recent baseline methods, opening the path towards more accurate resource-aware methods for time series analysis. Our code is publicly available at: https://github.com/QiaoXiao7282/DSN.
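As a rough illustration of dynamic sparse training (a generic prune-and-regrow step commonly used in this family of methods, not necessarily DSN's exact update rule; all names and values below are hypothetical), connections can be explored while the sparsity budget stays fixed:

```python
import numpy as np

def prune_and_regrow(weights, mask, drop_frac=0.3, rng=None):
    """One generic dynamic-sparse-training step: drop the smallest-magnitude
    active weights and regrow the same number of connections at random
    inactive positions, keeping the total number of connections fixed."""
    rng = np.random.default_rng(rng)
    w = weights.ravel()
    m = mask.ravel().copy()
    active = np.flatnonzero(m)
    n_drop = int(drop_frac * active.size)
    if n_drop == 0:
        return m.reshape(mask.shape)
    # Prune: deactivate the n_drop active weights with smallest magnitude.
    drop = active[np.argsort(np.abs(w[active]))[:n_drop]]
    m[drop] = False
    # Regrow: activate n_drop randomly chosen inactive positions.
    inactive = np.flatnonzero(~m)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    m[grow] = True
    return m.reshape(mask.shape)

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))
mask = rng.random((8, 8)) < 0.2        # ~20% of connections active
new_mask = prune_and_regrow(weights, mask, rng=0)
print(mask.sum(), new_mask.sum())      # the sparsity level is preserved
```

Because the budget of active connections never changes, the exploration itself costs nothing extra at inference time, which is the resource-cost argument the abstract makes.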
While the problem of hallucinations in neural machine translation has long been recognized, progress on alleviating it has so far been limited. Indeed, it recently turned out that, without artificially encouraging models to hallucinate, previously existing methods fall short and even the standard sequence log-probability is more informative. This means that characteristics internal to the model can give much more information than we expect, and before resorting to external models and measures, we first need to ask: how far can we go using nothing but the translation model itself? We propose a method that evaluates the percentage of the source contribution to a generated translation. Intuitively, hallucinations are translations "detached" from the source, so they can be identified by a low source contribution. This method improves detection accuracy for the most severe hallucinations by a factor of 2 and can alleviate hallucinations at test time on par with the previous best approach, which relies on external models. Finally, moving beyond internal model characteristics and allowing external tools, we show that using sentence similarity from cross-lingual embeddings further improves these results.
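A minimal sketch of the detection idea, assuming per-token source-contribution fractions have already been computed by some attribution method (the function and threshold below are hypothetical, not the paper's implementation):

```python
def flag_hallucinations(source_contribs, threshold=0.4):
    """Given, for each generated sentence, per-token fractions of the
    prediction attributed to the source (values in [0, 1]), flag
    sentences whose mean source contribution falls below a threshold
    as likely hallucinations ("detached" from the source)."""
    flags = []
    for contribs in source_contribs:
        mean_contrib = sum(contribs) / len(contribs)
        flags.append(mean_contrib < threshold)
    return flags

# Toy example: the second "translation" barely draws on the source.
scores = [
    [0.7, 0.6, 0.8],   # normal translation
    [0.1, 0.2, 0.1],   # likely hallucination
]
print(flag_hallucinations(scores))  # [False, True]
```

The appeal of this scheme is that everything it needs comes from the translation model itself, so no external quality-estimation model has to be deployed alongside it.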
We pose video object segmentation as spectral graph clustering in space and time, with one graph node for each pixel and edges forming local space-time neighborhoods. We claim that the strongest cluster in this video graph represents the salient object. We start by introducing a novel and efficient method based on 3D filtering for approximating the spectral solution, as the principal eigenvector of the graph's adjacency matrix, without explicitly building the matrix. This key property allows us to have a fast parallel implementation on GPU, orders of magnitude faster than classical approaches for computing the eigenvector. Our motivation for a spectral space-time clustering approach, unique in video semantic segmentation literature, is that such clustering is dedicated to preserving object consistency over time, which we evaluate using our novel segmentation consistency measure. Further on, we show how to efficiently learn the solution over multiple input feature channels. Finally, we extend the formulation of our approach beyond the segmentation task, into the realm of object tracking. In extensive experiments we show significant improvements over top methods, as well as over powerful ensembles that combine them, achieving state-of-the-art on multiple benchmarks, both for tracking and segmentation.
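The matrix-free eigenvector computation can be illustrated in 1D (a toy stand-in for the paper's 3D space-time filtering; the operator, sizes, and names below are illustrative assumptions, not the actual implementation):

```python
import numpy as np

def leading_eigvec_matrix_free(matvec, n, num_iters=500):
    """Power iteration using only a matrix-vector routine, so the (huge)
    pixel-adjacency matrix never has to be built explicitly."""
    v = np.ones(n) / np.sqrt(n)
    for _ in range(num_iters):
        v = matvec(v)
        v /= np.linalg.norm(v)
    return v

# Toy "local neighborhood" operator on a 1D signal: each node is linked to
# itself and its two immediate neighbors (a 3-tap filter), the 1D analogue
# of filtering over local space-time neighborhoods.
n = 16
def neighbor_sum(v):
    out = v.copy()
    out[:-1] += v[1:]
    out[1:] += v[:-1]
    return out

v = leading_eigvec_matrix_free(neighbor_sum, n)
```

Since each iteration is just a local filtering pass, the whole computation maps onto GPU convolutions, which is what makes the approximation orders of magnitude faster than building and decomposing the adjacency matrix.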
Metric Elicitation (ME) is a framework for eliciting classification metrics that better align with implicit user preferences based on the task and context. The existing ME strategy is based on the assumption that users can most easily provide preference feedback over classifier statistics such as confusion matrices. This work examines that strategy by providing its first-ever implementation. Specifically, we create a web-based ME interface and conduct a user study that elicits users' preferred metrics in a binary classification setting. We discuss the study findings and present guidelines for future research in this direction.
Learning-based image compression has improved to a level where it can outperform traditional image codecs such as HEVC and VVC in terms of coding performance. In addition to good compression performance, device interoperability is essential for a compression codec to be deployed: encoding and decoding on different CPUs or GPUs should be error-free and incur negligible performance reduction. In this paper, we present a method to solve the device interoperability problem of a state-of-the-art image compression network. We apply quantization to the entropy networks that output the entropy parameters. We propose a simple method that ensures cross-platform encoding and decoding and can be implemented quickly, with a minor performance deviation of 0.3% BD-rate from the floating-point model results.
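One simple way to realize such cross-platform determinism (a generic fixed-point scheme, not necessarily the paper's exact method; the step size and clamp range below are assumptions) is to snap the floating-point entropy parameters onto an integer grid before any probability tables are built:

```python
def quantize_entropy_params(params, step=1.0 / 256, lo=0.11, hi=64.0):
    """Map floating-point entropy parameters (e.g. Gaussian scales) onto a
    fixed integer grid so that encoders and decoders on different hardware
    derive bit-identical probability tables from them."""
    out = []
    for p in params:
        p = min(max(p, lo), hi)      # clamp to a valid parameter range
        out.append(round(p / step))  # integer grid index
    return out

scales = [0.1234, 1.5, 63.999]
print(quantize_entropy_params(scales))  # [32, 384, 16384]
```

Because both sides of the channel see only the integer indices, small floating-point discrepancies between CPU and GPU arithmetic can no longer desynchronize the arithmetic coder.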