Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target dataset using only a pre-trained source model. However, the absence of the source data and the domain shift make the predictions on the target data unreliable. We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation. To this end, we construct a probabilistic source model by incorporating priors on the network parameters, inducing a distribution over the model predictions. The uncertainties are estimated by employing a Laplace approximation and are incorporated to identify target data points that do not lie on the source manifold and to down-weight them when maximizing the mutual information on the target data. Unlike recent works, our probabilistic treatment is computationally lightweight, decouples source training from target adaptation, and requires no specialized source training or changes to the model architecture. We show the advantages of uncertainty-guided SFDA over traditional SFDA in both the closed-set and open-set settings, and provide empirical evidence that our approach is more robust to strong domain shifts even without tuning.
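A minimal sketch of the uncertainty weighting described above, not the authors' implementation: it assumes a last-layer Laplace approximation (a diagonal Gaussian posterior over the final linear layer), samples it to estimate predictive entropy, and down-weights high-entropy target points inside an information-maximization objective. All function and variable names are hypothetical.

```python
import torch

def predictive_samples(features, weight_mean, weight_var, n_samples=16):
    """Draw logits from a diagonal Gaussian posterior over the last layer."""
    samples = []
    for _ in range(n_samples):
        w = weight_mean + weight_var.sqrt() * torch.randn_like(weight_mean)
        samples.append(features @ w.T)          # (B, D) x (D, C) -> (B, C)
    return torch.stack(samples)                 # (S, B, C)

def uncertainty_weighted_im_loss(features, weight_mean, weight_var):
    logits = predictive_samples(features, weight_mean, weight_var)
    probs = logits.softmax(dim=-1)              # (S, B, C)
    mean_probs = probs.mean(dim=0)              # (B, C)
    # Predictive entropy as the uncertainty measure: high entropy suggests
    # the sample lies off the source manifold and should count less.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(-1)
    w = 1.0 - entropy / torch.log(torch.tensor(float(mean_probs.shape[-1])))
    # Information maximization: confident (low-entropy) per-sample
    # predictions, diverse (high-entropy) average prediction.
    cond_ent = (w * entropy).mean()
    marg = mean_probs.mean(dim=0)
    marg_ent = -(marg * marg.clamp_min(1e-8).log()).sum()
    return cond_ent - marg_ent
```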
Image colorization is a well-known problem in computer vision. However, due to the ill-posed nature of the task, image colorization is inherently challenging. Though researchers have made several attempts to automate the colorization pipeline, these processes often produce unrealistic results due to a lack of conditioning. In this work, we attempt to integrate textual descriptions, along with the grayscale image to be colorized, as an auxiliary condition to improve the fidelity of the colorization process. To the best of our knowledge, this is one of the first attempts to incorporate textual conditioning in the colorization pipeline. To do so, we propose a novel deep network that takes two inputs (the grayscale image and the corresponding encoded text description) and tries to predict the relevant color gamut. As the respective textual descriptions contain color information about the objects present in the scene, the text encoding helps to improve the overall quality of the predicted colors. We have evaluated our proposed model using different metrics and found that it outperforms state-of-the-art colorization algorithms both qualitatively and quantitatively.
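A hedged sketch of a two-input network of the kind described, assuming Lab color space (the model predicts the two chrominance channels from the L channel plus a pre-encoded text vector); the layer sizes and the add-and-broadcast fusion are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextConditionedColorizer(nn.Module):
    def __init__(self, text_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(                  # grayscale (L) encoder
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.text_proj = nn.Linear(text_dim, 128)      # text -> feature bias
        self.decoder = nn.Sequential(                  # predict a,b channels
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, gray, text_emb):
        feats = self.encoder(gray)                         # (B,128,H/4,W/4)
        cond = self.text_proj(text_emb)[:, :, None, None]  # broadcast over space
        return self.decoder(feats + cond)                  # (B,2,H,W) in [-1,1]

# Usage: ab = TextConditionedColorizer()(torch.rand(1,1,64,64), torch.rand(1,512))
```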
In computer vision, human pose synthesis and transfer deal with probabilistic image generation of a person in a previously unseen pose from an already available observation of that person. Though researchers have recently proposed several methods to achieve this task, most of these techniques derive the target pose directly from the desired target image on a specific dataset, making the underlying process challenging to apply in real-world scenarios, as the generation of the target image is the actual aim. In this paper, we first present the shortcomings of current pose transfer algorithms and then propose a novel text-based pose transfer technique to address those issues. We divide the problem into three independent stages: (a) text-to-pose representation, (b) pose refinement, and (c) pose rendering. To the best of our knowledge, this is one of the first attempts to develop a text-based pose transfer framework, and we also introduce a new dataset, DF-PASS, by adding descriptive pose annotations to the images of the DeepFashion dataset. The proposed method produces promising results with significant qualitative and quantitative scores in our experiments.
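A skeleton of the three-stage decomposition named above, with each stage stubbed as a small placeholder module; in the paper each stage would be a learned network (text-to-keypoints regression, pose refinement, and a rendering generator), so everything below is illustrative only.

```python
import torch
import torch.nn as nn

class TextToPose(nn.Module):          # (a) text -> coarse 2D keypoints
    def __init__(self, text_dim=512, n_keypoints=18):
        super().__init__()
        self.n = n_keypoints
        self.mlp = nn.Sequential(nn.Linear(text_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_keypoints * 2))
    def forward(self, text_emb):
        return self.mlp(text_emb).view(-1, self.n, 2)

class PoseRefiner(nn.Module):         # (b) regress residual corrections
    def __init__(self, n_keypoints=18):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_keypoints * 2, 256), nn.ReLU(),
                                 nn.Linear(256, n_keypoints * 2))
    def forward(self, pose):
        flat = pose.flatten(1)
        return (flat + self.mlp(flat)).view_as(pose)

def transfer(source_image, text_emb, to_pose, refiner, renderer):
    pose = refiner(to_pose(text_emb))     # stages (a) and (b)
    return renderer(source_image, pose)   # stage (c): e.g. a GAN generator
```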
We study the new task of class-incremental Novel Class Discovery (class-iNCD), which refers to the problem of discovering novel categories in an unlabelled dataset by leveraging a pre-trained model that has been trained on a labelled dataset containing disjoint yet related categories. Apart from discovering novel classes, we also aim to preserve the ability of the model to recognize previously seen base categories. Inspired by rehearsal-based incremental learning methods, in this paper we propose a novel approach that prevents forgetting of past information about the base classes by jointly exploiting base-class feature prototypes and feature-level knowledge distillation. We also propose a self-training clustering strategy that simultaneously clusters novel categories and trains a joint classifier for both the base and novel classes. This enables our method to operate in a class-incremental setting. Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches. Code is available at https://github.com/oatmealliu/class-incd
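A hedged sketch of the two anti-forgetting terms named above: rehearsal of stored base-class feature prototypes through the joint classifier, plus feature-level distillation against the frozen base model. Interfaces and loss weights are assumptions, not the released code; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def anti_forgetting_loss(x, encoder, frozen_encoder, classifier,
                         prototypes, proto_labels, kd_weight=1.0):
    # Feature-level knowledge distillation: keep current features close to
    # those of the frozen model trained on the base classes.
    feats = encoder(x)
    with torch.no_grad():
        old_feats = frozen_encoder(x)
    kd = F.mse_loss(feats, old_feats)
    # Prototype rehearsal: stored base-class feature prototypes are fed to
    # the joint (base + novel) classifier as pseudo-examples.
    proto_logits = classifier(prototypes)
    replay = F.cross_entropy(proto_logits, proto_labels)
    return replay + kd_weight * kd
```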
Unsupervised learning-based anomaly detection in latent space has gained importance, since discriminating anomalies from normal data becomes difficult in high-dimensional space. Both density-estimation and distance-based methods for detecting anomalies in latent space have been explored in the past. These methods show that retaining valuable properties of the input data in the latent space helps in the better reconstruction of test data. Moreover, real-world sensor data is skewed and non-Gaussian in nature, making mean-based estimators unreliable for such data. Furthermore, anomaly detection methods based on reconstruction error rely on the Euclidean distance, which neither accounts for useful correlation information in the feature space nor reconstructs the data accurately when it deviates from the training distribution. In this work, we address these limitations of reconstruction-error-based autoencoders and propose a kernelized autoencoder that leverages a robust form of the Mahalanobis distance (MD) to capture correlations among latent dimensions and thereby detect both near and far anomalies effectively. The resulting multi-objective loss pursues two goals simultaneously: it measures correlation information in the latent feature space in the form of the robust MD, and it preserves useful correlation information from the original data space in the latent space by maximizing the mutual information between the prior and the latent space, i.e., by maximizing the entropy of the latent space while retaining the useful correlation structure of the original data in the low-dimensional latent representation.
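A minimal sketch of a Mahalanobis-distance anomaly score over latent codes, the core ingredient described above; the shrinkage covariance estimate stands in for the paper's robust MD form, and the kernelization and entropy-maximization terms are omitted.

```python
import numpy as np

def fit_latent_stats(latents, shrinkage=0.1):
    """Estimate mean and (regularized) inverse covariance of latent codes."""
    mu = latents.mean(axis=0)
    cov = np.cov(latents, rowvar=False)
    cov = (1 - shrinkage) * cov + shrinkage * np.eye(cov.shape[0])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(z, mu, cov_inv):
    """Anomaly score: MD accounts for correlations between latent dims,
    unlike the Euclidean distance criticized in the abstract."""
    d = z - mu
    return np.sqrt(np.einsum('bi,ij,bj->b', d, cov_inv, d))

# Usage: scores above a validation-set quantile flag near/far anomalies.
```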
The usage of technologically advanced devices has seen a boom in many domains, including education, automation, and healthcare, with most of the services requiring Internet connectivity. To secure a network, device identification plays a key role. In this paper, a device fingerprinting (DFP) model is proposed that can distinguish between Internet of Things (IoT) and non-IoT devices, as well as uniquely identify individual devices. Four statistical features are extracted from five consecutive device-originated packets to generate an individual device fingerprint. The method has been evaluated using the Random Forest (RF) classifier and different datasets. Experimental results show that the proposed method achieves up to 99.8% accuracy in distinguishing between IoT and non-IoT devices and over 97.6% in classifying individual devices. These results indicate that the proposed method can assist operators in making their networks more secure and more robust to security breaches and unauthorized access.
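A sketch of the described pipeline under stated assumptions: the abstract does not name the four statistical features, so the mean and standard deviation of packet length and of inter-arrival time are used below as placeholders over each window of five consecutive device-originated packets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fingerprint(sizes, times):
    """Four summary statistics for one 5-packet window (placeholder choice)."""
    iat = np.diff(times)
    return [np.mean(sizes), np.std(sizes), np.mean(iat), np.std(iat)]

# Toy data: each window holds five packet sizes and five timestamps.
rng = np.random.default_rng(0)
X = [fingerprint(rng.integers(60, 1500, 5), np.sort(rng.random(5)))
     for _ in range(200)]
y = rng.integers(0, 2, 200)          # e.g. IoT vs non-IoT label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```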
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
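A hedged sketch of the transfer-learning setup outlined above: a shared BERT encoder with an auxiliary salience head and a popularity regression head. The head sizes and the two-phase training recipe in the comments are assumptions, not the released code.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceScorer(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.salience_head = nn.Linear(hidden, 1)    # auxiliary task
        self.popularity_head = nn.Linear(hidden, 1)  # target task

    def forward(self, **enc):
        cls = self.encoder(**enc).last_hidden_state[:, 0]   # [CLS] vector
        return self.salience_head(cls), self.popularity_head(cls)

# Transfer recipe: first train the encoder with salience_head on salience
# labels, then fine-tune the encoder with popularity_head on popularity labels.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SentenceScorer()
sal, pop = model(**tok("A sample news sentence.", return_tensors="pt"))
```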
Almost 80 million Americans suffer from hair loss due to aging, stress, medication, or genetic makeup. Hair and scalp-related diseases often go unnoticed in the beginning. Sometimes, a patient cannot differentiate between hair loss and regular hair fall. Diagnosing hair-related diseases is time-consuming, as it requires professional dermatologists to perform visual and medical tests. Because of that, the overall diagnosis gets delayed, which worsens the severity of the illness. Owing to their image-processing ability, neural network-based applications are used in various sectors, especially healthcare and health informatics, to predict deadly diseases like cancers and tumors. These applications assist clinicians and patients by providing an initial insight into early-stage symptoms. In this study, we used a deep learning approach that successfully predicts three main types of hair loss and scalp-related diseases: alopecia, psoriasis, and folliculitis. However, the limited research in this area, the unavailability of a proper dataset, and the degree of variety among the images scattered over the internet made the task challenging. A total of 150 images were obtained from various sources and then preprocessed by denoising, image equalization, enhancement, and data balancing, thereby minimizing the error rate. After feeding the processed data into a 2D convolutional neural network (CNN) model, we obtained an overall training accuracy of 96.2% and a validation accuracy of 91.1%. The precision and recall scores for alopecia, psoriasis, and folliculitis are 0.895, 0.846, and 1.0, respectively. We also created a dataset of scalp images for prospective future researchers.
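A minimal 2D CNN of the kind the study describes, classifying the three diseases; the abstract does not specify the architecture, so the layer stack below is an illustrative assumption.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 3),                       # alopecia / psoriasis / folliculitis
)

logits = model(torch.rand(4, 3, 128, 128))   # preprocessed 128x128 RGB images
print(logits.shape)                          # torch.Size([4, 3])
```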
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy "surrogate" algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques.
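For orientation, the first framework listed above is the classical input-output mutual information bound (Xu & Raginsky, 2017); a minimal statement, with constants as they are usually presented:

```latex
% Input-output mutual information bound: for a loss that is
% sigma-subgaussian under the data distribution and a sample S of size n,
\[
  \bigl|\mathbb{E}\,[\mathrm{gen}(W, S)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)} .
\]
% Matching the Theta(1/sqrt{n}) minimax excess-risk rate of stochastic
% convex optimization would thus require I(W; S) = O(1) for gradient
% descent; the paper proves such rates cannot be established this way.
```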
Prevailing methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual. Evaluating models in these terms presumes homogeneous preferences across the population and engenders selection of agglomerative AIs, which fail to represent the diverse range of interests across individuals. We propose an alternative evaluation method that instead prioritizes inclusive AIs, which provably retain the requisite knowledge not only for subsequent response customization to particular segments of the population but also for utility-maximizing decisions.