This paper proposes a non-data-driven deep neural network for spectral image recovery problems such as denoising, single hyperspectral image super-resolution, and compressive spectral imaging reconstruction. Unlike previous methods, the proposed approach, dubbed Mixture-Net, implicitly learns the prior information through the network. Mixture-Net consists of a deep generative model whose layers are inspired by linear and non-linear low-rank mixture models, where the recovered image is composed of a weighted sum of the linear and non-linear decompositions. Mixture-Net also provides a low-rank decomposition interpreted as the spectral image abundances and endmembers, which is helpful for remote sensing tasks without running additional routines. Experiments show the effectiveness of Mixture-Net, which outperforms state-of-the-art methods in recovery quality with the added advantage of an interpretable architecture.
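As a rough illustration of the mixture idea described above (not the authors' implementation), the sketch below models a spectral image as a weighted sum of a linear low-rank term, whose factors play the role of abundances and endmembers, and a non-linear refinement of that term; the shapes, the small MLP used for the non-linear branch, and the scalar mixing weight are all illustrative assumptions.

```python
# Minimal sketch (not Mixture-Net itself): a spectral image modeled as a weighted
# sum of a linear low-rank mixture (abundances x endmembers) and a non-linear
# refinement of that mixture. Shapes and the MLP are illustrative assumptions.
import torch
import torch.nn as nn

class ToyMixtureModel(nn.Module):
    def __init__(self, height, width, bands, rank):
        super().__init__()
        # Linear branch: low-rank factors interpreted as abundances and endmembers.
        self.abundances = nn.Parameter(torch.rand(height * width, rank))
        self.endmembers = nn.Parameter(torch.rand(rank, bands))
        # Non-linear branch: a small network acting on the linear mixture.
        self.nonlinear = nn.Sequential(nn.Linear(bands, bands), nn.ReLU(),
                                       nn.Linear(bands, bands))
        # Learnable weight balancing the two decompositions.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self):
        linear = self.abundances @ self.endmembers                        # (H*W, bands)
        return self.alpha * linear + (1 - self.alpha) * self.nonlinear(linear)
```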
The abundant spatial and angular information in light fields has enabled the development of multiple disparity estimation approaches. However, acquiring a light field requires high storage and processing costs, limiting the use of this technology in practical applications. To overcome these drawbacks, compressive sensing (CS) theory has enabled the development of optical architectures that acquire a single coded light field measurement. This measurement is decoded using optimization algorithms or deep neural networks, which demand high computational cost. Conventional disparity estimation methods for compressed light fields require first recovering the entire light field and then applying a post-processing step, resulting in long runtimes. In contrast, this work proposes fast disparity estimation from a single compressed measurement by omitting the recovery step required by traditional approaches. Specifically, we propose to jointly optimize an optical architecture for acquiring a single coded light field snapshot and a convolutional neural network (CNN) that estimates the disparity map. Experimentally, the proposed method estimates disparity maps comparable to those obtained from light fields reconstructed with deep learning approaches. Furthermore, the proposed method is 20 times faster in training and inference than the best method that estimates the disparity from a reconstructed light field.
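A minimal sketch of the joint optics-plus-CNN idea, under simplifying assumptions: the coded acquisition is reduced to a learnable per-view mask that collapses the angular views into one snapshot, and the disparity estimator is a small CNN. The number of views, the sigmoid relaxation of the code, and the network depth are placeholders rather than the paper's design.

```python
# Hedged sketch: a learnable coding element that collapses light field views into
# a single coded snapshot, followed by a small CNN that regresses a disparity map.
import torch
import torch.nn as nn

class CodedSnapshotDisparity(nn.Module):
    def __init__(self, num_views=25):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.randn(num_views, 1, 1))  # per-view code
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),            # disparity map
        )

    def forward(self, light_field):                    # (batch, views, H, W)
        mask = torch.sigmoid(self.mask_logits)         # differentiable coding element
        snapshot = (light_field * mask).sum(dim=1, keepdim=True)  # single measurement
        return self.cnn(snapshot)
```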
Computational optical imaging (COI) systems leverage optical coding elements (CEs) in their setups to encode a high-dimensional scene in a single or in multiple snapshots and decode it using computational algorithms. The performance of a COI system depends largely on the design of its main components: the CE pattern and the computational method used to perform a given task. Conventional approaches rely on random patterns or analytical designs to set the distribution of the CE. However, the available data and the algorithmic capabilities of deep neural networks (DNNs) have opened new horizons for data-driven CE design, which jointly considers the optical encoder and the computational decoder. Specifically, by modeling the COI measurements through a fully differentiable image formation model that accounts for the physics-based propagation of light and its interaction with the CEs, the parameters that define the CE and the computational decoder can be optimized in an end-to-end (E2E) manner. Moreover, by optimizing only the CEs within the same framework, inference tasks can be performed from pure optics. This work surveys recent advances in data-driven CE design and provides guidelines on how to parameterize different optical elements so they can be included in the E2E framework. Since the E2E framework can handle different inference applications by changing the loss function and the DNN, we present low-level tasks such as spectral imaging reconstruction and high-level tasks such as privacy-preserving pose estimation enhanced with task-based optical architectures. Finally, we illustrate classification and 3D object recognition applications performed at the speed of light using all-optical DNNs.
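The snippet below is a toy instance of the E2E principle surveyed here, assuming the simplest possible components: the CE is a learnable element-wise code, the image formation model is a masked sum over spectral bands, and the decoder is a tiny CNN, so one backward pass updates both the optical and the computational parameters.

```python
# Toy E2E sketch under simplifying assumptions: a learnable coded element (CE)
# inside a differentiable forward model, jointly trained with a small decoder.
import torch
import torch.nn as nn

class E2ECOI(nn.Module):
    def __init__(self, bands=16):
        super().__init__()
        self.ce = nn.Parameter(torch.rand(bands, 1, 1))          # coded element
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, bands, 3, padding=1),
        )

    def forward(self, scene):                                    # (B, bands, H, W)
        measurement = (scene * torch.sigmoid(self.ce)).sum(1, keepdim=True)
        return self.decoder(measurement)

model = E2ECOI()
scene = torch.rand(2, 16, 32, 32)
loss = nn.functional.mse_loss(model(scene), scene)  # gradients reach CE and decoder
loss.backward()
```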
The accelerated use of digital cameras has raised growing concerns about privacy and security, particularly in applications such as action recognition. In this paper, we propose an optimization framework that provides robust visual privacy protection along the human action recognition pipeline. Our framework parameterizes the camera lens to degrade the quality of the videos so as to inhibit privacy attributes and protect against adversarial attacks, while maintaining the features relevant to activity recognition. We validate our approach through extensive simulations and hardware experiments.
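One way such an objective could be written (a hedged sketch, not the paper's formulation) is a loss that keeps the action classifier accurate on lens-degraded video while pushing a privacy attacker's classifier toward failure; the lens model, both classifiers, and the trade-off weight below are hypothetical placeholders.

```python
# Hypothetical joint objective: preserve action recognition on lens-degraded
# video while maximizing the loss of a privacy-attribute attacker.
import torch
import torch.nn.functional as F

def joint_privacy_loss(lens, action_net, privacy_net, video,
                       action_label, privacy_label, trade_off=1.0):
    degraded = lens(video)                                # simulated optics
    action_loss = F.cross_entropy(action_net(degraded), action_label)     # utility
    privacy_loss = F.cross_entropy(privacy_net(degraded), privacy_label)  # attacker
    # Minimize the action loss, maximize the attacker's loss.
    return action_loss - trade_off * privacy_loss
```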
Deep learning models are the state of the art for compressive spectral imaging (CSI) recovery. These methods use a deep neural network (DNN) as an image generator to learn a non-linear mapping from compressed measurements to the spectral image. For instance, the deep spectral prior approach uses a convolutional autoencoder network (CAE) within an optimization algorithm to recover the spectral image through a non-linear representation. However, the CAE training is detached from the recovery problem, which does not guarantee an optimal representation of the spectral images for the CSI problem. This work proposes the Joint Non-Linear Representation and Recovery Network (JR2Net), which links the representation and recovery tasks in a single optimization problem. JR2Net consists of an optimization-inspired network following an ADMM formulation that learns a non-linear low-dimensional representation and simultaneously performs the spectral image recovery, trained end to end. Experimental results show the superiority of the proposed method, with improvements of up to 2.57 dB in PSNR and performance roughly 2000 times faster than state-of-the-art methods.
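For intuition, the sketch below shows one unrolled, ADMM-style stage in the spirit of JR2Net: a gradient step on the data-fidelity term followed by a learned non-linear proximal step acting as the low-dimensional representation. The sensing operator, its adjoint, and the small proximal network are assumptions, not the paper's exact blocks.

```python
# One unrolled optimization-inspired stage (illustrative, not JR2Net's exact layer):
# gradient step on ||A(x) - y||^2, then a learned non-linear proximal step.
import torch
import torch.nn as nn

class UnrolledStage(nn.Module):
    def __init__(self, channels=31):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))       # learned step size
        self.prox = nn.Sequential(                        # learned non-linear prior
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        # A: sensing operator, At: its adjoint (both assumed callables).
        x = x - self.step * At(A(x) - y)
        return self.prox(x)
```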
We introduce a novel framework to track multiple objects in overhead camera videos for airport checkpoint security scenarios where targets correspond to passengers and their baggage items. We propose a Self-Supervised Learning (SSL) technique to provide the model information about instance segmentation uncertainty from overhead images. Our SSL approach improves object detection by employing a test-time data augmentation and a regression-based, rotation-invariant pseudo-label refinement technique. Our pseudo-label generation method provides multiple geometrically-transformed images as inputs to a Convolutional Neural Network (CNN), regresses the augmented detections generated by the network to reduce localization errors, and then clusters them using the mean-shift algorithm. The self-supervised detector model is used in a single-camera tracking algorithm to generate temporal identifiers for the targets. Our method also incorporates a multi-view trajectory association mechanism to maintain consistent temporal identifiers as passengers travel across camera views. An evaluation of detection, tracking, and association performances on videos obtained from multiple overhead cameras in a realistic airport checkpoint environment demonstrates the effectiveness of the proposed approach. Our results show that self-supervision improves object detection accuracy by up to $42\%$ without increasing the inference time of the model. Our multi-camera association method achieves up to $89\%$ multi-object tracking accuracy with an average computation time of less than $15$ ms.
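As an illustration of the pseudo-label refinement step (hypothetical helpers, not the paper's pipeline), detections produced from geometrically transformed copies of an overhead image can be mapped back to the original frame and grouped with mean-shift, with each cluster center becoming one refined pseudo-label.

```python
# Sketch of clustering augmented detections with mean-shift; the box centers are
# assumed to already be rotated back into the original image coordinates.
import numpy as np
from sklearn.cluster import MeanShift

def cluster_augmented_detections(box_centers, bandwidth=20.0):
    """box_centers: (N, 2) array of detection centers from all augmented views."""
    ms = MeanShift(bandwidth=bandwidth).fit(box_centers)
    # Each cluster center becomes one refined pseudo-label location.
    return ms.cluster_centers_, ms.labels_

centers = np.array([[100.0, 50.0], [102.0, 48.0], [300.0, 210.0], [298.0, 212.0]])
pseudo_labels, assignments = cluster_augmented_detections(centers)
```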
The availability of frequent and cost-free satellite images is in growing demand in the research world. Satellite constellations such as Landsat 8 and Sentinel-2 provide a massive amount of valuable data daily. However, the discrepancy in the sensors' characteristics of these satellites makes it impractical to apply a segmentation model trained on one dataset to the other, which is why domain adaptation techniques have recently become an active research area in remote sensing. In this paper, an experiment on domain adaptation through style transfer is conducted using the HRSemI2I model to narrow the sensor discrepancy between Landsat 8 and Sentinel-2. This paper's main contribution is analyzing the expediency of that approach by comparing segmentation results on domain-adapted images with those on non-adapted images. The HRSemI2I model, adjusted to work with 6-band imagery, shows significant intersection-over-union performance improvement for both mean and per-class metrics. A second contribution is providing different generalization schemes between the two label schemes, NALCMS 2015 and CORINE: the first is standardization through higher-level land cover classes, and the second is harmonization validation in the field.
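For reference, the intersection-over-union metric used in the comparison above can be computed per class from predicted and reference label maps as sketched below; the integer class encoding is an assumption, and the snippet is not tied to the NALCMS 2015 or CORINE schemes.

```python
# Per-class intersection-over-union for integer-encoded label maps.
import numpy as np

def per_class_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union > 0 else float('nan'))
    return ious  # mean IoU = np.nanmean(ious)
```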
When a human communicates with a machine using natural language on the web and online, how can the machine understand the human's intention and the semantic context of the conversation? This is an important AI task as it enables the machine to construct a sensible answer or perform a useful action for the human. Meaning is represented at the sentence level, whose identification is known as intent detection, and at the word level, a labelling task called slot filling. This dual-level joint task requires innovative thinking about natural language and deep learning network design, and as a result, many approaches and models have been proposed and applied. This tutorial will discuss how the joint task is set up and introduce Spoken Language Understanding/Natural Language Understanding (SLU/NLU) with Deep Learning techniques. We will cover the datasets, experiments and metrics used in the field. We will describe how the machine uses the latest NLP and Deep Learning techniques to address the joint task, including recurrent and attention-based Transformer networks and pre-trained models (e.g. BERT). We will then look in detail at a network that allows the two levels of the task, intent classification and slot filling, to interact to explicitly boost performance. We will give a code demonstration of a Python notebook for this model, and attendees will have an opportunity to watch coding demos of this joint NLU task to further their understanding.
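A minimal sketch of the joint set-up discussed in the tutorial: one shared utterance encoder feeding two heads, an intent classifier over a pooled representation and a slot-filling tagger over every token. The BiLSTM encoder here is a stand-in; the tutorial itself covers Transformer and BERT encoders.

```python
# Illustrative joint NLU model: shared encoder, sentence-level intent head,
# token-level slot-filling head.
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * dim, num_intents)   # sentence level
        self.slot_head = nn.Linear(2 * dim, num_slots)        # word level

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.embed(token_ids))       # (B, T, 2*dim)
        intent_logits = self.intent_head(hidden.mean(dim=1))  # pooled utterance
        slot_logits = self.slot_head(hidden)                   # one label per token
        return intent_logits, slot_logits
```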
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
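A hedged sketch of what schema-distance-weighted column sampling could look like: columns whose tables sit closer to an anchor table in the schema's join graph are sampled more often. The exponential weighting, the temperature, and the distance helper are assumptions rather than the paper's exact formulation.

```python
# Illustrative schema-distance-weighted column sampling; schema_distance is an
# assumed helper returning foreign-key hops between two tables.
import math
import random

def sample_column(columns, anchor_table, schema_distance, temperature=1.0):
    """columns: list of (table, column) pairs."""
    weights = [math.exp(-schema_distance(anchor_table, t) / temperature)
               for t, _ in columns]
    return random.choices(columns, weights=weights, k=1)[0]
```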
Datacenter operators ensure fair and regular server maintenance by using automated processes to schedule maintenance jobs to complete within a strict time budget. Automating this scheduling problem is challenging because maintenance job duration varies based on both job type and hardware. While it is tempting to use prior machine learning techniques for predicting job duration, we find that the structure of the maintenance job scheduling problem creates a unique challenge. In particular, we show that prior machine learning methods that produce the lowest error predictions do not produce the best scheduling outcomes due to asymmetric costs. Specifically, underpredicting maintenance job duration results in more servers being taken offline and longer server downtime than overpredicting maintenance job duration. The system cost of underprediction is much larger than that of overprediction. We present Acela, a machine learning system for predicting maintenance job duration, which uses quantile regression to bias duration predictions toward overprediction. We integrate Acela into a maintenance job scheduler and evaluate it on datasets from large-scale, production datacenters. Compared to machine learning based predictors from prior work, Acela reduces the number of servers that are taken offline by 1.87-4.28X, and reduces the server offline time by 1.40-2.80X.
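The bias toward overprediction can be obtained with a quantile (pinball) loss, sketched below: with a high quantile such as 0.9, underprediction is penalized far more heavily than overprediction. The specific quantile value is illustrative and not necessarily Acela's configuration.

```python
# Pinball (quantile) loss: asymmetric penalty that biases predictions upward.
import torch

def pinball_loss(pred, target, quantile=0.9):
    diff = target - pred
    # Underprediction (diff > 0) costs `quantile`; overprediction costs `1 - quantile`.
    return torch.mean(torch.maximum(quantile * diff, (quantile - 1) * diff))
```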