Requiring less data for accurate models, few-shot learning has shown robustness and generality across many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns, e.g., attackers or adversaries may breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in untrusted environments by establishing a novel privacy-preserving embedding space that preserves the privacy of data while maintaining the accuracy of the model. We examine the impact of various image privacy methods, such as blurring, pixelation, Gaussian noise, and differentially private pixelization (DP-Pix), on few-shot image classification, and propose a method that learns privacy-preserving representations through a joint loss. Empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
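The image-obfuscation baselines named above are straightforward to reproduce; as an illustration, here is a minimal, hypothetical sketch of DP-Pix-style pixelization with Laplace noise (the block size, sensitivity parameter `m`, and `eps` are placeholder values, not the paper's settings):

```python
import numpy as np

def dp_pixelize(img, block=8, eps=1.0, m=16):
    """Toy sketch of differentially private pixelization (DP-Pix):
    average each block x block cell, then add Laplace noise whose scale
    follows the sensitivity of the block mean (up to m pixels may differ
    between neighbouring images, each bounded by 255). Parameter values
    are illustrative only."""
    h, w = img.shape[:2]
    out = img.astype(np.float64).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            cell = out[y:y + block, x:x + block]
            mean = cell.mean(axis=(0, 1))
            noise = np.random.laplace(0.0, 255.0 * m / (block * block * eps),
                                      size=np.shape(mean))
            out[y:y + block, x:x + block] = mean + noise
    return np.clip(out, 0, 255).astype(np.uint8)
```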
Reconstructing medical images from partial measurements is an important inverse problem in computed tomography (CT) and magnetic resonance imaging (MRI). Existing machine-learning-based solutions typically train a model to directly map measurements to medical images, leveraging a training dataset of paired images and measurements. These measurements are usually synthesized from images using a fixed physical model of the measurement process, which hinders the generalization of the model to unknown measurement processes. To address this issue, we propose a fully unsupervised technique for solving inverse problems, leveraging the recently introduced score-based generative models. Specifically, we first train a score-based generative model on medical images to capture their prior distribution. Given a measurement and the physical model of the measurement process at test time, we introduce a sampling method to reconstruct an image consistent with both the prior and the observed measurement. Our method does not assume a fixed measurement process during training and can therefore flexibly adapt to different measurement processes at test time. Empirically, we observe comparable or better performance on several medical imaging tasks in CT and MRI, while demonstrating significantly better generalization to unknown measurement processes.
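As a rough illustration of the sampling idea described here (an unconditional score model plus a test-time data-consistency step), the following is a hedged sketch; `score_net`, the forward operator `A`, and its adjoint `AT` are assumed interfaces, and the update follows the generic annealed-Langevin recipe rather than the paper's exact algorithm:

```python
import torch

def posterior_sample(score_net, A, AT, y, shape, sigmas, steps=10, lr=1e-4):
    """Illustrative unconditional-score + data-consistency sampler.
    score_net(x, sigma) is assumed to return the score of the image prior;
    A / AT are a hypothetical forward operator and its adjoint (e.g. a
    subsampled Radon or Fourier transform); y is the observed measurement."""
    x = torch.randn(shape)
    for sigma in sigmas:                       # anneal noise level from high to low
        step = lr * (sigma / sigmas[-1]) ** 2
        for _ in range(steps):
            noise = torch.randn_like(x)
            x = x + step * score_net(x, sigma) + (2 * step) ** 0.5 * noise
            x = x - AT(A(x) - y)               # crude data-consistency correction
    return x
```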
Contextual features are an important data source for building spatio-temporal crowd flow prediction (STCFP) models. However, the difficulty of applying context lies in the unknown generalizability of both contextual features (e.g., weather, holidays, and points of interest) and context modeling techniques across different scenarios. In this paper, we develop an experimental platform composed of large-scale spatio-temporal crowd flow data, contextual data, and state-of-the-art spatio-temporal prediction models to evaluate these techniques in three urban crowd flow prediction scenarios (bike flow, metro passenger flow, and electric vehicle charging demand). In particular, we develop a general taxonomy of context modeling techniques based on an extensive survey of existing studies. With three real-world datasets containing millions of records and rich contextual data, we have trained and tested hundreds of different models. Our results reveal several important observations: (1) using more contextual features may not always yield better predictions with existing context modeling techniques; in particular, the combination of holiday and temporal-position features can provide more generalizable beneficial information than other combinations of contextual features; (2) among context modeling techniques, using a gating unit to incorporate raw contextual features into state-of-the-art prediction models shows good generalizability. Furthermore, we offer several suggestions for practitioners who want to incorporate contextual factors when building STCFP applications. Based on our findings, we call for future research efforts devoted to developing new context processing and modeling solutions that fully exploit the potential of contextual features for STCFP.
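Observation (2) refers to gating raw context features into the prediction model; a minimal sketch of one plausible form of such a gating unit (dimensions and layer choices are illustrative, not the paper's) might look like:

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Minimal sketch of a context gating unit: raw context features
    (weather, holiday, time-of-day, ...) produce a sigmoid gate that
    rescales the spatio-temporal hidden representation."""
    def __init__(self, hidden_dim, context_dim):
        super().__init__()
        self.proj = nn.Linear(context_dim, hidden_dim)

    def forward(self, hidden, context):
        # hidden: (batch, nodes, hidden_dim); context: (batch, context_dim)
        gate = torch.sigmoid(self.proj(context)).unsqueeze(1)
        return hidden * gate
```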
The spatio-temporal crowd flow prediction (STCFP) problem is a classical problem with abundant prior research efforts, which have benefited from traditional statistical learning and, more recently, deep learning approaches. While STCFP can refer to many real-world problems, most existing studies focus on quite specific applications, such as predicting taxi demand, ride-hailing orders, and so on. This hinders STCFP research, as methods designed for different applications are rarely compared, so how well an application-driven method generalizes to other scenarios remains unclear. To fill this gap, this paper makes two efforts: (i) we propose an analytic framework called STAnalytic to qualitatively investigate STCFP methods in terms of their design considerations regarding various spatial and temporal factors, making different application-driven approaches comparable; (ii) we construct an extensive large-scale STCFP benchmark with four different scenarios (including ridesharing, bikesharing, metro, and electric vehicle charging), with up to hundreds of millions of flow records, to quantitatively measure the generalizability of STCFP methods. Furthermore, to demonstrate the effectiveness of STAnalytic in helping design generalizable STCFP methods, we propose a spatio-temporal meta-model called STMeta that integrates the generalizable temporal and spatial knowledge identified by STAnalytic. We implement three variants of STMeta with different deep learning techniques. With the datasets, we demonstrate that the STMeta variants can outperform state-of-the-art STCFP methods by 5%.
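As a toy illustration of the STMeta idea of merging several generalizable temporal views before a prediction head, the following sketch uses interchangeable GRU encoders; the components, dimensions, and merge rule are placeholders rather than the paper's actual variants:

```python
import torch
import torch.nn as nn

class STMetaSketch(nn.Module):
    """Toy interpretation of a spatio-temporal meta-model: encode several
    temporal views (e.g. recent, daily, weekly) with interchangeable units,
    then merge them before a prediction head."""
    def __init__(self, hidden=64, n_views=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.GRU(1, hidden, batch_first=True) for _ in range(n_views)])
        self.head = nn.Linear(hidden, 1)

    def forward(self, views):
        # views: list of tensors, each (batch, seq_len, 1)
        outs = [enc(v)[1][-1] for enc, v in zip(self.encoders, views)]
        merged = torch.stack(outs, dim=0).mean(dim=0)   # simple average merge
        return self.head(merged)                        # next-step flow value
```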
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
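A hedged sketch of the implicit-alignment idea (embedding 3D coordinates into the token feature space so a plain transformer decoder can attend over concatenated camera and LiDAR tokens) could look like the following; the MLP, dimensions, and usage comment are assumptions, not CMT's actual modules:

```python
import torch
import torch.nn as nn

class CoordEncoding(nn.Module):
    """Rough sketch of implicit spatial alignment: embed 3D coordinates
    into the same feature space as image and LiDAR tokens so that a
    standard transformer decoder can attend over the concatenated set."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens, coords):
        # tokens: (batch, n_tokens, dim); coords: (batch, n_tokens, 3)
        return tokens + self.mlp(coords)

# Hypothetical usage: fuse camera and LiDAR tokens, then decode object queries.
# fused = torch.cat([enc(img_tok, img_xyz), enc(pts_tok, pts_xyz)], dim=1)
# boxes = transformer_decoder(queries, fused)   # placeholder decoder
```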
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
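One plausible reading of the style-aware adaptive transformer is a feed-forward layer whose channel-wise scaling is predicted from the style code; the sketch below illustrates that reading with assumed shapes and modulation form, not the paper's exact design:

```python
import torch
import torch.nn as nn

class StyleAwareFFN(nn.Module):
    """Sketch of style-aware adaptation: a style code predicts a per-channel
    scale applied inside the feed-forward layer, so the same decoder can
    produce differently styled facial animations."""
    def __init__(self, dim=256, hidden=1024, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.to_scale = nn.Linear(style_dim, hidden)

    def forward(self, x, style_code):
        # x: (batch, seq, dim); style_code: (batch, style_dim)
        scale = torch.sigmoid(self.to_scale(style_code)).unsqueeze(1)
        return self.fc2(torch.relu(self.fc1(x)) * scale)
```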
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Deploying reliable deep learning techniques in interdisciplinary applications needs learned models to output accurate and ({even more importantly}) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We have an opposite claim that explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in those neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network dubbed NeuroExplainer, with applications to uncover altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and respective discriminative representations to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) in network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer led to quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
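As a simplified illustration of coupling classification with explainability-oriented regularization, the following sketch adds only a sparsity term on the learned attention maps; the paper's fidelity and stability constraints are not reproduced here:

```python
import torch

def explainable_loss(logits, labels, attention, sparsity_weight=0.01):
    """Toy combination of a classification loss with a sparsity regularizer
    on learned attention maps, in the spirit of prior-guided constraints
    that implicitly encourage explainability metrics."""
    cls = torch.nn.functional.cross_entropy(logits, labels)
    sparsity = attention.abs().mean()        # encourage focused attention
    return cls + sparsity_weight * sparsity
```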
Domain adaptive detection aims to improve the generalization of detectors on the target domain. To reduce discrepancy in feature distributions between two domains, recent approaches achieve domain adaptation through feature alignment in different granularities via adversarial learning. However, they neglect the relationship between multiple granularities and different features in alignment, degrading detection. Addressing this, we introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning. The key is to encode the dependencies across different granularities, including pixel-, instance-, and category-levels, simultaneously to align two domains. Specifically, based on pixel-level features, we first develop an omni-scale gated fusion (OSGF) module to aggregate discriminative representations of instances with scale-aware convolutions, leading to robust multi-scale detection. Besides, we introduce multi-granularity discriminators to identify which domain, source or target, samples of different granularities come from. Note that MGA not only leverages instance discriminability in different categories but also exploits category consistency between two domains for detection. Furthermore, we present an adaptive exponential moving average (AEMA) strategy that explores model assessments for model update to improve pseudo labels and alleviate the local misalignment problem, boosting detection robustness. Extensive experiments on multiple domain adaptation scenarios validate the superiority of MGA over other approaches on FCOS and Faster R-CNN detectors. Code will be released at https://github.com/tiankongzhang/MGA.
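The AEMA strategy is described only at a high level; the sketch below shows one assumed form of an assessment-driven EMA update of a teacher model (the scheduling rule is illustrative, not the paper's formula):

```python
import torch

@torch.no_grad()
def adaptive_ema_update(teacher, student, base_momentum=0.999, assessment=1.0):
    """Sketch of an assessment-driven EMA update: the better the current
    student is judged (assessment in [0, 1], e.g. a validation-style score),
    the more weight it receives in the teacher update."""
    momentum = base_momentum * (1.0 - 0.1 * assessment)
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
```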
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta-generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
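A hedged sketch of the fine-tune-then-attack step might look as follows; `query_fn` stands in for the limited black-box feedback (here assumed differentiable, e.g. a surrogate-model loss), and the generator interface and update rule are assumptions rather than the paper's implementation:

```python
import copy
import torch

def finetune_meta_generator(meta_gen, x, query_fn, steps=5, lr=1e-3, eps=8 / 255):
    """Sketch of fine-tuning a meta-trained perturbation generator on a new
    benign example: clone the generator, take a few gradient steps against a
    (differentiable) attack loss, then emit a bounded perturbation."""
    gen = copy.deepcopy(meta_gen)
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):
        delta = eps * torch.tanh(gen(x))      # bounded perturbation
        loss = query_fn(x + delta)            # feedback from surrogate / target
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + eps * torch.tanh(gen(x))).clamp(0, 1)
```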