Gaze estimation underpins many visual tasks, yet the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel head-eye redirection parametric model based on a Neural Radiance Field, which enables dense gaze data generation with view consistency and accurate gaze direction. Moreover, our head-eye redirection parametric model decouples the face and eyes for separate neural rendering, so that face, identity, illumination, and eye gaze direction can each be controlled independently. Diverse 3D-aware gaze datasets can thus be obtained by manipulating the latent codes of the different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method in domain generalization and domain adaptation for gaze estimation tasks.
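To make the decoupled control concrete, below is a speculative PyTorch sketch of how separate latent codes could condition two radiance-field branches, one for the face and one for the eyes; the module names, latent dimensions, and the naive density-weighted composition are illustrative assumptions, not the paper's architecture.

```python
# Speculative sketch of decoupled latent conditioning for a head-eye radiance field;
# module names, latent dimensions, and the naive composition are illustrative only.
import torch
import torch.nn as nn


class ConditionalNeRFBranch(nn.Module):
    """Tiny conditional radiance field: (point, view dir, latent) -> (rgb, density)."""

    def __init__(self, latent_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                      # rgb (3) + density (1)
        )

    def forward(self, xyz, view_dir, latent):          # xyz, view_dir: (N, 3); latent: (1, D)
        latent = latent.expand(xyz.shape[0], -1)
        out = self.mlp(torch.cat([xyz, view_dir, latent], dim=-1))
        return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3:])


class HeadEyeRedirectionModel(nn.Module):
    """Separate branches for face and eyes: identity/illumination codes drive the
    face branch while the gaze code drives the eye branch."""

    def __init__(self, id_dim=64, light_dim=16, gaze_dim=3):
        super().__init__()
        self.face_branch = ConditionalNeRFBranch(id_dim + light_dim)
        self.eye_branch = ConditionalNeRFBranch(id_dim + gaze_dim)

    def forward(self, xyz, view_dir, z_id, z_light, z_gaze):
        rgb_f, sigma_f = self.face_branch(xyz, view_dir, torch.cat([z_id, z_light], -1))
        rgb_e, sigma_e = self.eye_branch(xyz, view_dir, torch.cat([z_id, z_gaze], -1))
        sigma = sigma_f + sigma_e                       # naive density-weighted blend
        rgb = (rgb_f * sigma_f + rgb_e * sigma_e) / (sigma + 1e-8)
        return rgb, sigma
```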
During the deployment of deep neural networks (DNNs) on edge devices, many research efforts are devoted to coping with limited hardware resources. However, little attention is paid to the influence of dynamic power management. Because edge devices typically run on a limited battery energy budget (rather than the nearly unlimited power available to servers or workstations), their dynamic power management often changes the execution frequency, as in the widely used dynamic voltage and frequency scaling (DVFS) technique. This leads to highly unstable inference speed, especially for computation-intensive DNN models, which can harm user experience and waste hardware resources. We first identify this problem and then propose All-in-One, a highly representative pruning framework that works with dynamic power management using DVFS. The framework uses only one set of model weights and soft masks (together with other auxiliary parameters of negligible storage) to represent multiple models of various pruning ratios. By re-configuring the model to the pruning ratio corresponding to a specific execution frequency (and voltage), we are able to achieve stable inference speed, i.e., keeping the difference in speed under various execution frequencies as small as possible. Our experiments demonstrate that our method not only achieves high accuracy for multiple models of different pruning ratios, but also reduces the variance of their inference latency across frequencies, with a minimal memory footprint of only one model and one soft mask.
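As a rough illustration of re-configuring one weight set and one soft mask to different pruning ratios at run time, here is a hedged PyTorch sketch; the layer name, the frequency-to-ratio table, and the thresholding rule are assumptions, and the paper's training procedure for the soft mask is not shown.

```python
# Hedged sketch: one weight tensor plus one learned soft mask, re-thresholded to
# a different pruning ratio depending on the current DVFS frequency.
import torch
import torch.nn as nn


class AllInOneLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.soft_mask = nn.Parameter(torch.zeros(out_features, in_features))
        self.pruning_ratio = 0.0  # chosen at run time from the current frequency

    def set_frequency(self, freq_mhz, freq_to_ratio):
        # e.g. freq_to_ratio = {600: 0.75, 1200: 0.5, 1800: 0.0}  (hypothetical table)
        self.pruning_ratio = freq_to_ratio[freq_mhz]

    def forward(self, x):
        if self.pruning_ratio > 0:
            # Prune the entries whose soft-mask scores are smallest.
            k = max(1, int(self.soft_mask.numel() * self.pruning_ratio))
            threshold = torch.kthvalue(self.soft_mask.flatten(), k).values
            hard_mask = (self.soft_mask > threshold).float()
        else:
            hard_mask = torch.ones_like(self.weight)
        return x @ (self.weight * hard_mask).t()
```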
How to effectively leverage the wealth of existing datasets to train a robust, high-performance model is of great significance for many practical applications. However, a model trained on a naive merge of different datasets tends to perform poorly due to annotation conflicts and domain divergence. In this paper, we attempt to train a unified model that is expected to perform well across domains on several popular segmentation datasets. We conduct a detailed analysis of the impact on model generalization from three aspects: data augmentation, training strategies, and model capacity. Based on this analysis, we propose a robust solution that improves model generalization across domains. Our solution ranks 2nd on the RVC 2022 semantic segmentation task, using a dataset only 1/3 the size of that used by the 1st-place model.
Vertical federated learning (VFL) is a trending solution for multi-party collaboration in training machine learning models. Industrial frameworks adopt secure multi-party computation methods such as homomorphic encryption to guarantee data security and privacy. However, a line of work has revealed that leakage risks remain in VFL. The leakage is caused by the correlation between the intermediate representations and the raw data. Due to the powerful approximation ability of deep neural networks, an adversary can capture this correlation precisely and reconstruct the data. To counter the threat of data reconstruction attacks, we propose a hashing-based VFL framework, called \textit{HashVFL}, which cuts off the reversibility directly. The one-way nature of hashing allows our framework to block all attempts to recover data from hash codes. However, integrating hashing also brings challenges, e.g., the loss of information. This paper identifies and addresses three challenges of integrating hashing: learnability, bit balance, and consistency. Experimental results demonstrate \textit{HashVFL}'s effectiveness in maintaining the main task's performance and defending against data reconstruction attacks. Furthermore, we analyze its potential value in detecting abnormal inputs. In addition, we conduct extensive experiments to demonstrate \textit{HashVFL}'s generalization in various settings. In summary, \textit{HashVFL} provides a new perspective on protecting multi-party data security and privacy in VFL. We hope our study will attract more researchers to expand the application domains of \textit{HashVFL}.
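To illustrate the hashing idea and two of the named challenges (learnability and bit balance), here is a hedged PyTorch sketch: sign-based binarization with a straight-through estimator keeps the party-side encoder trainable, and a simple regularizer pushes each bit toward a balanced distribution. The function names and the regularizer form are assumptions; the abstract does not specify HashVFL's exact design.

```python
# Hedged sketch of hashing a party-side embedding in VFL; names are illustrative.
import torch


class SignSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient through the non-differentiable sign.
        return grad_output


def hash_embedding(h: torch.Tensor) -> torch.Tensor:
    """h: (batch, bits) real-valued embedding -> {-1, +1} binary code."""
    return SignSTE.apply(h)


def bit_balance_loss(h: torch.Tensor) -> torch.Tensor:
    """Push each bit to be +1 on roughly half of the batch and -1 on the rest."""
    return h.mean(dim=0).pow(2).mean()
```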
Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion. In general, the parties share a common group of users but own different features. Existing VFL frameworks use cryptographic techniques to provide data privacy and security guarantees, which has led to a line of work studying computing efficiency and fast implementation. However, the security of VFL models remains underexplored.
Monocular 3D object detection is a low-cost but challenging task, as it requires generating accurate 3D localization solely from a single image. Recently developed depth-assisted methods show promising results by using explicit depth maps as intermediate features, which are either precomputed by monocular depth estimation networks or jointly estimated with 3D object detection. However, inevitable errors in the estimated depth priors may lead to misaligned semantic information and 3D localization, resulting in feature smearing and suboptimal predictions. To mitigate this issue, we propose ADD, an Attention-based Depth knowledge Distillation framework with 3D-aware positional encoding. Unlike previous knowledge distillation frameworks that adopt stereo- or LiDAR-based teachers, we build our teacher with an architecture identical to the student's but with extra ground-truth depth as input. Thanks to this teacher design, our framework is seamless, free of domain gaps, easy to implement, and compatible with object-wise ground-truth depth. Specifically, we leverage intermediate features and responses for knowledge distillation. Considering long-range 3D dependencies, we propose \emph{3D-aware self-attention} and \emph{target-aware cross-attention} modules for student adaptation. Extensive experiments verify the effectiveness of our framework on the challenging KITTI 3D object detection benchmark. We implement our framework on three representative monocular detectors and achieve state-of-the-art performance with no additional inference cost relative to the baseline models. Our code is available at https://github.com/rockywind/ADD.
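As a generic illustration of distilling both intermediate features and responses from a depth-privileged teacher to an image-only student, the sketch below combines a feature-imitation term with a softened-logit term; the loss form and hyperparameter names are simplifications, and the paper's 3D-aware attention modules are omitted, so this is not the actual ADD implementation.

```python
# Generic sketch of feature- and response-level distillation from a
# depth-privileged teacher to an image-only student.
import torch.nn.functional as F


def distillation_loss(student_feat, teacher_feat, student_logits, teacher_logits,
                      alpha=1.0, beta=1.0, temperature=2.0):
    # Intermediate-feature imitation; the teacher only supervises, so detach it.
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # Response-level distillation with temperature-softened logits.
    t = temperature
    resp_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * feat_loss + beta * resp_loss
```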
When reading a story, humans can rapidly understand new fictional characters from only a few observations, mainly by drawing analogies to fictional and real people they have met before. This reflects the few-shot and meta-learning essence of humans' inference of characters' mental states, i.e., humans' theory-of-mind (ToM), which is largely ignored in existing research. We fill this gap with a novel NLP benchmark, TOM-IN-AMC, the first assessment of models' ability to meta-learn ToM in a realistic narrative-understanding scenario. Our benchmark consists of $\sim$1,000 parsed movie scripts, each corresponding to a few-shot character-understanding task, and requires models to mimic humans' ability to quickly digest characters from a few opening scenes of a new movie. Our human study verifies that humans can solve our task by inferring characters' mental states based on movies they have previously seen, while state-of-the-art metric-learning and meta-learning approaches adapted to our task lag 30% behind.
As inversion methods have developed, the process has come to consist mainly of two steps. The first step is image embedding, in which an encoder or an optimization procedure embeds an image to obtain the corresponding latent code. The second step then aims to refine the inversion and editing results, which we call result refinement. Although the second step significantly improves fidelity, perception and editability barely change, since they depend heavily on the inverse latent code obtained in the first step. Therefore, a crucial question is how to obtain a latent code with better perception and editability while preserving reconstruction fidelity. In this work, we first point out that these two characteristics are related to the degree of alignment (or misalignment) of the inverse code with the synthesized distribution. We then propose the Latent Space Alignment Inversion Paradigm (LSAP), which consists of an evaluation metric and a corresponding solution. Specifically, we introduce the Normalized Style Space ($\mathcal{S^N}$ space) and the $\mathcal{S^N}$ Cosine Distance (SNCD) to measure the misalignment of inversion methods. Since the proposed SNCD is differentiable, it can be optimized in both encoder-based and optimization-based embedding methods, providing a unified solution. Extensive experiments across various domains demonstrate that SNCD effectively reflects perception and editability, and that our alignment paradigm achieves state-of-the-art results in both steps. Code is available at https://github.com/caopulan/ganinverter.
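The following is a hedged sketch of what an $\mathcal{S^N}$-style cosine distance could look like: style codes are normalized with statistics assumed to come from randomly synthesized samples and then compared with a cosine distance. The normalization scheme and the choice of reference code are assumptions; the paper's exact definition of SNCD may differ.

```python
# Hedged sketch of a normalized-style-space cosine distance; the normalization
# statistics and the reference code are assumptions for illustration.
import torch.nn.functional as F


def normalize_style(s, mean, std, eps=1e-8):
    """Map a style-space code into a normalized style space using statistics
    assumed to be collected from randomly synthesized latents."""
    return (s - mean) / (std + eps)


def sncd(s_inverted, s_reference, mean, std):
    """Cosine distance between normalized style codes; differentiable, so it can
    be added to the objective of encoder- or optimization-based embedding."""
    a = normalize_style(s_inverted, mean, std)
    b = normalize_style(s_reference, mean, std)
    return 1.0 - F.cosine_similarity(a, b, dim=-1).mean()
```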
Domain adaptation (DA) aims to transfer knowledge from a well-labeled source domain to facilitate learning on an unlabeled target domain. When turning to specific tasks such as indoor (Wi-Fi) localization, a cross-domain regressor must be learned to mitigate the domain shift. This paper proposes a novel method, the Adversarial Bi-directional Regressor Network (ABRNet), to obtain a more effective cross-domain regression model. Specifically, a discrepant bi-regressor architecture is developed to maximize the difference between the two regressors and thereby discover uncertain target instances that lie far from the source distribution, and an adversarial training mechanism is then adopted between the feature extractor and the dual regressors to produce domain-invariant representations. To further bridge the large domain gap, a domain-specific augmentation module is designed to synthesize source-similar and target-similar intermediate domains, gradually eliminating the original domain mismatch. Empirical studies on two cross-domain regression benchmarks illustrate the power of our method for solving the domain-adaptive regression (DAR) problem.
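To illustrate the adversarial bi-regressor idea (in the spirit of discrepancy-based adaptation), here is a hedged PyTorch sketch of one training iteration; the two-step schedule, loss form, and function names are assumptions rather than ABRNet's exact recipe, and the domain-specific augmentation module is omitted.

```python
# Illustrative sketch of adversarial training between a feature extractor and
# two regressors; the schedule and names are assumptions.
import torch.nn as nn


def regressor_discrepancy(y1, y2):
    """L1 disagreement between the two regressors' predictions."""
    return (y1 - y2).abs().mean()


def adversarial_step(feat_extractor, reg_a, reg_b, x_src, y_src, x_tgt,
                     opt_f, opt_r, criterion=nn.L1Loss()):
    # Step 1: with the feature extractor frozen, fit both regressors on labeled
    # source data while *maximizing* their disagreement on unlabeled target
    # samples, exposing target instances far from the source support.
    f_src = feat_extractor(x_src).detach()
    f_tgt = feat_extractor(x_tgt).detach()
    loss_r = (criterion(reg_a(f_src), y_src) + criterion(reg_b(f_src), y_src)
              - regressor_discrepancy(reg_a(f_tgt), reg_b(f_tgt)))
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

    # Step 2: with the regressors fixed, update the feature extractor to
    # *minimize* the disagreement, encouraging domain-invariant features.
    f_tgt = feat_extractor(x_tgt)
    loss_f = regressor_discrepancy(reg_a(f_tgt), reg_b(f_tgt))
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()
```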
Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By constructing classifiers (exits) with different resource demands, such networks can output easy samples at early exits, removing the need to execute deeper layers. Although existing works mainly focus on the architectural design of multi-exit networks, the training strategies for such models remain largely unexplored. Current state-of-the-art models treat all samples identically during training. However, the early-exiting behavior during testing is ignored, leading to a gap between training and testing. In this paper, we propose to bridge this gap through sample weighting. Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training the early classifiers, while the late classifiers should emphasize hard samples (which mostly exit from deeper layers). Our work adopts a weight prediction network to weight the loss of different training samples at each exit. This weight prediction network and the backbone model are jointly optimized under a meta-learning framework with a novel optimization objective. By bringing the adaptive behavior of inference into the training phase, we show that the proposed weighting mechanism consistently improves the trade-off between classification accuracy and inference efficiency. Code is available at https://github.com/leaplabthu/l2w-den.
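As a simplified illustration of per-sample, per-exit loss weighting, the sketch below feeds each sample's per-exit losses to a small weight prediction network and uses its outputs to re-weight the training loss; the weight network's inputs and the meta-learning bilevel optimization described in the paper are simplified away, so this is a hypothetical sketch rather than the released method.

```python
# Simplified sketch of per-sample, per-exit loss weighting for a multi-exit network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightPredictionNet(nn.Module):
    """Maps each sample's per-exit losses to a non-negative weight per exit."""

    def __init__(self, num_exits, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_exits, hidden), nn.ReLU(),
            nn.Linear(hidden, num_exits), nn.Softplus(),
        )

    def forward(self, per_exit_losses):              # (batch, num_exits)
        return self.net(per_exit_losses.detach())    # (batch, num_exits), >= 0


def weighted_multi_exit_loss(exit_logits, targets, weight_net):
    """exit_logits: list of (batch, num_classes) tensors, one per exit."""
    losses = torch.stack(
        [F.cross_entropy(logits, targets, reduction="none") for logits in exit_logits],
        dim=1,
    )                                                # (batch, num_exits)
    weights = weight_net(losses)                     # easy samples can be up-weighted at
    return (weights * losses).mean()                 # early exits, hard ones at late exits
```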