During image editing, existing deep generative models tend to re-synthesize the entire output from scratch, including the unedited regions. This leads to a significant waste of computation, especially for minor editing operations. In this work, we present Spatially Sparse Inference (SSI), a general-purpose technique that selectively performs computation for edited regions and accelerates various generative models, including both conditional GANs and diffusion models. Our key observation is that users tend to make gradual changes to the input image. This motivates us to cache and reuse the feature maps of the original image. Given an edited image, we sparsely apply the convolutional filters to the edited regions while reusing the cached features for the unedited regions. Based on our algorithm, we further propose Sparse Incremental Generative Engine (SIGE) to convert the computation reduction into latency reduction on off-the-shelf hardware. With edited regions covering only 1.2% of the image area, our method reduces the computation of DDIM by 7.5$\times$ and GauGAN by 18$\times$ while preserving visual fidelity. With SIGE, we accelerate DDIM by 3.0$\times$ on an RTX 3090 and 6.6$\times$ on an Apple M1 Pro CPU, and GauGAN by 4.2$\times$ on an RTX 3090 and 14$\times$ on an Apple M1 Pro CPU.
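The cache-and-reuse idea admits a compact illustration. Below is a minimal PyTorch sketch of tile-level sparse convolution, not the actual SIGE kernels: a change mask marks dirty blocks, the 3x3 convolution runs only on those blocks plus a one-pixel halo, and the cached output fills everything else. The block size, threshold, and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sparse_conv2d(x_edit, x_orig, y_cached, weight, bias, block=8, thresh=1e-3):
    """Tile-level sparse 3x3 convolution (stride 1, padding 1).
    Recomputes only the blocks where the edited input differs from the
    original; everywhere else the cached output is reused verbatim.
    Assumes H and W are multiples of `block`."""
    diff = (x_edit - x_orig).abs().amax(dim=1, keepdim=True)   # per-pixel change
    dirty = F.max_pool2d(diff, block) > thresh                 # (1, 1, H/b, W/b)
    y = y_cached.clone()
    H, W = x_edit.shape[2:]
    for bi, bj in dirty[0, 0].nonzero():
        i0, j0 = bi.item() * block, bj.item() * block
        # extract the block plus a 1-pixel halo so the 3x3 conv sees context
        i1, j1 = max(i0 - 1, 0), max(j0 - 1, 0)
        i2, j2 = min(i0 + block + 1, H), min(j0 + block + 1, W)
        patch = F.conv2d(x_edit[:, :, i1:i2, j1:j2], weight, bias, padding=1)
        # write back only the block interior (the halo is discarded)
        y[:, :, i0:i0 + block, j0:j0 + block] = \
            patch[:, :, i0 - i1:i0 - i1 + block, j0 - j1:j0 - j1 + block]
    return y
```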
Graph learning models are critical tools for researchers to explore graph-structured data. To train a capable graph learning model, conventional methods use sufficient training data to train a graph model on a single device. However, doing so is often infeasible in real scenarios due to privacy concerns. Federated learning offers a feasible solution to address such limitations by introducing various privacy-preserving mechanisms, such as differential privacy on graph edges. However, while differential privacy in federated graph learning protects the sensitive information maintained in graphs, it degrades the performance of graph learning models. In this paper, we investigate how to implement differential privacy on graph edges and observe the resulting performance degradation in experiments. We also note that differential privacy on graph edges introduces noise that perturbs graph proximity, which is one of the graph augmentations used in graph contrastive learning. Inspired by this, we propose to leverage graph contrastive learning to alleviate the performance drop caused by differential privacy. Extensive experiments with several representative graph models and widely used datasets show that contrastive learning indeed alleviates the performance degradation caused by differential privacy.
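For concreteness, one common edge-level mechanism of the kind the abstract describes is randomized response on the adjacency matrix; the NumPy sketch below illustrates that generic mechanism and is not claimed to be the paper's exact one.

```python
import numpy as np

def randomized_response_edges(adj, eps, seed=0):
    """Perturb a binary, symmetric adjacency matrix so that each edge
    satisfies eps-differential privacy: every (i, j) entry is flipped
    independently with probability 1 / (1 + e^eps)."""
    rng = np.random.default_rng(seed)
    p_flip = 1.0 / (1.0 + np.exp(eps))
    n = adj.shape[0]
    flips = np.triu(rng.random((n, n)) < p_flip, k=1)  # upper triangle only
    noisy = np.where(flips, 1 - adj, adj)
    upper = np.triu(noisy, k=1)
    return upper + upper.T                             # re-symmetrize, no self-loops
```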
Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to their high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge devices. Through our gradual shrinking experiments, we reveal that the high-resolution branches of HRNet are redundant for models in the low-computation region; removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple techniques to enhance its capacity, including Fusion Deconv Head and Large Kernel Convs. The Fusion Deconv Head removes the redundancy in high-resolution branches, allowing scale-aware feature fusion with low overhead. Large Kernel Convs significantly improve the model's capacity and receptive field while maintaining a low computational cost. With only a 25% computation increase, 7x7 kernels achieve +14.0 mAP over 3x3 kernels on the CrowdPose dataset. On mobile platforms, LitePose reduces latency by up to 5.0x compared with prior state-of-the-art efficient pose estimation models without sacrificing performance, pushing the frontier of real-time multi-person pose estimation on the edge. Our code and pre-trained models are released at https://github.com/mit-han-lab/litepose.
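As a sketch of the large-kernel ingredient, the PyTorch block below pairs a depthwise 7x7 convolution with a pointwise 1x1 convolution to enlarge the receptive field cheaply; LitePose's exact layer layout may differ, so treat the structure and names as assumptions.

```python
import torch.nn as nn

class LargeKernelConv(nn.Module):
    """Depthwise 7x7 + pointwise 1x1: a cheap way to enlarge the receptive
    field, of the kind the abstract credits for the +14.0 mAP gain."""
    def __init__(self, in_ch, out_ch, k=7):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```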
Conditional generative adversarial networks (cGANs) have enabled controllable image synthesis for many vision and graphics applications. However, recent cGANs are 1-2 orders of magnitude more computationally intensive than modern recognition CNNs. For example, GauGAN consumes 281G MACs per image, compared to 0.44G MACs for MobileNet-v3, making interactive deployment difficult. In this work, we propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs. Directly applying existing compression methods yields poor performance due to the difficulty of GAN training and the differences in generator architectures. We address these challenges in two ways. First, to stabilize GAN training, we transfer knowledge of multiple intermediate representations of the original model to its compressed model, unifying unpaired and paired learning. Second, instead of reusing existing CNN designs, our method finds efficient architectures via neural architecture search. To accelerate the search process, we decouple model training and search via weight sharing. Experiments demonstrate the effectiveness of our method across different supervision settings, network architectures, and learning methods. Without losing image quality, we reduce the computation of CycleGAN by 21x, Pix2pix by 12x, MUNIT by 29x, and GauGAN by 9x, paving the way for interactive image synthesis.
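The intermediate-representation transfer can be sketched as follows in PyTorch: each student feature map passes through a learned 1x1 projection and is matched to the teacher's feature map with an MSE loss. This is a generic rendering of the idea, not the paper's exact objective.

```python
import torch.nn as nn
import torch.nn.functional as F

def build_projections(student_channels, teacher_channels):
    """One learned 1x1 conv per matched layer, mapping the compressed
    generator's channel count to the teacher's."""
    return nn.ModuleList(nn.Conv2d(s, t, kernel_size=1)
                         for s, t in zip(student_channels, teacher_channels))

def distill_loss(student_feats, teacher_feats, projections):
    """Sum of MSEs between projected student features and teacher features,
    added to the usual GAN objective while training the compressed model."""
    return sum(F.mse_loss(p(fs), ft)
               for fs, ft, p in zip(student_feats, teacher_feats, projections))
```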
Incremental text-to-speech, also known as streaming TTS, has been increasingly applied to online speech applications that require ultra-low response latency to provide an optimal user experience. However, most of the existing speech synthesis pipelines deployed on GPUs are still non-incremental, which exposes limitations in high-concurrency scenarios, especially when the pipeline is built with end-to-end neural network models. To address this issue, we present a highly efficient approach to perform real-time incremental TTS on GPUs with Instant Request Pooling and Module-wise Dynamic Batching. Experimental results demonstrate that the proposed method can produce high-quality speech with a first-chunk latency below 80ms under 100 QPS on a single NVIDIA A10 GPU, and significantly outperforms its non-incremental counterpart in both concurrency and latency. Our work demonstrates the effectiveness of high-performance incremental TTS on GPUs.
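A loose sketch of module-wise dynamic batching: a worker drains whatever requests have arrived (up to a cap), runs the neural module once on the whole batch, and routes each output chunk back to its own stream. The queue-based structure, names, and timeout are assumptions for illustration, not the paper's implementation.

```python
import queue

def batching_worker(requests: queue.Queue, run_module, max_batch=16, wait_s=0.005):
    """Collect pending requests into one batch per module step, so a single
    batched GPU call serves many concurrent incremental streams."""
    while True:
        batch = [requests.get()]                 # block until the first request
        while len(batch) < max_batch:
            try:
                batch.append(requests.get(timeout=wait_s))
            except queue.Empty:
                break
        outputs = run_module([r["chunk"] for r in batch])   # one batched GPU call
        for req, out in zip(batch, outputs):
            req["on_chunk"](out)                 # deliver each chunk incrementally
```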
Constructing colorized point clouds from mobile laser scanning and images is a fundamental task in surveying and mapping, and an important prerequisite for building digital twins of smart cities. However, existing public datasets are either of relatively small scale or lack accurate geometric and color ground truth. This paper documents a multisensorial dataset named PolyU-BPCoMa that is uniquely positioned for mobile colorized mapping. The dataset combines 3D LiDAR, spherical imaging, GNSS, and IMU resources on a backpack platform. Color checker boards are pasted in each surveyed area as targets, and ground-truth data are collected by an advanced terrestrial laser scanner (TLS). 3D geometric and color information can be recovered from the colorized point clouds produced by the backpack system and the TLS, respectively. We thereby provide an opportunity to benchmark the mapping and colorization accuracy of mobile multisensorial systems simultaneously. The dataset is approximately 800 GB in size and covers both indoor and outdoor environments. The dataset and development kits are available at https://github.com/chenpengxin/PolyU-BPCoMa.git.
Seismic impedance inversion can be performed with a semi-supervised learning algorithm, which only needs a few well logs as labels and is less likely to overfit. However, classical semi-supervised learning algorithms usually lead to artifacts on the predicted impedance image. In this article, we improve semi-supervised learning in two ways. First, we improve the prediction accuracy by replacing the 1-D convolutional neural network (CNN) layers in the deep learning structure with 2-D CNN layers and 2-D max-pooling layers. Second, the prediction uncertainty can also be estimated by embedding the network into a Bayesian inference framework. The local reparameterization trick is used during the forward propagation of the network to reduce the sampling cost. Tests with the Marmousi2 model and the SEAM model validate the feasibility of the proposed strategy.
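The local reparameterization trick mentioned above samples pre-activations instead of weights, which lowers both the sampling cost and the gradient variance. A minimal PyTorch rendering for a Bayesian linear layer with a factorized Gaussian weight posterior:

```python
import torch

def bayesian_linear(x, w_mu, w_logvar, b_mu):
    """Local reparameterization: rather than sampling a weight matrix from
    the posterior N(w_mu, exp(w_logvar)), sample the pre-activations
    directly. Their mean and variance follow in closed form, so one noise
    draw per activation suffices."""
    act_mu = x @ w_mu.t() + b_mu                 # E[x W^T + b]
    act_var = (x ** 2) @ w_logvar.exp().t()      # Var[x W^T]
    return act_mu + act_var.sqrt() * torch.randn_like(act_mu)
```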
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
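One way to read "encoding the 3D points into multi-modal features" is a shared coordinate embedding added to both camera and LiDAR tokens; the PyTorch sketch below illustrates that reading and is our assumption, not CMT's actual module.

```python
import torch
import torch.nn as nn

class CoordEmbed(nn.Module):
    """Embed each token's 3D coordinates with a small MLP and add the
    result to the token, so camera and LiDAR tokens share one positional
    space without an explicit view transformation."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens, xyz):   # tokens: (N, dim), xyz: (N, 3)
        return tokens + self.mlp(xyz)
```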
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset, such that a model trained on this distilled dataset can attain performance comparable to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility; the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, which is where all previous attacks were performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
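A NAIVEATTACK-style poisoning step can be sketched in a few lines of NumPy: stamp a small trigger patch onto a fraction of the images and relabel them to the target class before distillation consumes them. The patch size, poison rate, and placement are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stamp_trigger(images, labels, target_class, patch=3, rate=0.1, seed=0):
    """Paste a small white square into the bottom-right corner of a random
    fraction of images (shape (N, C, H, W), values in [0, 1]) and relabel
    them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    poisoned, relabeled = images.copy(), labels.copy()
    poisoned[idx, :, -patch:, -patch:] = 1.0
    relabeled[idx] = target_class
    return poisoned, relabeled
```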
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility of deep models transferring knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT-3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT-3 against those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
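Finetuning a text LLM on MIDI presupposes some serialization of drum events into text. The sketch below shows one hypothetical encoding (quantized onset and pitch tokens); the token format is our assumption, not the paper's actual serialization.

```python
def drums_to_text(hits, step=0.125):
    """Serialize drum hits, given as (midi_pitch, onset_seconds) pairs, into
    a single whitespace-separated text line: the onset quantized to a fixed
    grid becomes a `t` token, the pitch a `p` token."""
    hits = sorted(hits, key=lambda h: h[1])
    return " ".join(f"t{round(on / step)}:p{pitch}" for pitch, on in hits)

# e.g. drums_to_text([(36, 0.0), (42, 0.0), (38, 0.5)]) -> "t0:p36 t0:p42 t4:p38"
```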