The cost efficiency of model inference is critical to real-world machine learning (ML) applications, especially for delay-sensitive tasks and resource-limited devices. A typical dilemma is: in order to provide complex intelligent services (e.g., smart city), we need inference results from multiple ML models, but the cost budget (e.g., GPU memory) is not enough to run all of them. In this work, we study the underlying relationships among black-box ML models and propose a novel learning task: model linking, which aims to bridge the knowledge of different black-box models by learning mappings (dubbed model links) between their output spaces. We propose a design of model links that supports linking heterogeneous black-box ML models. Also, to address the distribution discrepancy challenge, we present adaptation and aggregation methods for model links. Based on the proposed model links, we develop a scheduling algorithm named MLink. Through collaborative multi-model inference enabled by model links, MLink can improve the accuracy of the obtained inference results under the cost budget. We evaluate MLink on a multi-modal dataset with seven different ML models, as well as two real-world video analytics systems with 3,264 hours of video. Experimental results show that the proposed model links can be effectively built among various black-box models. Under a GPU memory budget, MLink can save 66.7% of inference computations while preserving 94% inference accuracy, outperforming multi-task learning, a reinforcement learning-based scheduler, and frame-filtering baselines.
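For intuition only (not the authors' implementation), a model link can be pictured as a small learned mapping from one black-box model's output space to another's. The sketch below invents a pair of toy output arrays and fits a linear link with NumPy; MLink's actual link design, adaptation, and aggregation are more involved.

import numpy as np

# Toy illustration of a "model link": learn a mapping from the output space of a
# source black-box model to that of a target black-box model, using only paired
# outputs observed on the same inputs (no access to model internals).
rng = np.random.default_rng(0)
src_outs = rng.normal(size=(1000, 16))                                              # source model outputs
tgt_outs = src_outs @ rng.normal(size=(16, 8)) + 0.1 * rng.normal(size=(1000, 8))   # target model outputs

# Fit a linear link W by least squares; a small MLP could serve the same role.
W, *_ = np.linalg.lstsq(src_outs, tgt_outs, rcond=None)

# At serving time only the source model runs; the target model's result is
# approximated through the link, saving its share of the cost budget.
pred_tgt = src_outs @ W
print("link mean squared error:", float(np.mean((pred_tgt - tgt_outs) ** 2)))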
We introduce a framework of equivariant convolutional algorithms tailored for a number of machine learning tasks on physical systems with arbitrary SU($d$) symmetries. It allows us to enhance a natural model of quantum computation -- permutational quantum computing (PQC) [Quantum Inf. Comput., 10, 470-497 (2010)] -- and define a more powerful model: PQC+. While PQC was shown to be effectively classically simulatable, we exhibit a problem that can be efficiently solved on a PQC+ machine, whereas the best known classical algorithms run in $O(n!\,n^2)$ time, thus providing strong evidence against PQC+ being classically simulatable. We further discuss practical quantum machine learning algorithms that can be carried out in the PQC+ paradigm.
We develop a theoretical framework for $S_n$-equivariant convolutional quantum circuits, building on and substantially generalizing Jordan's permutational quantum computing (PQC) formalism. We show that quantum circuits are a natural choice for Fourier-space neural architectures, affording a super-exponential speedup in computing the matrix elements of $S_n$-Fourier coefficients compared with the best known classical Fast Fourier Transform (FFT) over the symmetric group. In particular, we utilize the Okounkov-Vershik approach to prove Harrow's statement (Ph.D. thesis 2005, p. 160) on the equivalence between the $\operatorname{SU}(d)$- and $S_n$-irrep bases, and to establish the $S_n$-equivariant convolutional quantum alternating ansätze ($S_n$-CQA) using Young-Jucys-Murphy (YJM) elements. We prove that $S_n$-CQA is dense, and thus expressive within each $S_n$-irrep block, so that it may serve as a universal model for potential future quantum machine learning and optimization applications. Our approach provides another way to prove the universality of the Quantum Approximate Optimization Algorithm (QAOA) from a representation-theoretic point of view. Our framework can be naturally applied to a wide range of problems with global $\operatorname{SU}(d)$ symmetry. We present numerical simulations to showcase the effectiveness of the ansätze in finding the sign structure of the ground state of the $J_1$-$J_2$ antiferromagnetic Heisenberg model on rectangular and Kagome lattices. Our work identifies a quantum advantage for a specific machine learning problem and provides the first application of the celebrated Okounkov-Vershik representation theory to machine learning and quantum physics.
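For readers unfamiliar with the ingredients, the Young-Jucys-Murphy elements used in the $S_n$-CQA construction are standard objects of the group algebra $\mathbb{C}[S_n]$ (the definition below is textbook material, not specific to this paper's construction):

$$X_1 = 0, \qquad X_k = \sum_{i=1}^{k-1} (i\ k) \in \mathbb{C}[S_n], \qquad k = 2, \dots, n,$$

where $(i\ k)$ denotes the transposition of $i$ and $k$. The $X_k$ commute pairwise and act diagonally in the Young (Gelfand-Tsetlin) basis, which is what makes them natural generators for $S_n$-equivariant ansätze.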
The rapid development of quantum information technology shows promising opportunities for simulating quantum field theory on near-term quantum devices. In this work, we formulate the theory of (time-dependent) variational quantum simulation of the 1+1 dimensional $\lambda\phi^4$ quantum field theory, including encoding, state preparation, and time evolution, together with several numerical simulation results. These algorithms can be understood as near-term variational analogs of the Jordan-Lee-Preskill algorithm, the basic algorithm for simulating quantum field theory on universal quantum devices. Furthermore, we highlight the advantages of encoding in the harmonic oscillator basis based on the LSZ reduction formula, as well as several gains in computational efficiency, for example when implementing a bosonic version of the unitary coupled cluster ansatz to prepare the initial state. We also discuss how to circumvent the "spectral crowding" problem in quantum field theory simulation and assess our algorithms in terms of state and subspace fidelities.
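As background, a standard lattice form of the theory being simulated (not necessarily the exact encoding chosen in the paper) is the 1+1 dimensional $\lambda\phi^4$ Hamiltonian on $N$ spatial sites with lattice spacing $a$:

$$H = a \sum_{x=1}^{N} \left[ \frac{1}{2}\pi_x^2 + \frac{1}{2}\left(\frac{\phi_{x+1}-\phi_x}{a}\right)^2 + \frac{1}{2} m^2 \phi_x^2 + \frac{\lambda}{4!}\phi_x^4 \right], \qquad [\phi_x, \pi_y] = \frac{i}{a}\,\delta_{xy},$$

where each site's field pair $(\phi_x, \pi_x)$ is truncated to a finite basis (e.g., harmonic oscillator modes) before being mapped to qubits.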
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over the scratch MIM pre-training on ImageNet-1K classification, using all the ViT-Tiny, ViT-Small, and ViT-base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way for developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
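As a rough sketch of what distilling "token relations" can mean (an illustrative reading of the abstract, not the released TinyMIM code): teacher and student each produce per-token features, the pairwise token-similarity maps are softmax-normalized, and the student's map is matched to the teacher's with a KL-style loss. Because only the (tokens x tokens) maps are compared, the two models' feature widths may differ.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_relation(feats):
    # feats: (num_tokens, dim) token features from one Transformer layer.
    sim = feats @ feats.T / np.sqrt(feats.shape[-1])   # pairwise token similarity
    return softmax(sim, axis=-1)                       # row-normalized relation map

def relation_distill_loss(student_feats, teacher_feats, eps=1e-8):
    # KL divergence between teacher and student token-relation maps.
    s_rel, t_rel = token_relation(student_feats), token_relation(teacher_feats)
    return float(np.mean(np.sum(t_rel * (np.log(t_rel + eps) - np.log(s_rel + eps)), axis=-1)))

# toy usage: 197 tokens each, student width 192 vs. teacher width 768
rng = np.random.default_rng(0)
print(relation_distill_loss(rng.normal(size=(197, 192)), rng.normal(size=(197, 768))))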
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
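To make the threat model concrete, here is a minimal sketch of the NAIVEATTACK idea as described above (illustrative only; the iterative trigger updates of DOORPING are not shown): a fixed trigger patch is stamped onto a fraction of the original training images, those images are relabeled with the attacker's target class, and the poisoned set is then handed to any dataset distillation algorithm.

import numpy as np

def naive_poison(images, labels, target_class, poison_frac=0.1, patch_size=3, seed=0):
    # images: (N, H, W, C) floats in [0, 1]; labels: (N,) int class ids.
    # Stamp a white square trigger into the bottom-right corner of a random
    # subset of images and relabel them as the attacker-chosen target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    labels[idx] = target_class
    return images, labels

# toy usage on random data standing in for a real image dataset; the poisoned set
# would then be fed to the dataset distillation procedure, and models trained on
# the resulting synthetic data inherit the backdoor.
rng = np.random.default_rng(1)
imgs, labs = rng.random((100, 32, 32, 3)), rng.integers(0, 10, size=100)
poisoned_imgs, poisoned_labs = naive_poison(imgs, labs, target_class=0)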
Benefiting from its capability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms prevent existing algorithms from improving further. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph with specially designed Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls samples from the same cluster together and pushes samples from other clusters apart by maximizing the cross-view cosine similarity of positive pairs and minimizing that of negative pairs. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
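A minimal sketch of the loss described above, under our own reading of the abstract (not the authors' code, and with positives simplified to the same node across views): cross-view embeddings of nodes sharing a high-confidence cluster are pulled together, while the centers of the other high-confidence clusters serve as negatives, all measured with cosine similarity.

import numpy as np

def cosine(a, b, eps=1e-8):
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps)

def cluster_guided_loss(z1, z2, assign, centers1, centers2):
    # z1, z2: (N, D) node embeddings from the two Siamese views.
    # assign: (N,) high-confidence cluster id per node; centers1/centers2: (K, D) view-wise centers.
    pos = cosine(z1, z2).mean()                        # same node across views as positive pair
    neg = 0.0
    for k in range(len(centers1)):                     # other clusters' centers as negatives
        mask = assign != k
        neg += cosine(z1[mask], centers2[k][None, :]).mean()
        neg += cosine(z2[mask], centers1[k][None, :]).mean()
    return -pos + neg / (2 * len(centers1))            # maximize positive, minimize negative similarity

# toy usage with random embeddings and 4 high-confidence clusters
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(40, 8)), rng.normal(size=(40, 8))
assign = rng.integers(0, 4, size=40)
c1 = np.stack([z1[assign == k].mean(0) for k in range(4)])
c2 = np.stack([z2[assign == k].mean(0) for k in range(4)])
print(cluster_guided_loss(z1, z2, assign, c1, c2))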
As one of the prevalent methods for building automation systems, Imitation Learning (IL) delivers promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, i.e., environment returns, as the coefficients to build an importance map. We also conduct experiments to investigate three major questions: whether frames are equally important, how effective the importance map is, and how importance maps from different IL models are related. The results show that R2RISE successfully distinguishes important frames within the demonstrations.
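A minimal sketch of the RISE-style accumulation that R2RISE builds on (under assumed interfaces, not the authors' implementation): random binary masks keep subsets of demonstration frames, the black-box IL model is retrained on each masked set, and the achieved environment return weights each mask's contribution to a per-frame importance map. The retrain_and_eval callback below is a hypothetical stand-in for that retraining-and-rollout step.

import numpy as np

def r2rise_importance(num_frames, retrain_and_eval, num_masks=100, keep_prob=0.5, seed=0):
    # retrain_and_eval(mask) is assumed to retrain the black-box IL model on the
    # demonstration frames where mask == 1 and return the resulting environment return.
    rng = np.random.default_rng(seed)
    importance = np.zeros(num_frames)
    counts = np.zeros(num_frames)
    for _ in range(num_masks):
        mask = (rng.random(num_frames) < keep_prob).astype(float)
        ret = retrain_and_eval(mask)            # policy performance as the coefficient
        importance += ret * mask
        counts += mask
    return importance / np.maximum(counts, 1)   # higher value = frame matters more

# toy usage with a stand-in evaluator that pretends only the first 10 frames matter
imp = r2rise_importance(50, lambda m: float(m[:10].sum()))
print(imp[:10].mean(), imp[10:].mean())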
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such datasets are usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task. The Transformer is required to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer thereby learns to distill transformation-invariant features from the perturbed tokens, simultaneously measuring difficulty and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
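For intuition, the online-predicts-target objective that BOLT shares with BYOL-style self-supervision can be sketched as follows (a generic formulation under assumed shapes, not the paper's exact implementation; the auxiliary difficulty-ranking head is omitted):

import numpy as np

def l2_normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def bootstrap_loss(online_pred, target_proj):
    # online_pred: online-branch prediction of the target representation for the
    # same patch-embedding tokens under a different perturbation.
    # target_proj: target-branch projection (treated as a constant, no gradient).
    p, t = l2_normalize(online_pred), l2_normalize(target_proj)
    return float(np.mean(np.sum((p - t) ** 2, axis=-1)))   # equals 2 - 2 * cosine similarity

# toy usage; in BYOL-style training the target branch is typically updated as an
# exponential moving average of the online branch rather than by gradient descent.
rng = np.random.default_rng(0)
print(bootstrap_loss(rng.normal(size=(196, 256)), rng.normal(size=(196, 256))))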