Contrastive Language-Image Pre-trained (CLIP) models have the zero-shot ability to classify an image as belonging to "[CLASS]" by using the similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]". Thanks to exhaustive text cues in "[CONTEXT]", the CLIP model is aware of different contexts, e.g., background, style, and viewpoint, and exhibits unprecedented robustness against a wide range of distribution shifts. However, recent works find that further fine-tuning of CLIP models improves accuracy but sacrifices robustness on downstream tasks. We conduct an empirical investigation showing that fine-tuning corrupts the context-aware ability of pre-trained CLIP features. To solve this problem, we propose Context-Aware Robust Fine-tuning (CAR-FT). CAR-FT regularizes the model during fine-tuning to capture the context information. Specifically, we use zero-shot prompt weights to obtain the context distribution contained in an image. By minimizing the Kullback-Leibler Divergence (KLD) between the context distributions induced by the original and fine-tuned CLIP models, CAR-FT allows the context-aware ability of CLIP to be inherited by downstream tasks, and achieves both higher In-Distribution (ID) and Out-Of-Distribution (OOD) accuracy. The experimental results show that CAR-FT achieves superior robustness on five OOD test datasets of ImageNet, while bringing accuracy gains on nine downstream tasks. Additionally, CAR-FT surpasses previous Domain Generalization (DG) methods and reaches 78.5% average accuracy on the DomainBed benchmark, establishing a new state of the art.
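For intuition, the regularizer described in this abstract could look roughly like the following (a minimal PyTorch-style sketch, not the authors' released implementation; the prompt/weight tensors, temperature, and loss weight are assumptions):

```python
import torch
import torch.nn.functional as F

def car_ft_loss(image, labels, ft_encoder, frozen_encoder, context_prompts,
                class_weights, tau=0.01, lam=1.0):
    """Illustrative CAR-FT-style objective: task loss plus a KLD penalty between
    the context distributions of the fine-tuned and original image features."""
    feat_ft = F.normalize(ft_encoder(image), dim=-1)           # fine-tuned CLIP features
    with torch.no_grad():
        feat_zs = F.normalize(frozen_encoder(image), dim=-1)   # original (frozen) CLIP features

    # Context distributions induced by zero-shot prompt (context) weights.
    log_p_ft = F.log_softmax(feat_ft @ context_prompts.t() / tau, dim=-1)
    p_zs = F.softmax(feat_zs @ context_prompts.t() / tau, dim=-1)
    kld = F.kl_div(log_p_ft, p_zs, reduction="batchmean")      # divergence between the two context distributions

    # Standard classification loss on the downstream task.
    logits = feat_ft @ class_weights.t() / tau
    return F.cross_entropy(logits, labels) + lam * kld
```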
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and ask the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Specializing Directed Acyclic Graph Federated Learning (SDAGFL) is a new federated learning framework that updates models from devices through Directed Acyclic Graph Distributed Ledger Technology (DAG-DLT). SDAGFL has the advantages of personalization and of resisting single points of failure and poisoning attacks in fully decentralized federated learning. Owing to these advantages, SDAGFL is suitable for federated learning in IoT scenarios where devices are usually battery-powered. To promote the application of SDAGFL in IoT, we propose ESDAGFL, an energy-optimized SDAGFL based on an event-triggered communication mechanism. In ESDAGFL, a new model is broadcast only when it has changed significantly. We evaluate ESDAGFL on a clustered synthetic FEMNIST dataset and a dataset built from the works of Shakespeare and Goethe. The experimental results show that, compared with SDAGFL, our approach can reduce energy consumption by 33% while achieving the same balance between training accuracy and specialization as SDAGFL.
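The event-triggered broadcast rule mentioned above might be sketched as follows (purely illustrative; the relative-change criterion, threshold, and function names are assumptions, not taken from the paper):

```python
import numpy as np

def should_broadcast(new_params, last_broadcast_params, threshold=0.05):
    """Broadcast only when the local model has changed significantly
    since the last broadcast (event-triggered communication)."""
    delta = np.linalg.norm(new_params - last_broadcast_params)
    scale = np.linalg.norm(last_broadcast_params) + 1e-12
    return delta / scale > threshold

# Hypothetical usage inside a local training loop:
# if should_broadcast(model_vec, last_sent_vec):
#     publish_to_dag(model_vec)          # assumed DAG-DLT publish call
#     last_sent_vec = model_vec.copy()
```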
Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, and thus precise localization depends highly on the context formed by their surrounding areas. In addition, the required precision is usually higher than that of segmentation and object detection tasks. Therefore, localization has its own unique challenges, different from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is utilized to learn contextualized features at different scales. Then, an attentive fusion module is adopted to aggregate multi-scale features, which consists of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module which integrates the multi-ROI features and non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
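As a rough illustration of attention-based fusion of multi-ROI and non-ROI features, a minimal PyTorch sketch could look like the following (the module structure, dimensions, and names are assumptions and do not reproduce the ZIAN architecture):

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Weight and merge features from multiple ROIs plus a non-ROI (global) branch."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # scalar attention score per branch

    def forward(self, roi_feats, global_feat):
        # roi_feats: (B, num_rois, dim), global_feat: (B, dim)
        feats = torch.cat([roi_feats, global_feat.unsqueeze(1)], dim=1)  # (B, R+1, dim)
        weights = torch.softmax(self.score(feats), dim=1)                # (B, R+1, 1)
        return (weights * feats).sum(dim=1)                              # fused feature (B, dim)
```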
Adversarial Training (AT) is commonly accepted as one of the most effective ways to defend against adversarial examples, yet it may largely harm standard performance and thus has limited usefulness for industrial-scale production and applications. Surprisingly, this phenomenon is completely the opposite in Natural Language Processing (NLP) tasks, where AT can even benefit generalization. We notice that the merit of AT in NLP tasks may come from the discrete and symbolic input space. To borrow the advantages of NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages VQGAN to reform the image data into discrete text-like inputs, i.e., visual words. It then minimizes the maximal risk on such discrete images with symbolic adversarial perturbations. We further provide an explanation from the perspective of distributions to demonstrate the effectiveness of DAT. As a plug-and-play technique for enhancing visual representations, DAT achieves significant improvements on multiple tasks, including image classification, object detection, and self-supervised learning. In particular, a model pre-trained with Masked Auto-Encoding (MAE) and fine-tuned by our DAT, without extra data, obtains 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on Stylized-ImageNet, establishing a new state of the art. The code will be available at https://github.com/alibaba/easyrobust.
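A very rough sketch of the min-max idea on discrete visual-word inputs follows (assumed pseudocode only; the perturbation rule, the tokenizer/decoder interfaces, and all names are hypothetical and do not reflect the released implementation):

```python
import torch
import torch.nn.functional as F

def discrete_adv_step(model, tokenizer, decoder, images, labels, eps=0.03):
    """One illustrative step: perturb the image, re-quantize it into discrete
    visual words (inner maximization), then train on the decoded result (outer minimization)."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]

    # Adversarial direction in pixel space, mapped back through the VQ codebook
    # so the perturbation stays on the discrete "visual word" manifold.
    adv = images + eps * grad.sign()
    codes = tokenizer(adv)                  # quantize to discrete visual words
    adv_images = decoder(codes)             # decode tokens back to an image

    return F.cross_entropy(model(adv_images.detach()), labels)
```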
Federated learning (FL) is a new distributed machine learning framework that enables reliable collaborative training without collecting users' private data. However, due to FL's frequent communication and averaging aggregation strategy, it faces challenges in dealing with statistically diverse data and large-scale models. In this paper, we propose a personalized FL framework, named Tensor Decomposition based Personalized Federated learning (TDPFed), in which we design a novel tensorized local model with tensorized linear and convolutional layers to reduce the communication cost. TDPFed uses a bi-level loss function to decouple personalized model optimization from global model learning by controlling the gap between the personalized model and the tensorized local model. Moreover, an effective distributed learning strategy and two different model aggregation strategies are designed for the proposed TDPFed framework. Theoretical convergence analysis and thorough experiments demonstrate that our proposed TDPFed framework achieves state-of-the-art performance while reducing the communication cost.
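Illustratively, the bi-level coupling between the personalized model and the tensorized local model could be written as below (a sketch under assumed notation: the proximal weight, the cross-entropy task loss, and the assumption that tensorized layers are first reconstructed to full-weight form so parameters align are all hypothetical):

```python
import torch
import torch.nn.functional as F

def tdpfed_personalized_loss(personal_model, tensorized_model, x, y, mu=0.1):
    """Task loss of the personalized model plus a proximal term that keeps it
    close to the communication-efficient tensorized local model."""
    task_loss = F.cross_entropy(personal_model(x), y)
    # Assumes tensorized_model exposes reconstructed full-shape parameters.
    gap = sum((p - q.detach()).pow(2).sum()
              for p, q in zip(personal_model.parameters(),
                              tensorized_model.parameters()))
    return task_loss + (mu / 2) * gap
```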
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. This challenge includes two tracks. Track 1 targets the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results for Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
There is a growing demand for shifting AI capabilities from data centers in the cloud to edge or end devices, exemplified by fast real-time AI applications running on smartphones, AR/VR devices, autonomous vehicles, and various IoT devices. The shift has, however, been seriously hampered by the large and growing gap between DNN computing demands and the computing power available on edge or end devices. This paper presents the design of XGen, an optimization framework for DNNs that aims to bridge this gap. XGen takes cross-cutting co-design as its first-order consideration. Its full-stack AI-oriented optimizations consist of a number of innovative optimizations at every layer of the DNN software stack, all designed in a cooperative manner. This unique technology enables XGen to optimize various DNNs, including those with extreme depth (e.g., BERT, GPT, and other Transformers), and to generate code that runs several times faster than code from existing DNN frameworks, while delivering the same level of accuracy.
In this paper, we deal with a general distributed constrained online learning problem with privacy over time-varying networks, where a class of non-decomposable objective functions is considered. Under this setting, each node only controls a part of the global decision variable, and the goal of all nodes is to collaboratively minimize the global objective over a time horizon while guaranteeing the security of the transmitted information. For such problems, we first design a novel generic algorithm framework, named DPSDA, for differentially private distributed online learning using the Laplace mechanism and stochastic variants of the dual averaging method. We then propose two algorithms, named DPSDA-C and DPSDA-PS, under this framework. Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex, which matches the best utility achieved by cutting-edge algorithms. Finally, numerical experiments on both real-world and randomly generated datasets verify the effectiveness of our algorithms.
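For a sense of the mechanism involved, a single dual-averaging update with Laplace noise on the shared gradient information might look like the sketch below (purely illustrative; the step-size rule, sensitivity, and projection are assumptions, not the DPSDA-C/DPSDA-PS algorithms themselves):

```python
import numpy as np

def dp_dual_averaging_step(z, grad, t, epsilon, sensitivity, project, alpha=1.0):
    """One illustrative differentially private dual-averaging update:
    the gradient is perturbed with Laplace noise before being accumulated/shared."""
    noisy_grad = grad + np.random.laplace(scale=sensitivity / epsilon, size=grad.shape)
    z = z + noisy_grad                        # accumulate the (noisy) dual variable
    x = project(-z / (alpha * np.sqrt(t)))    # primal step onto the constraint set
    return z, x

# Hypothetical usage: project = lambda v: np.clip(v, -1.0, 1.0)
```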
Infrared small target detection is one of the most difficult problems in the field of computer vision due to the complex backgrounds and noise of infrared images. In most existing studies, semantic segmentation methods are typically used to obtain better results, and the centroid of each target is computed from the segmentation map as the detection result. In contrast, in this paper we propose a novel end-to-end framework for infrared small target detection and segmentation. First, by using UNet as the backbone to preserve resolution and semantic information, our model can achieve higher detection accuracy than other state-of-the-art methods by attaching a simple anchor-free head. Then, a pyramid pooling module is used to further extract features and improve the precision of target segmentation. Next, we use the semantic segmentation task, which pays more attention to pixel-level features, to assist the training process of object detection, which improves the average precision and allows the model to detect some previously undetectable targets. Furthermore, we develop a multi-task framework for infrared small target detection and segmentation. Compared with a composite single-task model, our multi-task learning model reduces the complexity by nearly half and speeds up inference by nearly two times while maintaining accuracy. The code and models are publicly available at https://github.com/chenastron/mtunet.
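As a minimal sketch of the multi-task idea (shared backbone, detection head plus segmentation head), a joint objective could be combined as below; the specific loss terms, weights, and names are assumptions rather than the released MTUNet code:

```python
import torch
import torch.nn.functional as F

def multitask_loss(det_out, det_target, seg_out, seg_mask, w_det=1.0, w_seg=1.0):
    """Joint objective for a shared-backbone model: an anchor-free detection branch
    and a pixel-level segmentation branch that guides feature learning."""
    det_loss = F.l1_loss(det_out, det_target)                         # placeholder box/center regression loss
    seg_loss = F.binary_cross_entropy_with_logits(seg_out, seg_mask)  # pixel-wise mask loss
    return w_det * det_loss + w_seg * seg_loss
```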