Time series anomaly detection aims to uncover potential abnormal behaviors and patterns in temporal data, and it is of fundamental importance in diverse application scenarios. Constructing an effective detection model usually requires adequate training data stored in a centralized manner; however, this requirement sometimes cannot be satisfied in realistic scenarios. As a prevailing approach to this problem, federated learning has demonstrated its power to exploit distributed data while protecting the privacy of data providers. However, it remains unclear how existing time series anomaly detection algorithms perform with decentralized data storage and privacy protection through federated learning. To study this, we construct a federated time series anomaly detection benchmark, named FedTADBench, which involves five representative time series anomaly detection algorithms and four popular federated learning methods. We aim to answer the following questions: (1) How do time series anomaly detection algorithms perform when combined with federated learning? (2) Which federated learning method is the most appropriate for time series anomaly detection? (3) How do federated time series anomaly detection approaches perform under different data partitions across clients? Numerous results and corresponding analyses are provided from extensive experiments with various settings. The source code of our benchmark is publicly available at https://github.com/fanxingliu2020/FedTADBench.
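For context, a minimal sketch of the FedAvg-style server aggregation that such a benchmark exercises in every communication round is shown below; the function name and the use of PyTorch-style state dicts are illustrative assumptions, not code from FedTADBench.

```python
import copy

def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client model weights: one FedAvg communication round.

    client_states: list of PyTorch-style state_dicts returned by the clients
    client_sizes:  number of local training samples per client
    """
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        # Each client's local anomaly detector contributes in proportion to its data size.
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```

In a benchmark round, each client would train its local detector (e.g., an autoencoder-style model) on its private time series, send only the weights to the server, and load the aggregated result back before the next round.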
Ensemble learning serves as a straightforward way to improve the performance of almost any machine learning algorithm. Existing deep ensemble methods usually naively train many different models and then aggregate their predictions. In our view, this is suboptimal in two respects: i) naively training multiple models adds a considerable computational burden, especially in the deep learning era; ii) purely optimizing each base model without considering their interactions limits the diversity of the ensemble and the performance gains. We tackle these issues by proposing deep negative correlation classification (DNCC), in which the accuracy-diversity trade-off is systematically controlled by seamlessly decomposing the loss function into individual accuracy and the correlation between individual models and the ensemble. DNCC yields a deep classification ensemble in which the individual estimators are both accurate and negatively correlated. Thanks to the optimized diversity, DNCC works well even when utilizing a shared network backbone, which significantly improves its efficiency compared with most existing ensemble systems. Extensive experiments on multiple benchmark datasets and network structures demonstrate the superiority of the proposed method.
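To make the accuracy/diversity decomposition concrete, here is a minimal sketch of a negative-correlation-style ensemble loss in PyTorch; it follows the classic negative correlation learning penalty and is only an approximation of the idea, not DNCC's exact loss.

```python
import torch
import torch.nn.functional as F

def negative_correlation_loss(logits_list, target, lam=0.5):
    """Each ensemble member is trained for accuracy while being pushed away
    from the ensemble mean, which encourages diversity (a sketch, not DNCC itself)."""
    probs = [F.softmax(z, dim=1) for z in logits_list]
    ens = torch.stack(probs).mean(dim=0)              # ensemble prediction
    loss = 0.0
    for z, p in zip(logits_list, probs):
        acc_term = F.cross_entropy(z, target)          # individual accuracy
        # Classic NCL penalty: (p_i - ens) * sum_{j != i}(p_j - ens) = -(p_i - ens)^2
        div_term = -((p - ens) ** 2).sum(dim=1).mean()
        loss = loss + acc_term + lam * div_term
    return loss / len(logits_list)
```

With a shared backbone, `logits_list` would come from lightweight heads on top of the same features, which is where the efficiency gain over independently trained ensembles comes from.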
Copy-Paste is a simple and effective data augmentation strategy for instance segmentation. By randomly pasting object instances onto new background images, it creates new training data for free and significantly boosts the segmentation performance, especially for rare object categories. Although using diverse, high-quality object instances in Copy-Paste yields larger performance gains, previous works obtain object instances either from human-annotated instance segmentation datasets or by rendering 3D object models, and both approaches are too expensive to scale up for good diversity. In this paper, we revisit Copy-Paste at scale with the power of newly emerged zero-shot recognition models (e.g., CLIP) and text2image models (e.g., StableDiffusion). We demonstrate for the first time that using a text2image model to generate images, or a zero-shot recognition model to filter noisily crawled images, for different object categories is a feasible way to make Copy-Paste truly scalable. To make such success happen, we design a data acquisition and processing framework, dubbed "X-Paste", upon which a systematic study is conducted. On the LVIS dataset, X-Paste provides impressive improvements over the strong baseline CenterNet2 with Swin-L as the backbone. Specifically, it achieves +2.6 box AP and +2.1 mask AP gains on all classes, and even more significant gains of +6.8 box AP and +6.5 mask AP on long-tail classes.
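For illustration, the core Copy-Paste operation amounts to alpha-blending a segmented instance onto a new background image; the sketch below (NumPy, hypothetical function name) shows only that step in isolation, omitting X-Paste's generation, filtering, and annotation-update stages.

```python
import numpy as np

def paste_instance(background, instance_rgb, instance_mask, x, y):
    """Alpha-blend one segmented object onto a background image at (x, y).

    background:    HxWx3 uint8 image (modified in place)
    instance_rgb:  hxwx3 object crop
    instance_mask: hxw mask in [0, 1] (binary or soft edges)
    Assumes the pasted region fits inside the background.
    """
    h, w = instance_mask.shape
    region = background[y:y + h, x:x + w].astype(np.float32)
    alpha = instance_mask[..., None].astype(np.float32)
    blended = alpha * instance_rgb.astype(np.float32) + (1.0 - alpha) * region
    background[y:y + h, x:x + w] = blended.astype(background.dtype)
    return background
```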
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 60 FPS rate when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
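As an illustration of the kind of INT8 pipeline such a challenge targets, the sketch below applies full-integer post-training quantization with the TensorFlow Lite converter; the saved-model path and calibration data are placeholders, and participants' actual solutions may have used different tooling or quantization-aware training.

```python
import numpy as np
import tensorflow as tf

# A sketch of full-integer (INT8) post-training quantization; paths/shapes are placeholders.
converter = tf.lite.TFLiteConverter.from_saved_model("sr_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Calibration patches; random data stands in for low-resolution DIV2K crops.
    for _ in range(100):
        yield [np.random.rand(1, 180, 320, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```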
Deep learning systems have been reported to achieve state-of-the-art performance in many applications, and a key ingredient is the existence of well-trained classifiers on benchmark datasets. As a mainstream loss function, cross entropy can easily lead us to models that exhibit severe overfitting. In this paper, we show that the existing cross entropy loss minimization problem essentially learns the label conditional entropy (CE) of the underlying data distribution of the dataset. However, the conditional entropy learned in this way does not characterize well the information shared by the label and the input. In this paper, we propose a mutual information learning framework in which we train deep neural network classifiers by learning the mutual information between the label and the input. Theoretically, we give a lower bound on the population classification error in terms of the mutual information. In addition, we derive lower and upper bounds on the mutual information for a concrete binary classification data model in $\mathbb{R}^n$, as well as a lower bound on the error probability in this scenario. Empirically, we conduct extensive experiments on several benchmark datasets to support our theory. The mutual information learned classifiers (MILCs) achieve better generalization performance than the conditional entropy learned classifiers (CELCs), and the improvement can exceed 10% in terms of test accuracy.
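As a rough illustration of the objective, the sketch below uses the standard variational relation I(X;Y) >= H(Y) - cross-entropy: the label entropy is estimated from the batch's empirical label distribution, and the conditional entropy H(Y|X) is upper-bounded by the model's cross-entropy. This is a generic surrogate for the idea, not the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def mutual_information_surrogate(logits, target, num_classes):
    """Lower bound on I(X; Y) = H(Y) - H(Y|X), estimated on a batch (a sketch only)."""
    label_freq = torch.bincount(target, minlength=num_classes).float()
    label_dist = label_freq / label_freq.sum()
    h_y = -(label_dist * torch.log(label_dist.clamp_min(1e-12))).sum()  # empirical H(Y)
    h_y_given_x = F.cross_entropy(logits, target)                       # upper bound on H(Y|X)
    return h_y - h_y_given_x   # maximize this to train the classifier
```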
This paper presents OmniVL, a new foundation model designed to support both image-language and video-language tasks with one universal architecture. It adopts a unified transformer-based visual encoder for both image and video inputs, and can thus perform joint image-language and video-language pretraining. We demonstrate, for the first time, that such a paradigm benefits both image and video tasks, as opposed to the conventional one-directional transfer (e.g., using image-language to help video-language). To this end, we propose decoupled joint pretraining of image-language and video-language, which effectively decomposes the vision-language modeling into spatial and temporal dimensions and obtains performance gains on both image and video tasks. Moreover, we introduce a novel unified vision-language contrastive (UniVLC) loss to leverage image-text, video-text, image-label (e.g., image classification), and video-label (e.g., video action recognition) data together, so that both supervised and noisily supervised pretraining data are utilized as much as possible. Without extra task-specific adapters, OmniVL can simultaneously support vision-only tasks (e.g., image classification, video action recognition), cross-modal alignment tasks (e.g., image/video-text retrieval), and multi-modal understanding and generation tasks (e.g., image/video question answering, captioning). We evaluate OmniVL on a wide range of downstream tasks and achieve state-of-the-art or competitive results with similar model size and data scale.
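As a reference point, a generic symmetric contrastive loss over matched visual/text embeddings is sketched below; UniVLC extends this idea to also exploit label supervision, so the code is only a simplified illustration rather than OmniVL's actual loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(visual_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of aligned visual/text embeddings (a sketch)."""
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature                       # similarity of every pair in the batch
    targets = torch.arange(v.size(0), device=v.device)     # the diagonal holds the true pairs
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```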
Vision Transformers (ViTs) have recently dominated a range of computer vision tasks, yet they suffer from low training data efficiency and weak local semantic representation capability without appropriate inductive biases. Convolutional neural networks (CNNs) inherently capture region-aware semantics, which has inspired researchers to introduce CNNs back into the architecture of ViTs to provide them with desirable inductive biases. However, is the locality achieved by the micro-level CNNs embedded in ViTs good enough? In this paper, we investigate this question by deeply exploring how the macro-level architecture of hybrid CNNs/ViTs enhances the performance of hierarchical ViTs. In particular, we study the role of the token embedding layer, alias convolutional embedding (CE), and systematically reveal how CE injects desirable inductive biases into ViTs. In addition, we apply the optimal CE configuration to four recently released state-of-the-art ViTs, effectively boosting their performance. Finally, a family of efficient hybrid CNN/ViT models, dubbed CETNet, is released, which can serve as generic vision backbones. Specifically, CETNet achieves 84.9% top-1 accuracy on ImageNet-1K (trained from scratch), 48.6% box mAP on the COCO benchmark, and 51.6% mIoU on ADE20K, significantly improving the performance of the corresponding state-of-the-art baselines.
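A minimal sketch of a convolutional token-embedding stem is shown below to illustrate what CE refers to; the exact kernel sizes, strides, and normalization in CETNet may differ.

```python
import torch.nn as nn

class ConvEmbedding(nn.Module):
    """Convolutional token embedding: a small conv stem that replaces naive patchification
    (a sketch of the general idea, not CETNet's exact configuration)."""
    def __init__(self, in_chans=3, embed_dim=96):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_chans, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim // 2),
            nn.GELU(),
            nn.Conv2d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.proj(x)                       # (B, C, H/4, W/4), locality from the convs
        return x.flatten(2).transpose(1, 2)    # (B, N, C) tokens for the transformer stages
```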
Vision-language (VL) pretraining has recently received considerable attention. However, most existing end-to-end pretraining approaches either only aim to tackle VL tasks such as image-text retrieval, visual question answering (VQA), and image captioning that test high-level understanding of images, or only target region-level understanding for tasks such as phrase grounding and object detection. We present FIBER (Fusion-In-the-Backbone-based transformER), a new VL model architecture that can seamlessly handle both these types of tasks. Instead of having dedicated transformer layers for fusion after the unimodal backbones, FIBER pushes multimodal fusion deep into the model by inserting cross-attention into the image and text backbones, bringing gains in terms of memory and performance. In addition, unlike previous work that is either only pretrained on image-text data or on fine-grained data with box-level annotations, we present a two-stage pretraining strategy that uses both these kinds of data efficiently: (i) coarse-grained pretraining based on image-text data, followed by (ii) fine-grained pretraining based on image-text-box data. We conduct comprehensive experiments on a wide range of VL tasks, ranging from VQA, image captioning, and retrieval, to phrase grounding, referring expression comprehension, and object detection. Using deep multimodal fusion coupled with the two-stage pretraining, FIBER provides consistent performance improvements over strong baselines across all tasks, often outperforming methods that use magnitudes more data. Code is available at https://github.com/microsoft/fiber.
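To illustrate fusion in the backbone, the sketch below inserts a gated cross-attention block into an image backbone layer so that image tokens can attend to text tokens; the zero-initialized gate and the module layout are assumptions for the sketch, not FIBER's exact implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Cross-attention inserted into a backbone block; the zero-initialized gate lets
    training start from the unimodal backbone's behavior (a sketch of the fusion idea)."""
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, image_tokens, text_tokens):
        q = self.norm(image_tokens)
        fused, _ = self.attn(q, text_tokens, text_tokens)   # image queries attend to text
        return image_tokens + self.gate * fused
```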
Unified vision-language frameworks have greatly advanced in recent years, most of which adopt an encoder-decoder architecture to unify image-text tasks as sequence-to-sequence generation. However, existing video-language (VidL) models still require task-specific designs in model architecture and training objectives for each task. In this work, we explore a unified VidL framework, LAVENDER, where Masked Language Modeling (MLM) is used as the common interface for all pretraining and downstream tasks. Such unification leads to a simplified model architecture, where only a lightweight MLM head, instead of a decoder with many more parameters, is needed on top of the multimodal encoder. Surprisingly, experimental results show that this unified framework achieves competitive performance on 14 VidL benchmarks, covering video question answering, text-to-video retrieval, and video captioning. Extensive analyses further demonstrate the advantages of LAVENDER over existing VidL methods: (i) supporting all downstream tasks with just a single set of parameter values when multi-task finetuned; (ii) few-shot generalization on various downstream tasks; and (iii) enabling zero-shot evaluation on video question answering tasks. Code is available at https://github.com/microsoft/lavender.
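A minimal sketch of the shared MLM readout is given below: for example, the answer to a video question is predicted as the token at a [MASK] position appended to the question. The class and method names here are hypothetical, not LAVENDER's code.

```python
import torch
import torch.nn as nn

class MLMHead(nn.Module):
    """A single masked-language-modeling head shared by every downstream task
    (a sketch of the 'MLM as common interface' idea)."""
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, hidden_states, mask_positions):
        # hidden_states: (B, L, D) from the multimodal encoder over video + text tokens.
        # mask_positions: (B,) index of the [MASK] token whose word is to be predicted.
        masked_hidden = hidden_states[torch.arange(hidden_states.size(0)), mask_positions]
        return self.decoder(masked_hidden)     # (B, vocab_size) logits over answer words
```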
Stochastic gradient descent (SGD) is the cornerstone of modern machine learning (ML) systems. Despite its computational efficiency, SGD requires random data access, which is inefficient in systems that rely on block-addressable secondary storage such as HDDs and SSDs, e.g., TensorFlow/PyTorch and in-DB ML systems operating over large files. To address this impedance mismatch, various data shuffling strategies have been proposed to balance the convergence rate of SGD (which favors randomness) and its I/O performance (which favors sequential access). In this paper, we first conduct a systematic empirical study of existing data shuffling strategies, which reveals that all of them have room for improvement: each suffers in terms of either I/O performance or convergence rate. With this in mind, we propose a simple but novel hierarchical data shuffling strategy, CorgiPile. Compared with existing strategies, CorgiPile avoids a full data shuffle while maintaining a convergence rate of SGD comparable to that obtained with a full shuffle. We provide a non-trivial theoretical analysis of CorgiPile's convergence behavior. We further integrate CorgiPile into PyTorch by designing new parallel/distributed shuffle operators inside a new CorgiPileDataSet API. We also integrate CorgiPile into PostgreSQL by introducing three new physical operators with optimizations. Our experimental results show that CorgiPile achieves a convergence rate comparable to full-shuffle SGD for both deep learning and generalized linear models. For deep learning models on the ImageNet dataset, CorgiPile is 1.5x faster than PyTorch with full data shuffling. For in-DB ML with linear models, CorgiPile is 1.6x-12.8x faster than two state-of-the-art in-DB ML systems, Apache MADlib and Bismarck, on both HDD and SSD.
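A simplified sketch of the two-level shuffling idea is given below: the order of on-disk blocks is shuffled, a few blocks at a time are read sequentially into a bounded buffer, and tuples are shuffled inside the buffer before being fed to SGD. The names and buffer policy are illustrative, not CorgiPile's actual operators.

```python
import random

def two_level_shuffle_iterator(blocks, buffer_blocks=4, seed=0):
    """Yield tuples for SGD with block-level plus buffer-level shuffling (a sketch).

    blocks: list of blocks, each a list of tuples as laid out on secondary storage.
    """
    rng = random.Random(seed)
    order = list(range(len(blocks)))
    rng.shuffle(order)                                   # 1) shuffle block order
    for start in range(0, len(order), buffer_blocks):
        buffer = []
        for block_id in order[start:start + buffer_blocks]:
            buffer.extend(blocks[block_id])              # 2) sequential reads within each block
        rng.shuffle(buffer)                              # 3) shuffle tuples inside the buffer
        yield from buffer
```

The I/O pattern stays mostly sequential (whole blocks are read at once), while the two levels of randomness keep the sample order close enough to a full shuffle for SGD to converge at a comparable rate.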