Our work targets searching for feasible adversarial perturbations to attack a classifier with high-dimensional categorical inputs in a domain-agnostic setting. This is intrinsically an NP-hard knapsack problem whose exploration space grows explosively as the feature dimension increases. Without the help of domain knowledge, solving this problem via heuristic methods such as Branch-and-Bound suffers from exponential complexity, yet can still yield arbitrarily poor attack results. We address the challenge through the lens of multi-armed bandit based combinatorial search. Our proposed method, named FEAT, treats modifying each categorical feature as pulling an arm in multi-armed bandit programming. The objective is to achieve a highly efficient and effective attack using an Orthogonal Matching Pursuit (OMP)-enhanced Upper Confidence Bound (UCB) exploration strategy. Our theoretical analysis bounds the regret gap of FEAT and thereby guarantees its practical attack performance. In the empirical analysis, we compare FEAT with other state-of-the-art domain-agnostic attack methods on various real-world categorical datasets from different applications. Substantial experimental observations confirm the expected efficiency and attack effectiveness of FEAT across different application scenarios. Our work further hints at the applicability of FEAT for assessing the adversarial vulnerability of classification systems with high-dimensional categorical inputs.
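The abstract does not include an algorithm listing; the sketch below shows, under our own assumptions, how a UCB-style bandit search over categorical features could drive such an attack. The `attack_loss` oracle, the greedy per-arm substitution, and the budget handling are illustrative simplifications and not the authors' FEAT algorithm.

```python
import math

def ucb_categorical_attack(x, vocab, attack_loss, budget=5, rounds=200, c=1.0):
    """Greedy UCB search over which categorical feature to perturb next (sketch only).

    x           : list of categorical values, one per feature
    vocab       : vocab[i] is the list of admissible values for feature i
    attack_loss : callable(x_perturbed) -> float, higher = closer to misclassification
    budget      : maximum number of features allowed to differ from x
    """
    n = len(x)
    counts = [0] * n           # how often each "arm" (feature) was pulled
    means = [0.0] * n          # running mean reward observed for each arm
    x_adv = list(x)
    base = attack_loss(x_adv)

    for t in range(1, rounds + 1):
        # UCB1 arm selection: unexplored arms first, then mean + exploration bonus
        scores = [float("inf") if counts[i] == 0
                  else means[i] + c * math.sqrt(2 * math.log(t) / counts[i])
                  for i in range(n)]
        i = max(range(n), key=lambda k: scores[k])

        # "Pulling" arm i = trying the best single substitution for feature i
        best_val, best_loss = x_adv[i], base
        for v in vocab[i]:
            cand = list(x_adv)
            cand[i] = v
            loss = attack_loss(cand)
            if loss > best_loss:
                best_val, best_loss = v, loss

        # Update the bandit statistics with the observed improvement
        reward = best_loss - base
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]

        # Commit the substitution only if it helps and the perturbation budget allows it
        changed = sum(a != b for a, b in zip(x_adv, x))
        if reward > 0 and (x_adv[i] != x[i] or changed < budget):
            x_adv[i], base = best_val, best_loss

    return x_adv
```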
In this paper, we show the surprisingly good properties of plain vision transformers for body pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model dubbed ViTPose. Specifically, ViTPose employs the plain and non-hierarchical vision transformer as an encoder to encode features and a lightweight decoder to decode body keypoints in either a top-down or a bottom-up manner. It can be scaled up from about 20M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of the vision transformer, setting a new Pareto front for throughput and performance. Moreover, ViTPose is very flexible regarding the attention type, input resolution, and pre-training and fine-tuning strategies. Building on this flexibility, a novel ViTPose+ model is proposed to deal with heterogeneous body keypoint categories in different types of body pose estimation tasks via knowledge factorization, i.e., adopting task-agnostic and task-specific feed-forward networks in the transformer. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our ViTPose model outperforms representative methods on the challenging MS COCO Human Keypoint Detection benchmark in both top-down and bottom-up settings. Furthermore, our ViTPose+ model achieves state-of-the-art performance simultaneously on a series of body pose estimation tasks, including MS COCO, AI Challenger, OCHuman, and MPII for human keypoint detection, COCO-WholeBody for whole-body keypoint detection, as well as AP-10K and APT-36K for animal keypoint detection, without sacrificing inference speed.
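As a rough illustration of the design the abstract describes (a plain, non-hierarchical ViT encoder followed by a lightweight decoder that regresses keypoint heatmaps in a top-down manner), here is a minimal PyTorch sketch; the tiny encoder, layer choices, and all hyperparameters are illustrative assumptions rather than the ViTPose configuration.

```python
import torch
import torch.nn as nn

class ViTPoseSketch(nn.Module):
    """Plain (single-resolution) ViT encoder + lightweight deconvolution decoder."""
    def __init__(self, img_size=256, patch=16, dim=384, depth=6, heads=6, num_kpts=17):
        super().__init__()
        self.grid = img_size // patch                       # tokens per side
        self.patch_embed = nn.Conv2d(3, dim, patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)  # plain, non-hierarchical
        self.decoder = nn.Sequential(                       # lightweight decoder
            nn.ConvTranspose2d(dim, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(256, num_kpts, 1),                    # one heatmap per keypoint
        )

    def forward(self, x):                                   # x: (B, 3, H, W)
        t = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        t = self.encoder(t + self.pos)
        f = t.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        return self.decoder(f)                              # (B, num_kpts, H/4, W/4)

heatmaps = ViTPoseSketch()(torch.randn(1, 3, 256, 256))     # -> (1, 17, 64, 64)
```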
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation, and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses and assess trends in the best-performing methodologies across over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code, and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Self-supervised pre-training of vision transformers (ViT) via masked image modeling (MIM) has proven very effective. However, customized algorithms, e.g., GreenMIM, have to be carefully designed for hierarchical ViTs instead of using the vanilla and simple MAE designed for the plain ViT. More importantly, since these hierarchical ViTs cannot reuse the off-the-shelf pre-trained weights of plain ViTs, the requirement of pre-training them leads to a massive computational cost, incurring both algorithmic and computational complexity. In this paper, we address this problem by proposing a novel idea that disentangles the hierarchical architecture design from the self-supervised pre-training: we transform a plain ViT into a hierarchical one with minimal changes. Technically, we change the stride of the linear embedding layer from 16 to 4 and add convolution (or simple average) pooling layers between the transformer blocks, thereby reducing the feature size from 1/4 to 1/32 of the input resolution sequentially. Despite its simplicity, this design outperforms the plain ViT baseline on classification, detection, and segmentation tasks across the ImageNet, MS COCO, Cityscapes, and ADE20K benchmarks. We hope this preliminary study could draw more attention from the community to developing effective (hierarchical) ViTs that avoid the pre-training cost by leveraging off-the-shelf checkpoints. The code and models will be released at https://github.com/ViTAE-Transformer/HPViT.
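A minimal sketch of the recipe described above, under our own assumptions: the patch-embedding stride is reduced from 16 to 4, and a simple 2x2 average pooling is inserted between groups of transformer blocks so the token map shrinks from 1/4 to 1/32 of the input. Depths and dimensions are illustrative, and unlike the paper's key point, the sketch does not reuse pre-trained plain-ViT weights.

```python
import torch
import torch.nn as nn

class PlainToHierarchicalViT(nn.Module):
    """Plain-ViT blocks re-arranged into a hierarchy by stride-4 embedding + pooling."""
    def __init__(self, dim=192, heads=3, depths=(1, 1, 2, 1)):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=4, stride=4)   # stride 16 -> 4
        make = lambda n: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True), n)
        self.stages = nn.ModuleList(make(d) for d in depths)      # groups of blocks
        self.pool = nn.AvgPool2d(2)                               # halves H and W

    def forward(self, x):
        f = self.embed(x)                                         # (B, C, H/4, W/4)
        feats = []
        for i, stage in enumerate(self.stages):
            b, c, h, w = f.shape
            t = stage(f.flatten(2).transpose(1, 2))               # run transformer blocks
            f = t.transpose(1, 2).reshape(b, c, h, w)
            feats.append(f)                                       # multi-scale outputs
            if i < len(self.stages) - 1:
                f = self.pool(f)                                  # 1/4 -> 1/8 -> 1/16 -> 1/32
        return feats

outs = PlainToHierarchicalViT()(torch.randn(1, 3, 128, 128))
print([o.shape[-1] for o in outs])                                # [32, 16, 8, 4]
```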
For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly to enrich its use on a multitude of hardware platforms and in abundant scenarios. In this technical report, we strive to push its limits to a new level, stepping forward with an unwavering mindset for industry application. Considering the diverse requirements for speed and accuracy in real environments, we extensively examine up-to-date object detection advancements from either industry or academia. Specifically, we heavily assimilate ideas from recent network designs, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of the YOLO authors, we name it YOLOv6. We also express our warm welcome to users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieves better accuracy (i.e., 49.5%/52.3%) than other detectors with similar inference speed. We carefully conducted experiments to validate the effectiveness of each component. Our code is available at https://github.com/meituan/yolov6.
This work uses the entropy-regularized relaxed stochastic control perspective as a principled framework for designing reinforcement learning (RL) algorithms. The agent interacts with the environment by executing noisy controls distributed according to the optimal relaxed policy. On one hand, the noisy policies explore the space and hence facilitate learning, but on the other hand, they introduce bias by assigning positive probability to non-optimal actions. This exploration-exploitation trade-off is determined by the strength of the entropy regularization. We study algorithms resulting from two entropy-regularization formulations: the exploratory control approach, where entropy is added to the cost objective, and the proximal policy update approach, where entropy penalizes the divergence between policies in two consecutive episodes. We analyze the finite-horizon continuous-time linear-quadratic (LQ) RL problem, for which both algorithms yield a Gaussian relaxed policy. We quantify the exact difference between the value function of a Gaussian policy and its noisy evaluation, and show that the execution noise must be independent across time. By tuning the frequency of sampling from the relaxed policies and the parameter governing the strength of the entropy regularization, we prove that, for both learning algorithms, the regret is of order $\mathcal{O}(\sqrt{N})$ (up to a logarithmic factor) over $N$ episodes, in line with the literature.
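Schematically, and using generic symbols as our own stand-ins for the paper's notation (running cost $c$, terminal cost $g$, relaxed policy $\pi$, temperature $\tau$), the two formulations and the claimed regret rate can be written as:

```latex
% Exploratory control: an entropy bonus enters the running cost of the relaxed policy
\[
J^{\mathrm{exp}}(\pi) = \mathbb{E}\Big[\int_0^T \big( c(t, X_t, \pi_t) - \tau\,\mathcal{H}(\pi_t) \big)\,\mathrm{d}t + g(X_T)\Big],
\qquad
\mathcal{H}(\pi) = -\int_{\mathcal{A}} \pi(a)\,\ln \pi(a)\,\mathrm{d}a .
\]

% Proximal policy update: entropy penalises the divergence between consecutive policies
\[
\pi^{(n+1)} = \arg\min_{\pi}\Big\{ J(\pi) + \tau\,\mathbb{E}\Big[\int_0^T \mathrm{KL}\big(\pi_t \,\|\, \pi^{(n)}_t\big)\,\mathrm{d}t\Big] \Big\} .
\]

% Claimed regret over N episodes, for both algorithms, up to logarithmic factors
\[
\mathrm{Regret}(N) = \mathcal{O}\big(\sqrt{N}\big).
\]
```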
Large vision foundation models have made significant progress on visual tasks for natural images, where vision transformers are the primary choice owing to their good scalability and representation ability. However, the utilization of large models in the remote sensing (RS) community remains under-explored, with existing models still at small scale, which limits performance. In this paper, we resort to plain vision transformers with about 100 million parameters and make the first attempt to propose large vision models customized for RS tasks, exploring how such large models perform. Specifically, to handle the large image size and objects of various orientations in RS images, we propose a new rotated varied-size window attention to substitute the original full attention in transformers, which can significantly reduce the computational cost and memory footprint while learning better object representations by extracting rich context from the generated diverse windows. Experiments on detection tasks demonstrate the superiority of our model over all state-of-the-art models, achieving 81.16% mAP on the DOTA-V1.0 dataset. Results on downstream classification and segmentation tasks also demonstrate competitive performance compared with existing advanced methods. Further experiments show the advantages of our models in terms of computational complexity and few-shot learning. The code and models will be released at https://github.com/vitae-transformer/remote-sensing-rvsa.
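Schematically, and in our own notation rather than the paper's, the rotated varied-size window idea can be summarized as follows: for the $w$-th default window with centre $c_w$, a small network predicts an offset $o_w$, a scale $s_w$, and a rotation angle $\theta_w$ from the window's features, and keys/values are bilinearly sampled at the transformed locations while queries stay on the fixed grid:

```latex
\[
p'_{w,i} \;=\; c_w + o_w + s_w\, R(\theta_w)\,\big(p_i - c_w\big),
\qquad
R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
\]
```

Attention within each window then covers a region whose position, size, and orientation adapt to the content; the exact parameterization used in the paper may differ.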
Current object detectors typically have a feature pyramid (FP) module for multi-level feature fusion (MFF), which aims to mitigate the divergence between features from different levels and form a comprehensive object representation for better detection performance. However, they usually require heavy cross-level connections or iterative refinement to obtain better MFF results, making them complicated in structure and inefficient in computation. To address these issues, we propose a novel and efficient context modeling mechanism that can help existing FPs deliver better MFF results while effectively reducing the computational cost. In particular, we introduce a novel insight that comprehensive contexts can be decomposed and condensed into two types of representations for higher efficiency. The two representations include a locally concentrated representation and a globally summarized representation, where the former focuses on extracting context cues from nearby areas while the latter extracts key representations of the whole image scene as global context cues. By collecting the condensed contexts, we employ a transformer decoder to investigate the relations between them and each local feature from the FP, and then refine the MFF results accordingly. As a result, we obtain a simple and lightweight transformer-based context condensation (TCC) module, which can boost various FPs and lower their computational costs simultaneously. Extensive experimental results on the challenging MS COCO dataset show that TCC is compatible with four representative FPs and consistently improves their detection accuracy by up to 7.8% in terms of average precision, while reducing their complexity by up to around 20% in terms of GFLOPs, helping them achieve state-of-the-art performance more efficiently. Code will be released.
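To make the decomposition concrete, here is a heavily simplified sketch under our own assumptions: the locally concentrated representation is approximated by strided average pooling of each pyramid level, the globally summarized representation by global average pooling, and a single cross-attention layer (standing in for the full transformer decoder) lets every pyramid location attend to the small set of condensed tokens.

```python
import torch
import torch.nn as nn

class ContextCondensationSketch(nn.Module):
    """Simplified stand-in for a TCC-style module; not the paper's implementation."""
    def __init__(self, dim=256, pool=4, heads=8):
        super().__init__()
        self.local_pool = nn.AvgPool2d(pool)                 # locally concentrated tokens
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fpn_feats):                            # list of (B, C, H_l, W_l)
        B, C = fpn_feats[0].shape[:2]
        local = [self.local_pool(f).flatten(2).transpose(1, 2) for f in fpn_feats]
        global_ = [f.mean(dim=(2, 3)).unsqueeze(1) for f in fpn_feats]
        memory = torch.cat(local + global_, dim=1)           # (B, M, C) condensed context

        refined = []
        for f in fpn_feats:
            h, w = f.shape[-2:]
            q = f.flatten(2).transpose(1, 2)                 # (B, H_l*W_l, C) queries
            ctx, _ = self.cross(q, memory, memory)           # attend to condensed context
            refined.append(f + ctx.transpose(1, 2).reshape(B, C, h, w))  # residual refine
        return refined

feats = [torch.randn(1, 256, s, s) for s in (32, 16, 8, 4)]
outs = ContextCondensationSketch()(feats)                    # same shapes as the inputs
```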
Internal language model estimation (ILME)-based language model (LM) fusion has been shown to significantly improve recognition results over conventional shallow fusion in both intra-domain and cross-domain speech recognition tasks. In this paper, we attempt to apply the ILME method to cross-domain code-switching speech recognition (CSSR) tasks. Specifically, our curiosity comes from several aspects. First, we are curious about the effectiveness of ILME-based LM fusion for intra-domain and cross-domain CSSR tasks, and we verify this without merging the two code-switching domains. More importantly, we train an end-to-end (E2E) speech recognition model by merging two monolingual datasets and observe the efficacy of the proposed ILME-based LM fusion for CSSR. Experimental results on SEAME, collected in Southeast Asia, and another Mainland China code-switching dataset demonstrate the effectiveness of the proposed ILME-based LM fusion method.
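For context, ILME-based fusion typically rescores hypotheses by discounting an estimate of the E2E model's internal LM and adding the external (target-domain) LM; with interpolation weights $\lambda_{\mathrm{ILM}}$ and $\lambda_{\mathrm{LM}}$ tuned on a development set, the decoding criterion is schematically

```latex
\[
\hat{Y} \;=\; \arg\max_{Y}\;
\Big[ \log P_{\mathrm{E2E}}(Y \mid X)
      \;-\; \lambda_{\mathrm{ILM}} \log P_{\mathrm{ILM}}(Y)
      \;+\; \lambda_{\mathrm{LM}}  \log P_{\mathrm{LM}}(Y) \Big],
\]
```

where $P_{\mathrm{ILM}}$ is an estimate of the E2E model's internal LM (commonly obtained by suppressing the encoder context) and $P_{\mathrm{LM}}$ is the external LM; the paper's exact estimation procedure is not restated here.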
This paper presents our champion solution to the CVPR 2022 Generic Event Boundary Captioning (GEBC) competition. GEBC requires the captioning model to understand instantaneous status changes around a given video boundary, which makes it much more challenging than conventional video captioning tasks. In this paper, a Dual-Stream Transformer with improvements in both video content encoding and caption generation is proposed: (1) we utilize three pre-trained models to extract video features at different granularities, and additionally exploit the boundary types as hints to help the model generate captions; (2) we particularly design a model, termed Dual-Stream Transformer, to learn discriminative representations for boundary captioning; (3) to generate content-relevant and human-like captions, we improve the description quality by designing a word-level ensemble strategy. The promising results on the GEBC test split demonstrate the efficacy of our proposed model.
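The word-level ensemble is not spelled out in the abstract; below is a minimal sketch, assuming it averages the per-step next-word distributions of several captioning models during greedy decoding. The `step` interface and the uniform weighting are hypothetical assumptions for illustration.

```python
import torch

@torch.no_grad()
def word_level_ensemble(models, video_feats, bos_id, eos_id, max_len=30, weights=None):
    """Greedy decoding that averages the next-word distributions of several models.

    Each model is assumed to expose `step(video_feats, prefix_ids) -> logits over vocab`.
    """
    weights = weights or [1.0 / len(models)] * len(models)
    prefix = [bos_id]
    for _ in range(max_len):
        # Ensemble at the word level: weighted average of per-model word distributions
        probs = sum(w * m.step(video_feats, prefix).softmax(dim=-1)
                    for m, w in zip(models, weights))
        next_id = int(probs.argmax(dim=-1))
        prefix.append(next_id)
        if next_id == eos_id:
            break
    return prefix[1:]                                        # drop the BOS token
```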