Throttling is one of the most popular budget control methods in today's online advertising markets. When a budget-constrained advertiser employs throttling, she can decide whether or not to participate in an auction after the advertising platform suggests a bid. This paper focuses on the dynamic budget throttling process in repeated second-price auctions from a theoretical viewpoint. An essential feature of the underlying problem is that the advertiser does not know the highest competing bid upon entering the market. To model the difficulty of resolving this uncertainty, we consider two different information structures. With full-information feedback, the advertiser observes the highest competing bid in every round; with partial-information feedback, she only observes the highest competing bid in the auctions she participates in. We propose the OGD-CB algorithm, which simultaneously learns the distribution of incoming ad queries and optimizes revenue. Under both information structures, we prove that the algorithm guarantees $O(\sqrt{T \log T})$ regret with probability $1 - O(1/T)$ relative to the fluid adaptive throttling benchmark. By proving an $\Omega(\sqrt{T})$ lower bound on the minimal regret, even for the best possible choice, we establish the near-optimality of our algorithm. Finally, we compare the fluid optimum under throttling with that under pacing, another widely adopted budget control method. The numerical relationship between these benchmarks offers further insight into the comparison of different online algorithms for budget management.
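To make the setting above concrete, the following is a minimal Python sketch of dynamic budget throttling in repeated second-price auctions. The value and competing-bid distributions, the dual-variable participation rule, and the step size are all illustrative assumptions; this is not the OGD-CB algorithm itself, whose details are not given in the abstract.

```python
import random

def simulate_throttling(T=10_000, budget=1_000.0, eta=0.01, seed=0):
    """Toy simulation of dynamic budget throttling in repeated
    second-price auctions.  A dual variable `lam` is updated by online
    gradient descent on the budget constraint; the advertiser joins a
    round only while `lam` is small (i.e. spending is on track).
    All distributions and the participation rule are illustrative
    assumptions, not the OGD-CB algorithm from the paper."""
    rng = random.Random(seed)
    rho = budget / T                         # target spend per round
    lam, spend, value_gained = 0.0, 0.0, 0.0
    for _ in range(T):
        value = rng.uniform(0, 1)            # platform-suggested bid (advertiser's value)
        d_t = rng.uniform(0, 1)              # highest competing bid (may be unobserved)
        participate = lam < 1.0 and spend < budget   # throttling decision
        cost = 0.0
        if participate and value > d_t:      # win the second-price auction
            cost = d_t                       # pay the second-highest bid
            spend += cost
            value_gained += value
        lam = max(0.0, lam + eta * (cost - rho))  # dual update toward the spend target
    return value_gained, spend

if __name__ == "__main__":
    v, s = simulate_throttling()
    print(f"total value {v:.1f}, total spend {s:.1f}")
```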
The rise of transformers in vision tasks not only advances network backbone design, but also starts a brand-new page for end-to-end image recognition (e.g., object detection and panoptic segmentation). Originating from natural language processing (NLP), transformer architectures, consisting of self-attention and cross-attention, effectively learn long-range interactions between elements in a sequence. However, we observe that most existing transformer-based vision models simply borrow the idea from NLP and neglect the crucial differences between language and images, in particular the extremely large sequence length of spatially flattened pixel features. This subsequently impedes the learning of cross-attention between pixel features and object queries. In this paper, we rethink the relationship between pixels and object queries and propose to reformulate cross-attention learning as a clustering process. Inspired by the traditional k-means clustering algorithm, we develop the k-means Mask Xformer (kMaX-DeepLab) for segmentation tasks, which not only improves over the state of the art but also enjoys a simple and elegant design. As a result, our kMaX-DeepLab achieves new state-of-the-art performance with 58.0% PQ on the COCO val set, and 68.4% PQ, 44.0% AP, and 83.5% mIoU on the Cityscapes val set, without test-time augmentation or external datasets. We hope our work can shed light on designing transformers tailored for vision tasks. Code and models are available at https://github.com/google-research/deeplab2
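A rough sketch of the clustering view of cross-attention described above: each spatially flattened pixel is hard-assigned (argmax) to its most similar object query, and each query is updated as the mean of its assigned pixel features. Tensor shapes and the single-iteration update are illustrative assumptions, not the exact kMaX-DeepLab layer.

```python
import numpy as np

def kmeans_style_cross_attention(queries, pixels):
    """One illustrative k-means-style cross-attention update.

    queries: (N, D) object queries acting as cluster centers.
    pixels:  (HW, D) spatially flattened pixel features.

    Instead of a softmax over the HW pixel positions (a very long
    sequence), each pixel is hard-assigned to its most similar query,
    and each query is updated as the mean of its assigned pixels."""
    affinity = pixels @ queries.T                   # (HW, N) pixel-to-center similarity
    assign = np.argmax(affinity, axis=1)            # hard cluster assignment per pixel
    one_hot = np.eye(queries.shape[0])[assign]      # (HW, N) cluster-assignment mask
    counts = one_hot.sum(axis=0, keepdims=True).T   # (N, 1) pixels per cluster
    updated = (one_hot.T @ pixels) / np.maximum(counts, 1.0)  # mean pooling per cluster
    return updated, assign                          # new centers + per-pixel mask ids

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, p = rng.normal(size=(8, 16)), rng.normal(size=(64 * 64, 16))
    new_q, mask = kmeans_style_cross_attention(q, p)
    print(new_q.shape, mask.shape)   # (8, 16) (4096,)
```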
We propose Clustering Mask Transformer (CMT-DeepLab), a transformer-based framework for panoptic segmentation designed around clustering. It rethinks the existing transformer architectures used in segmentation and detection: CMT-DeepLab treats the object queries as cluster centers, which fill the role of grouping pixels when applied to segmentation. The clustering is computed through an alternating procedure, first assigning pixels to clusters by their feature affinity, and then updating the cluster centers and pixel features. Together, these operations comprise the Clustering Mask Transformer (CMT) layer, which yields a denser cross-attention that is more consistent with the final segmentation task. CMT-DeepLab achieves a new state-of-the-art PQ of 55.7% on the COCO test-dev set, significantly improving over prior art.
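The alternating assign-then-update procedure described above can be sketched as follows; the soft assignment, the fixed number of iterations, and the residual pixel-feature update are illustrative assumptions rather than the exact CMT layer.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def clustering_update(centers, pixels, iters=3):
    """Schematic of the alternating procedure: assign pixels to clusters
    by feature affinity, then update cluster centers and pixel features.
    Soft assignment, three iterations, and the residual pixel update are
    illustrative assumptions."""
    for _ in range(iters):
        affinity = softmax(pixels @ centers.T, axis=1)         # (HW, N) pixel-to-cluster affinity
        centers = (affinity.T @ pixels) / np.maximum(          # centers = affinity-weighted mean
            affinity.sum(axis=0, keepdims=True).T, 1e-6)
        pixels = pixels + 0.1 * (affinity @ centers)           # pull pixel features toward centers
    return centers, pixels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c, p = rng.normal(size=(8, 16)), rng.normal(size=(32 * 32, 16))
    c2, p2 = clustering_update(c, p)
    print(c2.shape, p2.shape)   # (8, 16) (1024, 16)
```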
Panoptic image segmentation is the computer vision task of finding groups of pixels in an image and assigning them semantic categories and object instance identifiers. Research on image segmentation has become increasingly popular due to its critical applications in robotics and autonomous driving. The research community therefore relies on publicly available benchmark datasets to advance the state of the art in computer vision. However, due to the high cost of labeling images, there is a lack of publicly available ground-truth labels suitable for panoptic segmentation. The high labeling cost also makes it challenging to extend existing datasets to the video domain and to multi-camera setups. We therefore present the Waymo Open Dataset: Panoramic Video Panoptic Segmentation dataset, a large-scale dataset that provides high-quality panoptic segmentation labels for autonomous driving. We generate our dataset using the publicly available Waymo Open Dataset, leveraging its diverse set of camera images. Our labels are consistent over time for video processing and consistent across the multiple cameras mounted on the vehicle for full panoramic scene understanding. Specifically, we provide labels for 28 semantic categories and 2,860 temporal sequences captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images. To the best of our knowledge, this makes our dataset an order of magnitude larger than existing datasets for this task. We further propose a new benchmark for panoramic video panoptic segmentation and establish a number of strong baselines based on the DeepLab family of models. We will make the benchmark and code publicly available. Find the dataset at https://waymo.com/open.
The task of assigning semantic classes and track identities to every pixel in a video is called video panoptic segmentation. Our work is the first that targets this task in a real-world setting requiring dense interpretation in both the spatial and temporal domains. As the ground truth for this task is difficult to obtain, however, existing datasets are either constructed synthetically or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark encompassing two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test bed for studying long-term pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric, Segmentation and Tracking Quality (STQ), which fairly balances the semantic and tracking aspects of this task and is better suited to evaluating sequences of arbitrary length. Finally, we provide several baselines to assess the status of existing methods on this new and challenging dataset. We have made our datasets, metric, benchmark servers, and baselines publicly available and hope this will inspire future research.
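A simplified sketch of how a metric can balance the two aspects the way STQ does, assuming STQ is the geometric mean of a segmentation-quality term and an association-quality term; the per-term computations below are deliberately simplified placeholders, not the benchmark's exact AQ/SQ definitions.

```python
import numpy as np

def simplified_stq(pred_sem, gt_sem, pred_ids, gt_ids):
    """Simplified sketch of Segmentation and Tracking Quality (STQ).

    Here: segmentation quality = mean IoU over semantic classes
    (ignoring instance ids); association quality = mean IoU between each
    ground-truth track and its best-overlapping predicted track.  The
    real AQ/SQ definitions are more involved; this is only an
    illustrative approximation.  Inputs are flat integer arrays over all
    pixels of all frames of a video."""
    def mean_iou(pred, gt, labels):
        ious = []
        for c in labels:
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            if union:
                ious.append(inter / union)
        return float(np.mean(ious)) if ious else 0.0

    sq = mean_iou(pred_sem, gt_sem, np.unique(gt_sem))
    aq_terms = []
    for t in np.unique(gt_ids):
        if t == 0:
            continue                      # treat id 0 as unlabeled/background (an assumption)
        gt_mask = gt_ids == t
        best = 0.0
        for p in np.unique(pred_ids[gt_mask]):
            pred_mask = pred_ids == p
            iou = np.logical_and(gt_mask, pred_mask).sum() / np.logical_or(gt_mask, pred_mask).sum()
            best = max(best, iou)
        aq_terms.append(best)
    aq = float(np.mean(aq_terms)) if aq_terms else 0.0
    return (sq * aq) ** 0.5               # geometric mean balances both aspects
```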
This paper is concerned with ranking many pre-trained deep neural networks (DNNs), called checkpoints, for transfer learning to a downstream task. Thanks to the broad use of DNNs, we may easily collect hundreds of checkpoints from various sources. Which of them transfers best to our downstream task of interest? To answer this question thoroughly, we establish a neural checkpoint ranking benchmark (NeuCRaB) and study some intuitive ranking measures. These measures are generic, applying to checkpoints of different output types without needing to know how the checkpoints were pre-trained or on which datasets. They also incur low computational cost, making them practically meaningful. Our results suggest that the linear separability of the features extracted by the checkpoints is a strong indicator of transferability. We also arrive at a new ranking measure, NLEEP, which gives the best performance in our experiments.
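The finding that linear separability of checkpoint features indicates transferability suggests a simple linear-probe style score. The sketch below (a scikit-learn logistic regression on frozen features) is only an illustrative proxy, not the paper's exact measures and not NLEEP.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_separability_score(features, labels, seed=0):
    """Illustrative transferability proxy: held-out accuracy of a linear
    probe trained on features a checkpoint extracts for the downstream
    task.  Higher accuracy means more linearly separable features, which
    the paper reports to be a strong indicator of transferability.
    (A generic linear-probe sketch, not NeuCRaB's measures or NLEEP.)"""
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=seed, stratify=labels)
    probe = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return probe.score(x_te, y_te)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(500, 64))
    labs = (feats[:, 0] > 0).astype(int)          # toy downstream labels
    print(f"linear-probe accuracy: {linear_separability_score(feats, labs):.3f}")
```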
In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast system for panoptic segmentation, aiming to establish a solid baseline for bottom-up methods that can achieve comparable performance to two-stage methods while yielding fast inference speed. In particular, Panoptic-DeepLab adopts dual-ASPP and dual-decoder structures specific to semantic and instance segmentation, respectively. The semantic segmentation branch follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression. As a result, our single Panoptic-DeepLab simultaneously ranks first on all three Cityscapes benchmarks, setting the new state of the art of 84.2% mIoU, 39.0% AP, and 65.5% PQ on the test set. Additionally, equipped with MobileNetV3, Panoptic-DeepLab runs nearly in real time on a single 1025 × 2049 image (15.8 frames per second), while achieving competitive performance on Cityscapes (54.1% PQ on the test set). On the Mapillary Vistas test set, our ensemble of six models attains 42.7% PQ, outperforming the 2018 challenge winner by a healthy margin of 1.5%. Finally, our Panoptic-DeepLab also performs on par with several top-down approaches on the challenging COCO dataset. For the first time, we demonstrate that a bottom-up approach can deliver state-of-the-art results on panoptic segmentation.
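A minimal sketch of the class-agnostic instance grouping idea: each pixel regresses an offset to its instance center and is assigned to the nearest detected center. The tensor layout and the greedy nearest-center assignment are illustrative assumptions, not the full Panoptic-DeepLab post-processing.

```python
import numpy as np

def group_pixels_by_center(offsets, centers):
    """Group pixels into instances from a center-offset regression.

    offsets: (H, W, 2) predicted offset (dy, dx) from each pixel to its
             instance center.
    centers: (K, 2) detected instance-center coordinates (y, x).

    Each pixel votes for the location `pixel + offset` and is assigned
    to the nearest detected center; a sketch of the simple,
    class-agnostic instance branch, not the exact implementation."""
    h, w, _ = offsets.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    votes = np.stack([ys + offsets[..., 0], xs + offsets[..., 1]], axis=-1)     # (H, W, 2)
    dists = np.linalg.norm(votes[..., None, :] - centers[None, None], axis=-1)  # (H, W, K)
    return np.argmin(dists, axis=-1)   # (H, W) instance id per pixel

if __name__ == "__main__":
    off = np.zeros((4, 4, 2))
    ctrs = np.array([[0.0, 0.0], [3.0, 3.0]])
    print(group_pixels_by_center(off, ctrs))
```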
We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm, and is then further improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches and improve the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small, which are targeted at high- and low-resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder, Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state-of-the-art results for mobile classification, detection, and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 20% compared to MobileNetV2. MobileNetV3-Small is 6.6% more accurate than a MobileNetV2 model with comparable latency. MobileNetV3-Large detection is over 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 34% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.
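A hedged PyTorch sketch of the LR-ASPP head described above: a cheap 1x1-convolution branch modulated by a pooled, sigmoid-gated context branch. Channel counts, the use of global average pooling, and the omission of the low-level skip connection are simplifying assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

class LiteRASPPHead(nn.Module):
    """Simplified sketch of an LR-ASPP-style segmentation head: a cheap
    1x1-conv branch gated by a globally pooled, sigmoid-activated
    branch.  Channel counts and global average pooling are simplifying
    assumptions; the paper's module also fuses a low-level skip
    connection, omitted here for brevity."""

    def __init__(self, in_ch: int, mid_ch: int, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
        )
        self.gate = nn.Sequential(           # image-level context branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, mid_ch, 1),
            nn.Sigmoid(),
        )
        self.classifier = nn.Conv2d(mid_ch, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x) * self.gate(x)   # gate the cheap branch with global context
        feat = F.interpolate(feat, scale_factor=2, mode="bilinear", align_corners=False)
        return self.classifier(feat)

if __name__ == "__main__":
    head = LiteRASPPHead(in_ch=960, mid_ch=128, num_classes=19)
    out = head(torch.randn(1, 960, 32, 64))
    print(out.shape)   # torch.Size([1, 19, 64, 128])
```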
Spatial pyramid pooling modules and encoder-decoder structures are both used in deep neural networks for semantic segmentation. The former type of network encodes multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter can capture sharper object boundaries by gradually recovering spatial information. In this work, we propose to combine the advantages of both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results, especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both the Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 and Cityscapes datasets, achieving test set performance of 89.0% and 82.1%, respectively, without any post-processing. Our paper is accompanied by a publicly available reference implementation of the proposed models in TensorFlow at https://github.com/tensorflow/models/tree/master/research/deeplab.
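A brief PyTorch sketch of the depthwise separable convolution that DeepLabv3+ applies in its ASPP and decoder modules: a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution. The atrous (dilation) rate shown is an arbitrary example.

```python
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 (pointwise) convolution.  The atrous
    (dilation) rate here is an arbitrary example of the rates used in
    Atrous Spatial Pyramid Pooling."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    conv = DepthwiseSeparableConv(256, 256, dilation=6)
    print(conv(torch.randn(1, 256, 33, 33)).shape)   # torch.Size([1, 256, 33, 33])
```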
Figure 1: Our MovieQA dataset contains 14,944 questions about 408 movies. It contains multiple sources of information: plots, subtitles, video clips, scripts, and DVS transcriptions. In this figure we show example QAs from The Matrix and localize them in the timeline.