Predicting the country where a picture was taken has many potential applications, such as detecting false claims, identifying impostors, preventing disinformation campaigns, and recognizing fake news. Previous work has mostly focused on estimating the geo-coordinates where a picture was taken. However, from a semantic and forensic point of view, recognizing the country where an image was taken may be more important than determining its spatial coordinates. So far, only a few works have addressed this task, mostly relying on images containing characteristic landmarks, such as iconic monuments. In the above framework, this paper provides two main contributions. First, we introduce a new dataset, the VIPPGeo dataset, containing almost 4 million images that can be used to train DL models for country classification. The dataset contains only images relevant to country recognition, and it was built by paying attention to remove non-significant images, e.g. images depicting faces or specific, non-relevant objects such as airplanes or ships. Second, we use the dataset to train a deep learning architecture casting the country recognition problem as a classification problem. The experiments we carried out show that our network provides better results than the current state of the art. In particular, we found that asking the network to directly identify the country yields better results than first estimating the geo-coordinates and then using them to trace back to the country where the picture was taken.
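The indirect pipeline the abstract argues against can be made concrete: map an estimated (latitude, longitude) back to a country with a point-in-polygon test. The sketch below is purely illustrative, with toy rectangular "borders" standing in for real country polygons:

```python
# Hypothetical sketch of the coordinates-to-country step: a ray-casting
# point-in-polygon test over per-country polygons. The polygons here are
# toy rectangles, not real borders.

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test; polygon is a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        if (lo1 > lon) != (lo2 > lon):
            # latitude of the polygon edge at this longitude
            t = (lon - lo1) / (lo2 - lo1)
            if lat < la1 + t * (la2 - la1):
                inside = not inside
    return inside

TOY_BORDERS = {
    "Italy":  [(36.0, 6.0), (47.0, 6.0), (47.0, 19.0), (36.0, 19.0)],
    "Norway": [(58.0, 4.0), (71.0, 4.0), (71.0, 31.0), (58.0, 31.0)],
}

def coords_to_country(lat, lon):
    for country, poly in TOY_BORDERS.items():
        if point_in_polygon(lat, lon, poly):
            return country
    return "unknown"

print(coords_to_country(41.9, 12.5))   # Rome
print(coords_to_country(69.6, 18.9))   # Tromsø
```

Any error in the coordinate estimate propagates through this lookup, which is one intuition for why direct country classification can do better.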
The concept of geo-localization refers to the process of determining the position of a certain "entity" on Earth, usually in terms of Global Positioning System (GPS) coordinates. The entity of interest may be an image, a sequence of images, a video, a satellite image, or even objects visible within an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and deep learning has risen to enhance the performance of machine learning models, the fields of visual and object geo-localization have emerged owing to their significant impact on a wide range of applications such as augmented reality, robotics, self-driving vehicles, road maintenance, and 3D reconstruction. This paper provides a comprehensive survey of geo-localization involving images, covering both the problem of geo-localizing where an image was captured (image geo-localization) and geo-localizing objects within an image (object geo-localization). We provide an in-depth study, including a summary of popular algorithms, a description of the proposed datasets, and an analysis of performance results, to illustrate the current state of each field.
Visual Place Recognition (VPR) is generally concerned with localizing outdoor images. However, localizing indoor scenes that contain part of an outdoor scene can be of great value for a wide range of applications. In this paper, we introduce Inside Out Visual Place Recognition (IOVPR), a task aiming to localize images based on outdoor scenes visible through windows. For this task we present the new large-scale dataset Amsterdam-XXXL, with images taken in Amsterdam, consisting of 6.4 million panoramic street-view images and 1,000 user-generated indoor queries. Additionally, we introduce a new training protocol, Inside Out data augmentation, to adapt visual place recognition methods to this setting, thereby demonstrating the potential of Inside Out Visual Place Recognition. We empirically show the benefits of our proposed data augmentation scheme at a smaller scale, whilst demonstrating the difficulty this large-scale dataset poses for existing methods. With this new task, we aim to encourage the development of methods for IOVPR. The dataset and code are available for research purposes at https://github.com/saibr/iovpr
Planning the layout of bicycle-sharing stations is a complex process, especially in cities that are just implementing a bicycle-sharing system. Urban planners usually have to make decisions based on publicly available data and data provided privately by the operator, and then use location-allocation models popular in the field. Many municipalities in smaller cities may find it difficult to hire experts for such planning. This paper proposes a new solution to simplify and facilitate this planning process by using spatial embedding methods. Based only on publicly available data from OpenStreetMap, and station layouts from 34 cities in Europe, a method has been developed that divides cities into micro-regions using the Uber H3 discrete global grid system and indicates regions where it is worth placing a station, based on the existing systems of different cities, using transfer learning. The result of the work is a mechanism that supports planners in their decision-making on station layout with a choice of reference cities.
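The transfer idea can be sketched in a few lines. This is not the paper's model: here each micro-region is reduced to a hand-made feature vector of OpenStreetMap-style counts, reference-city regions carry a has-station label, and a 1-nearest-neighbour rule "transfers" those labels to a new city's regions; all numbers are illustrative.

```python
# Minimal transfer sketch (illustrative, not the paper's method): label a new
# city's micro-region by the most similar labeled region of reference cities.
import math

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# (shops, offices, transit stops) per micro-region in reference cities,
# with a has-station label -- made-up numbers.
reference = [
    ((25, 10, 4), True),   # dense mixed-use centre
    ((30, 18, 6), True),
    ((2, 1, 0), False),    # sparse outskirts
    ((4, 0, 1), False),
]

def worth_placing_station(region_features):
    nearest = min(reference, key=lambda rec: distance(rec[0], region_features))
    return nearest[1]

print(worth_placing_station((28, 14, 5)))  # downtown-like region
print(worth_placing_station((3, 1, 0)))    # sparse region
```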
The computational inference of aesthetics is an uncertain task due to its subjective nature. Many datasets have been proposed to tackle the problem by providing pairs of images and aesthetic scores based on human ratings. However, humans are better at expressing their opinions, tastes, and emotions through language than at summarizing them with a single number. In fact, photo critiques provide much richer information, as they reveal how and why users rate the aesthetics of visual stimuli. In this regard, we propose the Reddit Photo Critique Dataset (RPCD), which contains tuples of images and photo critiques. RPCD consists of 74K images and 220K comments, collected from a Reddit community used by hobbyists and professional photographers to improve their photography skills by leveraging constructive community feedback. The proposed dataset differs from previous aesthetics datasets mainly in three aspects, namely (i) the large scale of the dataset and the extended length of the comments criticizing different aspects of an image, (ii) it contains mostly UltraHD images, and (iii) it is collected through an automatic pipeline that can easily be extended to new data. To the best of our knowledge, in this work we present the first attempt to estimate the aesthetic quality of visual stimuli from critiques. To this end, we exploit the sentiment polarity of the critiques as an indicator of aesthetic judgment. We demonstrate how sentiment correlates positively with the aesthetic judgments available for two aesthetic assessment benchmarks. Finally, we experiment with several models using the sentiment scores as targets for ranking images. The dataset and benchmarks are available (https://github.com/mediatechnologycenter/aestheval).
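The evaluation idea above, checking that critique sentiment tracks human aesthetic ratings, reduces to a rank correlation. A minimal sketch with made-up scores, using Spearman's correlation computed from ranks (assuming no tied values):

```python
# Sketch: correlate per-image critique sentiment with human aesthetic ratings
# via Spearman's rank correlation. All scores below are invented.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    # Pearson correlation of ranks; with no ties both rank sets are
    # permutations of 0..n-1, so their variances are equal.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mean = (n - 1) / 2.0
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

sentiment = [0.9, 0.1, 0.5, -0.3, 0.7]   # mean critique polarity per image
ratings   = [8.5, 3.0, 6.0, 2.0, 7.5]    # human aesthetic scores
print(round(spearman(sentiment, ratings), 3))
```

A value near 1 would indicate that sentiment polarity preserves the human aesthetic ranking.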
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges.
Analyzing 88 sources published from 2011 to 2021, this paper presents a first systematic review of computer vision-based analysis of buildings and the built environment, to assess its value for architectural and urban design research. Following a multi-stage selection process, the types of architectural applications discussed include building classification, detail classification, qualitative environmental analysis, building condition surveys, and building value estimation. This reveals current research gaps and trends, and highlights two main categories of research aims. First, to use or optimize computer vision methods for architectural image data, which can then help automate time-consuming, labor-intensive, or complex visual analysis tasks. Second, to explore the methodological benefits of machine learning approaches for investigating new questions about the built environment by finding patterns and relationships between visual, statistical, and qualitative data, which may overcome the limitations of traditional manual analysis. This growing body of research offers new methods for architectural and design research, and the paper identifies future research challenges and directions.
Due to the reduction of technological costs and the increase in satellite launches, satellite images are becoming increasingly popular and easier to obtain. Besides serving benevolent purposes, satellite data can also be used for malicious reasons such as misinformation. In fact, satellite images can easily be manipulated with general image editing tools. Moreover, with the proliferation of deep neural networks (DNNs) that can generate realistic synthetic imagery belonging to various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the state of the art (SOTA) on the generation and manipulation of satellite images. In particular, we focus on both the generation of synthetic satellite imagery from scratch and the semantic manipulation of satellite images by means of image-transfer technologies, including the transformation of images obtained from one type of sensor to another. We also describe the forensic detection techniques that have been researched so far to classify and detect synthetic image forgeries. While we focus mostly on forensic techniques explicitly tailored to the detection of AI-generated synthetic content, we also review some methods designed for general splicing detection, which can in principle also be used to discover AI-manipulated images.
Predicting the geographic location (geo-localization) from a single ground-level RGB image taken anywhere in the world is a very challenging problem. The challenges include huge diversity of images due to different environmental scenarios, drastic variation in the appearance of the same location depending on the time of day, weather, and season, and, more importantly, the fact that the prediction is made from a single image possibly having only a few geo-locating cues. For these reasons, most existing works are restricted to specific cities, imagery, or worldwide landmarks. In this work, we focus on developing an efficient solution to planet-scale single-image geo-localization. To this end, we propose TransLocator, a unified dual-branch transformer network that attends to tiny details over the entire image and produces robust feature representations under extreme appearance variations. TransLocator takes an RGB image and its semantic segmentation map as inputs, interacts between its two parallel branches after each transformer layer, and simultaneously performs geo-localization and scene recognition in a multi-task fashion. We evaluate TransLocator on four benchmark datasets, Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, and obtain 5.5%, 14.1%, 4.9%, and 9.9% continent-level accuracy improvements over the state of the art. TransLocator is also validated on real-world test images and found to be more effective than previous methods.
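The "continent-level accuracy" reported above follows the usual convention in single-image geo-localization: a prediction counts as correct if it lands within a distance threshold of the ground truth (commonly 1 km for street, 25 km for city, 750 km for country, 2500 km for continent level). A sketch with made-up predictions, using the haversine great-circle distance:

```python
# Threshold accuracy for geo-localization: fraction of predictions within a
# given great-circle distance of the ground truth. Coordinates are examples.
import math

def haversine_km(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # Earth radius ~6371 km

def accuracy_at(preds, truths, threshold_km):
    hits = sum(haversine_km(p, t) <= threshold_km for p, t in zip(preds, truths))
    return hits / len(preds)

truths = [(48.8566, 2.3522), (40.7128, -74.0060)]   # Paris, New York
preds  = [(48.86, 2.35), (34.0522, -118.2437)]      # near Paris, Los Angeles

print(accuracy_at(preds, truths, 25))     # city level
print(accuracy_at(preds, truths, 2500))   # continent level
```

The New York image predicted in Los Angeles misses even the 2500 km threshold, so both levels score 0.5 on this toy pair.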
The aesthetic quality of an image is defined as the measure or appreciation of its beauty. Aesthetics is inherently a subjective property, but there exist certain factors that influence it, such as the semantic content of the image, the attributes describing its artistic aspects, the photographic setup used for the shot, etc. In this paper, we propose a method for the automatic prediction of the aesthetics of an image based on the analysis of its semantic content, artistic style, and composition. The proposed network includes: a pre-trained network for semantic feature extraction (the Backbone); a Multi-Layer Perceptron (MLP) network that relies on the Backbone features to predict image attributes (the AttributeNet); and a self-adaptive HyperNetwork that exploits the attributes previously encoded into the embedding generated by the AttributeNet to predict the parameters of a target network dedicated to aesthetic estimation (the AestheticNet). Given an image, the proposed multi-network is able to predict style and composition attributes as well as the aesthetic score distribution. Results on three benchmark datasets demonstrate the effectiveness of the proposed method, while an ablation study gives a better understanding of the proposed network.
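The distinctive element is the hypernetwork: one network emits the parameters of another. A drastically simplified sketch, not the paper's architecture, where both the hyper-network and the target network are single linear layers with hand-set weights:

```python
# Illustrative hypernetwork sketch: the hyper-net maps an attribute embedding
# to the (weights, bias) of a target linear scorer, which is then applied to
# the image features. All numbers are invented.

def hyper_network(attribute_embedding, H, b):
    """Linear hyper-net: returns (weights, bias) of the target net."""
    out = [sum(h * a for h, a in zip(row, attribute_embedding)) + bi
           for row, bi in zip(H, b)]
    return out[:-1], out[-1]        # first entries: weights, last entry: bias

def target_network(image_features, weights, bias):
    return sum(w * x for w, x in zip(weights, image_features)) + bias

attr = [1.0, 0.5]                         # e.g. style / composition scores
H = [[0.2, 0.1], [0.0, 0.4], [0.1, 0.0]]  # hyper-net weights (3 outputs)
b = [0.0, 0.0, 0.1]
w, bias = hyper_network(attr, H, b)
score = target_network([2.0, 1.0], w, bias)
print(round(score, 3))
```

The point of the construction is that the aesthetic scorer's parameters become a function of the predicted attributes rather than fixed weights.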
We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% -15% on CIFAR-10 and 11% -14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
Vast amounts of geo-referenced panoramic images are freely available for cities across the globe, as well as detailed maps with location and metadata on a great variety of urban objects. They provide a potential source of information on urban objects, but manual annotation for object detection is costly, laborious, and difficult. Can we utilize such multimedia sources to automatically annotate street-level images as an inexpensive alternative to manual labeling? With the PanorAMS framework, we introduce a method to automatically generate bounding box annotations for panoramic images based on urban context information. Following this approach, we acquire large-scale, albeit noisy, annotations for an urban dataset solely from open data sources, in a fast and automatic manner. The dataset covers the city of Amsterdam and includes over 14 million noisy bounding box annotations of 22 object categories in 771,299 panoramic images. For many objects, further fine-grained information is available from the geospatial metadata, such as building value, function, and average surface area. Such information would have been difficult, if not impossible, to acquire based on the images alone. For detailed evaluation, we introduce an efficient crowdsourcing protocol for bounding box annotation in panoramic images, which we deploy to acquire 147,075 ground-truth object annotations for a subset of 7,348 images, the PanorAMS-clean dataset. For our PanorAMS-noisy dataset, we provide an extensive analysis of the noise and of how different types of noise affect image classification and object detection performance. We make both datasets, PanorAMS-noisy and PanorAMS-clean, as well as benchmarks and tools, publicly available.
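One ingredient of such geo-to-image annotation can be sketched concretely: in an equirectangular panorama, the compass bearing from the camera to a geo-referenced object maps linearly to a horizontal pixel position. The sketch below is a simplification (flat-earth bearing, horizontal placement only); a real pipeline such as the one described would also need object extent and camera pose.

```python
# Sketch: project a geo-referenced object into an equirectangular panorama
# by converting the camera-to-object bearing into a pixel column.
import math

def bearing_deg(cam, obj):
    """Approximate compass bearing (degrees) from camera to object,
    using a local flat-earth approximation."""
    dlat = obj[0] - cam[0]
    dlon = (obj[1] - cam[1]) * math.cos(math.radians(cam[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def bearing_to_column(bearing, image_width, camera_heading=0.0):
    """Pixel column at which the bearing appears; the camera heading is
    centred in the image."""
    rel = (bearing - camera_heading + 180.0) % 360.0
    return int(rel / 360.0 * image_width)

cam = (52.3702, 4.8952)          # camera position (Amsterdam-like)
obj = (52.3702, 4.8962)          # object due east of the camera
col = bearing_to_column(bearing_deg(cam, obj), image_width=3600)
print(col)
```

Small errors in the map positions or camera heading shift this column, which is one source of the bounding-box noise the abstract analyzes.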
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Street-level imagery holds significant potential to scale up in-situ data collection. This is enabled by combining cheap, high-quality cameras with recent advances in deep learning compute solutions to derive relevant thematic information. We present a framework to collect and extract crop type and phenological information from street-level imagery using computer vision. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed 200-km route was surveyed, collecting one picture per second, resulting in a total of 400,000 geo-tagged pictures. At 220 specific parcel locations, in-field crop observations were recorded for 17 crop types. Furthermore, the time span included specific pre-emergence parcel stages, such as differently cultivated bare soil for spring and summer crops, as well as post-harvest cultivation practices, e.g. green manuring and catch crops. Classification was done using TensorFlow with a well-known image recognition model, based on transfer learning with convolutional neural networks (MobileNet). A hypertuning methodology was developed to obtain the best-performing model among 160 models. This best model was applied to an independent inference set, discriminating crop type with a macro F1 score of 88.1%, and of 86.9% at the parcel level. The potential and caveats of the approach, along with practical considerations for implementation and improvement, are discussed. The proposed framework speeds up high-quality in-situ data collection and suggests avenues for massive data collection via automated classification using computer vision.
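The reported metric, macro F1, averages per-class F1 scores so that rare crop types weigh as much as common ones. A small stdlib implementation on made-up labels:

```python
# Macro F1: mean of per-class F1 scores, each class weighted equally.

def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); defined as 0 when TP is 0
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["wheat", "wheat", "maize", "potato", "maize"]
y_pred = ["wheat", "maize", "maize", "potato", "maize"]
print(round(macro_f1(y_true, y_pred), 3))
```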
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
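The zero-shot transfer described above boils down to a simple decision rule once the encoders have run: embed the image and one text prompt per class into the shared space, then pick the prompt whose embedding is most cosine-similar to the image embedding. The sketch below replaces the real encoders with hand-made vectors:

```python
# Schematic zero-shot classification in a CLIP-style shared embedding space.
# The embeddings are invented stand-ins for real encoder outputs.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

text_embeddings = {
    "a photo of a dog": [0.9, 0.1, 0.0],
    "a photo of a cat": [0.1, 0.9, 0.1],
    "a photo of a car": [0.0, 0.1, 0.9],
}

def zero_shot_classify(image_embedding):
    return max(text_embeddings,
               key=lambda prompt: cosine(text_embeddings[prompt], image_embedding))

print(zero_shot_classify([0.8, 0.2, 0.05]))
```

Because the classes are specified purely through text prompts, new classes can be added without any retraining, which is what makes the transfer "zero-shot".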
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here, are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research direction is given.
Due to advances in deep learning and the increasing availability of datasets, automatic license plate recognition (ALPR) systems have shown remarkable performance on license plates (LPs) from multiple regions. The evaluation of deep ALPR systems is usually done within each dataset; therefore, it is questionable whether such results are a reliable indicator of generalization ability. In this paper, we propose a traditional-split versus leave-one-dataset-out experimental setup to empirically assess the cross-dataset generalization of 12 optical character recognition (OCR) models applied to LP recognition on nine public datasets with good variety in several aspects (e.g., acquisition settings, image resolution, and LP layouts). We also introduce a public dataset for end-to-end ALPR, which is the first to contain images of vehicles with Mercosur LPs and also has the largest number of motorcycle images. The experimental results reveal the limitations of the traditional-split protocol for evaluating approaches in the ALPR context, as performance drops significantly on most datasets when the models are trained and tested in a leave-one-dataset-out fashion.
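The leave-one-dataset-out protocol itself is a short loop: for each dataset, train on all the others and test on the held-out one. A skeleton with toy stand-ins for the actual OCR training and evaluation routines:

```python
# Leave-one-dataset-out evaluation skeleton. `train` and `evaluate` are
# placeholders for the real OCR pipeline; the toy versions below only
# illustrate the control flow.

def leave_one_dataset_out(datasets, train, evaluate):
    results = {}
    for held_out, test_data in datasets.items():
        train_data = [d for name, d in datasets.items() if name != held_out]
        model = train(train_data)
        results[held_out] = evaluate(model, test_data)
    return results

# Toy stand-ins: a "model" is just the number of training samples it saw,
# and the "score" is a made-up function of that count and the test size.
datasets = {"A": list(range(10)), "B": list(range(20)), "C": list(range(30))}
train = lambda ds: sum(len(d) for d in ds)
evaluate = lambda model, test: model / (model + len(test))
print(leave_one_dataset_out(datasets, train, evaluate))
```

Comparing these held-out scores against traditional within-dataset splits exposes exactly the generalization gap the abstract reports.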