In this paper, we formulate the problem of predicting a geolocation from free text as a sequence-to-sequence problem. Using this formulation, we obtain a geocoding model by training a T5 encoder-decoder transformer model with free text as input and a geolocation as output. The geocoding model was trained on geo-tagged wikidump data, using adaptive cell partitioning for the geolocation representation. All of the code, including the REST-based application, the dataset, and the model checkpoints used in this work, is publicly available.
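To make the sequence-to-sequence formulation concrete, the sketch below fine-tunes an off-the-shelf T5 checkpoint on a single hypothetical text-to-cell pair. The checkpoint name, the "geocode:" prefix, and the cell-identifier string format are illustrative assumptions; the paper's adaptive cell partitioning is not reproduced here.

```python
# Minimal sketch: fine-tuning T5 to map free text to a geocell label treated as a
# text target (an assumed target encoding; the paper's adaptive cell partitioning
# is approximated here by a precomputed cell-id string).
from transformers import T5TokenizerFast, T5ForConditionalGeneration
import torch

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# One hypothetical training pair: input text -> cell identifier as a string.
text = "geocode: The Eiffel Tower stands on the Champ de Mars in Paris."
target = "cell_48.858_2.294"  # placeholder label for the cell containing the location

inputs = tokenizer(text, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss   # standard seq2seq cross-entropy
loss.backward()                              # an optimizer step would follow in training

# Inference: decode the predicted cell string for a description.
with torch.no_grad():
    pred_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))
```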
Pre-trained models (PTMs) have become a fundamental backbone for downstream tasks in natural language processing and computer vision. Despite the initial gains obtained by applying generic PTMs to geo-related tasks at Baidu Maps, performance plateaued over time. One of the main reasons for this plateau is the lack of readily available geographic knowledge in generic PTMs. To address this problem, in this paper we present ERNIE-GeoL, a geography-and-language pre-trained model designed and developed for improving geo-related tasks at Baidu Maps. ERNIE-GeoL is carefully designed to learn a universal representation of geography and language by pre-training on large-scale data generated from a heterogeneous graph that contains abundant geographic knowledge. Extensive quantitative and qualitative experiments conducted on large-scale real-world datasets demonstrate the superiority and effectiveness of ERNIE-GeoL. ERNIE-GeoL has been deployed in production at Baidu Maps since April 2021, significantly benefiting the performance of various downstream tasks. This demonstrates that ERNIE-GeoL can serve as a fundamental backbone for a wide range of geo-related tasks.
Reliable image geolocation is crucial for several applications, ranging from social media geo-tagging to fake news detection. State-of-the-art geolocation methods surpass human performance on the task of geolocation estimation from images. However, no method assesses the suitability of an image for this task, which results in unreliable and erroneous estimations for images containing no geolocation clues. In this paper, we define the task of image localizability, i.e., the suitability of an image for geolocation, and propose a selective prediction methodology to address it. In particular, we propose two novel selection functions that exploit the output probability distribution of a geolocation model to infer localizability at different scales. Our selection functions are benchmarked against the most widely used selective prediction baselines and outperform them in all cases. By abstaining from predicting non-localizable images, we improve city-scale geolocation accuracy to 70.5%, thus making current geolocation models reliable for real-world applications.
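As an illustration of selective prediction over a geolocation model's output distribution, the sketch below implements two generic selection functions (maximum probability and negative entropy) with a confidence threshold for abstention. These are standard baselines under assumed threshold values, not necessarily the paper's proposed functions.

```python
import numpy as np

def max_prob_score(cell_probs: np.ndarray) -> float:
    """Confidence = highest cell probability (higher means more localizable)."""
    return float(cell_probs.max())

def neg_entropy_score(cell_probs: np.ndarray) -> float:
    """Confidence = negative entropy of the cell distribution (peaked -> localizable)."""
    p = np.clip(cell_probs, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

def selective_predict(cell_probs, threshold, score_fn=max_prob_score):
    """Return the predicted cell index, or None to abstain on non-localizable images."""
    if score_fn(cell_probs) < threshold:
        return None
    return int(np.argmax(cell_probs))

# Example: a peaked distribution is accepted, a near-uniform one is rejected.
peaked = np.array([0.90, 0.05, 0.03, 0.02])
flat = np.full(4, 0.25)
print(selective_predict(peaked, threshold=0.5))  # -> 0
print(selective_predict(flat, threshold=0.5))    # -> None (abstain)
```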
Predicting the geographic location (geolocation) from a single ground-level RGB image taken anywhere in the world is a very challenging problem. The challenges include the huge diversity of images due to different environmental scenarios, drastic variation in the appearance of the same location depending on the time of day, weather, and season, and, more importantly, the fact that the prediction must be made from a single image that may contain only a few geolocation cues. For these reasons, most existing works are restricted to specific cities, imagery, or worldwide landmarks. In this work, we focus on developing an efficient solution for planet-scale single-image geolocalization. To this end, we propose TransLocator, a unified dual-branch transformer network that attends to fine details across the entire image and produces robust feature representations under extreme appearance variations. TransLocator takes an RGB image and its semantic segmentation map as inputs, interacts between its two parallel branches after each transformer layer, and simultaneously performs geolocalization and scene recognition in a multi-task fashion. We evaluate TransLocator on four benchmark datasets, Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, and obtain continent-level accuracy improvements of 5.5%, 14.1%, 4.9%, and 9.9% over the state of the art. TransLocator is also validated on real-world test images and found to be more effective than previous methods.
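The sketch below is a heavily simplified dual-branch layout in the spirit of the description above: RGB and segmentation tokens are encoded in parallel, the branches exchange information after every layer, and two heads perform geolocalization and scene recognition. The patch embedding, the averaging-based interaction, and all sizes are assumptions, not TransLocator's actual design.

```python
import torch
import torch.nn as nn

class DualBranchGeoNet(nn.Module):
    """Toy dual-branch transformer: RGB tokens and segmentation tokens are encoded
    in parallel and exchange information (here, by averaging) after every layer."""
    def __init__(self, dim=128, depth=4, n_cells=1000, n_scenes=16):
        super().__init__()
        self.rgb_embed = nn.Linear(3 * 16 * 16, dim)   # flattened 16x16 RGB patches
        self.seg_embed = nn.Linear(16 * 16, dim)       # flattened 16x16 segmentation patches
        layer = lambda: nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.rgb_layers = nn.ModuleList([layer() for _ in range(depth)])
        self.seg_layers = nn.ModuleList([layer() for _ in range(depth)])
        self.geo_head = nn.Linear(dim, n_cells)    # geocell classification
        self.scene_head = nn.Linear(dim, n_scenes) # scene recognition (multi-task)

    def forward(self, rgb_patches, seg_patches):
        r, s = self.rgb_embed(rgb_patches), self.seg_embed(seg_patches)
        for rl, sl in zip(self.rgb_layers, self.seg_layers):
            r, s = rl(r), sl(s)
            fused = 0.5 * (r + s)      # cross-branch interaction after each layer
            r, s = fused, fused
        pooled = r.mean(dim=1)
        return self.geo_head(pooled), self.scene_head(pooled)

model = DualBranchGeoNet()
rgb = torch.randn(2, 64, 3 * 16 * 16)   # batch of 2 images, 64 patches each
seg = torch.randn(2, 64, 16 * 16)
geo_logits, scene_logits = model(rgb, seg)
print(geo_logits.shape, scene_logits.shape)  # torch.Size([2, 1000]) torch.Size([2, 16])
```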
Crop type maps are critical for tracking agricultural land use and estimating crop production. Remote sensing has proven an efficient and reliable tool for creating these maps in regions with abundant ground labels for model training, yet these labels remain difficult to obtain in many regions and years. NASA's Global Ecosystem Dynamics Investigation (GEDI) spaceborne lidar instrument, originally designed for forest monitoring, has shown promise for distinguishing tall and short crops. In the current study, we leverage GEDI to develop wall-to-wall maps of short vs tall crops on a global scale at 10 m resolution for 2019-2021. Specifically, we show that (1) GEDI returns can reliably be classified into tall and short crops after removing shots with extreme view angles or topographic slope, (2) the frequency of tall crops over time can be used to identify months when tall crops are at their peak height, and (3) GEDI shots in these months can then be used to train random forest models that use Sentinel-2 time series to accurately predict short vs. tall crops. Independent reference data from around the world are then used to evaluate these GEDI-S2 maps. We find that GEDI-S2 performed nearly as well as models trained on thousands of local reference training points, with accuracies of at least 87% and often above 90% throughout the Americas, Europe, and East Asia. Systematic underestimation of tall crop area was observed in regions where crops frequently exhibit low biomass, namely Africa and South Asia, and further work is needed in these systems. Although the GEDI-S2 approach only differentiates tall from short crops, in many landscapes this distinction goes a long way toward mapping the main individual crop types. The combination of GEDI and Sentinel-2 thus presents a very promising path towards global crop mapping with minimal reliance on ground data.
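A minimal sketch of step (3) follows: a random forest trained on Sentinel-2 time-series features with GEDI-derived tall/short labels. Random data stands in for both inputs, and the feature layout (monthly composites of a few bands) is an assumption.

```python
# Minimal sketch of step (3): a random forest that maps Sentinel-2 time-series
# features to a binary tall/short crop label derived from GEDI shots.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_months, n_bands = 5000, 12, 4
X = rng.normal(size=(n_samples, n_months * n_bands))   # stand-in for S2 time series
y = rng.integers(0, 2, size=n_samples)                 # stand-in for GEDI tall/short labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```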
The concept of geolocation refers to the process of determining the position of an "entity" on Earth, typically in terms of Global Positioning System (GPS) coordinates. The entity of interest may be an image, a sequence of images, a video, a satellite image, or even an object visible within an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and as deep learning has risen to boost the performance of machine learning models, the fields of visual and object geolocalization have emerged, owing to their significant impact on a wide range of applications such as augmented reality, robotics, self-driving vehicles, road maintenance, and 3D reconstruction. This paper provides a comprehensive survey of geolocalization involving images, covering both the problem of determining where an image was captured (image geolocalization) and that of geolocating objects within an image (object geolocalization). We provide an in-depth study of each field, including a summary of popular algorithms, descriptions of the proposed datasets, and an analysis of performance results to illustrate the current state of the art.
Due to the usefulness in data enrichment for data analysis tasks, joinable table discovery has become an important operation in data lake management. Existing approaches target equi-joins, the most common way of combining tables for creating a unified view, or semantic joins, which tolerate misspellings and different formats to deliver more join results. They are either exact solutions whose running time is linear in the sizes of query column and target table repository or approximate solutions lacking precision. In this paper, we propose Deepjoin, a deep learning model for accurate and efficient joinable table discovery. Our solution is an embedding-based retrieval, which employs a pre-trained language model (PLM) and is designed as one framework serving both equi- and semantic joins. We propose a set of contextualization options to transform column contents to a text sequence. The PLM reads the sequence and is fine-tuned to embed columns to vectors such that columns are expected to be joinable if they are close to each other in the vector space. Since the output of the PLM is fixed in length, the subsequent search procedure becomes independent of the column size. With a state-of-the-art approximate nearest neighbor search algorithm, the search time is logarithmic in the repository size. To train the model, we devise the techniques for preparing training data as well as data augmentation. The experiments on real datasets demonstrate that by training on a small subset of a corpus, Deepjoin generalizes to large datasets and its precision consistently outperforms that of other approximate solutions. Deepjoin is even more accurate than an exact solution to semantic joins when evaluated with labels from experts. Moreover, when equipped with a GPU, Deepjoin is up to two orders of magnitude faster than existing solutions.
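The sketch below illustrates the embed-then-search idea: columns are serialized to text, embedded with a generic sentence encoder standing in for Deepjoin's fine-tuned PLM, and queried through an HNSW approximate nearest-neighbor index. The encoder name, the contextualization format, and the index parameters are assumptions.

```python
# Sketch of embedding-based joinable-column retrieval: serialize each column to text,
# embed with an off-the-shelf sentence encoder (a stand-in for Deepjoin's fine-tuned
# PLM), and search with an HNSW approximate nearest-neighbor index.
import faiss
from sentence_transformers import SentenceTransformer

def column_to_text(name, values, max_values=50):
    """One simple contextualization option: column name followed by sample values."""
    return name + ": " + ", ".join(map(str, values[:max_values]))

columns = {
    "city":    ["Berlin", "Paris", "Madrid", "Rome"],
    "country": ["Germany", "France", "Spain", "Italy"],
    "capital": ["Berlin", "Paris", "Madrid", "Roma"],
}

model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed generic encoder
texts = [column_to_text(n, v) for n, v in columns.items()]
emb = model.encode(texts, normalize_embeddings=True).astype("float32")

index = faiss.IndexHNSWFlat(emb.shape[1], 32)            # HNSW index over column vectors
index.add(emb)

query = model.encode([column_to_text("town", ["Paris", "Rome", "Berlin"])],
                     normalize_embeddings=True).astype("float32")
dists, ids = index.search(query, 2)
print([list(columns)[i] for i in ids[0]])                # closest candidate columns
```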
Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.
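The sketch below implements, in plain NumPy, the sampling-and-grouping step that underlies the hierarchical design described above: farthest point sampling picks well-spread centroids and a ball query collects each centroid's local neighborhood. The radius and group sizes are arbitrary choices for illustration.

```python
import numpy as np

def farthest_point_sampling(points, n_centroids):
    """Pick well-spread centroids: iteratively choose the point farthest from the chosen set."""
    chosen = [0]
    dist = np.full(len(points), np.inf)
    for _ in range(n_centroids - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.array(chosen)

def ball_query(points, centroid_idx, radius, max_neighbors):
    """Group the points that fall within `radius` of each centroid (local neighborhoods)."""
    groups = []
    for ci in centroid_idx:
        d = np.linalg.norm(points - points[ci], axis=1)
        groups.append(np.where(d < radius)[0][:max_neighbors])
    return groups

# Toy point cloud: 1024 points in the unit cube.
pts = np.random.default_rng(0).random((1024, 3))
centroids = farthest_point_sampling(pts, n_centroids=32)
neighborhoods = ball_query(pts, centroids, radius=0.2, max_neighbors=64)
print(len(neighborhoods), [len(g) for g in neighborhoods[:4]])
# Each neighborhood would then be fed to a small PointNet to produce a local feature.
```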
We show that patch-based models, such as epitomes, can achieve performance superior to the current state of the art in semantic segmentation and label super-resolution based on deep convolutional neural networks. We derive a new training algorithm that allows learning from very large datasets, and derive a label super-resolution algorithm as a statistical inference algorithm over epitomic representations. We illustrate our methods on land cover mapping and medical image analysis tasks.
A "trajectory" refers to the trace generated by a moving object in geographical space, usually represented by a series of chronologically ordered points, where each point consists of a set of geospatial coordinates and a timestamp. Rapid advances in location sensing and wireless communication technology have enabled us to collect and store massive amounts of trajectory data. As a result, many researchers use trajectory data to analyze the mobility of various moving objects. In this work, we focus on "urban vehicle trajectories", i.e., the trajectories of vehicles in urban traffic networks, and on "urban vehicle trajectory analytics". Urban vehicle trajectory analytics offers unprecedented opportunities to understand vehicle movement patterns in urban traffic networks, including both user-centric travel experiences and system-wide spatiotemporal patterns. The spatiotemporal features of urban vehicle trajectory data are structurally correlated with each other, and many previous researchers have therefore used a variety of methods to understand this structure. In particular, deep learning models have attracted the attention of many researchers due to their powerful function approximation and feature representation abilities. Accordingly, the objective of this work is to develop deep-learning-based models for urban vehicle trajectory analytics in order to better understand the mobility patterns of urban traffic networks. In particular, this work focuses on two research topics of high necessity, importance, and applicability: next-location prediction and synthetic trajectory generation. We present various novel deep-learning-based models for urban vehicle trajectory analytics.
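As a minimal illustration of the first topic, next-location prediction, the sketch below trains a GRU over sequences of discrete location ids. The vocabulary size, the sequence construction, and the architecture are assumptions rather than the models proposed in that work.

```python
import torch
import torch.nn as nn

class NextLocationGRU(nn.Module):
    """Embed a sequence of visited location ids and predict the next location id."""
    def __init__(self, n_locations=500, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_locations, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_locations)

    def forward(self, seq):                 # seq: (batch, time) location ids
        h, _ = self.gru(self.embed(seq))
        return self.out(h[:, -1])           # logits over the next location

model = NextLocationGRU()
batch = torch.randint(0, 500, (8, 10))      # 8 toy trajectories of length 10
targets = torch.randint(0, 500, (8,))       # the observed next locations
loss = nn.functional.cross_entropy(model(batch), targets)
loss.backward()                             # one training step would update the model
print(float(loss))
```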
Single-image pose estimation is a fundamental problem in many vision and robotics tasks, and existing deep learning approaches suffer from not fully modeling and handling: i) uncertainty about the predictions, and ii) symmetric objects with multiple (sometimes infinite) correct poses. To this end, we introduce a method to estimate arbitrary, non-parametric distributions on SO(3). Our key idea is to represent the distributions implicitly, with a neural network that estimates the probability given the input image and a candidate pose. Grid sampling or gradient ascent can be used to find the most likely pose, but it is also possible to evaluate the probability at any pose, enabling reasoning about symmetries and uncertainty. As this is the most general way of representing distributions on manifolds, we introduce a dataset of challenging symmetric and nearly symmetric objects to showcase its rich expressive power. We require no supervision of pose uncertainty; the model is trained with only a single pose per example. Nonetheless, our implicit model is highly expressive and handles complex distributions over 3D poses while still obtaining accurate pose estimates in standard, non-ambiguous settings, achieving state-of-the-art performance on the Pascal3D+ and ModelNet10-SO(3) benchmarks.
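The sketch below captures the implicit-distribution idea: a small network scores (image feature, candidate rotation) pairs, and a softmax over a set of sampled rotations approximates the normalized distribution, whose peak gives the most likely pose. The image feature, the scoring network, and the random rotation sampling (used in place of an equivolumetric SO(3) grid) are placeholders.

```python
import torch
import torch.nn as nn
from scipy.spatial.transform import Rotation

class PoseScorer(nn.Module):
    """Scores how compatible a candidate rotation (flattened 3x3 matrix) is with an image feature."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 9, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, img_feat, rotations):          # rotations: (n, 9)
        n = rotations.shape[0]
        x = torch.cat([img_feat.expand(n, -1), rotations], dim=1)
        return self.mlp(x).squeeze(-1)                # unnormalized scores

scorer = PoseScorer()
img_feat = torch.randn(1, 32)                         # placeholder image embedding

# Sample candidate rotations (a crude stand-in for an equivolumetric SO(3) grid).
grid = Rotation.random(2048, random_state=0).as_matrix().reshape(2048, 9)
grid = torch.tensor(grid, dtype=torch.float32)

scores = scorer(img_feat, grid)
probs = torch.softmax(scores, dim=0)                  # normalize over the sampled grid
best = grid[probs.argmax()].reshape(3, 3)             # most likely pose on the grid
print(best, float(probs.max()))
```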
Obtaining a dynamic population distribution is key to many decision-making processes such as urban planning, disaster management and most importantly helping the government to better allocate socio-technical supply. For the aspiration of these objectives, good population data is essential. The traditional method of collecting population data through the census is expensive and tedious. In recent years, statistical and machine learning methods have been developed to estimate population distribution. Most of the methods use data sets that are either developed on a small scale or not publicly available yet. Thus, the development and evaluation of new methods become challenging. We fill this gap by providing a comprehensive data set for population estimation in 98 European cities. The data set comprises a digital elevation model, local climate zone, land use proportions, nighttime lights in combination with multi-spectral Sentinel-2 imagery, and data from the Open Street Map initiative. We anticipate that it would be a valuable addition to the research community for the development of sophisticated approaches in the field of population estimation.
We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5% in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation_via_guidebook_grounding.
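A minimal sketch of the grounding mechanism follows: an image feature attends over clue embeddings, the attended summary feeds a country classifier, and the attention weights are additionally supervised with country-level pseudo labels. The dimensions, the auxiliary loss form, and the feature sources are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClueAttentionClassifier(nn.Module):
    """Image features attend over guidebook clue embeddings; the attended clues inform
    the country prediction, and attention weights can be supervised with pseudo labels."""
    def __init__(self, img_dim=512, clue_dim=512, n_countries=200):
        super().__init__()
        self.query = nn.Linear(img_dim, clue_dim)
        self.classifier = nn.Linear(img_dim + clue_dim, n_countries)

    def forward(self, img_feat, clue_emb):                            # (B, img_dim), (N, clue_dim)
        attn = F.softmax(self.query(img_feat) @ clue_emb.T, dim=-1)   # (B, N) attention over clues
        attended = attn @ clue_emb                                    # weighted clue summary
        logits = self.classifier(torch.cat([img_feat, attended], dim=-1))
        return logits, attn

model = ClueAttentionClassifier()
img_feat = torch.randn(4, 512)            # placeholder image features
clue_emb = torch.randn(300, 512)          # placeholder clue (sentence) embeddings
country_y = torch.randint(0, 200, (4,))
clue_pseudo = torch.randint(0, 300, (4,)) # pseudo label: a clue associated with the country

logits, attn = model(img_feat, clue_emb)
loss = F.cross_entropy(logits, country_y) + F.nll_loss(torch.log(attn + 1e-9), clue_pseudo)
loss.backward()
print(float(loss))
```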
Out-of-distribution generalization (OODG) is a longstanding challenge for neural networks. This challenge is quite apparent in tasks with well-defined variables and rules, where explicit use of the rules could solve problems independently of the particular values of the variables, but networks tend to be tied to the range of values sampled in their training data. Large transformer-based language models have pushed the boundaries on how well neural networks can solve previously unseen problems, but their complexity and lack of clarity about the relevant content in their training data obfuscates how they achieve such robustness. As a step toward understanding how transformer-based systems generalize, we explore the question of OODG in small scale transformers trained with examples from a known distribution. Using a reasoning task based on the puzzle Sudoku, we show that OODG can occur on a complex problem if the training set includes examples sampled from the whole distribution of simpler component tasks. Successful generalization depends on carefully managing positional alignment when absolute position encoding is used, but we find that suppressing sensitivity to absolute positions overcomes this limitation. Taken together our results represent a small step toward understanding and promoting systematic generalization in transformers.
This work proposes an attention-based sequence-to-sequence model for handwritten word recognition and explores transfer learning for data-efficient training of HTR systems. To overcome training data scarcity, this work leverages models pre-trained on scene text images as a starting point for tailoring handwriting recognition models. A ResNet feature extraction stage and a bidirectional-LSTM-based sequence modeling stage together form the encoder. The prediction stage consists of a decoder and a content-based attention mechanism. The effectiveness of the proposed end-to-end HTR system has been empirically evaluated on the novel multi-writer dataset Imgur5K and the IAM dataset. The experimental results evaluate the performance of the HTR framework, further supported by an in-depth analysis of the error cases. The source code and pre-trained models are available at https://github.com/dmitrijsk/attentionhtr.
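The sketch below mirrors the described structure at toy scale: a stand-in convolutional feature extractor followed by a bidirectional LSTM forms the encoder, and a single decoding step applies content-based attention before predicting a character. The sizes and the character vocabulary are placeholders, not the AttentionHTR code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for ResNet feature extraction followed by a BiLSTM over the width axis."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Conv2d(1, feat_dim, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, img):                       # img: (B, 1, H, W)
        f = F.relu(self.cnn(img)).mean(dim=2)     # pool over height -> (B, feat_dim, W)
        out, _ = self.rnn(f.transpose(1, 2))      # (B, W, 2*hidden)
        return out

class AttnDecoderStep(nn.Module):
    """One content-based attention step: score encoder columns, attend, predict a character."""
    def __init__(self, enc_dim=256, hidden=128, n_chars=80):
        super().__init__()
        self.score = nn.Linear(enc_dim + hidden, 1)
        self.rnn = nn.GRUCell(enc_dim, hidden)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, enc, h):                                        # enc: (B, W, enc_dim)
        e = self.score(torch.cat([enc, h.unsqueeze(1).expand(-1, enc.size(1), -1)], -1))
        context = (F.softmax(e, dim=1) * enc).sum(dim=1)              # attended context vector
        h = self.rnn(context, h)
        return self.out(h), h                                         # character logits, new state

enc = Encoder()
dec = AttnDecoderStep()
features = enc(torch.randn(2, 1, 32, 100))        # two toy word images
h = torch.zeros(2, 128)
logits, h = dec(features, h)                      # first decoding step
print(logits.shape)                               # torch.Size([2, 80])
```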
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Planning the layout of a bike-sharing station network is a complex process, especially in cities that have only just implemented a bike-sharing system. Urban planners usually have to rely on publicly available data as well as privately provided data from the administration, and then use field-proven location-allocation models. Many smaller municipalities may find it difficult to hire experts for such planning. This paper proposes a new solution that simplifies and facilitates this planning process through the use of spatial embedding methods. Based solely on publicly available data from OpenStreetMap and station layouts from 34 cities in Europe, a method has been developed that divides a city into micro-regions using the Uber H3 discrete global grid system and, using transfer learning from existing systems in other cities, indicates the regions where it is worth placing a station. The result of this work is a mechanism that supports planners in their decision-making when designing a station layout, with a choice of reference cities.
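A minimal sketch of the micro-region step follows, using the h3 package to index station locations and enumerate neighboring candidate regions. The coordinates, the resolution, and the neighborhood size are assumptions, and the calls shown are the h3 v3 API (in v4 the equivalents are latlng_to_cell and grid_disk).

```python
# Minimal sketch of splitting an area into H3 micro-regions around known station
# locations (h3 v3 API: geo_to_h3 / k_ring).
import h3

stations = [            # hypothetical station coordinates (lat, lon)
    (52.2297, 21.0122),
    (52.2318, 21.0060),
    (52.2400, 21.0200),
]
resolution = 9           # H3 resolution used for micro-regions (an assumption)

station_cells = {h3.geo_to_h3(lat, lon, resolution) for lat, lon in stations}

# Candidate micro-regions: the stations' cells plus their immediate neighbors,
# i.e. the regions a downstream model would score as worth placing a station in.
candidates = set()
for cell in station_cells:
    candidates.update(h3.k_ring(cell, 1))

print(len(station_cells), "station cells,", len(candidates), "candidate micro-regions")
```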
Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources, such as text data, both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.
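The sketch below illustrates the visual-semantic embedding objective: image features are projected into a word-embedding space and trained with a hinge rank loss against the correct label's vector, which also enables nearest-label (zero-shot style) prediction. The dimensions, the margin, and the random "word vectors" are placeholders.

```python
import torch
import torch.nn as nn

def hinge_rank_loss(proj_img, label_vecs, labels, margin=0.1):
    """Encourage each projected image to be closer (by dot product) to its own label
    vector than to other labels' vectors, by at least `margin`."""
    sims = proj_img @ label_vecs.T                          # (B, n_labels) similarities
    pos = sims.gather(1, labels.unsqueeze(1))               # similarity to the true label
    violations = (margin - pos + sims).clamp(min=0)         # rank violations
    mask = torch.ones_like(sims).scatter_(1, labels.unsqueeze(1), 0.0)  # drop true-label term
    return (violations * mask).sum(dim=1).mean()

n_labels, word_dim, img_dim = 1000, 300, 2048
label_vecs = nn.functional.normalize(torch.randn(n_labels, word_dim), dim=1)  # stand-in word embeddings
project = nn.Linear(img_dim, word_dim)                      # visual -> semantic projection

img_feat = torch.randn(8, img_dim)                          # placeholder CNN features
labels = torch.randint(0, n_labels, (8,))
loss = hinge_rank_loss(project(img_feat), label_vecs, labels)
loss.backward()

# Zero-shot style prediction: nearest label vector, even for labels unseen in training.
pred = (project(img_feat) @ label_vecs.T).argmax(dim=1)
print(float(loss), pred.shape)
```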
Industries must follow government rules and regulations around the world to classify products when assessing duties and taxes for international shipment. The Harmonized System (HS) is the most standardized numerical method of classifying traded products among industry classification systems. A hierarchical ensemble model comprising a BERT transformer, NER, distance-based approaches, and knowledge graphs has been developed to address scalability, coverage, the ability to capture nuances, and automation and auditing requirements when classifying unknown text descriptions according to the HS method.
The ever-increasing number of materials science articles makes it hard to infer chemistry-structure-property relations from the published literature. We use natural language processing (NLP) methods to automatically extract material property data from the abstracts of the polymer literature. As a component of our pipeline, we trained a language model on 2.4 million materials science abstracts; when used as a text encoder, this materials-aware model outperforms other baseline models on three out of five named entity recognition datasets. Using this pipeline, we obtained about 300,000 material property records from about 130,000 abstracts in 60 hours. The extracted data were analyzed for a variety of applications, such as fuel cells, supercapacitors, and polymer solar cells, to recover non-trivial insights. The data extracted through our pipeline are made available through a web platform at https://polymerscholar.org, which makes it convenient to locate the material property data recorded in abstracts. This work demonstrates the feasibility of an automatic pipeline that starts from published literature and ends with a complete set of extracted material property information.
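As a rough illustration of abstract-level extraction, the sketch below runs a generic token-classification pipeline over an example sentence and pairs it with a crude regex for property values. The model name and the pairing heuristic are placeholders, not the paper's trained encoder or extraction pipeline.

```python
# Minimal sketch of property extraction from an abstract: a generic NER pipeline for
# entity spans plus a crude regex for numeric property values. The model below is a
# widely used general-purpose NER checkpoint, not a materials-specific encoder.
import re
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",          # placeholder NER model
               aggregation_strategy="simple")

abstract = ("The PVDF membrane prepared at MIT showed a tensile strength of "
            "48 MPa and a glass transition temperature of -35 C.")

entities = ner(abstract)                              # spans with labels and scores
values = re.findall(r"-?\d+(?:\.\d+)?\s*(?:MPa|C|S/cm)", abstract)   # crude property values

record = {
    "entities": [(e["word"], e["entity_group"]) for e in entities],
    "property_values": values,
}
print(record)
```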