Reliable image geolocation is crucial for several applications, from social-media geo-tagging to fake-news detection. State-of-the-art geolocation methods achieve strong performance on the task of estimating an image's geographic location, but none of them assesses an image's suitability for the task, which leads to unreliable and erroneous estimates for images containing no geolocation clues. In this paper, we define the task of image localizability, i.e., the suitability of an image for geolocation, and propose a selective prediction method to address it. In particular, we propose two novel selection functions that exploit the output probability distributions of geolocation models to infer localizability at different scales. Benchmarked against the most widely used selective prediction baselines, our selection functions outperform them in all cases. By abstaining from predicting non-localizable images, we improve geolocation accuracy at the city scale to 70.5%, thereby making current geolocation models reliable for real-world applications.
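The general mechanism of abstention-based geolocation can be sketched as follows: score an image by the geolocation model's output distribution over cells and abstain below a threshold. This is a minimal illustration, not the paper's exact selection functions; the max-probability score and the threshold value are assumptions.

```python
import numpy as np

def localizability_score(probs: np.ndarray) -> float:
    """Score an image's localizability as the peak of the geolocation
    model's output distribution over geographic cells (a hypothetical
    stand-in for the paper's selection functions)."""
    return float(np.max(probs))

def selective_geolocate(probs_batch, threshold=0.5):
    """Return predicted cell indices, abstaining (None) when the
    selection score falls below the threshold."""
    preds = []
    for probs in probs_batch:
        if localizability_score(probs) >= threshold:
            preds.append(int(np.argmax(probs)))
        else:
            preds.append(None)  # image judged non-localizable
    return preds
```

Raising the threshold trades coverage for accuracy: fewer images receive a prediction, but those that do are more likely to be localizable.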
Predicting the geographic location (geolocation) of a single ground-level RGB image taken anywhere in the world is a very challenging problem. The challenges include the diversity of images due to different environmental scenarios, drastic variation in the appearance of the same location depending on the time of day, weather, and season, and, more importantly, that the prediction must be made from a single image which may contain only a few geographic clues. For these reasons, most existing works are limited to specific cities, imagery, or worldwide landmarks. In this work, we focus on developing an effective solution for planet-scale single-image geolocation. To this end, we propose TransLocator, a unified dual-branch transformer network that attends to fine details over the entire image and produces robust feature representations under extreme appearance variations. TransLocator takes an RGB image and its semantic segmentation map as inputs, interacts between its two parallel branches after each transformer layer, and simultaneously performs geolocation and scene recognition in a multi-task fashion. We evaluate TransLocator on four benchmark datasets, Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, and obtain improvements of 5.5%, 14.1%, 4.9%, and 9.9% in continent-level accuracy over the state of the art. TransLocator is also validated on real-world test images and found to be more effective than previous methods.
The concept of geolocation refers to the process of determining the position of some "entity" on Earth, typically in terms of Global Positioning System (GPS) coordinates. The entity of interest may be an image, a sequence of images, a video, a satellite image, or even an object visible in an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and as deep learning has risen to enhance the performance capabilities of machine learning models, the fields of visual and object geolocation have emerged, owing to their significant impact on a wide range of applications such as augmented reality, robotics, self-driving vehicles, road maintenance, and 3D reconstruction. This paper provides a comprehensive survey of geolocation involving images, covering both geolocating the place where an image was captured (image geolocation) and geolocating objects within an image (object geolocation). We provide an in-depth study that includes summaries of popular algorithms, descriptions of the proposed datasets, and analyses of performance results to illustrate the current state of each field.
We address the problem of selective classification, where the goal is to achieve the best performance at a desired coverage of the dataset. Recent state-of-the-art selective methods introduce architectural changes, either a separate selection head or an extra abstention logit. In this paper, we present surprising results for selective classification by confirming that the superior performance of state-of-the-art methods is owed to training a more generalizable classifier; their selection mechanism, however, is suboptimal. We argue that the selection mechanism should be rooted in the objective function rather than in a separately computed score. Accordingly, we motivate an alternative selection strategy derived from the cross-entropy loss of the classification setting, namely the maximum of the logits. Our proposed selection strategy achieves better results by a significant margin, across all coverages and all datasets, without any additional computation. Finally, inspired by our superior selection mechanism, we propose to further regularize the objective function with entropy minimization. Our proposed max-logit selection with the modified loss function achieves new state-of-the-art results in selective classification.
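The max-logit selection strategy can be sketched in a few lines: rank test samples by their maximum raw logit and accept only the most confident fraction. This is an illustrative sketch assuming per-sample logits are available; names are not from the paper.

```python
import numpy as np

def max_logit_select(logits: np.ndarray, coverage: float) -> np.ndarray:
    """Accept the top `coverage` fraction of samples ranked by their
    maximum raw logit; the remainder are rejected (abstained)."""
    scores = logits.max(axis=1)          # confidence = max logit
    n_accept = int(np.ceil(coverage * len(scores)))
    order = np.argsort(-scores)          # most confident first
    accepted = np.zeros(len(scores), dtype=bool)
    accepted[order[:n_accept]] = True
    return accepted
```

Note that, unlike softmax probabilities, raw logits are not normalized across samples, which is precisely the property the paper argues makes them a better selection score.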
Machine learning has advanced dramatically, narrowing the accuracy gap to humans on multimodal tasks such as visual question answering (VQA). However, while humans can say "I don't know" when they are uncertain (i.e., abstain from answering a question), this ability has been largely neglected in multimodal research, despite its importance to the real-world use of VQA. In this work, we propose a problem formulation for reliable VQA, in which abstention is preferred over providing an incorrect answer. We first equip several VQA models with abstention functions and analyze their coverage, the fraction of questions answered, and risk, the error rate on that fraction. To that end, we explore several abstention approaches. We find that although the best-performing models achieve over 71% accuracy on the VQA v2 dataset, introducing the option to abstain by directly using the models' softmax scores restricts them to answering less than 8% of the questions if they are to reach a low risk of error (i.e., 1%). This motivates us to utilize a multimodal selection function that directly estimates the correctness of the predicted answer, which we show can increase coverage by, for example, 2.4x from 6.8% to 16.3% at 1% risk. While analyzing coverage and risk is important, these metrics involve a trade-off that makes comparing VQA models challenging. To address this, we also propose an effective reliability metric for VQA that places a larger cost on incorrect answers than on abstentions. This new problem formulation, metric, and analysis for VQA provide the groundwork for building effective and reliable VQA models that are self-aware and abstain when they do not know the answer.
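A reliability metric of the kind described above can be sketched as follows: reward correct answers, penalize incorrect ones with a cost, and score abstentions as zero. This is a minimal illustration; the exact definition and weighting in the paper may differ.

```python
def effective_reliability(correct, abstained, cost=1.0):
    """Average per-question score: +1 for a correct answer,
    -cost for an incorrect answer, 0 for an abstention."""
    total = 0.0
    for ok, skip in zip(correct, abstained):
        if skip:
            continue              # abstention contributes 0
        total += 1.0 if ok else -cost
    return total / len(correct)
```

With a large `cost`, a model that guesses on uncertain questions scores worse than one that abstains, which is the behavior the metric is designed to encourage.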
Machine learning models are frequently employed to perform either purely physics-free or hybrid downscaling of climate data. However, the majority of these implementations operate over relatively small downscaling factors of about 4--6x. This study examines the ability of convolutional neural networks (CNN) to downscale surface wind speed data from three different coarse resolutions (25km, 48km, and 100km side-length grid cells) to 3km and additionally focuses on the ability to recover subgrid-scale variability. Within each downscaling factor, namely 8x, 16x, and 32x, we consider models that produce fine-scale wind speed predictions as functions of different input features: coarse wind fields only; coarse wind and fine-scale topography; and coarse wind, topography, and temporal information in the form of a timestamp. Furthermore, we train one model at 25km to 3km resolution whose fine-scale outputs are probability density function parameters through which sample wind speeds can be generated. All CNN predictions performed on out-of-sample data outperform classical interpolation. Models with coarse wind and fine topography are shown to exhibit the best performance compared to other models operating across the same downscaling factor. Our timestamp encoding results in lower out-of-sample generalizability compared to other input configurations. Overall, the downscaling factor plays the largest role in model performance.
In this paper, we formulate the problem of predicting a geolocation from free text as a sequence-to-sequence problem. Using this formulation, we obtain a geocoding model by training a T5 encoder-decoder transformer model using free text as an input and geolocation as an output. The geocoding model was trained on geo-tagged wikidump data with adaptive cell partitioning for the geolocation representation. All of the code, including the REST-based application, dataset, and model checkpoints used in this work, is publicly available.
Selective classification involves identifying the subset of test samples that a model can classify with high accuracy, and is important for applications such as automated medical diagnosis. We argue that this capability of identifying uncertain samples is valuable for training classifiers as well, with the aim of building more accurate classifiers. We unify these dual roles by training a single auxiliary meta-network to output an importance weight as a function of the instance. This measure is used at train time to reweight training data, and at test-time to rank test instances for selective classification. A second, key component of our proposal is the meta-objective of minimizing dropout variance (the variance of classifier output when subjected to random weight dropout) for training the meta-network. We train the classifier together with its meta-network using a nested objective of minimizing classifier loss on training data and meta-loss on a separate meta-training dataset. We outperform the current state of the art on selective classification by substantial margins -- for instance, up to 1.9% AUC and 2% accuracy on a real-world diabetic retinopathy dataset. Finally, our meta-learning framework extends naturally to unsupervised domain adaptation, given our unsupervised variance minimization meta-objective. We show cumulative absolute gains of 3.4% / 3.3% accuracy and AUC over the other baselines in domain shift settings on the Retinopathy dataset using unsupervised domain adaptation.
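The dropout-variance quantity at the heart of the meta-objective can be illustrated with a small sketch: run a forward pass several times under random dropout masks and measure the variance of the outputs. The `forward` interface here is a hypothetical stand-in for the classifier with weight dropout applied.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_variance(forward, x, n_samples=8, p=0.5):
    """Estimate the variance of a classifier's output for one instance
    under random dropout masks; used as a per-instance uncertainty
    signal in the meta-objective."""
    outs = np.array([forward(x, rng.random(x.shape) > p)
                     for _ in range(n_samples)])
    return float(outs.var())
```

Instances with high dropout variance are the uncertain ones; the meta-network is trained so that minimizing this variance yields useful importance weights.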
We introduce a new table detection and structure recognition approach named RobusTabNet to detect table boundaries and reconstruct the cellular structure of each table from heterogeneous document images. For table detection, we propose using CornerNet as a new region proposal network to generate higher-quality table proposals for Faster R-CNN, which significantly improves the localization accuracy of Faster R-CNN for table detection. Consequently, our table detection approach achieves state-of-the-art performance on three public table detection benchmarks, namely cTDaR TrackA, PubLayNet, and IIIT-AR-13K, using only a lightweight ResNet-18 backbone network. Furthermore, we propose a new split-and-merge based table structure recognition approach, in which a novel spatial-CNN-based separation line prediction module splits each detected table into a grid of cells, and a Grid-CNN-based cell merging module is applied to recover the spanning cells. As the spatial CNN module can effectively propagate contextual information across the whole table image, our table structure recognizer can robustly recognize tables with large blank spaces and geometrically distorted (even curved) tables. Thanks to these two techniques, our table structure recognition approach achieves state-of-the-art performance on three public benchmarks, including SciTSR, PubTabNet, and cTDaR TrackB2-Modern. Moreover, we further demonstrate the advantages of our approach in recognizing tables with complex structures, large blank spaces, and geometrically distorted or even curved shapes.
Fine-grained population maps are needed in several domains, like urban planning, environmental monitoring, public health, and humanitarian operations. Unfortunately, in many countries only aggregate census counts over large spatial units are collected; moreover, these are not always up-to-date. We present POMELO, a deep learning model that employs coarse census counts and open geodata to estimate fine-grained population maps with 100m ground sampling distance. Moreover, the model can also estimate population numbers when no census counts at all are available, by generalizing across countries. In a series of experiments for several countries in sub-Saharan Africa, the maps produced with POMELO are in good agreement with the most detailed available reference counts: disaggregation of coarse census counts reaches R2 values of 85-89%; unconstrained prediction in the absence of any counts reaches 48-69%.
Automated hyperparameter optimization (HPO) can support practitioners in obtaining peak performance from machine learning models. However, there is often a lack of valuable insight into the effects of different hyperparameters on the final model performance. This lack of explainability makes it difficult to trust and understand the automated HPO process and its results. We propose using interpretable machine learning (IML) to gain insights from the experimental data obtained during HPO with Bayesian optimization (BO). BO tends to focus on promising regions with potentially high-performance configurations and thus induces a sampling bias. Hence, many IML techniques, such as partial dependence plots (PDPs), carry the risk of generating biased interpretations. By leveraging the posterior uncertainty of the BO surrogate model, we introduce a variant of the PDP with estimated confidence bands. We propose partitioning the hyperparameter space to obtain more confident and reliable PDPs in relevant sub-regions. In an experimental study, we provide quantitative evidence of the improved quality of the PDPs within sub-regions.
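The PDP-with-confidence-band idea can be sketched as follows: for each grid value of the hyperparameter, fix that column in the data, query the BO surrogate's posterior, and average both the mean and the uncertainty. The `surrogate.predict` interface returning a (mean, std) pair is an assumption for illustration, not a specific library's API.

```python
import numpy as np

def pdp_with_confidence(surrogate, X, feature, grid):
    """Partial dependence of `feature` with an uncertainty band,
    derived from the surrogate's posterior mean and std."""
    means, stds = [], []
    for v in grid:
        Xg = X.copy()
        Xg[:, feature] = v                 # fix the hyperparameter value
        mu, sd = surrogate.predict(Xg)     # posterior mean and std
        means.append(mu.mean())            # PDP estimate at this value
        stds.append(sd.mean())             # averaged posterior uncertainty
    return np.array(means), np.array(stds)
```

Wide bands flag grid regions where the surrogate was rarely sampled, which is exactly where a plain PDP would be most misleading.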
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Accurate uncertainty quantification is necessary to enhance the reliability of deep learning models in real-world applications. In the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of deep learning models. Such PIs are useful or "high-quality" as long as they are sufficiently narrow and capture most of the probability density. In this paper, we present a method to learn prediction intervals for regression-based neural networks automatically in addition to the conventional target predictions. In particular, we train two companion neural networks: one that uses one output, the target estimate, and another that uses two outputs, the upper and lower bounds of the corresponding PI. Our main contribution is the design of a loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean prediction interval width and ensuring the PI integrity using constraints that maximize the prediction interval probability coverage implicitly. Both objectives are balanced within the loss function using a self-adaptive coefficient. Furthermore, we apply a Monte Carlo-based approach that evaluates the model uncertainty in the learned PIs. Experiments using a synthetic dataset, six benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage and produce narrower PIs without detriment to its target estimation accuracy when compared to those PIs generated by three state-of-the-art neural-network-based methods.
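The two optimization objectives, narrow intervals and implicit coverage, can be sketched as a single loss. This is an illustrative simplification; the paper's constraint terms and the update rule for the self-adaptive coefficient are more involved.

```python
import numpy as np

def pi_loss(y, lower, upper, lam):
    """Sketch of a PI-generation loss: mean interval width plus a
    penalty, weighted by coefficient `lam`, for targets falling
    outside [lower, upper]."""
    width = np.mean(upper - lower)
    below = np.maximum(lower - y, 0.0)   # violation when y < lower
    above = np.maximum(y - upper, 0.0)   # violation when y > upper
    coverage_penalty = np.mean(below + above)
    return width + lam * coverage_penalty
```

Making `lam` self-adaptive lets training tighten intervals when coverage is already satisfied and widen them when targets start escaping the bounds.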
Selective classification is the task of rejecting inputs on which a model would make incorrect predictions, trading off coverage of the input space against model accuracy. Current methods for selective classification impose constraints on either the model architecture or the loss function; this inhibits their usage in practice. In contrast to prior work, we show that state-of-the-art selective classification performance can be attained solely by studying the (discretized) training dynamics of a model. We propose a general framework that, for a given test input, monitors a metric capturing the disagreement of intermediate models obtained during training with the final predicted label; we then reject data points that exhibit too much disagreement at late stages of training. In particular, we instantiate a method that tracks when the label predicted during training stops disagreeing with the final predicted label. Our experimental evaluation shows that our method achieves state-of-the-art accuracy/coverage trade-offs on typical selective classification benchmarks.
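A minimal sketch of the disagreement-monitoring idea follows; the checkpoint predictions, the burn-in cutoff, and the threshold are illustrative choices, not the paper's exact instantiation.

```python
def disagreement_score(checkpoint_preds, final_pred, burn_in=0):
    """Count how many intermediate checkpoints (after `burn_in`)
    disagree with the final predicted label for one test input."""
    late = checkpoint_preds[burn_in:]
    return sum(1 for p in late if p != final_pred)

def select(checkpoint_preds, final_pred, burn_in=0, max_disagree=0):
    """Accept the input only if late-training disagreement is low."""
    return disagreement_score(checkpoint_preds, final_pred, burn_in) <= max_disagree
```

The appeal of this approach is that it needs only the checkpoints already produced by ordinary training, with no change to the architecture or loss.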
Planning the layout of bicycle-sharing stations is a complex process, especially in cities that are only just implementing a bicycle-sharing system. Urban planners typically have to make many estimates based on publicly available data and data provided privately by the administration, and then use the location-allocation models popular in the field. Many municipalities in smaller cities may find it difficult to hire experts for such planning. This paper proposes a new solution to simplify and facilitate this planning process by using spatial embedding methods. Based solely on publicly available data from OpenStreetMap, together with station layouts from 34 cities in Europe, a method has been developed that divides cities into micro-regions using the Uber H3 discrete global grid system and, using transfer learning from existing systems in different cities, indicates the regions where it is worth placing a station. The result of this work is a mechanism that supports planners in their decisions on station layout, with the selection of reference cities.
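The feature-extraction step, aggregating OpenStreetMap points of interest per micro-region before embedding, can be sketched generically. Here `cell_of` stands in for the H3 cell-assignment function; function and category names are hypothetical.

```python
from collections import defaultdict

def bucket_pois(pois, cell_of):
    """Group points of interest into micro-region cells. `cell_of`
    maps (lat, lon) to a cell id -- in the paper this is Uber's H3
    discrete global grid; any deterministic mapping works as a
    stand-in. Per-cell POI counts then form each region's feature
    vector for the embedding model."""
    counts = defaultdict(lambda: defaultdict(int))
    for lat, lon, category in pois:
        counts[cell_of(lat, lon)][category] += 1
    return {cell: dict(cats) for cell, cats in counts.items()}
```

With real H3 cells, the same counts computed for a reference city's station cells versus its non-station cells provide the labeled examples for transfer learning.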
We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5% in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation_via_guidebook_grounding.
Beam selection for millimeter-wave links in a vehicular scenario is a challenging problem, as an exhaustive search over all candidate beam pairs cannot be assuredly completed within the short contact times. We address this problem by leveraging multimodal data collected from sensors such as LiDAR, camera images, and GPS. We propose individual-modality and distributed fusion-based deep learning (F-DL) architectures that can execute locally as well as at a mobile edge computing center (MEC), and study the associated trade-offs. We also formulate and solve an optimization problem for determining the output dimensions of the above F-DL architectures that accounts for the practical beam-search, MEC-processing, and sensor-to-MEC data delivery latency overheads. Extensive evaluation results on publicly available synthetic and home-grown real-world datasets reveal 95% and 96% improvements, respectively, in beam selection speed over classical RF-only beam sweeping. F-DL also outperforms the state of the art by 20-22% in predicting the top-10 best beam pairs.
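The top-10 comparison above is a standard top-k accuracy over beam pairs, which can be sketched as follows (names are illustrative):

```python
import numpy as np

def topk_beam_accuracy(scores, best_beam, k=10):
    """Fraction of samples whose ground-truth best beam pair appears
    among the model's k highest-scoring beams."""
    topk = np.argsort(-scores, axis=1)[:, :k]   # k best beams per sample
    hits = [best in row for best, row in zip(best_beam, topk)]
    return float(np.mean(hits))
```

Evaluating at k > 1 reflects practice: the link can quickly sweep a short candidate list, so the model only needs to put the true best pair somewhere in its top-k.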
State-of-the-art semantic or instance segmentation deep neural networks (DNNs) are usually trained on closed semantic classes. As such, they are ill-equipped to handle previously unseen objects. However, detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving, especially if they appear on the road ahead. While some methods have tackled the tasks of anomalous or out-of-distribution object segmentation, progress remains slow, in large part due to the lack of solid benchmarks; existing datasets either consist of synthetic data or suffer from label inconsistencies. In this paper, we bridge this gap by introducing the "SegmentMeIfYouCan" benchmark. Our benchmark addresses two tasks: anomalous object segmentation, which considers any previously unseen object category; and road obstacle segmentation, which focuses on any object on the road, be it known or unknown. We provide two corresponding datasets together with a test suite that performs an in-depth method analysis, considering both established pixel-wise performance metrics and recent component-wise ones, which are insensitive to object sizes. We empirically evaluate multiple state-of-the-art baseline methods, including several models specifically designed for anomaly/obstacle segmentation, on our datasets and on public ones, using our test suite. The anomaly and obstacle segmentation results show that our datasets contribute to the diversity and difficulty of the data landscape.
We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
Machine learning (ML) has often been applied to space weather (SW) problems in recent years. SW originates from solar perturbations and comprises the resulting complex variations they cause within the systems between the Sun and Earth. These systems are tightly coupled and not well understood. This creates a need for skillful models with knowledge about the confidence of their predictions. One example of such a dynamical system is the thermosphere, the neutral region of Earth's upper atmosphere. Our inability to forecast it has severe repercussions in the context of satellite drag and collision avoidance operations for objects in low Earth orbit. Even with (assumed) perfect driver forecasts, our incomplete knowledge of the system results in often inaccurate neutral mass density predictions. Continuing efforts are being made to improve model accuracy, but density models rarely provide estimates of uncertainty. In this work, we propose two techniques for developing nonlinear ML models that predict thermospheric density while providing calibrated uncertainty estimates: Monte Carlo (MC) dropout and direct prediction of the probability distribution, both using the negative logarithm of predictive density (NLPD) loss function. We show the performance of models trained on both local and global datasets. This shows that NLPD provides similar results for both techniques, but the direct probability method has a much lower computational cost. For the global model regressed on the SET HASDM density database, we achieve errors of 11% on independent test data with well-calibrated uncertainty estimates. Using an in-situ CHAMP density dataset, both techniques provide test errors of 13%. The CHAMP models are within 2% of perfect calibration for all prediction intervals tested on independent data. These models can also be used to obtain global predictions, with uncertainty, for a given epoch.
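Both techniques share the NLPD objective; under a Gaussian predictive distribution it, together with the MC-dropout sampling step, can be sketched as follows. This is a simplified illustration; `forward` is a hypothetical stochastic network pass with dropout kept active at inference.

```python
import numpy as np

def nlpd_gaussian(y, mu, sigma):
    """Negative log predictive density under a Gaussian predictive
    distribution N(mu, sigma^2), averaged over samples."""
    return float(np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                         + (y - mu)**2 / (2 * sigma**2)))

def mc_dropout_predict(forward, x, n_samples=32):
    """Monte Carlo dropout: run the stochastic forward pass several
    times and use the sample mean/std as the density prediction and
    its uncertainty."""
    samples = np.array([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

The direct-probability alternative instead has the network emit `mu` and `sigma` in a single pass and trains them with the same NLPD loss, which is what makes it cheaper at inference time.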