Recently, over-height vehicle strikes have occurred frequently, causing great economic cost and serious safety problems. Hence, an alert system that can accurately detect possible height-limiting devices in advance needs to be deployed in modern large- or medium-sized vehicles, such as touring cars. Detecting and estimating height-limiting devices is the key to a successful height-limit alert system. Although some works have researched height limit estimation, existing methods are either too computationally expensive or not accurate enough. In this paper, we propose a novel stereo-based pipeline named SHLE for height limit estimation. Our SHLE pipeline consists of two stages. In stage 1, a novel device detection and tracking scheme is introduced, which accurately locates the height-limiting devices in the left or right image. Then, in stage 2, the depth is temporally measured, extracted, and filtered to calculate the height of the limiting device. To benchmark the height limit estimation task, we build a large-scale dataset named "Disparity Height", where stereo images, pre-computed disparities, and ground-truth height limit annotations are provided. We conducted extensive experiments on "Disparity Height", and the results show that SHLE achieves an average error below 10 cm even when the car is 70 m away from the devices. Our method also outperforms all compared baselines and achieves state-of-the-art performance. Code is available at https://github.com/Yang-Kaixing/SHLE.
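The abstract leaves the stage-2 geometry implicit; a minimal sketch of the standard pinhole-stereo relations such a pipeline relies on might look like the following. All names and numeric values (focal length, baseline, camera mounting height) are illustrative assumptions, not SHLE's actual code.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    # Standard pinhole stereo relation: Z = f * B / d
    return focal_px * baseline_m / disparity_px

def device_height(v_px, disparity_px, focal_px, baseline_m, cy_px, cam_height_m):
    # Back-project the device's image row to a metric height above the road:
    # the bar sits (cy - v) pixels above the principal point, so its height
    # over the camera center is Z * (cy - v) / f, plus the camera's own
    # mounting height above the road.
    z = stereo_depth(disparity_px, focal_px, baseline_m)
    return cam_height_m + z * (cy_px - v_px) / focal_px

# Hypothetical example: device imaged 100 px above the principal point,
# 8 px disparity, 720 px focal length, 0.54 m baseline, camera at 1.5 m.
h = device_height(v_px=260.0, disparity_px=8.0, focal_px=720.0,
                  baseline_m=0.54, cy_px=360.0, cam_height_m=1.5)
```

Temporal filtering (e.g. a median over tracked detections) would then smooth `h` across frames, as the abstract's "temporally measured, extracted, and filtered" step suggests.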
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially when labeled molecules are scarce. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, the Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we find that a high mask ratio (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder takes only the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
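The 60% masking of atoms and bonds can be sketched as a simple sampling step over a molecular graph; this is an illustrative reading of the idea, not BatmanNet's implementation:

```python
import random

def mask_graph(num_nodes, edges, mask_ratio=0.6, seed=0):
    # Hide a fixed fraction of atoms (nodes) and bonds (edges); the
    # encoder sees only the visible subset, and the decoder must
    # reconstruct the masked items from latent codes plus mask tokens.
    rng = random.Random(seed)
    masked_nodes = set(rng.sample(range(num_nodes), int(round(mask_ratio * num_nodes))))
    masked_edges = set(rng.sample(range(len(edges)), int(round(mask_ratio * len(edges)))))
    visible_nodes = [v for v in range(num_nodes) if v not in masked_nodes]
    visible_edges = [e for i, e in enumerate(edges) if i not in masked_edges]
    return visible_nodes, visible_edges, masked_nodes, masked_edges

# Toy molecule with 10 atoms and 5 bonds: 6 atoms and 3 bonds get masked.
vis_n, vis_e, msk_n, msk_e = mask_graph(10, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])
```

Because the transformer encoder only processes the ~40% visible items, pre-training cost drops roughly in proportion, which is the efficiency argument the abstract makes.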
Learning an explainable classifier often results in a low-accuracy model or a huge rule set, while a deep model is usually more capable of handling noisy data at scale, at the cost of hard-to-explain results and weak generalization. To bridge this gap, we propose an end-to-end deep explainable learning approach that combines the noise-handling advantage of deep models with the interpretability of expert rules. Specifically, we propose to learn a deep data-assessing model that represents the data as a graph capturing the correlations among different observations, whose output is used to extract key data features. The key features are then fed into a rule network constructed from predefined noisy expert rules with trainable parameters. As these models are correlated, we propose an end-to-end training framework that uses the rule classification loss to optimize the rule learning model and the data-assessing model at the same time. Since the rule-based computation is non-differentiable, we propose a gradient linking search module to carry gradient information from the rule learning model to the data-assessing model. The proposed method is tested in an industrial production system, showing comparable prediction accuracy, much higher generalization stability, and better interpretability than a strong deep ensemble baseline, and much better fitting power than a purely rule-based approach.
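The abstract does not specify how the gradient linking search module works; one common surrogate for passing gradients through a non-differentiable rule step is a straight-through estimator, sketched here purely as an assumption, not as the authors' method:

```python
def rule_decision(score, threshold=0.5):
    # Hard, non-differentiable rule output used in the forward pass.
    return 1.0 if score >= threshold else 0.0

def linked_gradient(upstream_grad):
    # Straight-through surrogate for the backward pass: treat the hard
    # rule as the identity, so the classification loss gradient can flow
    # from the rule network back into the data-assessing model.
    return upstream_grad
```

Under this surrogate, the data-assessing model receives the loss gradient as if the rule step were differentiable, which is one way to realize the end-to-end coupling the abstract describes.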
Recently, pre-training methods have shown great success in task-oriented dialog (TOD) systems. However, most existing pre-trained models for TOD focus on either dialog understanding or dialog generation, but not both. In this paper, we propose SPACE-3, a novel unified semi-supervised pre-trained conversation model that learns from large-scale dialog corpora with limited annotations and can be effectively fine-tuned on a wide range of downstream dialog tasks. Specifically, SPACE-3 consists of four successive components in a single transformer to maintain the task flow in TOD systems: (i) a dialog encoding module to encode the dialog history, (ii) a dialog understanding module to extract semantic vectors from either user queries or system responses, (iii) a dialog policy module to generate a policy vector that contains the high-level semantics of the response, and (iv) a dialog generation module to produce appropriate responses. We design a dedicated pre-training objective for each component. Concretely, we pre-train the dialog encoding module with span-masked language modeling to learn contextualized dialog information. To capture structured dialog semantics, we pre-train the dialog understanding module via a novel tree-induced semi-supervised contrastive learning objective with the help of extra dialog annotations. In addition, we pre-train the dialog policy module by minimizing the L2 distance between its output policy vector and the semantic vector of the response, for policy optimization. Finally, the dialog generation module is pre-trained with language modeling. Results show that SPACE-3 achieves state-of-the-art performance on eight downstream dialog benchmarks, including intent prediction, dialog state tracking, and end-to-end dialog modeling. We also show that SPACE-3 has a stronger few-shot ability than existing models under the low-resource setting.
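The policy module's stated objective, minimizing the L2 distance between the policy vector and the response's semantic vector, can be written down directly; the vector names here are illustrative:

```python
def policy_l2_loss(policy_vec, response_sem_vec):
    # Pre-training objective of the dialog policy module: squared L2
    # distance between the predicted policy vector and the semantic
    # vector extracted from the gold response.
    return sum((p - s) ** 2 for p, s in zip(policy_vec, response_sem_vec))

# Toy example with 2-d vectors.
loss = policy_l2_loss([1.0, 2.0], [1.0, 0.0])
```

In SPACE-3 both vectors come from the same transformer (the policy module and the understanding module respectively), so this loss ties policy planning to response semantics during pre-training.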
Pre-training methods with contrastive learning objectives have shown remarkable success in dialog understanding tasks. However, current contrastive learning solely regards the augmented dialog samples as positives and treats all other dialog samples as negatives, which enforces dissimilar representations even for dialogs that are semantically related. In this paper, we propose SPACE-2, a tree-structured pre-trained conversation model that learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora via semi-supervised contrastive pre-training. Concretely, we first define a general semantic tree structure (STS) to unify the annotation schemes of different dialog datasets, so that the rich structural information stored in all labeled data can be exploited. Then we propose a novel multi-view score function to increase the relevance of all possible dialogs that share similar STSs, and only push away other completely different dialogs during supervised contrastive pre-training. To fully exploit unlabeled dialogs, a basic self-supervised contrastive loss is also added to refine the learned representations. Experiments show that our method can achieve new state-of-the-art results on the DialoGLUE benchmark, which consists of seven datasets and four popular dialog understanding tasks. For reproducibility, we release the code and data at https://github.com/alibabaresearch/damo-convai/tree/main/main/space-2.
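The multi-view score function is not detailed in the abstract; one plausible single view of "dialogs sharing similar STSs" is overlap between their label trees, sketched here as an assumption rather than SPACE-2's actual scoring:

```python
def tree_paths(tree, prefix=()):
    # Flatten a nested label tree (dict of dicts) into the set of
    # root-to-node label paths.
    paths = set()
    for label, subtree in tree.items():
        path = prefix + (label,)
        paths.add(path)
        if isinstance(subtree, dict):
            paths |= tree_paths(subtree, path)
    return paths

def sts_score(tree_a, tree_b):
    # Jaccard overlap of the two dialogs' semantic-tree label paths:
    # 1.0 for identical trees, 0.0 for completely different ones.
    pa, pb = tree_paths(tree_a), tree_paths(tree_b)
    return len(pa & pb) / len(pa | pb)
```

A score like this could then soften the contrastive objective: dialogs with a high STS score are pulled together rather than treated as hard negatives.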
As growing 3D vision applications are expected to provide users with cost-effective and high-quality experiences, much emphasis has been put on the visual quality of point clouds. Looking back at the development of point cloud quality assessment (PCQA) methods, visual quality is usually evaluated with single-modal information, i.e., extracted either from 2D projections or from the 3D point cloud. The 2D projections contain rich texture and semantic information but are highly dependent on viewpoint, while the 3D point cloud is more sensitive to geometry distortions and invariant to viewpoint. Therefore, to take advantage of both the point cloud and projected-image modalities, we propose a novel no-reference point cloud quality assessment (NR-PCQA) method in a multi-modal fashion. Specifically, we split point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling. Then we render the point clouds into 2D image projections for texture feature extraction. To achieve this goal, the sub-models and projected images are encoded with point-based and image-based neural networks, respectively. Finally, symmetric cross-modal attention is employed to fuse the multi-modal quality-aware information. Experimental results show that our approach outperforms all compared state-of-the-art methods and is far ahead of previous NR-PCQA methods, which highlights the effectiveness of the proposed method.
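A minimal sketch of symmetric cross-modal attention fusion, assuming single-head dot-product attention and mean pooling (the paper's exact fusion head is not specified in the abstract):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Features of one modality attend over features of the other.
    scores = queries @ keys_values.T / np.sqrt(queries.shape[1])
    return softmax(scores) @ keys_values

def symmetric_fusion(pc_feats, img_feats):
    # Symmetric cross-modal attention: each modality is enriched by the
    # other, then the two attended views are pooled and concatenated
    # into a single quality-aware descriptor.
    pc_attended = cross_attention(pc_feats, img_feats)
    img_attended = cross_attention(img_feats, pc_feats)
    return np.concatenate([pc_attended.mean(axis=0), img_attended.mean(axis=0)])

# Toy example: 4 point-cloud sub-model tokens and 6 projection tokens, dim 8.
fused = symmetric_fusion(np.ones((4, 8)), np.ones((6, 8)))
```

A small regression head on `fused` would then predict the quality score.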
Image-text retrieval (ITR) is challenging in bridging the visual and lingual modalities. Contrastive learning has been adopted by most prior arts. Besides the limited number of negative image-text pairs, the capability of contrastive learning is restricted by manually weighted negative pairs as well as unawareness of external knowledge. In this paper, we propose novel Coupled Diversity-Sensitive Momentum Contrastive Learning (CODER) for improving cross-modal representation. First, a novel diversity-sensitive contrastive learning (DCL) architecture is invented. We introduce dynamic dictionaries for both modalities to enlarge the scale of image-text pairs, and diversity-sensitiveness is achieved by adaptive negative pair weighting. Furthermore, CODER is designed with two branches. One learns instance-level embeddings from images/texts, and it also generates pseudo online clustering labels for its input images/texts based on their embeddings. Meanwhile, the other branch learns to query a commonsense knowledge graph to form concept-level descriptors for both modalities. Afterwards, both branches leverage DCL to align the cross-modal embedding spaces, while an extra pseudo clustering label prediction loss is utilized to promote concept-level representation learning in the second branch. Extensive experiments conducted on two popular benchmarks, i.e., MSCOCO and Flickr30K, validate that CODER remarkably outperforms the state-of-the-art approaches.
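The dynamic dictionary and adaptive negative weighting can be sketched MoCo-style; the softmax-over-similarity weighting below is an illustrative choice, since the abstract does not give CODER's exact formula:

```python
from collections import deque
import numpy as np

class DynamicDictionary:
    # Fixed-size queue of embeddings from the other modality, used as a
    # large negative pool: new mini-batch keys are enqueued and the
    # oldest entries are dropped, decoupling the number of negatives
    # from the batch size.
    def __init__(self, max_size):
        self.queue = deque(maxlen=max_size)

    def enqueue(self, keys):
        for key in keys:
            self.queue.append(np.asarray(key, dtype=float))

    def negatives(self):
        return np.stack(list(self.queue))

def adaptive_negative_weights(sim_scores, temperature=0.1):
    # Diversity-sensitive weighting: harder (more similar) negatives
    # receive larger weights via a softmax over similarity scores.
    s = np.asarray(sim_scores, dtype=float) / temperature
    e = np.exp(s - s.max())
    return e / e.sum()

# Toy usage: a 3-slot dictionary receiving 4 keys keeps only the newest 3.
d = DynamicDictionary(max_size=3)
d.enqueue([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
w = adaptive_negative_weights([0.9, 0.1])
```

The weights then scale each negative's contribution in the contrastive loss instead of a fixed manual weighting.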
We study federated contextual linear bandits, where $M$ agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server. We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server does not trigger other agents' communication. We propose a simple algorithm named \texttt{FedLinUCB} based on the principle of optimism. We prove that the regret of \texttt{FedLinUCB} is bounded by $\tilde{O}(d\sqrt{\sum_{m=1}^M T_m})$ and the communication complexity is $\tilde{O}(dM^2)$, where $d$ is the dimension of the contextual vector and $T_m$ is the total number of interactions with the environment by the $m$-th agent. To the best of our knowledge, this is the first provably efficient algorithm that allows fully asynchronous communication for federated contextual linear bandits, while achieving the same regret guarantee as in the single-agent setting.
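The single-agent LinUCB core that FedLinUCB builds on can be sketched as follows; the federation protocol (asynchronously merging each agent's statistics with the server's aggregate) is omitted, and parameter names are illustrative:

```python
import numpy as np

class LinUCBAgent:
    # Optimism-based linear bandit core: ridge-regression statistics
    # (A, b) and an optimistic arm score. In a federated variant, each
    # agent would periodically merge its local (A, b) with the server's
    # aggregate without blocking other agents.
    def __init__(self, dim, lam=1.0, beta=1.0):
        self.A = lam * np.eye(dim)   # regularized Gram matrix of contexts
        self.b = np.zeros(dim)       # reward-weighted context sum
        self.beta = beta             # exploration bonus scale

    def score(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                      # ridge estimate of the parameter
        bonus = self.beta * np.sqrt(x @ A_inv @ x)  # optimism term
        return float(x @ theta + bonus)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```

At each round the agent plays the context with the highest `score`, observes a reward, and calls `update`; the bonus shrinks along well-explored directions, which is the optimism principle the abstract invokes.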
Deeply learned representations have achieved superior image retrieval performance. Recent state-of-the-art single-stage models, which heuristically fuse local and global features, can achieve a promising trade-off between efficiency and effectiveness. However, we notice that the efficiency of existing solutions is still restricted because of their multi-scale inference paradigm. In this paper, we follow the single-stage art and obtain a further complexity-effectiveness balance by successfully getting rid of multi-scale testing. To achieve this goal, we abandon the widely used convolutional network, which is limited in exploring diverse visual patterns, and resort to a fully attention-based framework for robust representation learning, motivated by the success of Transformers. Besides applying Transformers for global feature extraction, we devise a local branch composed of window-based multi-head attention and spatial attention to fully exploit local image patterns. Furthermore, we propose to combine the hierarchical local and global features via a cross-attention module, instead of using heuristic fusion as previous art does. With our Deep Attentive Local and Global modeling framework (DALG), extensive experimental results show that efficiency can be significantly improved while maintaining competitive results with the state of the art.
Large-scale vector mapping is important for transportation, city planning, survey, and census. We propose GraphMapper, a unified framework for end-to-end vector map extraction from satellite images. Our key idea is a novel unified representation for shapes of different topologies, named the "primitive graph", which is a set of shape primitives together with their pairwise relationship matrix. We then convert vector shape prediction, regularization, and topology reconstruction into a unique primitive graph learning problem. Specifically, GraphMapper is a generic primitive graph learning network based on global shape context modeling through multi-head attention. An embedding-space sorting method is developed for accurate primitive relationship modeling. We empirically demonstrate the effectiveness of GraphMapper on two challenging mapping tasks, building footprint regularization and road network topology reconstruction. Our model outperforms state-of-the-art methods in both tasks on public benchmarks. All code will be publicly available.