Traffic forecasting plays an indispensable role in intelligent transportation systems, making daily travel more convenient and safer. However, the dynamic evolution of spatial-temporal correlations makes accurate traffic prediction very difficult. Existing work mainly employs graph neural networks (GNNs) and deep time-series models (e.g., recurrent neural networks) to capture the complex spatial-temporal patterns in dynamic traffic systems. For spatial patterns, GNNs struggle to extract global spatial information from the road network, i.e., information from distant sensors. Although self-attention can extract global spatial information, as in prior work, it also incurs heavy resource consumption. For temporal patterns, traffic data exhibit not only easily recognizable daily and weekly trends but also hard-to-identify short-term noise caused by incidents (e.g., car accidents and thunderstorms). Existing traffic models struggle to distinguish these intricate temporal patterns in the time series and therefore fail to model temporal dependencies accurately. To address the above issues, we propose a novel noise-aware efficient spatial-temporal Transformer architecture for accurate traffic forecasting, named StFormer. StFormer consists of two components: noise-aware temporal self-attention (NATSA) and graph-based sparse spatial self-attention (GBS3A). NATSA separates the high-frequency and low-frequency components of the time series to remove noise and captures stable temporal dependencies through a learnable filter and temporal self-attention, respectively. GBS3A replaces the full query of vanilla self-attention with a graph-based sparse query to reduce time and memory usage. Experiments on four real-world traffic datasets show that StFormer outperforms state-of-the-art baselines at a lower computational cost.
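To make the frequency-splitting idea concrete, the following is a minimal PyTorch sketch of a NATSA-style layer, assuming an rFFT-based low/high-frequency split, a learnable spectral filter, and standard multi-head attention; the layer name, hyper-parameters, and residual wiring are illustrative assumptions, not StFormer's exact design.

```python
import torch
import torch.nn as nn

class NATSA(nn.Module):
    """Sketch of noise-aware temporal self-attention: keep the low-frequency trend
    of the series with a learnable spectral filter, then model dependencies on the
    de-noised signal with standard temporal self-attention (assumed design)."""
    def __init__(self, seq_len: int, d_model: int, n_heads: int = 4, keep_ratio: float = 0.25):
        super().__init__()
        n_freq = seq_len // 2 + 1                                 # rFFT output length
        self.k = max(1, int(n_freq * keep_ratio))                 # low-frequency bins to keep
        self.filter = nn.Parameter(torch.ones(self.k, d_model))   # learnable spectral filter
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:           # x: (batch, seq_len, d_model)
        freq = torch.fft.rfft(x, dim=1)                           # frequency-domain view
        low = torch.zeros_like(freq)
        low[:, :self.k] = freq[:, :self.k] * self.filter          # keep and filter low frequencies only
        smooth = torch.fft.irfft(low, n=x.size(1), dim=1)         # de-noised series (high-freq noise dropped)
        out, _ = self.attn(smooth, smooth, smooth)                 # temporal self-attention on stable part
        return out + smooth                                        # residual connection

# Example: 12 past time steps, hidden size 64
x = torch.randn(8, 12, 64)
y = NATSA(seq_len=12, d_model=64)(x)   # -> (8, 12, 64)
```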
Traffic forecasting is important in intelligent transportation systems and benefits traffic safety, yet it is very challenging because of the complex and dynamic spatial-temporal dependencies in real-world traffic systems. Previous methods use predefined or learnable static graphs to extract spatial correlations. However, static-graph-based methods cannot mine the evolution of traffic networks. Researchers subsequently generate dynamic graphs for each time slice to reflect changes in spatial correlations, but they follow the paradigm of modeling spatial-temporal dependencies independently and ignore cross-time spatial influence. In this paper, we propose a novel cross-time dynamic graph-based deep learning model, named CDGNet, for traffic forecasting. The model can effectively capture the cross-time spatial dependencies between each time slice and its historical time slices by utilizing cross-time dynamic graphs. Meanwhile, we design a gating mechanism to sparsify the cross-time dynamic graphs, which conforms to the sparse spatial correlations observed in the real world. Furthermore, we propose a novel encoder-decoder architecture that incorporates cross-time dynamic graph-based GCNs for multi-step traffic forecasting. Experimental results on three real-world public traffic datasets demonstrate that CDGNet outperforms state-of-the-art baselines. We also provide a qualitative study to analyze the effectiveness of our architecture.
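As an illustration of the cross-time dynamic graph idea, the sketch below builds a dynamic adjacency between the current slice and one historical slice from node features and aggregates historical information through it. Note that the paper describes a gating mechanism for sparsification; this sketch substitutes a simpler top-k sparsification, and all module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class CrossTimeGraphConv(nn.Module):
    """Sketch of a cross-time dynamic graph convolution: build a dynamic adjacency
    between the current time slice and a historical slice from node features, keep
    only the strongest edges, and aggregate historical node information."""
    def __init__(self, d_model: int, top_k: int = 8):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.proj = nn.Linear(d_model, d_model)
        self.top_k = top_k                          # edges kept per node (sparse graph)

    def forward(self, h_t: torch.Tensor, h_hist: torch.Tensor) -> torch.Tensor:
        # h_t, h_hist: (batch, num_nodes, d_model) for the current / historical slice
        scores = self.query(h_t) @ self.key(h_hist).transpose(1, 2)   # (B, N, N) cross-time affinities
        k = min(self.top_k, scores.size(-1))
        topk = scores.topk(k, dim=-1)
        mask = torch.full_like(scores, float("-inf")).scatter(-1, topk.indices, topk.values)
        adj = torch.softmax(mask, dim=-1)           # row-normalized sparse dynamic adjacency
        return self.proj(adj @ h_hist) + h_t        # aggregate history, residual to current slice

# Example: 2 samples, 207 sensors, hidden size 32
h_t, h_hist = torch.randn(2, 207, 32), torch.randn(2, 207, 32)
out = CrossTimeGraphConv(d_model=32)(h_t, h_hist)   # -> (2, 207, 32)
```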
Traffic forecasting is a core problem in intelligent transportation systems (ITS) and is crucial for individuals and public agencies. Therefore, research has paid great attention to handling the complex spatial-temporal correlations of traffic systems for accurate forecasting. However, two challenges remain: 1) most traffic forecasting studies mainly focus on modeling the correlations of adjacent sensors and ignore the correlations of remote sensors, e.g., business districts with similar spatial-temporal patterns; 2) existing methods that use a static adjacency matrix in graph convolutional networks (GCNs) are not sufficient to reflect the dynamic spatial dependencies in traffic systems. Moreover, fine-grained methods that use self-attention to model the dynamic correlations of all sensors ignore the hierarchical information of road networks and have quadratic computational complexity. In this paper, we propose a novel dynamic multi-graph convolutional recurrent network (DMGCRN) to tackle the above issues, which can model distance-based spatial correlations, structure-based spatial correlations, and temporal correlations simultaneously. We not only use a distance-based graph to capture spatial information from nodes that are close in distance, but also construct a novel latent graph encoding the structural correlations among roads to capture spatial information from nodes that are similar in structure. Furthermore, we divide the neighbors of each sensor into coarse-grained regions at different times and dynamically assign different weights to each region. Meanwhile, we integrate the dynamic multi-graph convolution network into a gated recurrent unit (GRU) to capture temporal dependencies. Extensive experiments on three real-world traffic datasets demonstrate that our proposed algorithm outperforms state-of-the-art baselines.
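A minimal sketch of how a multi-graph convolution can be folded into a GRU cell, in the spirit of the description above: the gates and candidate state aggregate over a distance-based graph and a structure-based graph. The cell below is a simplified stand-in with illustrative names, not DMGCRN's exact formulation.

```python
import torch
import torch.nn as nn

class MultiGraphGRUCell(nn.Module):
    """Sketch of a GRU cell whose gates use multi-graph convolution, e.g., over a
    distance-based graph and a structure-based graph (assumed simplification)."""
    def __init__(self, in_dim: int, hid_dim: int, num_graphs: int = 2):
        super().__init__()
        g = num_graphs + 1                               # the graphs plus the identity (self) term
        self.gates = nn.Linear(g * (in_dim + hid_dim), 2 * hid_dim)   # update + reset gates
        self.cand = nn.Linear(g * (in_dim + hid_dim), hid_dim)        # candidate state

    def _gconv(self, x, supports):
        # x: (B, N, F); supports: list of (N, N) adjacency matrices
        outs = [x] + [a @ x for a in supports]           # aggregate over each graph
        return torch.cat(outs, dim=-1)

    def forward(self, x, h, supports):
        # x: (B, N, in_dim) input at one step, h: (B, N, hid_dim) previous hidden state
        zr = torch.sigmoid(self.gates(self._gconv(torch.cat([x, h], -1), supports)))
        z, r = zr.chunk(2, dim=-1)
        c = torch.tanh(self.cand(self._gconv(torch.cat([x, r * h], -1), supports)))
        return z * h + (1 - z) * c                       # new hidden state

# Example: distance-based and structure-based graphs over 50 sensors
N = 50
supports = [torch.softmax(torch.randn(N, N), -1) for _ in range(2)]
cell = MultiGraphGRUCell(in_dim=2, hid_dim=16)
h = torch.zeros(4, N, 16)
h = cell(torch.randn(4, N, 2), h, supports)              # -> (4, 50, 16)
```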
With the growth of traffic big data, traffic forecasting has gradually attracted researchers' attention. How to mine the complex spatial-temporal correlations in traffic data to predict traffic conditions more accurately has therefore become a difficult problem. Previous works combine graph convolutional networks (GCNs) and self-attention mechanisms with deep sequence models (e.g., recurrent neural networks) to capture spatial and temporal correlations separately, ignoring the interaction between time and space. Moreover, GCNs are limited by the over-smoothing problem and self-attention by its quadratic complexity, so GCNs lack global representation ability and self-attention captures global spatial dependencies inefficiently. In this paper, we propose a novel deep learning model for traffic forecasting, named multi-context aware spatial-temporal joint linear attention (STJLA), which applies linear attention to the spatial-temporal joint graph to capture global dependencies among all spatial-temporal nodes efficiently. More specifically, STJLA utilizes static structural context and dynamic semantic context to improve model performance. The static structural context, based on node2vec and one-hot encoding, enriches the spatial-temporal position information. Furthermore, the dynamic spatial context, based on a multi-head diffusion convolution network, enhances the local spatial perception ability, and the dynamic temporal context, based on a GRU, stabilizes the sequence position information of the linear attention. Experiments on two real-world traffic datasets, England and PEMSD7, demonstrate that our STJLA achieves up to 9.83% and 3.08% accuracy improvements over state-of-the-art baselines.
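Since the efficiency claim rests on linear attention, here is a small sketch of kernelized linear attention (the ELU+1 feature map of Katharopoulos et al.), which replaces the N x N attention matrix with per-feature summaries and therefore scales linearly in the number of spatial-temporal nodes; STJLA's multi-context components are omitted.

```python
import torch
import torch.nn as nn

def linear_attention(q, k, v):
    """Kernelized linear attention: with a positive feature map phi, softmax attention
    is approximated by phi(Q) (phi(K)^T V), linear rather than quadratic in node count."""
    phi = lambda x: torch.nn.functional.elu(x) + 1                    # positive feature map
    q, k = phi(q), phi(k)
    kv = torch.einsum("bnd,bne->bde", k, v)                            # (d, e) summary, no N x N matrix
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + 1e-6)     # per-query normalizer
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

# Example: a joint spatial-temporal graph with 12 steps x 207 sensors = 2484 nodes
x = torch.randn(2, 12 * 207, 64)
q, k, v = (nn.Linear(64, 64)(x) for _ in range(3))
out = linear_attention(q, k, v)                                        # -> (2, 2484, 64)
```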
The gonadotropin-releasing hormone receptor (GnRH1R) is a promising therapeutic target for the treatment of uterine diseases. To date, several GnRH1R antagonists are available in clinical investigation, but they do not satisfy multiple property constraints. To fill this gap, we aim to develop a deep-learning-based framework to facilitate the effective and efficient discovery of a new orally active small-molecule drug targeting GnRH1R with desirable properties. In the present work, a ligand-and-structure combined model for molecular generation, namely LS-MolGen, was first proposed by fully utilizing the information of known active compounds and the structure of the target protein, and it demonstrated superior performance compared with ligand-based or structure-based methods alone. Then an in silico screening, including activity prediction, ADMET evaluation, molecular docking, and FEP calculation, was conducted, in which about 30,000 generated novel molecules were narrowed down to 8 for experimental synthesis and validation. In vitro and in vivo experiments showed that three of them exhibited potent inhibitory activity against GnRH1R (compound 5 IC50 = 0.856 nM, compound 6 IC50 = 0.901 nM, compound 7 IC50 = 2.54 nM), and compound 5 performed well in basic PK properties such as half-life, oral bioavailability, and PPB. We believe that the proposed ligand-and-structure combined molecular generative model and the whole computer-aided workflow can be extended to similar tasks of de novo molecule generation or lead optimization.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
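A hedged sketch of the general fusion idea described above, assuming a shared 3D position encoding added to image and LiDAR tokens and object queries attending over the concatenated token set; all module names, token counts, and output sizes are hypothetical, and the released code at the URL above is the authoritative reference.

```python
import torch
import torch.nn as nn

class CrossModalDecoderSketch(nn.Module):
    """Assumed sketch: fuse image tokens and LiDAR tokens by adding a shared 3D
    position encoding to both, then let learned object queries attend to the joint
    token set and regress box parameters."""
    def __init__(self, d_model: int = 256, num_queries: int = 900):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.queries = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(d_model, 10)            # e.g. 10 box parameters (illustrative)

    def forward(self, img_tokens, img_xyz, pts_tokens, pts_xyz):
        # *_tokens: (B, N, d_model); *_xyz: (B, N, 3) 3D coordinates used for implicit alignment
        img = img_tokens + self.pos_mlp(img_xyz)
        pts = pts_tokens + self.pos_mlp(pts_xyz)
        memory = torch.cat([img, pts], dim=1)              # joint multi-modal token set
        q = self.queries.weight.unsqueeze(0).expand(img.size(0), -1, -1)
        return self.box_head(self.decoder(q, memory))      # (B, num_queries, 10)

# Example with toy token counts
out = CrossModalDecoderSketch()(torch.randn(1, 100, 256), torch.randn(1, 100, 3),
                                torch.randn(1, 200, 256), torch.randn(1, 200, 3))
```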
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and model will be available.
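The following is a small sketch of one plausible reading of the mask-based weighting step: pool support features inside the ground-truth masks into a class center and use it to re-weight query features channel-wise. The helper names and the sigmoid weighting are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def dynamic_class_centers(support_feat, support_mask):
    """Pool support features inside the ground-truth masks to get a class center (assumed reading)."""
    # support_feat: (S, C, H, W); support_mask: (S, 1, H, W) binary masks
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:], mode="nearest")
    centers = (support_feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return centers.mean(dim=0)                                  # (C,) averaged class center

def reweight_query(query_feat, center):
    """Re-weight query features channel-wise by the class center (hypothetical weighting)."""
    # query_feat: (B, C, H, W); center: (C,)
    w = torch.sigmoid(center).view(1, -1, 1, 1)                 # channel-wise weights from the center
    return query_feat * w

# Toy example: 5-shot support with 256-dim features
sup_f, sup_m = torch.randn(5, 256, 32, 32), (torch.rand(5, 1, 32, 32) > 0.5)
q = torch.randn(2, 256, 32, 32)
q_enhanced = reweight_query(q, dynamic_class_centers(sup_f, sup_m))
```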
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a light-weighted model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
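For context, the sketch below shows the generic GNN knowledge-distillation objective (a task loss plus a temperature-scaled KL term matching the student to the teacher); RELIANT's fairness-specific terms are not shown, and the function name and default weights are illustrative.

```python
import torch
import torch.nn.functional as F

def gnn_kd_loss(student_logits, teacher_logits, labels, T: float = 2.0, alpha: float = 0.5):
    """Standard KD objective for node classification: weighted sum of the task loss and a
    temperature-scaled KL term matching student predictions to the teacher's soft labels."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy example: 100 nodes, 7 classes
s, t = torch.randn(100, 7, requires_grad=True), torch.randn(100, 7)
loss = gnn_kd_loss(s, t, torch.randint(0, 7, (100,)))
loss.backward()
```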
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Massive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
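A hedged sketch of an inverted-residual block that mixes a depthwise convolution (short-distance, CNN-like) with multi-head self-attention (long-distance, Transformer-like), in the spirit of the description above; the structure, normalization, and attention placement are assumptions and differ from the official iRMB.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Assumed sketch: expand channels, mix spatially with a depthwise convolution and
    self-attention, then project back with a residual connection."""
    def __init__(self, dim: int, expand: int = 4, heads: int = 4):
        super().__init__()
        hid = dim * expand
        self.norm = nn.BatchNorm2d(dim)
        self.expand = nn.Conv2d(dim, hid, 1)
        self.dw = nn.Conv2d(hid, hid, 3, padding=1, groups=hid)            # depthwise: local mixing
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)    # global mixing
        self.project = nn.Conv2d(hid, dim, 1)

    def forward(self, x):                                    # x: (B, dim, H, W)
        b, _, h, w = x.shape
        y = self.expand(self.norm(x))
        y = self.dw(y)                                        # CNN-like short-distance dependency
        tokens = y.flatten(2).transpose(1, 2)                 # (B, H*W, hid)
        tokens, _ = self.attn(tokens, tokens, tokens)         # Transformer-like long-distance interaction
        y = y + tokens.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(y)                            # inverted residual

# Example: a 32-channel 14x14 feature map
out = iRMBSketch(dim=32)(torch.randn(2, 32, 14, 14))          # -> (2, 32, 14, 14)
```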