Prompt tuning attempts to update only a small number of task-specific parameters in a pre-trained model, yet its performance is comparable to fine-tuning of the full parameter set on language understanding and generation tasks. In this work, we study the problem of prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. Through an extensive analysis, we show that the strategy can mitigate two issues faced by fine-tuning-based retrieval methods -- parameter inefficiency and weak generalizability. Notably, it can significantly improve the zero-shot generalization of retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy helps retrieval models achieve better generalization performance than the conventional approach of updating all parameters. Finally, to facilitate research on the cross-topic generalizability of retrievers, we curate and release an academic retrieval dataset covering 87 topics with 18K queries, making it the largest topic-specific retrieval dataset to date.
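The parameter-efficiency claim is easy to see in code. The sketch below is a minimal, generic illustration of prompt tuning, not the retriever studied in this work: the backbone encoder and its embeddings are frozen, and only a short matrix of soft-prompt vectors prepended to the input receives gradients. The toy Transformer backbone, its dimensions, and the `PromptedEncoder` name are assumptions for illustration.

```python
# Minimal sketch of prompt tuning: freeze the backbone, train only prompt embeddings.
# The tiny Transformer backbone and all sizes are illustrative assumptions,
# not the retriever used in the paper.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, prompt_len=16, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=layers)
        # The only trainable parameters: a short sequence of "soft prompt" vectors.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        # Freeze everything except the prompt.
        for p in self.embed.parameters():
            p.requires_grad = False
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, input_ids):
        tok = self.embed(input_ids)                          # (B, L, H), frozen
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        x = torch.cat([prompt, tok], dim=1)                  # prepend soft prompts
        h = self.backbone(x)
        return h[:, 0]                                       # first prompt position as pooled output

model = PromptedEncoder()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```

Even with these toy sizes the trainable fraction lands well below one percent, which mirrors the 0.1% figure quoted above.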
Loop closing is a fundamental component of simultaneous localization and mapping (SLAM) for autonomous mobile systems. In visual SLAM, bag of words (BoW) has achieved great success in loop closure, and the BoW features used for loop search can also be reused in the subsequent 6-DoF loop correction. However, for 3D LiDAR SLAM, state-of-the-art methods may fail to recognize loops in real time and usually cannot correct the full 6-DoF loop pose. To address this limitation, we present a novel bag-of-words method for real-time loop closing in 3D LiDAR SLAM, called BoW3D. The novelty of our method lies in that it not only efficiently recognizes revisited loops but also corrects the full 6-DoF loop pose in real time. BoW3D builds the bag of words on the 3D feature LinK3D, which is efficient, pose-invariant, and usable for accurate point-to-point matching. We embed the proposed method into a 3D LiDAR odometry system to evaluate its loop closing performance. We test our method on public datasets and compare it against other state-of-the-art algorithms. BoW3D shows better performance in terms of F1 max and extended precision scores in most scenarios, with excellent real-time performance. Notably, BoW3D takes 50 ms on average to recognize and correct loops on KITTI 00 (which includes 4K+ scans from a 64-ray LiDAR) when executed on a laptop with an Intel Core i7 @2.2 GHz processor.
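As a rough intuition for the retrieval half of loop closing (not BoW3D's algorithm, which operates on LinK3D descriptors and also recovers the full 6-DoF loop pose), a bag-of-words loop detector can be sketched as an inverted index from quantized feature "words" to the frames that contained them; a revisit is flagged when some past frame accumulates enough shared words. The class name, thresholds, and word representation below are illustrative assumptions.

```python
# Toy inverted-index loop detector: quantized descriptors act as "words",
# and candidate loop frames are ranked by how many words they share with
# the current scan. Purely illustrative.
from collections import defaultdict

class BagOfWordsLoopDetector:
    def __init__(self, min_shared_words=30, min_frame_gap=100):
        self.inverted_index = defaultdict(set)   # word id -> set of frame ids
        self.min_shared_words = min_shared_words
        self.min_frame_gap = min_frame_gap       # ignore very recent frames

    def query_and_insert(self, frame_id, words):
        votes = defaultdict(int)
        for w in words:
            for f in self.inverted_index[w]:
                if frame_id - f > self.min_frame_gap:
                    votes[f] += 1
        # Insert the current frame after querying so it cannot match itself.
        for w in words:
            self.inverted_index[w].add(frame_id)
        if not votes:
            return None
        best_frame, shared = max(votes.items(), key=lambda kv: kv[1])
        return best_frame if shared >= self.min_shared_words else None
```

A real system would follow the retrieved candidate with geometric verification and the 6-DoF correction step, which is exactly where BoW3D goes beyond appearance-only retrieval.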
Feature extraction and matching are fundamental parts of many computer vision tasks, such as 2D or 3D object detection, recognition, and registration. As is well known, 2D feature extraction and matching have already achieved great success. Unfortunately, in the 3D domain, current methods fail to support the extensive use of 3D LiDAR sensors in vision tasks because of their poor descriptiveness and inefficiency. To address this limitation, we propose a novel 3D feature representation: a linear keypoint representation for 3D LiDAR point clouds, called LinK3D. The novelty of LinK3D lies in that it fully considers the characteristics of LiDAR point clouds (e.g., sparseness and scene complexity) and represents the current keypoint with its robust neighbor keypoints, which provides strong constraints on the description of the current keypoint. The proposed LinK3D has been evaluated on two public datasets (i.e., KITTI and Steven VLP16), and the experimental results show that our method greatly outperforms the state of the art in matching performance. More importantly, LinK3D shows excellent real-time performance (at the 10 Hz frame rate of the LiDAR). LinK3D takes only 32 ms on average to extract features from a point cloud collected by a 64-ray LiDAR, and only about 8 ms to match two LiDAR scans, when executed on a laptop with an Intel Core i7 @2.2 GHz processor. Moreover, our method can be widely extended to various 3D vision applications. In this paper, we apply LinK3D to 3D registration, LiDAR odometry, and place recognition tasks, and achieve competitive results compared with the state-of-the-art methods.
Self-supervised learning has shown its potential to extract powerful visual representations without human annotations. Various works address self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize positive and negative samples to guide the training direction; (2) asymmetric network methods (e.g., BYOL, SimSiam) get rid of negative samples by introducing a predictor network and the stop-gradient operation; (3) feature decorrelation methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions. These methods look quite different in their variously motivated loss-function designs. The final accuracy numbers also vary, since different networks and tricks are used in different works. In this work, we demonstrate that these methods can be unified into the same form. Instead of comparing their loss functions, we derive a unified formula through gradient analysis. Furthermore, we conduct fair and detailed experiments to compare their performance. It turns out that there is little gap between these methods, and using a momentum encoder is the key factor that boosts performance. From this unified framework, we propose a simple but effective gradient form for self-supervised learning. It does not require a memory bank or a predictor network, yet still achieves state-of-the-art performance and can easily adopt other training strategies. Extensive experiments on linear evaluation and many downstream tasks also demonstrate its effectiveness. Code will be released.
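To make the phrase "gradient analysis" concrete, the block below works out the textbook gradient of the InfoNCE contrastive loss for a single normalized query embedding z with positive z+, comparison set {z_k} (which contains the positive), and temperature tau. This is a standard derivation offered only to illustrate the split into positive and negative gradient terms; it is not the unified formula of this work.

```latex
% Gradient of the InfoNCE loss for a normalized query embedding z with
% positive z^+, comparison set {z_k} (containing z^+), and temperature tau.
\[
\mathcal{L} = -\log \frac{\exp(z^{\top} z^{+}/\tau)}{\sum_{k}\exp(z^{\top} z_{k}/\tau)},
\qquad
p_{k} = \frac{\exp(z^{\top} z_{k}/\tau)}{\sum_{j}\exp(z^{\top} z_{j}/\tau)},
\]
\[
\frac{\partial \mathcal{L}}{\partial z}
  = -\frac{1}{\tau}\Bigl(\underbrace{z^{+}}_{\text{positive term}}
    \;-\; \underbrace{\sum_{k} p_{k}\, z_{k}}_{\text{negative (contrast) term}}\Bigr).
\]
```

Analogous decompositions can be written for the asymmetric-network and feature-decorrelation losses, which is what makes a single unified gradient form possible.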
Sparse Bayesian learning (SBL) constructs an extremely sparse probabilistic model with very competitive generalization. However, SBL needs to invert a large covariance matrix with complexity O(M^3) (M: feature size) to update the regularization priors, which makes it difficult to use in practice. SBL has three issues: (1) inverting the covariance matrix may yield singular solutions in some cases, which prevents SBL from converging; (2) it scales poorly to problems with high-dimensional feature spaces or large data sizes; (3) SBL is prone to memory overflow on large-scale data. This paper addresses these issues with DQN-SBL, a newly proposed diagonal quasi-Newton (DQN) method for SBL in which the inversion of the large covariance matrix is avoided, so that the complexity and memory footprint are reduced to O(M). DQN-SBL is thoroughly evaluated on non-linear classification and linear feature selection using various benchmark datasets of different sizes. The experimental results verify that DQN-SBL attains competitive generalization with a very sparse model and scales well to large-scale problems.
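For reference, the classical sparse Bayesian learning (relevance vector machine) updates below show where the O(M^3) inversion enters; this is the standard formulation, not the DQN-SBL update itself. Here Phi is the N x M design matrix, t the N targets, A = diag(alpha_1, ..., alpha_M) the per-weight regularization precisions, beta the noise precision, and Sigma the M x M posterior covariance.

```latex
% Classical SBL / RVM evidence updates: the M x M posterior covariance
% Sigma is re-inverted at every iteration, which is the O(M^3) bottleneck.
\[
\Sigma = \bigl(A + \beta\,\Phi^{\top}\Phi\bigr)^{-1},
\qquad
\mu = \beta\,\Sigma\,\Phi^{\top} t,
\]
\[
\gamma_i = 1 - \alpha_i \Sigma_{ii},
\qquad
\alpha_i^{\mathrm{new}} = \frac{\gamma_i}{\mu_i^{2}},
\qquad
\beta^{\mathrm{new}} = \frac{N - \sum_i \gamma_i}{\lVert t - \Phi\mu \rVert^{2}}.
\]
```

Every pass through these updates re-inverts the M x M matrix, which is exactly the step a diagonal quasi-Newton approximation is meant to avoid.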
Advances in information technology have led to extremely large datasets that are often kept in different storage centers. Existing statistical methods must be adapted to overcome the resulting computational obstacles while retaining statistical validity and efficiency. The split-and-conquer approach has been applied in many areas, including quantile processes, regression analysis, principal eigenspaces, and exponential families. We study the split-and-conquer approach for distributed learning of finite Gaussian mixtures. We propose a reduction strategy and develop an effective MM algorithm. The new estimator is shown to be consistent and to retain root-n consistency under some general conditions. Experiments based on simulated and real-world data show that the proposed split-and-conquer approach has comparable statistical performance to the global estimator based on the full dataset, when the latter is feasible. It can even slightly outperform the global estimator when the model assumptions do not match the real data. It also has better statistical and computational performance than some existing methods.
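A minimal sketch of the split-and-conquer workflow is given below. It fits a mixture on each split, pools the weighted local components, and reduces the pooled mixture to the target order by k-means on the component means followed by moment matching; this simple reduction is only a stand-in for the reduction estimator and MM algorithm proposed in this work, and the use of scikit-learn and all function names are assumptions for illustration.

```python
# Sketch: fit a Gaussian mixture on each local split, pool all local
# components into one big mixture, then reduce it to the target order.
# The reduction here is plain k-means on component means followed by
# moment matching -- a stand-in for the paper's reduction estimator.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def fit_local_mixtures(splits, order):
    comps = []
    total = sum(len(s) for s in splits)
    for X in splits:                           # each split lives on one machine
        gm = GaussianMixture(n_components=order).fit(X)
        w = gm.weights_ * (len(X) / total)     # re-weight by the split's share
        comps += list(zip(w, gm.means_, gm.covariances_))
    return comps                               # pooled mixture with order * n_splits components

def reduce_mixture(comps, order):
    weights = np.array([w for w, _, _ in comps])
    means = np.array([m for _, m, _ in comps])
    covs = np.array([c for _, _, c in comps])
    labels = KMeans(n_clusters=order, n_init=10).fit_predict(means)
    reduced = []
    for k in range(order):
        idx = labels == k
        w = weights[idx].sum()
        mu = (weights[idx, None] * means[idx]).sum(0) / w
        diff = means[idx] - mu
        cov = (weights[idx, None, None]
               * (covs[idx] + diff[:, :, None] * diff[:, None, :])).sum(0) / w
        reduced.append((w, mu, cov))
    return reduced
```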
Gaussian mixture reduction (GMR) is the problem of approximating a high-order Gaussian mixture by one of lower order. It is widely used in density estimation, recursive tracking in hidden Markov models, and belief propagation. In this work, we show that GMR can be formulated as an optimization problem that minimizes the composite transportation divergence (CTD) between two mixtures. The optimization problem can be solved by an easy-to-implement majorization-minimization (MM) algorithm. We show that the MM algorithm converges under general conditions. A popular and computationally efficient class of approaches to GMR is clustering-based iterative algorithms. However, these algorithms lack theoretical guarantees on whether and when they converge, or on what optimality they attain. We show that existing clustering-based algorithms are special cases of our MM algorithm, and hence their theoretical properties can be established. We further show that the performance of clustering-based algorithms can be improved by choosing various cost functions in the CTD. Numerical experiments are conducted to illustrate the effectiveness of our proposed extension.
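For orientation, the block below writes down the composite transportation divergence between the original mixture f = sum_i w_i f_i and the reduced mixture g = sum_j v_j g_j, together with a schematic assignment/barycenter iteration of the kind that clustering-based GMR algorithms perform; the exact majorization used in this work may differ, so the second display should be read as an illustration rather than the proposed algorithm.

```latex
% Composite transportation divergence between the original mixture
% f = sum_i w_i f_i and the reduced mixture g = sum_j v_j g_j, with a
% component-level cost c(., .) such as the KL divergence.
\[
\mathcal{T}_c(f, g)
  = \min_{\pi \ge 0}\Bigl\{ \sum_{i,j} \pi_{ij}\, c(f_i, g_j)
    \;:\; \textstyle\sum_j \pi_{ij} = w_i,\ \sum_i \pi_{ij} = v_j \Bigr\}.
\]
% Schematic assignment / barycenter iteration of the kind instantiated by
% clustering-based GMR algorithms (shown for intuition only).
\[
\sigma(i) = \arg\min_j c(f_i, g_j),
\qquad
g_j = \arg\min_{\phi} \sum_{i:\,\sigma(i)=j} w_i\, c(f_i, \phi),
\qquad
v_j = \sum_{i:\,\sigma(i)=j} w_i.
\]
```

When the component cost c is the KL divergence, the barycenter step reduces to moment matching, which recovers the familiar clustering-based reduction.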
Bird's-Eye-View (BEV) 3D object detection is a crucial multi-view technique for autonomous driving systems. Recently, plenty of works have been proposed, following a similar paradigm consisting of three essential components, i.e., camera feature extraction, BEV feature construction, and task heads. Among the three components, BEV feature construction is BEV-specific compared with 2D tasks. Existing methods aggregate the multi-view camera features into a flattened grid to construct the BEV feature. However, flattening the BEV space along the height dimension fails to emphasize the informative features at different heights. For example, a barrier is located at a low height while a truck is located at a high height. In this paper, we propose a novel method named BEV Slice Attention Network (BEV-SAN) for exploiting the intrinsic characteristics of different heights. Instead of flattening the BEV space, we first sample along the height dimension to build global and local BEV slices. Then, the features of the BEV slices are aggregated from the camera features and merged by an attention mechanism. Finally, we fuse the merged local and global BEV features with a transformer to generate the final feature map for the task heads. The purpose of the local BEV slices is to emphasize informative heights. To find them, we further propose a LiDAR-guided sampling strategy that leverages the statistical distribution of LiDAR points to determine the heights of the local slices (see the sketch below). Compared with uniform sampling, LiDAR-guided sampling can select more informative heights. We conduct detailed experiments to demonstrate the effectiveness of BEV-SAN. Code will be released.
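The LiDAR-guided sampling idea can be illustrated in a few lines (a rough sketch under assumed parameters, not the exact strategy used in BEV-SAN): build a histogram of LiDAR point heights and place the local slices where the point density is highest.

```python
# Sketch of LiDAR-guided height sampling: pick the heights of local BEV
# slices from the peaks of the LiDAR point-height histogram, instead of
# sampling the height range uniformly. Bin count, slice count, and the
# height range are illustrative assumptions.
import numpy as np

def lidar_guided_slice_heights(points_z, num_slices=4, z_range=(-5.0, 3.0), bins=64):
    hist, edges = np.histogram(points_z, bins=bins, range=z_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Take the bin centers holding the most points as the informative heights.
    top = np.argsort(hist)[-num_slices:]
    return np.sort(centers[top])

# Example: a synthetic cloud whose points cluster near the ground and ~1.5 m up.
z = np.concatenate([np.random.normal(-1.6, 0.1, 5000),
                    np.random.normal(1.5, 0.3, 1000)])
print(lidar_guided_slice_heights(z))
```

A practical version would additionally enforce a minimum separation between the selected heights so that several slices do not collapse onto a single density peak.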
Time series forecasting is a long-standing challenge because real-world information arises in a variety of scenarios (e.g., energy, weather, traffic, economics, earthquake warning). However, the forecasts of some mainstream models deviate dramatically from the ground truth. We believe the reason is that these models lack the ability to capture frequency information, which is abundant in real-world datasets. At present, the mainstream frequency-information extraction methods are based on the Fourier transform (FT). However, the use of the FT is problematic due to the Gibbs phenomenon: if the values at the two ends of a sequence differ significantly, oscillatory approximations appear around the ends and high-frequency noise is introduced. We therefore propose a novel frequency enhanced channel attention that adaptively models frequency interdependencies between channels based on the Discrete Cosine Transform, which intrinsically avoids the high-frequency noise caused by the problematic periodicity assumed by the Fourier transform, i.e., the Gibbs phenomenon. We show that this network generalizes extremely effectively across six real-world datasets and achieves state-of-the-art performance, and we further demonstrate that the frequency enhanced channel attention module can be flexibly applied to different networks. The module improves the prediction ability of existing mainstream networks, reducing MSE by 35.99% on LSTM, 10.01% on Reformer, 8.71% on Informer, 8.29% on Autoformer, 8.06% on Transformer, etc., at a slight computational cost and with just a few lines of code. Our code and data are available at https://github.com/Zero-coder/FECAM.
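A minimal sketch of what a DCT-based channel attention can look like is given below; it follows the common squeeze-and-excitation pattern (summarize each channel by its DCT-II coefficients, squeeze through a small MLP, gate the channels with a sigmoid) and is an assumed design for illustration, not a reproduction of the module in the linked repository.

```python
# Sketch of frequency-enhanced channel attention: summarize each channel of a
# time series by its DCT-II coefficients (instead of a plain average), squeeze
# through a small MLP, and rescale the channels with sigmoid gates.
# Shapes, the reduction ratio, and the use of all coefficients are assumptions.
import math
import torch
import torch.nn as nn

def dct_matrix(n):
    # Orthonormal DCT-II basis: C[k, t] = s_k * cos(pi/n * (t + 0.5) * k)
    k = torch.arange(n).unsqueeze(1).float()
    t = torch.arange(n).unsqueeze(0).float()
    basis = torch.cos(math.pi / n * (t + 0.5) * k)
    scale = torch.full((n, 1), math.sqrt(2.0 / n))
    scale[0, 0] = math.sqrt(1.0 / n)
    return basis * scale

class DCTChannelAttention(nn.Module):
    def __init__(self, channels, seq_len, reduction=4):
        super().__init__()
        self.register_buffer("dct", dct_matrix(seq_len))      # (L, L)
        self.fc = nn.Sequential(
            nn.Linear(seq_len, seq_len // reduction),
            nn.ReLU(),
            nn.Linear(seq_len // reduction, 1),
        )
        self.gate = nn.Sigmoid()

    def forward(self, x):                  # x: (batch, channels, seq_len)
        freq = x @ self.dct.T              # per-channel DCT coefficients
        score = self.fc(freq).squeeze(-1)  # (batch, channels)
        return x * self.gate(score).unsqueeze(-1)

x = torch.randn(8, 7, 96)                  # e.g., 7 variables, 96 time steps
out = DCTChannelAttention(channels=7, seq_len=96)(x)
print(out.shape)                           # torch.Size([8, 7, 96])
```

Because the DCT implicitly assumes an even, symmetric extension of the sequence rather than a periodic one, it avoids the boundary discontinuity that produces Gibbs oscillations under the Fourier transform.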
Open Information Extraction (OIE) methods extract a large number of OIE triples (noun phrase, relation phrase, noun phrase) from text, which compose large Open Knowledge Bases (OKBs). However, noun phrases (NPs) and relation phrases (RPs) in OKBs are not canonicalized and often appear in different paraphrased textual variants, which leads to redundant and ambiguous facts. To address this problem, there are two related tasks: OKB canonicalization (i.e., converting NPs and RPs to a canonicalized form) and OKB linking (i.e., linking NPs and RPs with their corresponding entities and relations in a curated Knowledge Base such as DBPedia). These two tasks are tightly coupled, and one task can benefit significantly from the other. However, they have so far been studied in isolation. In this paper, we explore the task of joint OKB canonicalization and linking for the first time, and propose a novel framework, JOCL, based on a factor graph model to make the two tasks reinforce each other. JOCL is flexible enough to combine different signals from both tasks and can be extended to fit any new signals. A thorough experimental study over two large-scale OIE triple data sets shows that our framework outperforms all the baseline methods for the task of OKB canonicalization (OKB linking) in terms of average F1 (accuracy).