Out-of-Domain (OOD) intent detection is important for practical dialog systems. To alleviate the lack of OOD training samples, some works propose synthesizing pseudo OOD samples and directly assigning one-hot OOD labels to them. However, these one-hot labels introduce noise into the training process because some hard pseudo OOD samples may coincide with In-Domain (IND) intents. In this paper, we propose an adaptive soft pseudo labeling (ASoul) method that estimates soft labels for pseudo OOD samples when training OOD detectors. Semantic connections between pseudo OOD samples and IND intents are captured using an embedding graph. A co-training framework is further introduced to produce soft labels that follow the smoothness assumption, i.e., close samples are likely to have similar labels. Extensive experiments on three benchmark datasets show that ASoul consistently improves OOD detection performance and outperforms various competitive baselines.
translated by Google Translate
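As a rough illustration of the soft-labeling idea (not the paper's actual algorithm, which uses an embedding graph and co-training), a soft label for a pseudo OOD sample can be spread over IND intents and an OOD class according to embedding similarity. The centroids, temperature, and bias below are all hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def soft_label(pseudo_emb, ind_centroids, ood_bias=0.0, temperature=0.5):
    """Distribute label mass over IND classes + one OOD class via a
    softmax over similarities; a hard pseudo OOD sample lying close to
    an IND centroid keeps some probability on that intent instead of a
    noisy one-hot OOD label."""
    logits = [cosine(pseudo_emb, c) for c in ind_centroids] + [ood_bias]
    exps = [math.exp(x / temperature) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Two IND intent centroids and a pseudo OOD sample near the first one.
centroids = [[1.0, 0.0], [0.0, 1.0]]
label = soft_label([0.9, 0.1], centroids)
print(label)  # mass split across [IND_0, IND_1, OOD], summing to 1
```

A hard pseudo sample near `IND_0` thus keeps most of its mass on that intent rather than being forced onto a one-hot OOD label.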
QA models with lifelong learning (LL) ability are important for practical QA applications, and architecture-based LL methods are reported to be an effective implementation for these models. However, it is non-trivial to extend previous approaches to QA tasks, since they either require access to task identities in the testing phase or do not explicitly model samples from unseen tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong QA model that tries to learn a sequence of QA tasks with a prompt-enhanced language model. Diana uses four types of hierarchically organized prompts to capture QA knowledge from different granularities. Specifically, we dedicate task-level prompts to capturing task-specific knowledge to retain high LL performance, and maintain instance-level prompts to learn knowledge shared across different input samples to improve the model's generalization performance. Moreover, we dedicate separate prompts to explicitly modeling unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art lifelong QA models, especially in handling unseen tasks.
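The prompt-key mechanism can be sketched as a nearest-key lookup with a fallback for unseen tasks; this is only an illustration, with made-up prompt names, embeddings, and threshold (the real model learns keys jointly with the prompts):

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def select_prompt(query_emb, prompt_keys, unseen_threshold=0.6):
    """Pick the task-level prompt whose key vector best matches the
    query embedding; fall back to a dedicated 'unseen' prompt when no
    key is similar enough, so unseen tasks are modeled explicitly."""
    sims = {name: cos(query_emb, key) for name, key in prompt_keys.items()}
    best, best_sim = max(sims.items(), key=lambda kv: kv[1])
    return best if best_sim >= unseen_threshold else "unseen"

keys = {"squad_like": [1.0, 0.0], "boolq_like": [0.0, 1.0]}
print(select_prompt([0.9, 0.2], keys))    # matches the squad_like key
print(select_prompt([-0.7, -0.7], keys))  # no good match -> unseen
```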
In this paper, we propose a probabilistic continuous-time visual-inertial odometry (VIO) for rolling shutter cameras. The continuous-time trajectory formulation naturally facilitates the fusion of asynchronous high-frequency IMU data and motion-distorted rolling shutter images. To prevent an intractable computational load, the proposed VIO is sliding-window and keyframe-based. We propose to probabilistically marginalize the control points to keep a constant number of keyframes in the sliding window. Furthermore, the line exposure time difference (line delay) of the rolling shutter camera can be calibrated online in our continuous-time VIO. To extensively examine the performance of our continuous-time VIO, experiments are conducted on the publicly available WHU-RSVI, TUM-RSVI, and SenseTime-RSVI rolling shutter datasets. The results demonstrate that the proposed continuous-time VIO significantly outperforms existing state-of-the-art VIO methods. The codebase of this paper will also be open-sourced at \url{https://github.com/april-zju/ctrl-vio}.
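The line-delay idea can be sketched as follows: each image row has its own exposure time, so the pose for that row is queried from the continuous-time trajectory at that instant. This toy uses linear interpolation on a 1-D "pose" as a stand-in for a spline trajectory, and the line-delay value is just an assumed order of magnitude:

```python
def row_timestamp(frame_start, row, line_delay):
    """Each row of a rolling shutter image is exposed line_delay
    seconds after the previous one."""
    return frame_start + row * line_delay

def interp_pose(t, t0, pose0, t1, pose1):
    """Linear interpolation on a scalar 'pose' stands in for querying
    a spline-based continuous-time trajectory at time t."""
    a = (t - t0) / (t1 - t0)
    return (1 - a) * pose0 + a * pose1

line_delay = 29.4e-6  # seconds per row; an assumed, typical magnitude
t_row = row_timestamp(0.0, 480, line_delay)
pose = interp_pose(t_row, 0.0, 0.0, 0.033, 1.0)  # pose at that row's exposure instant
print(t_row, pose)
```

Calibrating the line delay online amounts to treating `line_delay` as one more state estimated alongside the trajectory.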
Vision transformers (ViTs) are usually considered less light-weight than convolutional neural networks (CNNs) due to the lack of inductive bias. Recent works thus resort to convolutions as a plug-and-play module and embed them in various ViT counterparts. In this paper, we argue that convolutional kernels perform information aggregation to connect all tokens, but such explicit aggregation is actually unnecessary for light-weight ViTs if it can function in a more homogeneous way. Inspired by this, we present LightViT, a new family of light-weight ViTs, to achieve a better accuracy-efficiency balance on pure transformer blocks without convolution. Concretely, we introduce a global yet efficient aggregation scheme into both the self-attention and the feed-forward network (FFN) of ViTs, where additional learnable tokens are introduced to capture global dependencies, and bi-dimensional channel and spatial attentions are imposed over token embeddings. Experiments show that our model achieves significant improvements on image classification, object detection, and semantic segmentation tasks. For example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G FLOPs, outperforming PVTv2-B0 by 8.2% while running 11% faster on GPU. Code is available at https://github.com/hunto/LightViT.
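The learnable-global-token aggregation can be illustrated roughly as follows (a plain-Python sketch, not the LightViT implementation; the token values are arbitrary). Each global token attends over all patch tokens, so cost scales with the number of global tokens times the number of patches instead of quadratically in the patches:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def aggregate(global_tokens, patch_tokens):
    """Each learnable global token attends over all patch tokens and
    gathers a weighted summary: O(G*N) work rather than the O(N^2) of
    full token-to-token self-attention."""
    out = []
    for g in global_tokens:
        scores = softmax([sum(a * b for a, b in zip(g, p)) for p in patch_tokens])
        summary = [sum(w * p[d] for w, p in zip(scores, patch_tokens))
                   for d in range(len(g))]
        out.append(summary)
    return out

patches = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
globals_ = [[1.0, 0.0]]  # one learnable global token
out = aggregate(globals_, patches)
print(out)
```

The summaries would then be broadcast back to the patch tokens, completing the global information exchange.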
LiDAR sensors are widely used in autonomous driving for their reliable 3D spatial information. However, LiDAR data are sparse, and the frequency of LiDAR is lower than that of cameras. To generate denser point clouds both spatially and temporally, we propose the first future pseudo-LiDAR frame prediction network. Given consecutive sparse depth maps and RGB images, we first coarsely predict a future dense depth map based on dynamic motion information. To eliminate errors in optical flow estimation, an inter-frame aggregation module is proposed to fuse the warped depth maps with adaptive weights. Then, we refine the predicted dense depth map using static contextual information. The future pseudo-LiDAR frame is obtained by converting the predicted dense depth map into the corresponding 3D point cloud. Experimental results show that our method outperforms existing solutions on a popular benchmark dataset.
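The final depth-to-point-cloud conversion follows the standard pinhole back-projection. A minimal sketch with assumed toy intrinsics (real intrinsics come from camera calibration):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a dense depth map into a pseudo-LiDAR point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid (missing) depths
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A tiny 2x2 depth map with two valid pixels and toy intrinsics.
depth = [[0.0, 2.0],
         [4.0, 0.0]]
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts)
```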
We present the High-Resolution Transformer (HRFormer), which learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations at high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet), together with local-window self-attention that performs self-attention over small non-overlapping image windows, to improve memory and computation efficiency. Furthermore, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on human pose estimation and semantic segmentation tasks; e.g., HRFormer outperforms strong baselines on COCO pose estimation with 50% fewer parameters and 30% fewer FLOPs. Code is available at: https://github.com/HRNet/HRFormer.
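Local-window self-attention starts from a window partition like the following sketch (illustrative only; HRFormer operates on feature tensors, not nested lists). Attention is then computed inside each window independently, so the cost per token depends on the window size rather than the full image size:

```python
def window_partition(feat, ws):
    """Split an H x W feature map into non-overlapping ws x ws windows;
    self-attention is computed inside each window independently."""
    h, w = len(feat), len(feat[0])
    assert h % ws == 0 and w % ws == 0, "pad the map first"
    windows = []
    for i in range(0, h, ws):
        for j in range(0, w, ws):
            windows.append([row[j:j + ws] for row in feat[i:i + ws]])
    return windows

feat = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11],
        [12, 13, 14, 15]]
wins = window_partition(feat, 2)
print(len(wins), wins[0])  # 4 windows; each token attends to ws*ws tokens
```

Because the windows do not overlap, some cross-window mechanism is needed, which is the role of the convolution inserted into the FFN.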
We propose Self-Adaptive Training: a unified training algorithm that dynamically calibrates and enhances the training process by model predictions, without incurring extra computational cost, to advance both supervised and self-supervised learning of deep neural networks. We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples. Our analysis shows that model predictions are able to magnify useful underlying information in the data, and that this phenomenon occurs even in the absence of any label information, highlighting that model predictions can substantially benefit the training process: self-adaptive training improves the generalization of deep networks under noise and enhances self-supervised representation learning. The analysis also sheds light on understanding deep learning, e.g., offering a potential explanation of the recently discovered double descent phenomenon in empirical risk minimization and the collapsing issue of state-of-the-art self-supervised learning algorithms. Experiments on the CIFAR, STL, and ImageNet datasets verify the effectiveness of our method in three applications: classification with label noise, selective classification, and linear evaluation. To facilitate future research, the code has been made publicly available at https://github.com/LayneH/self-adaptive-training.
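One common way to realize such prediction-driven calibration is an exponential moving average of model outputs; the sketch below is a generic illustration of the idea, not the paper's exact update rule, and the momentum and toy values are assumed:

```python
def update_target(target, prediction, momentum=0.9):
    """Exponential moving average of model predictions: the training
    target drifts from the (possibly noisy) one-hot label toward what
    the model consistently predicts, at no extra forward-pass cost."""
    return [momentum * t + (1 - momentum) * p
            for t, p in zip(target, prediction)]

target = [1.0, 0.0, 0.0]      # one-hot label, possibly mislabeled
prediction = [0.1, 0.8, 0.1]  # the model repeatedly favors class 1
for _ in range(30):
    target = update_target(target, prediction)
print(target)  # most of the mass has moved toward class 1
```

If the label is noise but the model's predictions are stable, the target is gradually corrected; if label and prediction agree, the target is left essentially unchanged.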
This paper revisits a fundamental problem in statistical inference from a non-asymptotic theoretical viewpoint: the construction of confidence sets. We establish a finite-sample bound for the estimator, characterizing its asymptotic behavior in a non-asymptotic fashion. An important feature of our bound is that its dimension dependency is captured by the effective dimension, the trace of the limiting sandwich covariance, which can be much smaller than the parameter dimension in some regimes. We then illustrate how the bound can be used to obtain a confidence set whose shape is adapted to the optimization landscape induced by the loss function. Unlike previous works that rely heavily on the strong convexity of the loss function, we only assume the Hessian is lower bounded at the optimum and allow it to gradually become degenerate. This property is formalized by the notion of generalized self-concordance, which originated in convex optimization. Moreover, we demonstrate how the effective dimension can be estimated from data and characterize its estimation accuracy. We apply our results to maximum likelihood estimation with generalized linear models, score matching with exponential families, and hypothesis testing with Rao's score test.
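For intuition, in a diagonal toy model the effective dimension reduces to a simple sum: with Hessian H and gradient covariance G both diagonal, the sandwich covariance H^{-1} G H^{-1} has trace sum_i g_i / h_i^2. The values below are purely illustrative:

```python
def effective_dimension(hessian_diag, grad_cov_diag):
    """Trace of the sandwich covariance H^{-1} G H^{-1} for a diagonal
    toy model: sum of g_i / h_i^2 over coordinates."""
    return sum(g / (h * h) for h, g in zip(hessian_diag, grad_cov_diag))

# Well-specified toy case G = H: the trace becomes sum(1 / h_i).
H = [1.0, 2.0, 4.0]
print(effective_dimension(H, H))  # 1 + 0.5 + 0.25 = 1.75
# The parameter dimension is 3, but high-curvature directions
# contribute little, so the effective dimension is smaller.
```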
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
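A toy version of a divergence frontier over quantized (histogram) distributions might look like the sketch below; the two histograms are made up, and the coarse mixture grid merely illustrates the idea of tracing two error types against each other:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions on the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def divergence_frontier(p, q, grid=5):
    """For mixtures r = lam*p + (1-lam)*q, trace the pairs
    (KL(q||r), KL(p||r)); the two coordinates capture the two error
    types in generative modeling (the model placing mass where the
    data has none, and missing mass the data does have)."""
    pts = []
    for k in range(1, grid):
        lam = k / grid
        r = [lam * pi + (1 - lam) * qi for pi, qi in zip(p, q)]
        pts.append((kl(q, r), kl(p, r)))
    return pts

P = [0.7, 0.2, 0.1]  # stand-in for a quantized human-text distribution
Q = [0.3, 0.3, 0.4]  # stand-in for a quantized model-text distribution
frontier = divergence_frontier(P, Q)
print(frontier)
```

A summary statistic of this frontier (e.g., the area under it) then yields a single scalar comparison between the two distributions.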
Kernels are efficient in representing nonlocal dependence and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small noise limit. The data-adaptive prior's covariance is the inversion operator with a hyper-parameter selected adaptively to the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of errors: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small noise limits.
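The divergence phenomenon can be illustrated with a single spectral coefficient of the posterior mean. All numbers below are illustrative, and the constant "adaptive" regularization level merely stands in for the L-curve choice:

```python
def posterior_coeff(sigma, y, lam):
    """Spectral coefficient of a regularized posterior mean along one
    eigen-direction: c = sigma * y / (sigma**2 + lam)."""
    return sigma * y / (sigma ** 2 + lam)

sigma = 1e-6  # near-zero eigenvalue of the inversion operator
y = 1e-3      # fixed perturbation (e.g., model error) in that eigenspace
for noise in [1e-3, 1e-5, 1e-7]:
    # Fixed prior: the effective regularization shrinks with the noise,
    # so the coefficient blows up toward y / sigma as noise -> 0.
    fixed = posterior_coeff(sigma, y, lam=noise ** 2)
    # Data-adaptive choice: a regularization level matched to the
    # perturbation keeps the coefficient bounded.
    adaptive = posterior_coeff(sigma, y, lam=1e-4)
    print(noise, fixed, adaptive)
```

The fixed-prior coefficient grows without bound as the noise shrinks, while the adaptive choice stays small, mirroring the abstract's contrast between the two priors.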