Designing better deep networks and designing better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network with several modules such as CNNs, LSTMs and attention. Recent methods combine the Transformer with these modules for better performance. However, it requires tedious optimization tricks to train a network composed of mixed modules, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming to provide off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one processes a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatio-temporal representations for good decision-making. Experiments show that TIT consistently achieves satisfactory performance across different settings.
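The cascaded design can be sketched in plain Python. Below, a minimal single-head, projection-free self-attention stands in for a full Transformer block; the function names, the mean-pooling step, and all dimensions are illustrative assumptions, not the paper's exact architecture:

```python
import math

def attention(seq, dim):
    # Minimal single-head self-attention with identity projections:
    # a stand-in for a full Transformer block, just enough to show data flow.
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in seq]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        out.append([sum(w / z * v[d] for w, v in zip(weights, seq))
                    for d in range(dim)])
    return out

def tit_forward(obs_history, dim):
    # TIT-style cascade (sketch): the inner Transformer encodes the tokens
    # of each single observation; the pooled result becomes one token for
    # the outer Transformer, which attends over the observation history.
    history_tokens = []
    for obs in obs_history:                # obs: list of token vectors
        encoded = attention(obs, dim)      # inner Transformer
        pooled = [sum(t[d] for t in encoded) / len(encoded)
                  for d in range(dim)]
        history_tokens.append(pooled)
    return attention(history_tokens, dim)  # outer Transformer
```

The outer call returns one spatio-temporal embedding per step of the history, which a policy or value head could then consume.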
Proximal Policy Optimization (PPO) is a highly popular policy-based deep reinforcement learning (DRL) approach. However, we observe that the homogeneous exploration process in PPO can cause an unexpected stability issue during training. To address this issue, we propose PPO-UE, a PPO variant equipped with self-adaptive uncertainty-aware exploration (UE) based on a ratio uncertainty level. PPO-UE is designed to improve convergence speed and performance with an optimized ratio uncertainty level. Extensive sensitivity analysis over the ratio uncertainty level shows that PPO-UE considerably outperforms the baseline PPO on Roboschool continuous control tasks.
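For context, the probability ratio at the heart of PPO's clipped surrogate is what the "ratio uncertainty level" is defined over. The sketch below is the standard per-sample PPO objective, not the PPO-UE modification itself; the comment's reading of "ratio uncertainty" is an assumption:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Standard PPO clipped surrogate for one sample, where
    # ratio = pi_new(a|s) / pi_old(a|s). PPO-UE's "ratio uncertainty
    # level" presumably measures how far this ratio drifts from 1
    # (our reading of the abstract; the exact definition is in the paper).
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```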
Mixed-precision quantization has been widely applied to deep neural networks (DNNs), as it leads to significantly better efficiency-accuracy tradeoffs than uniform quantization. Meanwhile, determining the exact precision of each layer remains challenging. Previous attempts at bit-level regularization and pruning-based dynamic precision adjustment during training suffer from noisy gradients and unstable convergence. In this work, we propose Continuous Sparsification Quantization (CSQ), a bit-level training method to search for mixed-precision quantization schemes with improved stability. CSQ stabilizes the bit-level mixed-precision training process with a bi-level gradual continuous sparsification on both the bit values of the quantized weights and the bit selection that determines the quantization precision of each layer. The continuous sparsification scheme enables fully differentiable training without gradient approximation while achieving an exact quantized model in the end. A budget-aware regularization of total model size enables the dynamic growth and pruning of each layer's precision towards a mixed-precision quantization scheme of the desired size. Extensive experiments show CSQ achieves a better efficiency-accuracy tradeoff than previous methods on multiple models and datasets.
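The continuous-sparsification idea of softly gating individual bits can be sketched as follows. The function and parameter names are illustrative, and this is a simplification of CSQ's actual bi-level formulation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def soft_bit_value(bits, gate_logits, beta):
    # Continuous-sparsification sketch: each bit of a quantized weight is
    # scaled by a continuous gate sigmoid(beta * s). Annealing beta upward
    # during training hardens the gates toward {0, 1}, so the soft value
    # converges to an exact low-bit integer while staying differentiable.
    return sum(sigmoid(beta * s) * b * (2 ** i)
               for i, (b, s) in enumerate(zip(bits, gate_logits)))
```

Because the gates are smooth in `gate_logits`, gradients flow through the bit selection without straight-through approximation, which is the stability benefit the abstract describes.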
The complicated architecture and high training cost of vision transformers urge the exploration of post-training quantization. However, the heavy-tailed distribution of vision transformer activations hinders the effectiveness of previous post-training quantization methods, even with advanced quantizer designs. Instead of tuning the quantizer to better fit the complicated activation distribution, this paper proposes NoisyQuant, a quantizer-agnostic enhancement for the post-training activation quantization performance of vision transformers. We make a surprising theoretical discovery that, for a given quantizer, adding a fixed Uniform noisy bias to the values being quantized can significantly reduce the quantization error under provable conditions. Building on this theoretical insight, NoisyQuant achieves the first success in actively altering the heavy-tailed activation distribution with an additive noisy bias to fit a given quantizer. Extensive experiments show NoisyQuant largely improves the post-training quantization performance of vision transformers with minimal computation overhead. For instance, on linear uniform 6-bit activation quantization, NoisyQuant improves SOTA top-1 accuracy on ImageNet by up to 1.7%, 1.1% and 0.5% for ViT, DeiT, and Swin Transformer respectively, achieving on-par or even higher performance than previous nonlinear, mixed-precision quantization methods.
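The core mechanism, quantize with a fixed noisy bias added and then subtract the bias back out, can be sketched in a few lines. The scalar uniform quantizer and the default `noise_range` below are illustrative; choosing the noise range per the paper's provable conditions is not reproduced here:

```python
import random

def uniform_quantize(x, scale, n_bits=6):
    # A plain linear uniform quantizer (illustrative; any fixed quantizer works).
    qmax = 2 ** (n_bits - 1) - 1
    q = max(-qmax - 1, min(qmax, round(x / scale)))
    return q * scale

def noisy_quantize(xs, scale, n_bits=6, noise_range=None):
    # NoisyQuant-style sketch: sample a fixed Uniform noisy bias once, add it
    # to the activations before quantization, and subtract it afterwards.
    # The bias is fixed and reused across inputs; it is not per-sample noise.
    if noise_range is None:
        noise_range = scale / 2  # illustrative default, not the paper's choice
    bias = [random.uniform(-noise_range, noise_range) for _ in xs]
    return [uniform_quantize(x + b, scale, n_bits) - b
            for x, b in zip(xs, bias)]
```

Since the bias is fixed, its subtraction can be folded into the following layer at no inference cost, which is how the overhead stays minimal.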
Supervised learning aims to train a classifier under the assumption that training and test data are from the same distribution. To relax this assumption, researchers have studied a more realistic setting: out-of-distribution (OOD) detection, where test data may come from classes that are unknown during training (i.e., OOD data). Due to the unavailability and diversity of OOD data, good generalization ability is crucial for effective OOD detection algorithms. To study the generalization of OOD detection, in this paper, we investigate the probably approximately correct (PAC) learning theory of OOD detection, which has been posed by researchers as an open problem. First, we find a necessary condition for the learnability of OOD detection. Then, using this condition, we prove several impossibility theorems for the learnability of OOD detection under certain scenarios. Although the impossibility theorems are frustrating, we find that some of their conditions may not hold in certain practical scenarios. Based on this observation, we then give several necessary and sufficient conditions to characterize the learnability of OOD detection in those practical scenarios. Lastly, we also offer theoretical support for several representative OOD detection works based on our OOD theory.
Quantization is widely used as a model compression technique, which obtains efficient models by converting floating-point weights and activations in the neural network into low-bit integers. Quantization has been proven to work well on convolutional neural networks and Transformer-based models. Despite the success of these models, recent works have shown that MLP-based models are able to achieve comparable results on various tasks ranging from computer vision and NLP to 3D point clouds, while achieving higher throughput due to their parallelism and network simplicity. However, as we show in this paper, directly applying quantization to MLP-based models leads to significant accuracy degradation. Based on our analysis, two major issues account for the accuracy gap: 1) the range of activations in MLP-based models can be too large to quantize, and 2) specific components in MLP-based models are sensitive to quantization. Consequently, we propose to 1) apply LayerNorm to control the quantization range of activations, 2) use bounded activation functions, 3) apply percentile quantization on activations, 4) use our improved module called multiple token-mixing MLPs, and 5) apply a linear asymmetric quantizer to sensitive operations. Equipped with the above techniques, our Q-MLP models obtain 79.68% top-1 accuracy on ImageNet with 8-bit uniform quantization (model size 30 MB) and 78.47% with 4-bit quantization (15 MB).
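Of these techniques, percentile quantization is easy to sketch in plain Python. The 8-bit symmetric quantizer and the quantile rule below are illustrative simplifications, not the paper's exact scheme:

```python
def percentile_clip(acts, pct=0.99):
    # Clip the activation range at the pct-quantile of |activation|
    # instead of the raw max, so rare outliers don't inflate the scale.
    mags = sorted(abs(a) for a in acts)
    return mags[int(pct * (len(mags) - 1))]

def quantize_activations(acts, n_bits=8, pct=0.99):
    # Symmetric uniform quantizer whose range is percentile-clipped
    # (an illustrative simplification; outliers are clamped to the range).
    qmax = 2 ** (n_bits - 1) - 1
    scale = percentile_clip(acts, pct) / qmax
    return [max(-qmax - 1, min(qmax, round(a / scale))) * scale
            for a in acts]
```

The point of the clipping is visible with a heavy-tailed input: a single large outlier no longer forces a coarse scale on the other 99% of activations.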
Recovering the underlying directed acyclic graph (DAG) structure from observational data is extremely challenging due to the combinatorial nature of the DAG-constrained optimization problem. Recently, DAG learning has been cast as a continuous optimization problem by characterizing the DAG constraint as a smooth equality constraint, typically based on polynomials over the adjacency matrix. Existing methods place very small coefficients on the high-order polynomial terms for stability, arguing that large coefficients on higher-order terms are harmful due to numerical explosion. On the contrary, we discover that large coefficients on higher-order terms are beneficial for DAG learning: when the spectral radius of the adjacency matrix is small, larger coefficients on the higher-order terms can approximate the DAG constraint much better than their small counterparts. Based on this, we propose a novel DAG learning method with efficient truncated matrix power iteration to approximate geometric-series-based DAG constraints. Empirically, our DAG learning method outperforms the previous state-of-the-art in various settings, often by a factor of 3 or more in structural Hamming distance.
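The geometric-series DAG constraint being approximated can be sketched naively as follows. This pure-Python version uses uniform coefficients and repeated dense matrix products; the paper's truncated matrix power iteration evaluates such a series far more efficiently, and its coefficient choices differ:

```python
def matmul(A, B):
    # Dense square-matrix product (naive, for illustration only).
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dag_penalty(W, K):
    # Truncated geometric-series DAG constraint (sketch):
    #   h(W) = sum_{k=1..K} tr((W o W)^k),
    # where o is the elementwise product. tr((W o W)^k) sums the weights of
    # length-k cycles, so h(W) == 0 iff the graph has no cycle of length <= K
    # (for K >= number of nodes, iff W encodes a DAG).
    n = len(W)
    S = [[W[i][j] ** 2 for j in range(n)] for i in range(n)]  # W o W
    P = S
    h = sum(P[i][i] for i in range(n))
    for _ in range(K - 1):
        P = matmul(P, S)
        h += sum(P[i][i] for i in range(n))
    return h
```

An acyclic adjacency matrix drives the penalty to exactly zero, while any cycle keeps it strictly positive, which is what lets the constraint be enforced smoothly during continuous optimization.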
Multi-source domain adaptation (MSDA) learns to predict the labels of target-domain data, under the setting where all data from multiple source domains are labelled and the data from the target domain are unlabelled. To solve this problem, most methods focus on learning invariant representations across domains. However, their success relies heavily on the assumption that the label distribution remains invariant across domains. To alleviate this, we propose a new assumption, latent covariate shift, in which the marginal distribution of the latent content variable changes across domains, while the conditional distribution of the label given the latent content remains invariant across domains. We introduce a latent style variable to complement the latent content variable in a latent causal graph describing the data and label generating process. We show that although the latent style variable is unidentifiable due to the transitivity property of the latent space, the latent content variable can be identified up to simple scaling under some mild conditions. This motivates us to propose a novel method for MSDA that learns the invariant label distribution conditioned on the latent content variable, instead of learning invariant representations. Empirical evaluation on simulated and real data demonstrates the effectiveness of the proposed method compared with many state-of-the-art methods based on invariant representations.
Causal representation learning reveals the latent high-level causal variables behind low-level observations, which has great potential for a set of downstream tasks of interest. Nevertheless, identifying the true latent causal representations from observed data is a great challenge. In this work we focus on identifying latent causal variables. To this end, we analyze three intrinsic properties of the latent space: transitivity, permutation and scaling. We show that transitivity severely hinders the identifiability of latent causal variables, while permutation and scaling guide the direction in which latent causal variables can be identified. To break transitivity, we assume that the underlying latent causal relations are linear Gaussian models, in which the weights, means and variances of the Gaussian noise are modulated by an additionally observed variable. Under these assumptions, we theoretically show that the latent causal variables can be identified up to trivial permutation and scaling. Building on this theoretical result, we propose a novel method, termed the Structural Causal Variational AutoEncoder, which directly learns the latent causal variables together with the mapping from latent causal variables to observed ones. Experimental results on synthetic and real data demonstrate the identifiability result and the ability of the proposed method to learn latent causal variables.
Code generation aims to automatically generate code snippets from natural language descriptions. Generally, mainstream code generation methods rely on a large amount of paired training data, comprising both natural language descriptions and code. However, in some domain-specific scenarios, it is difficult to build such a large paired corpus for code generation, because no paired data is directly available and it takes substantial effort to manually write code descriptions to construct a high-quality training dataset. Due to the limited training data, the generation model cannot be well trained and is likely to overfit, making it unsatisfactory for real-world use. To this end, in this paper, we propose a task augmentation method that incorporates domain knowledge into the code generation model through auxiliary tasks and a subtoken-level TranX model, extending the original TranX model to support subtoken-level code generation. To verify our proposed method, we collect a real-world code generation dataset and conduct experiments on it. Our experimental results show that the subtoken-level TranX model outperforms the original TranX model and the Transformer model on our dataset, and that with the help of our task augmentation method, the exact match accuracy of subtoken-level TranX improves significantly by 12.75\%. The model performance on multiple code categories satisfies the requirements for application in industrial systems. Our proposed method has been adopted by Alibaba's \emph{BizCook} platform. To the best of our knowledge, this is the first domain code generation system adopted in an industrial development environment.
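Subtoken-level generation predicts identifiers piece by piece rather than as single vocabulary items. A minimal identifier splitter, an illustrative helper and not the paper's actual tokenizer, might look like:

```python
import re

def subtokens(identifier):
    # Split an identifier at camelCase and snake_case boundaries, so a
    # generator can emit rare identifiers from a small subtoken vocabulary.
    spaced = identifier.replace('_', ' ')
    parts = re.findall(r'[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+', spaced)
    return [p.lower() for p in parts]
```

With such a split, a model never needs `getUserName` in its vocabulary; emitting the common pieces `get`, `user`, `name` suffices, which is what makes subtoken-level decoding attractive when paired training data is scarce.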