The highly non-linear nature of deep neural networks makes them susceptible to adversarial examples and gives them unstable gradients, which hinders interpretability. However, existing methods that address these issues, such as adversarial training, are expensive and often sacrifice predictive accuracy. In this work, we consider curvature, a mathematical quantity that encodes the degree of non-linearity. Using it, we demonstrate low-curvature neural networks (LCNNs) that obtain drastically lower curvature than standard models while exhibiting similar predictive performance, which leads to improved robustness and stable gradients at only a marginally increased training time. To achieve this, we minimize a data-independent upper bound on the curvature of the neural network, which decomposes the overall curvature in terms of the curvatures and slopes of its constituent layers. To minimize this bound efficiently, we introduce two novel architectural components: first, a non-linearity called centered-softplus, a stable variant of the softplus non-linearity, and second, a Lipschitz-constrained batch normalization layer. Our experiments show that LCNNs have lower curvature, more stable gradients and increased off-the-shelf adversarial robustness compared to their standard high-curvature counterparts, without affecting predictive performance. Our approach is easy to use and can be readily incorporated into existing neural network models.
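As an illustration of the two components, the sketch below (in PyTorch) shows one plausible form of a centered softplus, shifted so that it passes through the origin, together with a batch-normalization layer whose per-feature gain is clipped; the exact parameterizations used in the paper may differ.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenteredSoftplus(nn.Module):
    """Softplus shifted so it passes through the origin:
    s_beta(x) = (softplus(beta * x) - log 2) / beta  (assumed form)."""
    def __init__(self, beta: float = 1.0):
        super().__init__()
        self.beta = beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (F.softplus(self.beta * x) - math.log(2.0)) / self.beta

class LipschitzBatchNorm1d(nn.Module):
    """Batch normalization whose per-feature gain |gamma| / sqrt(var + eps)
    is clipped to `max_gain`, bounding the layer's slope (assumed mechanism)."""
    def __init__(self, num_features: int, max_gain: float = 1.0,
                 eps: float = 1e-5, momentum: float = 0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))
        self.max_gain, self.eps, self.momentum = max_gain, eps, momentum

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            mean, var = x.mean(dim=0), x.var(dim=0, unbiased=False)
            with torch.no_grad():
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        else:
            mean, var = self.running_mean, self.running_var
        # Clip the effective gain so the layer's Lipschitz constant stays below max_gain.
        gain = torch.clamp(self.weight.abs() / torch.sqrt(var + self.eps), max=self.max_gain)
        return torch.sign(self.weight) * gain * (x - mean) + self.bias

if __name__ == "__main__":
    block = nn.Sequential(nn.Linear(16, 32), LipschitzBatchNorm1d(32), CenteredSoftplus(beta=5.0))
    print(block(torch.randn(8, 16)).shape)  # torch.Size([8, 32])
```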
This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image. However, traditional data augmentation may introduce undesirable distortions of identity features, which is not always desirable for id-sensitive ReID tasks. In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) designed to generate augmented views for contrastive learning. A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Deviating from previous GAN-based ReID methods that only work in the id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
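A minimal sketch of the contrastive part of such a pipeline is shown below; `generator` and `encoder` are hypothetical placeholders for the 3D mesh guided generator and the ReID backbone, and the InfoNCE loss only stands in for the specific contrastive losses proposed in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(query: torch.Tensor, key: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE: the i-th query should match the i-th key against all other keys in the batch."""
    q, k = F.normalize(query, dim=1), F.normalize(key, dim=1)
    logits = q @ k.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

def contrastive_step(encoder, generator, images, new_poses):
    """One training step: contrast each image with its GAN-augmented view (here an
    id-unrelated pose change); an analogous loss can be built for id-related augmentations."""
    views = generator(images, new_poses)                  # GAN-generated augmented view
    return info_nce(encoder(images), encoder(views))
```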
3D autonomous driving semantic segmentation using deep learning has become a well-studied subject, providing methods that can reach very high performance. Nonetheless, because of the limited size of the training datasets, these models cannot see every type of object and scene found in real-world applications. The ability to be reliable in these various unknown environments is called domain generalization. Despite its importance, domain generalization remains relatively unexplored for 3D autonomous driving semantic segmentation. To fill this gap, this paper presents the first benchmark for this application by testing state-of-the-art methods and discussing the difficulty of tackling LiDAR domain shifts. We also propose the first method designed to address this domain generalization problem, which we call 3DLabelProp. This method relies on leveraging the geometry and sequentiality of LiDAR data to enhance its generalization performance by working on partially accumulated point clouds. It reaches a mIoU of 52.6% on SemanticPOSS while being trained only on SemanticKITTI, making it the state-of-the-art method for generalization (+7.4% better than the second-best method). The code for this method will be available on GitHub.
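The partial accumulation that 3DLabelProp builds on can be illustrated by the generic registration step below (NumPy); the pose convention and the number of accumulated scans are assumptions, and the method's actual label-propagation logic is not shown.

```python
import numpy as np

def accumulate_scans(scans, poses, keep_last=10):
    """Merge the last `keep_last` LiDAR scans into the coordinate frame of the newest one.

    scans: list of (N_i, 3) point arrays, each in its own sensor frame.
    poses: list of (4, 4) sensor-to-world transforms, one per scan (assumed convention).
    """
    world_to_last = np.linalg.inv(poses[-1])
    merged = []
    for pts, pose in zip(scans[-keep_last:], poses[-keep_last:]):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])        # homogeneous coordinates
        merged.append((homo @ (world_to_last @ pose).T)[:, :3])    # express in last-scan frame
    return np.concatenate(merged, axis=0)
```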
In this paper, hypernetworks are trained to generate behaviors across a range of unseen task conditions, via a novel TD-based training objective and data from a set of near-optimal RL solutions for training tasks. This work relates to meta RL, contextual RL, and transfer learning, with a particular focus on zero-shot performance at test time, enabled by knowledge of the task parameters (also known as context). Our technical approach views each RL algorithm as a mapping from the MDP specifics to the near-optimal value function and policy, and seeks to approximate it with a hypernetwork that can generate near-optimal value functions and policies given the parameters of the MDP. We show that, under certain conditions, this mapping can be considered a supervised learning problem. We empirically evaluate the effectiveness of our method for zero-shot transfer to new reward and transition dynamics on a series of continuous control tasks from the DeepMind Control Suite. Our method demonstrates significant improvements over baselines from multitask and meta RL approaches.
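The core mechanism, a hypernetwork mapping task parameters to the weights of a policy, can be sketched as follows in PyTorch; the linear target policy and layer sizes are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PolicyHypernetwork(nn.Module):
    """Maps task/MDP parameters (the context) to the weights of a small linear policy."""
    def __init__(self, context_dim: int, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.state_dim, self.action_dim = state_dim, action_dim
        n_params = action_dim * state_dim + action_dim   # weight + bias of the target policy
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_params)
        )

    def forward(self, context: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        params = self.net(context)                                   # (B, n_params)
        w = params[:, : self.action_dim * self.state_dim]
        w = w.view(-1, self.action_dim, self.state_dim)              # (B, A, S) generated weights
        b = params[:, self.action_dim * self.state_dim :]            # (B, A) generated bias
        return torch.einsum("bas,bs->ba", w, state) + b              # action for each task

if __name__ == "__main__":
    hyper = PolicyHypernetwork(context_dim=4, state_dim=8, action_dim=2)
    ctx, s = torch.randn(16, 4), torch.randn(16, 8)
    print(hyper(ctx, s).shape)  # torch.Size([16, 2])
```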
Accurate high-altitude wind forecasting is important for air traffic control, and the large volume of data available for this task makes deep neural network-based models a possibility. However, special methods are required because the data is measured only sparsely: along the main aircraft trajectories, which are themselves arranged sparsely in space, namely along the main air corridors. Several deep learning approaches have been proposed, and in this work we show that Transformers can fit this data efficiently and are able to extrapolate coherently from a context set. We show this through an extensive comparison of Transformers with numerous existing deep learning baselines from the literature. Besides high-altitude wind forecasting, we compare competing models on other dynamical physical systems, namely those modelled by partial differential equations, in particular the Poisson equation and the Darcy flow equation. For these experiments, when the data is arranged non-regularly in space, Transformers outperform all the other evaluated methods. We also compare them in a more standard setup where the data is arranged on a grid, and show that Transformers remain competitive with state-of-the-art methods even though they do not require regular spacing. The code and datasets of the different experiments will be made publicly available at publication time.
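One generic way a Transformer can consume such non-gridded data is sketched below: observed (coordinate, value) pairs form a context set, and query coordinates cross-attend to it. This is an illustrative encoder under assumed dimensions, not necessarily the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class ScatteredFieldTransformer(nn.Module):
    """Predict a scalar field at query locations from irregularly placed observations."""
    def __init__(self, coord_dim=2, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.ctx_embed = nn.Linear(coord_dim + 1, d_model)   # coordinates + observed value
        self.qry_embed = nn.Linear(coord_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, ctx_xy, ctx_val, qry_xy):
        # Encode the unordered context set, then let query coordinates attend to it.
        ctx = self.encoder(self.ctx_embed(torch.cat([ctx_xy, ctx_val], dim=-1)))
        qry = self.qry_embed(qry_xy)
        attended, _ = self.cross_attn(qry, ctx, ctx)
        return self.head(attended).squeeze(-1)               # (B, n_queries) predicted values

if __name__ == "__main__":
    model = ScatteredFieldTransformer()
    ctx_xy, ctx_val, qry_xy = torch.rand(3, 50, 2), torch.rand(3, 50, 1), torch.rand(3, 20, 2)
    print(model(ctx_xy, ctx_val, qry_xy).shape)  # torch.Size([3, 20])
```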
Voice activity detection (VAD) aims to detect speech segments in an audio signal, a necessary first step for many of today's speech-based applications. Current state-of-the-art methods focus on training neural networks directly on acoustic features such as Mel filter banks (MFBs). Such methods therefore require an extra normalization step to adapt to a new domain whose acoustics differ, which may simply result from a change of speaker, microphone or environment. Moreover, this normalization step is usually a fairly basic method with certain limitations, such as being highly sensitive to the amount of data available from the new domain. Here, we exploit the crowd-sourced Common Voice (CV) corpus to show that representations based on self-supervised learning (SSL) adapt well to different domains, because they are computed over speech utterances from multiple domains. SSL representations also achieve better results than systems based on hand-crafted representations (MFBs) and off-the-shelf VADs, with significant improvements in cross-domain settings.
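A minimal sketch of how such SSL representations could feed a VAD is given below: a light frame-level classifier on top of frozen features of shape (batch, frames, dim) from any pretrained encoder such as wav2vec 2.0; the head shown is an assumption rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class FrameVAD(nn.Module):
    """Frame-level speech/non-speech classifier on top of frozen SSL features."""
    def __init__(self, feat_dim=768, hidden=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, ssl_feats: torch.Tensor) -> torch.Tensor:
        # One speech-probability logit per frame.
        return self.head(ssl_feats).squeeze(-1)

if __name__ == "__main__":
    vad = FrameVAD()
    feats = torch.randn(2, 300, 768)          # e.g. 300 frames of wav2vec 2.0 features
    speech_prob = torch.sigmoid(vad(feats))   # (2, 300) per-frame speech probabilities
    print(speech_prob.shape)
```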
Most automatic emotion recognition systems exploit time-continuous annotations of emotion in order to provide a fine-grained description of spontaneous expressions as observed in real life. Because emotion is fairly subjective, annotation is usually performed by several annotators, each providing a trace for a given dimension, i.e., a time-continuous series describing a dimension such as arousal or valence. However, annotations of the same expression rarely agree in time or value, which adds bias and delay to the traces used to learn emotion prediction models. We therefore propose a method that dynamically compensates for inconsistencies between annotations and synchronizes the traces with the corresponding acoustic features using recurrent neural networks. Experimental evaluations were carried out on several emotion datasets that include Chinese, French, German and Hungarian participants interacting remotely, either in noise-free conditions or in the wild. The results show that, for both arousal and valence, our method significantly increases inter-annotator agreement as well as the correlation between the traces and the audio features. In addition, improvements are obtained in the automatic prediction of these dimensions with simple lightweight models, especially for valence in noise-free conditions and for arousal on recordings captured in the wild.
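As a much simpler stand-in for the learned, RNN-based synchronization described above, the snippet below aligns one annotation trace to a reference signal by the lag that maximizes their correlation; it only illustrates what synchronizing traces with acoustic features means, not the paper's method.

```python
import numpy as np

def align_trace(trace: np.ndarray, reference: np.ndarray, max_lag: int = 100) -> np.ndarray:
    """Shift `trace` by the lag (in frames) that maximizes its correlation with `reference`.
    Wrap-around at the edges is ignored for simplicity."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        corr = np.corrcoef(np.roll(trace, lag), reference)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return np.roll(trace, best_lag)
```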
Biases in datasets can be very detrimental to proper statistical estimation. To counter this issue, importance weighting methods have been developed to match any biased distribution to its corresponding unbiased target distribution. The seminal kernel mean matching (KMM) method is still considered the state of the art in this field of research today. However, one of the main drawbacks of this method is its computational burden on large datasets. Building on the previous works of Huang et al. (2007) and de Mathelin et al. (2021), we derive a novel importance weighting algorithm that scales to large datasets by using a neural network to predict the instance weights. We show on multiple public datasets, under various sample biases, that our proposed approach drastically reduces the computation time on large datasets while preserving a sample bias correction performance similar to other importance weighting methods. The proposed approach appears to be the only one able to provide relevant reweighting in a reasonable time for large datasets of up to two million data points.
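The idea of replacing the KMM quadratic program with a weight-predicting network can be sketched as follows; the RBF kernel, the soft constraint on the average weight and the network shape are assumptions about one reasonable instantiation, not the authors' exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_kernel(a: torch.Tensor, b: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Gaussian RBF kernel matrix between two sets of samples."""
    return torch.exp(-gamma * torch.cdist(a, b) ** 2)

class WeightNet(nn.Module):
    """Predicts a positive importance weight for each source instance."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.softplus(self.net(x)).squeeze(-1)

def kmm_style_loss(weights, source, target, gamma=1.0, lam=1.0):
    """Kernel-mean-matching style discrepancy between the weighted source distribution
    and the target distribution, plus a soft constraint keeping the mean weight near 1."""
    w = weights / weights.mean()
    n, m = source.size(0), target.size(0)
    k_ss, k_st, k_tt = rbf_kernel(source, source, gamma), rbf_kernel(source, target, gamma), rbf_kernel(target, target, gamma)
    mmd = (w @ k_ss @ w) / n ** 2 - 2 * (w @ k_st).sum() / (n * m) + k_tt.mean()
    return mmd + lam * (weights.mean() - 1.0) ** 2
```

A training loop would then alternate mini-batch gradient steps on this loss over source and target samples, which is what allows the weights to be computed without solving KMM's quadratic program over the full dataset.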
Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, and learning in a world model is one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to remain accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. On the Atari 100k benchmark, IRIS achieves a mean human-normalized score of 1.046 and outperforms humans on 10 out of 26 games. Our approach sets a new state of the art for methods without lookahead search, and even surpasses MuZero. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our codebase at https://github.com/eloialonso/iris.
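A stripped-down sketch of the two world-model components is given below; the flat (non-convolutional) tokenizer, the sizes, and the omission of reward/termination heads and of the straight-through VQ training path are simplifications for illustration, not IRIS's actual implementation (see the released codebase for that).

```python
import torch
import torch.nn as nn

class DiscreteAutoencoder(nn.Module):
    """Minimal VQ-style tokenizer: encodes a frame into discrete codebook indices and decodes them back."""
    def __init__(self, obs_dim=64, n_tokens=4, codebook_size=512, d_code=32):
        super().__init__()
        self.n_tokens = n_tokens
        self.encoder = nn.Linear(obs_dim, n_tokens * d_code)
        self.decoder = nn.Linear(n_tokens * d_code, obs_dim)
        self.codebook = nn.Embedding(codebook_size, d_code)

    def encode(self, obs: torch.Tensor) -> torch.Tensor:
        z = self.encoder(obs).view(obs.size(0), self.n_tokens, -1)            # (B, T, d_code)
        codes = self.codebook.weight.unsqueeze(0).expand(obs.size(0), -1, -1) # (B, K, d_code)
        return torch.cdist(z, codes).argmin(dim=-1)                           # nearest-code indices

    def decode(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.codebook(tokens).flatten(1))

class WorldModelTransformer(nn.Module):
    """Autoregressive (GPT-style) Transformer over the token sequence; IRIS additionally
    predicts rewards and episode ends, omitted here."""
    def __init__(self, codebook_size=512, d_model=128, nhead=4, layers=2, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(codebook_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        T = tokens.size(1)
        pos = torch.arange(T, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        causal = torch.triu(torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
        return self.head(self.blocks(x, mask=causal))                         # next-token logits

if __name__ == "__main__":
    tok, wm = DiscreteAutoencoder(), WorldModelTransformer()
    obs = torch.randn(2, 64)
    tokens = tok.encode(obs)              # (2, 4) discrete codes per frame
    print(tok.decode(tokens).shape)       # torch.Size([2, 64])
    print(wm(tokens).shape)               # torch.Size([2, 4, 512])
```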
Current self-supervised approaches for learning skeleton action representations usually focus on constrained scenarios where videos and skeleton data are recorded in laboratory settings. When dealing with skeleton data estimated from real-world videos, such methods perform poorly because of the large variations across subjects and camera viewpoints. To address this issue, we introduce self-supervised skeleton action representation learning via a novel view-invariant autoencoder (ViA). ViA leverages motion retargeting between different human performers as a pretext task, in order to disentangle latent action-specific "motion" features on top of the visual representation of 2D or 3D skeleton sequences. Such "motion" features are invariant to skeleton geometry and camera view, and allow ViA to facilitate cross-subject and cross-view action classification tasks. We conduct a study focusing on transfer learning for skeleton-based action recognition, with self-supervised pre-training on real-world data such as Posetics. Our results show that the skeleton representations learned with ViA are generic enough to improve upon state-of-the-art action classification accuracy, not only on 3D laboratory datasets such as NTU-RGB+D 60 and NTU-RGB+D 120, but also on real-world datasets where only 2D data can be accurately estimated, such as Toyota Smarthome, UAV-Human and Penn Action.
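The motion-retargeting pretext task can be illustrated with the toy autoencoder below, which splits a skeleton sequence into a "motion" code and a performer-specific code and decodes any pairing of the two; the GRU encoders, feature sizes and 2D 17-joint input are placeholder assumptions, not ViA's architecture.

```python
import torch
import torch.nn as nn

class RetargetingAutoencoder(nn.Module):
    """Disentangles a skeleton sequence into per-frame 'motion' features and a
    performer-specific code, then decodes one performer's motion onto another's skeleton."""
    def __init__(self, joints=17, coords=2, d_motion=128, d_char=64, hidden=256):
        super().__init__()
        in_dim = joints * coords
        self.motion_enc = nn.GRU(in_dim, d_motion, batch_first=True)
        self.char_enc = nn.GRU(in_dim, d_char, batch_first=True)
        self.decoder = nn.GRU(d_motion + d_char, hidden, batch_first=True)
        self.out = nn.Linear(hidden, in_dim)

    def forward(self, motion_seq: torch.Tensor, char_seq: torch.Tensor) -> torch.Tensor:
        B, T, _ = motion_seq.shape
        m, _ = self.motion_enc(motion_seq)                 # per-frame motion features
        _, c = self.char_enc(char_seq)                     # final hidden state = character code
        c = c[-1].unsqueeze(1).expand(-1, T, -1)
        dec, _ = self.decoder(torch.cat([m, c], dim=-1))
        return self.out(dec)                               # retargeted skeleton sequence

if __name__ == "__main__":
    model = RetargetingAutoencoder()
    a, b = torch.randn(4, 50, 34), torch.randn(4, 50, 34)  # two performers, 50 frames each
    print(model(a, b).shape)                               # torch.Size([4, 50, 34])
```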