Multilingual end-to-end models have shown great improvement over monolingual systems. With the development of speech pre-training methods, self-supervised multilingual speech representation learning such as XLSR has proven successful in improving the performance of multilingual automatic speech recognition (ASR). However, similar to supervised learning, multilingual pre-training may also suffer from language interference, which further limits the application of multilingual systems. In this paper, we introduce several techniques for improving self-supervised multilingual pre-training by leveraging auxiliary language information, including language adversarial training, language embeddings, and language adaptive training during the pre-training stage. We conduct experiments on a multilingual ASR task covering 16 languages. Our experimental results demonstrate a 14.3% relative gain over the standard XLSR model and a 19.8% relative gain over a multilingual model without pre-training.
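The abstract does not spell out the adversarial setup; a common way to realize language adversarial training is a gradient reversal layer between the shared encoder and a language classifier, so the encoder is pushed toward language-invariant features. A minimal PyTorch sketch under that assumption (module names and the pooling choice are illustrative, not the paper's specification):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class LanguageAdversary(nn.Module):
    """Language classifier fed through gradient reversal: minimizing its loss
    trains the classifier while the shared encoder unlearns language cues."""
    def __init__(self, feat_dim: int, num_languages: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(feat_dim, num_languages)

    def forward(self, encoder_features):          # (batch, time, feat_dim)
        pooled = encoder_features.mean(dim=1)     # utterance-level pooling
        reversed_feat = GradReverse.apply(pooled, self.lambd)
        return self.classifier(reversed_feat)     # language logits
```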
Schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. We hypothesized that applying deep learning to a structural neuroimaging dataset could detect disease-related alterations and improve classification and diagnostic accuracy. We tested this hypothesis using a single, widely available, conventional T1-weighted MRI scan, from which we extracted 3D whole-brain structures using standard post-processing methods. A deep learning model was then developed, optimized, and evaluated on three open datasets containing T1-weighted MRI scans of patients with schizophrenia. Our proposed model outperformed the benchmark model, which was also trained on structural MR images using a 3D CNN architecture. Our model nearly perfectly (area under the ROC curve = 0.987) distinguished patients with schizophrenia from healthy controls on unseen structural MRI scans. Regional analysis localized subcortical regions and the ventricles as the most predictive brain areas. Subcortical structures play a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. Our findings corroborate that schizophrenia is associated with widespread alterations in subcortical brain structures, and that subcortical structural information provides prominent features for diagnostic classification. Together, these results further demonstrate the potential of deep learning to improve the diagnosis of schizophrenia and to identify its structural neuroimaging signatures from a single, standard T1-weighted brain MRI.
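The abstract names a 3D CNN over whole-brain T1 volumes but gives no architectural details; the following is a minimal illustrative sketch of such a classifier, where every layer width and depth is an assumption rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class Brain3DCNN(nn.Module):
    """Toy 3D CNN for binary classification of whole-brain T1 volumes.
    Layer sizes are illustrative, not the paper's architecture."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                 # global pooling
        )
        self.classifier = nn.Linear(64, 1)           # patient vs. control logit

    def forward(self, x):                            # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))
```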
Understanding human intention during interaction has been a long-standing topic, with applications in human-robot interaction, virtual reality, and surveillance. In this study, we focus on full-body interaction with large-sized daily objects and aim to predict the future states of both the object and the human given sequential observations of the human-object interaction. As no existing dataset is dedicated to full-body interaction with large-sized daily objects, we collected a large-scale dataset containing thousands of interactions for training and evaluation purposes. We also observe that an object's intrinsic physical properties are useful for object motion prediction, and thus design a set of object dynamic descriptors to encode such intrinsic properties. We treat the object dynamic descriptors as a new modality and propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task. We show that the proposed network, when consuming the dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects. We also demonstrate that the predicted results are useful for human-robot collaboration.
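One plausible way to fuse the two modalities the abstract describes is to broadcast the per-object descriptor onto every node of a skeleton-plus-object graph before message passing; a hedged sketch of one such layer (this is a generic graph convolution, not HO-GCN's actual design, and all dimensions are assumptions):

```python
import torch
import torch.nn as nn

class FusionGCNLayer(nn.Module):
    """One graph-convolution step over a human-skeleton + object-node graph.
    Node features concatenate motion states with broadcast dynamic
    descriptors; names and sizes are assumptions, not HO-GCN's spec."""
    def __init__(self, motion_dim: int, descriptor_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(motion_dim + descriptor_dim, hidden_dim)

    def forward(self, motion, descriptors, adj):
        # motion: (batch, nodes, motion_dim); descriptors: (batch, descriptor_dim)
        # adj: (nodes, nodes) normalized adjacency of the skeleton-object graph
        desc = descriptors.unsqueeze(1).expand(-1, motion.size(1), -1)
        h = self.proj(torch.cat([motion, desc], dim=-1))          # fuse modalities
        return torch.relu(torch.einsum("ij,bjd->bid", adj, h))    # neighbor mixing
```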
Learning-based navigation systems are widely used in autonomous applications such as robotics, unmanned vehicles, and drones. Specialized hardware accelerators have been proposed to achieve high performance and energy efficiency for these navigation tasks. However, transient and permanent faults in hardware systems are increasing and can catastrophically violate mission safety. Meanwhile, traditional redundancy-based protection methods are challenging to deploy on resource-constrained edge applications. In this paper, we experimentally evaluate the resilience of navigation systems across algorithms, fault models, and data types, from the perspectives of both RL training and inference. We further propose two efficient fault-mitigation techniques that achieve a 2x success rate and a 39% quality-of-flight improvement for learning-based navigation systems.
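Resilience studies of this kind are typically driven by injecting random bit flips into network parameters; a minimal sketch of such an injector (illustrative only, not the paper's fault-injection tooling) is:

```python
import random
import struct
import torch

def flip_random_bit(model: torch.nn.Module) -> None:
    """Inject a single transient fault: flip one random bit of one random
    float32 weight in-place. Illustrative, not the paper's fault injector."""
    params = [p for p in model.parameters() if p.numel() > 0]
    tensor = random.choice(params).data.view(-1)
    idx = random.randrange(tensor.numel())
    bit = random.randrange(32)
    as_int = struct.unpack("<I", struct.pack("<f", tensor[idx].item()))[0]
    corrupted = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0]
    tensor[idx] = corrupted
```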
Weight decay is commonly used to ensure good generalization in the training practice of deep neural networks with batch normalization (BN-DNNs), where certain convolutional layers are invariant to weight rescaling due to the normalization. In this paper, we demonstrate that the practical usage of weight decay still suffers from unresolved problems, despite existing theoretical work on explaining its effect in BN-DNNs. On the one hand, when a non-adaptive learning rate is used, e.g., SGD with momentum, the effective learning rate keeps increasing even after the initial training stage, which leads to overfitting in many neural architectures. On the other hand, in both SGDM and adaptive learning rate optimizers such as Adam, the effect of weight decay on generalization is highly sensitive to the hyperparameter, so finding an optimal weight decay parameter requires an extensive parameter search. To address these weaknesses, we propose regularizing the weight norm with a simple yet effective weight rescaling (WRS) scheme as an alternative to weight decay. WRS controls the weight norm by explicitly rescaling it to the unit norm, which prevents gradients from growing while still ensuring a sufficiently large effective learning rate to improve generalization. On a variety of computer vision applications, including image classification, object detection, semantic segmentation, and crowd counting, we show the effectiveness and robustness of WRS compared with weight decay, implicit weight rescaling (weight standardization), and gradient projection (AdamP).
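As described, WRS reduces to reprojecting rescaling-invariant weights back to unit norm after each optimizer step; a minimal sketch of that loop (per-filter granularity is an assumption here, as the abstract does not specify it) could look like:

```python
import torch

@torch.no_grad()
def weight_rescale(model: torch.nn.Module) -> None:
    """After an optimizer step, rescale each conv filter back to unit norm.
    Only sensible for layers followed by BN, whose output is invariant to
    this rescaling; per-filter granularity is an assumption."""
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            w = module.weight                        # (out_ch, in_ch, k, k)
            norms = w.flatten(1).norm(dim=1).clamp_min(1e-12)
            w.div_(norms.view(-1, 1, 1, 1))          # unit-norm each filter

# usage inside a training loop:
#   loss.backward(); optimizer.step(); weight_rescale(model)
```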
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
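The implicit alignment hinges on giving tokens from both modalities position embeddings derived from shared 3D coordinates; a hedged sketch of such a coordinate encoder (a simplification for illustration, not CMT's exact design) is:

```python
import torch
import torch.nn as nn

class Coord3DPositionEncoder(nn.Module):
    """Map 3D coordinates to token-dim embeddings, so image tokens (via their
    back-projected frustum points) and LiDAR tokens share one positional
    space. A simplified stand-in, not CMT's actual encoding."""
    def __init__(self, embed_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, embed_dim), nn.ReLU(inplace=True),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(xyz)      # (num_tokens, 3) -> (num_tokens, embed_dim)

# tokens from either modality then become: token_feat + encoder(token_xyz)
```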
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
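For reference, the K-class simplex equiangular tight frame that such a regularizer targets has a closed form, M = sqrt(K/(K-1)) U (I_K - (1/K) 11^T) with U column-orthonormal; a small sketch constructing it (a textbook formula, with U left as a free random choice) is:

```python
import torch

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Return K unit vectors in feat_dim forming a simplex ETF.
    Pairwise cosine similarity between distinct columns is -1/(K-1)."""
    assert feat_dim >= num_classes
    # random column-orthonormal U: (feat_dim, K)
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    center = (torch.eye(num_classes)
              - torch.ones(num_classes, num_classes) / num_classes)
    scale = (num_classes / (num_classes - 1)) ** 0.5
    return scale * u @ center          # (feat_dim, K), unit-norm columns

etf = simplex_etf(4, 8)
print((etf.T @ etf).round(decimals=2))  # diagonal 1.0, off-diagonals -1/3
```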
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
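The re-indexing the abstract describes, pivoting a sequence of objects into per-key value sequences, is easy to make concrete; a small sketch of that transformation (field names and the padding token are invented for illustration) is:

```python
from collections import defaultdict

# a (semi) structured object sequence: one dict of key-value pairs per step
events = [
    {"action": "view",  "item": "A13", "device": "mobile"},
    {"action": "click", "item": "A13"},
    {"action": "buy",   "item": "B07", "device": "desktop"},
]

def to_key_conditioned_sequences(objects, missing="<pad>"):
    """Pivot an object sequence into per-key value sequences: the layout
    TVM encodes before KA self-attends across keys."""
    universe = sorted({k for obj in objects for k in obj})
    per_key = defaultdict(list)
    for obj in objects:
        for key in universe:
            per_key[key].append(obj.get(key, missing))
    return dict(per_key)

print(to_key_conditioned_sequences(events))
# {'action': ['view', 'click', 'buy'],
#  'device': ['mobile', '<pad>', 'desktop'],
#  'item':   ['A13', 'A13', 'B07']}
```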
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem in visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
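The photometric error the abstract invokes is the standard self-supervised objective between a target frame and a synthesized view; a hedged sketch of that loss, using the SSIM + L1 mix customary in monocular depth work (the 0.85 weighting is an assumption, not PPGeo's stated value):

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, reconstructed, alpha: float = 0.85):
    """SSIM + L1 photometric error between a frame and its warped
    reconstruction; alpha=0.85 is the customary mix, assumed here."""
    l1 = (target - reconstructed).abs().mean(dim=1, keepdim=True)

    # local statistics via 3x3 average pooling (simplified SSIM)
    mu_x = F.avg_pool2d(target, 3, 1, 1)
    mu_y = F.avg_pool2d(reconstructed, 3, 1, 1)
    sigma_x = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(reconstructed ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(target * reconstructed, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    ssim_loss = ((1 - ssim) / 2).clamp(0, 1).mean(dim=1, keepdim=True)

    return (alpha * ssim_loss + (1 - alpha) * l1).mean()
```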
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups, with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are the new challenges faced by surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
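The quality-varied pairs of step (2) can be approximated by degrading each face crop and treating the two versions as positives in a standard contrastive objective; a sketch under those assumptions (the degradation recipe and NT-Xent-style loss are stand-ins, not CQIL's exact formulation):

```python
import torch
import torch.nn.functional as F

def degrade(x: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """Simulate surveillance-quality imagery: downsample then upsample.
    A stand-in for the paper's generated sample pairs."""
    small = F.interpolate(x, scale_factor=1 / factor, mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)

def quality_invariance_loss(encoder, images, temperature: float = 0.1):
    """NT-Xent-style loss pulling high- and low-quality views of the same
    face together; an assumed objective, not CQIL's exact loss."""
    z1 = F.normalize(encoder(images), dim=1)
    z2 = F.normalize(encoder(degrade(images)), dim=1)
    logits = z1 @ z2.t() / temperature        # (batch, batch) similarities
    labels = torch.arange(images.size(0), device=images.device)
    return F.cross_entropy(logits, labels)
```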