This paper targets unsupervised skeleton-based action representation learning and proposes a new Hierarchical Contrast (HiCo) framework. Unlike existing contrastive solutions, which typically encode an input skeleton sequence into an instance-level feature and perform contrast holistically, HiCo encodes the input into features of multiple levels and performs contrast hierarchically. Specifically, given a human skeleton sequence, we encode it into multiple feature vectors of different granularities from both the temporal and spatial domains via sequence-to-sequence (S2S) encoders and unified downsampling modules. The hierarchical contrast is then conducted at four levels: instance level, domain level, clip level, and part level. Moreover, HiCo is orthogonal to the S2S encoder, which allows us to flexibly plug in state-of-the-art S2S encoders. Extensive experiments on four datasets, i.e., NTU-60, NTU-120, and PKU-MMD I and II, show that HiCo achieves a new state of the art for unsupervised skeleton-based action representation learning on two downstream tasks, action recognition and retrieval, and that the learned action representations transfer well. We also show that our framework is effective for semi-supervised skeleton-based action recognition. Our code is available at https://github.com/HuiGuanLab/HiCo.
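As a rough, hedged sketch of what contrasting at several granularities can look like in PyTorch (the function names, the dict-of-levels interface, and the flatten-then-InfoNCE treatment of clip/part features are our illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.07):
    """Standard InfoNCE between two batches of L2-normalized features."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                       # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def hierarchical_contrast(feats_a, feats_b, tau=0.07):
    """feats_a/feats_b: dicts of features from two augmented views, e.g.
    {'instance': (B, D), 'clip': (B, C, D), 'part': (B, P, D)}.
    Per-level losses are averaged; clip/part features are contrasted
    per position by folding the extra axis into the batch."""
    total = 0.0
    for level in feats_a:
        za, zb = feats_a[level], feats_b[level]
        if za.dim() == 3:                            # (B, K, D) -> (B*K, D)
            za, zb = za.flatten(0, 1), zb.flatten(0, 1)
        total = total + info_nce(za, zb, tau)
    return total / len(feats_a)
```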
Existing fake audio detection systems typically rely on expert experience to design acoustic features or to manually tune the hyperparameters of the network structure. However, manual parameter tuning can have a noticeable impact on the results, and it is almost impossible to set the optimal parameters by hand. This paper therefore proposes a fully automated end-to-end fake audio detection method. We first use a wav2vec pre-trained model to obtain high-level representations of speech. For the network architecture, we use a modified version of differentiable architecture search (DARTS) named light-DARTS. It learns deep speech representations while automatically learning and optimizing complex neural structures composed of convolution operations and residual blocks. Experimental results on the ASVspoof 2019 LA dataset show that our proposed system achieves an equal error rate (EER) of 1.08%, outperforming state-of-the-art single systems.
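For context, the core of DARTS that such a system builds on is the mixed operation: every candidate operation on an edge is blended by softmax-weighted architecture parameters that are learned jointly with the network weights. A minimal PyTorch sketch follows; the candidate set shown is illustrative, not light-DARTS's actual search space:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style mixed operation: the edge output is a softmax-weighted
    sum over candidate ops; `alpha` holds the architecture parameters."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv1d(channels, channels, 3, padding=1),  # 3-tap conv
            nn.Conv1d(channels, channels, 5, padding=2),  # 5-tap conv
            nn.AvgPool1d(3, stride=1, padding=1),         # average pooling
        ])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):                                 # x: (B, C, T)
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```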
In this paper, we focus on two tasks: 3D shape abstraction and semantic analysis. This is in contrast to current methods, which focus only on either 3D shape abstraction or semantic analysis. In addition, previous methods have difficulty producing instance-level semantic results, which limits their applications. We propose a novel method for jointly estimating 3D shape abstraction and semantic analysis. Our method first generates a number of 3D semantic candidate regions for a 3D shape; we then employ these candidates to directly predict semantic categories while simultaneously refining the parameters of the candidate regions with a deep convolutional neural network. Finally, we design an algorithm that fuses the predicted results to obtain the final semantic abstraction, which is shown to be an improvement over standard non-maximum suppression. Experimental results demonstrate that our method produces state-of-the-art results. We also show that our results can be readily applied to instance-level semantic part segmentation and shape matching.
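To illustrate how a fusion step can improve on standard non-maximum suppression, here is a hedged sketch that score-averages overlapping candidates instead of discarding them; `fuse_candidates` and `overlap_fn` are our illustrative names, and this is not the paper's exact algorithm:

```python
import numpy as np

def fuse_candidates(params, scores, overlap_fn, thresh=0.5):
    """Greedy NMS-style merge: instead of discarding candidates that
    overlap a higher-scoring one (standard NMS), their parameters are
    score-averaged into the kept candidate.
    params: (N, D) candidate region parameters; scores: (N,);
    overlap_fn(a, b) returns a scalar overlap in [0, 1]."""
    order = np.argsort(scores)[::-1]                 # high score first
    fused, used = [], np.zeros(len(scores), dtype=bool)
    for i in order:
        if used[i]:
            continue
        group = [j for j in order if not used[j]
                 and overlap_fn(params[i], params[j]) >= thresh]
        used[group] = True
        w = scores[group] / scores[group].sum()      # score weights
        fused.append((w[:, None] * params[group]).sum(axis=0))
    return np.stack(fused)
```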
We propose a parametric model that maps free-view images to a vector space encoding facial shape, expression, and appearance using a neural radiance field, namely the Morphable Facial NeRF (MoFaNeRF). Specifically, MoFaNeRF takes the encoded facial shape, expression, and appearance together with spatial coordinates and view direction as input to an MLP, and outputs the radiance of the spatial point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMMs), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for the eyes, mouth, and beard. Moreover, continuous face morphing can easily be achieved by interpolating the input shape, expression, and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model demonstrates strong capability across multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used for fitting, generation, and manipulation. Our code and model are released at https://github.com/zhuhao-nju/mofanerf.
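A minimal sketch of the conditional-NeRF idea, assuming 64-dimensional shape/expression/appearance codes and a standard positional encoding; `CondNeRF` is our simplified stand-in, not MoFaNeRF's actual architecture (which additionally uses identity-specific modulation and a texture encoder):

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Standard NeRF positional encoding applied per coordinate."""
    freqs = 2.0 ** torch.arange(n_freqs, device=x.device) * torch.pi
    enc = [fn(x[..., None, :] * freqs[:, None]) for fn in (torch.sin, torch.cos)]
    return torch.cat([e.flatten(-2) for e in enc], dim=-1)

class CondNeRF(nn.Module):
    """Conditional NeRF sketch: shape/expression/appearance codes are
    concatenated with the encoded 3D point and the view direction, and
    an MLP predicts density and RGB radiance."""
    def __init__(self, code_dim=3 * 64, n_freqs=6, width=256):
        super().__init__()
        in_dim = code_dim + 2 * n_freqs * 3 + 3  # codes + enc(xyz) + view dir
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 4),                 # (sigma, r, g, b)
        )
        self.n_freqs = n_freqs

    def forward(self, xyz, view_dir, shape_c, expr_c, app_c):
        h = torch.cat([positional_encoding(xyz, self.n_freqs),
                       view_dir, shape_c, expr_c, app_c], dim=-1)
        sigma, rgb = self.mlp(h).split([1, 3], dim=-1)
        return torch.relu(sigma), torch.sigmoid(rgb)
```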
Learning-based 3D shape segmentation is usually formulated as a semantic labeling problem, assuming that all parts of the training shapes are annotated with a given set of labels. This assumption, however, is impractical for learning fine-grained segmentation. Although most off-the-shelf CAD models are, by construction, composed of fine-grained parts, they usually lack semantic labels, and labeling those fine-grained parts is extremely tedious. We approach the problem with deep clustering, where the key idea is to learn part priors from a dataset of shapes with fine-grained segmentation but no part labels. Given a point-sampled 3D shape, we model the clustering priors of points with a similarity matrix, and achieve part segmentation by minimizing a novel low-rank loss. To handle highly dense point sets, we adopt a divide-and-conquer strategy: we partition the large point set into a number of blocks, and each block is segmented into parts using a deep-clustering-based network trained in a category-agnostic manner. We then train a graph convolutional network to merge the segments of all blocks into the final segmentation result. Evaluated on a challenging benchmark with fine-grained segmentation, our method shows state-of-the-art performance.
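As a hedged illustration of the similarity-matrix formulation, the sketch below builds pairwise affinities from per-point embeddings and penalizes the matrix's rank through its nuclear norm, a common convex surrogate; the exact form of the paper's low-rank loss may differ:

```python
import torch
import torch.nn.functional as F

def similarity_and_lowrank_loss(point_feats, rank_weight=0.1):
    """Build a pairwise similarity matrix from per-point embeddings and
    penalize its rank via the nuclear norm, pushing the points of one
    block toward a small number of part clusters.
    point_feats: (N, D) embeddings of one block of sampled points."""
    z = F.normalize(point_feats, dim=-1)
    sim = (z @ z.t()).clamp(min=0.0)          # (N, N) nonnegative affinities
    nuc = torch.linalg.matrix_norm(sim, ord='nuc')
    return sim, rank_weight * nuc
```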
We introduce PoliteRewrite, a dataset for polite language rewriting, a novel sentence rewrite task. Compared with previous text style transfer tasks, which can mostly be addressed by slight token- or phrase-level edits, polite language rewriting requires deep understanding of, and extensive sentence-level edits over, an offensive and impolite sentence to deliver the same message euphemistically and politely. This makes the task more challenging, not only for NLP models but also for human annotators. To reduce human effort and enable efficient annotation, we propose a novel annotation paradigm in which human annotators and GPT-3.5 collaborate to annotate PoliteRewrite. The released dataset contains 10K polite sentence rewrites annotated collaboratively by GPT-3.5 and humans, which can be used as a gold standard for training, validation, and testing, and 100K high-quality polite sentence rewrites produced by GPT-3.5 without human review. We hope this work (the dataset, 10K+100K, will be released soon) contributes to research on more challenging sentence rewriting and provokes further thought on resource annotation paradigms aided by large-scale pretrained models.
We study a multi-factor block model for variable clustering and connect it to regularized subspace clustering by formulating a distributionally robust version of nodewise regression. To solve the latter problem, we derive a convex relaxation, provide data-driven guidance on selecting the size of the robust region, and hence the regularization weighting parameter, and propose an ADMM algorithm for implementation. We validate our method in an extensive simulation study. Finally, we propose and apply a variant of our method to stock return data, obtaining interpretable clusters that facilitate portfolio selection, and compare its out-of-sample performance with other clustering methods in an empirical study.
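For a concrete sense of the ADMM machinery, here is a minimal sketch for the (non-robust) lasso form of one nodewise regression; the paper's distributionally robust version and its data-driven choice of the regularization weight are not reproduced here:

```python
import numpy as np

def admm_lasso(X, y, lam, rho=1.0, n_iter=200):
    """ADMM for the lasso  min_b 0.5*||y - X b||^2 + lam*||b||_1,
    the workhorse of one nodewise regression (regress one variable
    on the others)."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    L = np.linalg.cholesky(XtX + rho * np.eye(p))   # factor once, reuse
    b = z = u = np.zeros(p)
    for _ in range(n_iter):
        # b-update: ridge-like solve with the cached Cholesky factor
        b = np.linalg.solve(L.T, np.linalg.solve(L, Xty + rho * (z - u)))
        # z-update: soft-thresholding (proximal step of the l1 term)
        z = np.sign(b + u) * np.maximum(np.abs(b + u) - lam / rho, 0.0)
        u = u + b - z                               # dual update
    return z
```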
Ensemble learning is a straightforward way to improve the performance of almost any machine learning algorithm. Existing deep ensemble methods usually naively train many different models and then aggregate their predictions. In our view, this is suboptimal in two respects: i) naively training multiple models adds considerable computational burden, especially in the deep learning era; ii) purely optimizing each base model without considering their interactions limits the diversity of the ensemble and the performance gains. We tackle these issues by proposing deep negative correlation classification (DNCC), in which the accuracy-diversity trade-off is systematically controlled by seamlessly decomposing the loss function into individual accuracy and the correlation between individual models and the ensemble. DNCC yields a deep classification ensemble whose individual estimators are both accurate and negatively correlated. Thanks to the optimized diversity, DNCC works well even with a shared network backbone, which significantly improves its efficiency compared with most existing ensemble systems. Extensive experiments on multiple benchmark datasets and network structures demonstrate the superiority of the proposed method.
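A hedged sketch of such a decomposed objective, in the style of classic negative correlation learning: each head pays a cross-entropy term for accuracy plus a penalty on the correlation between its deviation from the ensemble and the other heads' deviations (`dncc_loss` and the exact penalty form are our assumptions, not necessarily the paper's loss):

```python
import torch
import torch.nn.functional as F

def dncc_loss(logits_list, target, lam=0.5):
    """Negative-correlation classification loss sketch.
    logits_list: list of (B, C) logits from M heads sharing a backbone;
    target: (B,) class indices; lam trades accuracy against diversity."""
    ens = torch.stack(logits_list).mean(dim=0)          # ensemble logits
    loss = 0.0
    for i, logit in enumerate(logits_list):
        ce = F.cross_entropy(logit, target)             # individual accuracy
        others = sum(l - ens for j, l in enumerate(logits_list) if j != i)
        corr = ((logit - ens) * others).mean()          # NCL correlation term
        loss = loss + ce + lam * corr
    return loss / len(logits_list)
```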
The input and output of most text generation tasks can be transformed into two sequences of tokens and modeled using sequence-to-sequence learning tools such as Transformers. These models are usually trained by maximizing the likelihood of the output text sequence, assuming that the input sequence and all gold preceding tokens are given during training; during inference, however, the model suffers from the exposure bias problem (i.e., during beam search it only has access to its previously predicted tokens rather than gold tokens). In this paper, we propose MoCa (Momentum Calibration) for text generation. MoCa is an online method that dynamically generates slowly evolving (but consistent) samples using a momentum moving-average generator with beam search, and learns to align its model scores of these samples with their actual qualities. Experiments on four text generation datasets (i.e., CNN/DailyMail, XSum, SAMSum, and Gigaword) show that MoCa consistently improves strong pre-trained Transformers with vanilla fine-tuning, and we achieve state-of-the-art results on the CNN/DailyMail and SAMSum datasets.
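As a hedged illustration of the score/quality alignment step, the sketch below ranks beam candidates by a quality metric and applies a pairwise margin loss on the model's scores, in the spirit of calibration objectives for generation; this is not MoCa's exact objective:

```python
import torch
import torch.nn.functional as F

def calibration_loss(cand_scores, cand_quality, margin=0.01):
    """Pairwise ranking calibration: candidates are sorted by actual
    quality (e.g. ROUGE vs. the reference) and the model's
    length-normalized log-probabilities are trained to respect that
    ranking with rank-scaled margins.
    cand_scores: (B, K) model scores; cand_quality: (B, K) metric values."""
    order = cand_quality.argsort(dim=-1, descending=True)
    s = cand_scores.gather(-1, order)        # model scores sorted by quality
    loss, K = 0.0, s.size(-1)
    for i in range(K - 1):
        for j in range(i + 1, K):
            # a better candidate should out-score a worse one by (j-i)*margin
            loss = loss + F.relu(s[:, j] - s[:, i] + (j - i) * margin).mean()
    return loss / (K * (K - 1) / 2)
```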
Graph neural networks (GNNs) have recently emerged as a promising paradigm for learning graph-structured data and have demonstrated wide success across domains such as recommendation systems, social networks, and electronic design automation (EDA). Like other deep learning (DL) methods, GNNs are being deployed in sophisticated modern hardware systems as well as dedicated accelerators. However, despite the popularity of GNNs and the recent efforts to bring GNNs to hardware, their fault tolerance and resilience have generally been overlooked. Inspired by the inherent algorithmic resilience of DL methods, this paper conducts, for the first time, a large-scale empirical study of GNN resilience, aiming to understand the relationship between hardware faults and GNN accuracy. By developing a customized fault injection tool on top of PyTorch, we perform extensive fault injection experiments on various GNN models and application datasets. We observe that the error resilience of GNN models varies by orders of magnitude across models and application datasets. Furthermore, we explore a low-cost error mitigation mechanism for GNNs to enhance their resilience. This GNN resilience study aims to open up new directions and opportunities for future GNN accelerator design and architectural optimization.
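A minimal sketch of what a PyTorch-level fault injection primitive can look like: flipping a single bit of a stored float32 weight to mimic a hardware fault (the helper names and the random-sampling policy are our assumptions, not the authors' tool):

```python
import struct
import torch

def flip_bit_(weight, index, bit):
    """Flip one bit of one float32 weight in place, mimicking a hardware
    fault in the stored parameter. `index` is a flat index into the
    (assumed contiguous) tensor; `bit` is in [0, 31]."""
    flat = weight.data.view(-1)
    as_int = struct.unpack('<I', struct.pack('<f', flat[index].item()))[0]
    flat[index] = struct.unpack('<f', struct.pack('<I', as_int ^ (1 << bit)))[0]

def inject_random_faults(model, n_faults=10, seed=0):
    """Randomly pick parameters and bit positions to corrupt; the model's
    accuracy drop under these faults can then be measured downstream."""
    g = torch.Generator().manual_seed(seed)
    params = [p for p in model.parameters() if p.dtype == torch.float32]
    for _ in range(n_faults):
        p = params[torch.randint(len(params), (1,), generator=g).item()]
        idx = torch.randint(p.numel(), (1,), generator=g).item()
        bit = torch.randint(32, (1,), generator=g).item()
        flip_bit_(p, idx, bit)
```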