Compared with joint positions, the accuracy of joint rotation and shape estimation has received relatively little attention in Skinned Multi-Person Linear model (SMPL)-based human mesh reconstruction from multi-view images. The work in this field falls roughly into two categories. The first approach performs joint estimation and then produces SMPL parameters by fitting SMPL to the estimated joints. The second approach regresses SMPL parameters directly from the input images through a convolutional neural network (CNN)-based model. However, these approaches suffer from a lack of information for resolving the ambiguity of joint rotation and shape reconstruction, and from the difficulty of network learning. To solve the aforementioned issues, we propose a two-stage method. The proposed method first estimates the coordinates of mesh vertices through a CNN-based model from the input images, and acquires SMPL parameters by fitting the SMPL model to the estimated vertices. The estimated mesh vertices provide sufficient information for determining joint rotation and shape, and are easier to learn than SMPL parameters. According to experiments using the Human3.6M and MPI-INF-3DHP datasets, the proposed method significantly outperforms previous works in terms of joint rotation and shape estimation, and achieves competitive performance in terms of joint position estimation.
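The second stage, fitting SMPL to predicted vertices, can be made concrete with a toy stand-in. The sketch below assumes a purely linear shape blendshape model (real SMPL additionally has pose blendshapes and linear blend skinning, so its fitting is an iterative optimization rather than one least-squares solve); all names and sizes here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for SMPL's linear shape space: V = template + B @ beta.
# (Real SMPL also applies pose blendshapes and linear blend skinning.)
n_verts, n_shape = 50, 10
template = rng.normal(size=(n_verts, 3))
shape_basis = rng.normal(size=(n_shape, n_verts * 3))  # akin to SMPL's shapedirs

def vertices(beta):
    return template + (beta @ shape_basis).reshape(n_verts, 3)

# Stage 1 (assumed given): CNN-predicted vertex coordinates.
beta_true = rng.normal(size=n_shape)
v_pred = vertices(beta_true)

# Stage 2: fit the model to the predicted vertices. With a linear model
# this reduces to least squares on the flattened residual.
residual = (v_pred - template).reshape(-1)
beta_fit, *_ = np.linalg.lstsq(shape_basis.T, residual, rcond=None)

print(np.allclose(beta_fit, beta_true, atol=1e-6))  # → True
```

The point of the two-stage split is visible even in this toy: the vertices fully determine the shape coefficients, so the fit is well posed.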
Although the performance of 3D human pose and shape estimation methods has improved significantly in recent years, existing approaches typically estimate the 3D pose defined in the camera or human-centered coordinate system. This makes it difficult to estimate a person's pure pose and motion in the world coordinate system from a video captured with a moving camera. To address this issue, this paper presents a camera-motion-agnostic approach for predicting 3D human pose and mesh defined in the world coordinate system. The core idea of the proposed approach is to estimate the difference between two adjacent global poses (i.e., the global motion), which is invariant to the choice of coordinate system, instead of the global pose, which is coupled to the camera motion. To this end, we propose a network based on bidirectional gated recurrent units (GRUs), called the global motion regressor (GMR), which predicts the global motion sequence, composed of relative joint rotations, from a local pose sequence. For evaluation, we use 3DPW and a synthetic dataset constructed in a moving-camera environment. We conduct extensive experiments and empirically demonstrate the effectiveness of the proposed method. The code and datasets are available at https://github.com/seonghyunkim1212/gmr
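The claimed invariance of global motion to the coordinate system can be checked directly: if every global pose is re-expressed through a fixed rotation C, the relative rotation between adjacent frames is unchanged. A minimal numerical sketch (toy rotations, not the paper's data):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Global root orientations at two adjacent frames, in some world frame.
R_t  = rot_z(0.3) @ rot_x(0.1)
R_t1 = rot_z(0.5) @ rot_x(0.2)

# Global motion: the relative rotation between adjacent global poses.
motion = R_t.T @ R_t1

# Re-express both poses in a different coordinate system C (e.g. a frame
# attached to a moved camera). The poses change, but the motion does not:
# (C R_t)^T (C R_t1) = R_t^T C^T C R_t1 = R_t^T R_t1.
C = rot_x(1.2) @ rot_z(-0.7)
motion_in_C = (C @ R_t).T @ (C @ R_t1)

print(np.allclose(motion, motion_in_C))  # → True
```

This is exactly why regressing global motion, rather than global pose, decouples the prediction target from the camera's own movement.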
Weakly-supervised temporal action localization (WTAL) learns to detect and classify action instances with only category labels. Most methods widely adopt the off-the-shelf Classification-Based Pre-training (CBP) to generate video features for action localization. However, the different optimization objectives of classification and localization make the temporally localized results suffer from a serious incompleteness issue. To tackle this issue without additional annotations, this paper considers distilling free action knowledge from Vision-Language Pre-training (VLP), since we surprisingly observe that the localization results of vanilla VLP have an over-completeness issue, which is exactly complementary to the CBP results. To fuse such complementarity, we propose a novel distillation-collaboration framework with two branches acting as CBP and VLP respectively. The framework is optimized through a dual-branch alternate training strategy. Specifically, during the B step, we distill the confident background pseudo-labels from the CBP branch; while during the F step, the confident foreground pseudo-labels are distilled from the VLP branch. As a result, the dual-branch complementarity is effectively fused to promote a strong alliance. Extensive experiments and ablation studies on THUMOS14 and ActivityNet1.2 reveal that our method significantly outperforms state-of-the-art methods.
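The B/F alternation can be illustrated with a toy example. The abstract does not give thresholds or score shapes, so everything below (snippet scores, the 0.2/0.8 cutoffs, the `-1` ignore label) is a hypothetical sketch of confident-pseudo-label distillation, not the paper's implementation:

```python
import numpy as np

# Per-snippet foreground probabilities for one video from the two branches.
# CBP-style scores tend to be incomplete (miss parts of the action), while
# VLP-style scores tend to be over-complete (leak into the background).
cbp_scores = np.array([0.1, 0.2, 0.9, 0.8, 0.3, 0.1])
vlp_scores = np.array([0.4, 0.7, 0.95, 0.9, 0.8, 0.5])

# B step: distill confident *background* pseudo-labels from the CBP branch
# (its low scores are trustworthy); -1 marks snippets left unsupervised.
bg_labels = np.where(cbp_scores <= 0.2, 0, -1)

# F step: distill confident *foreground* pseudo-labels from the VLP branch
# (its high scores are trustworthy).
fg_labels = np.where(vlp_scores >= 0.8, 1, -1)

print(bg_labels.tolist())  # [0, 0, -1, -1, -1, 0]
print(fg_labels.tolist())  # [-1, -1, 1, 1, 1, -1]
```

Each branch thus supervises the other only where it is most reliable, which is how the complementary failure modes are fused rather than averaged.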
We introduce the MAsked Generative VIdeo Transformer, MAGVIT, to tackle various video synthesis tasks with a single model. We introduce a 3D tokenizer to quantize a video into spatial-temporal visual tokens and propose an embedding method for masked video token modeling to facilitate multi-task learning. We conduct extensive experiments to demonstrate the quality, efficiency, and flexibility of MAGVIT. Our experiments show that (i) MAGVIT performs favorably against state-of-the-art approaches and establishes the best-published FVD on three video generation benchmarks, including the challenging Kinetics-600. (ii) MAGVIT outperforms existing methods in inference time by two orders of magnitude against diffusion models and by 60x against autoregressive models. (iii) A single MAGVIT model supports ten diverse generation tasks and generalizes across videos from different visual domains. The source code and trained models will be released to the public at https://magvit.cs.cmu.edu.
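The core of masked video token modeling can be sketched with a toy token grid. The grid size, vocabulary, and mask ratio below are hypothetical placeholders (MAGVIT's actual 3D tokenizer, embedding method, and masking schedule are its own contributions):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3D tokenizer maps a video to a grid of discrete token ids
# (here: 4 frames x 4x4 spatial positions, toy vocabulary of 1024).
VOCAB, MASK_ID = 1024, 1024  # reserve one extra id as the [MASK] token
tokens = rng.integers(0, VOCAB, size=(4, 4, 4))

# Masked token modeling: corrupt a random subset of tokens and train the
# transformer to predict the originals at the masked positions.
mask_ratio = 0.5
mask = rng.random(tokens.shape) < mask_ratio
corrupted = np.where(mask, MASK_ID, tokens)

# The training target is only defined where the mask was applied.
targets = tokens[mask]

assert (corrupted[mask] == MASK_ID).all()
assert (corrupted[~mask] == tokens[~mask]).all()
print(corrupted.shape, targets.size)
```

Because different tasks correspond to different masking patterns over the same spatial-temporal token grid, one model can serve many generation tasks.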
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Existing deep learning based HDRTV reconstruction methods assume one kind of tone mapping operator (TMO) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks in modeling the realistic degradation: information over-preservation, color bias, and possible artifacts, making the trained reconstruction networks hard to generalize well to real-world cases. To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions. Specifically, we design a conditioned two-stream network with prior tone mapping results as guidance to synthesize SDRTVs through both global and local transformations. To train the data synthesis network, we formulate a novel self-supervised content loss to constrain different aspects of the synthesized SDRTVs at regions with different brightness distributions, and an adversarial loss to push the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. We then collect two inference datasets containing labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.
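As an illustration of the kind of global dynamic-range-compression prior at issue, the sketch below applies the classic Reinhard-style global operator L/(L+a). The choice of operator is ours for illustration only (the abstract does not name a specific TMO); it shows the "information over-preservation" behavior: the curve is smooth and strictly monotone, so highlight detail is compressed but never clipped away, unlike much real-world SDR production.

```python
import numpy as np

def global_tmo(hdr, a=0.25):
    """Reinhard-style global operator L/(L+a): smooth highlight
    compression that preserves ordering of all intensities."""
    return hdr / (hdr + a)

hdr = np.linspace(0.0, 4.0, 9)   # toy linear HDR luminance samples
sdr = global_tmo(hdr)

assert sdr.max() < 1.0           # whole dynamic range squeezed into [0, 1)
assert np.all(np.diff(sdr) > 0)  # monotone: no detail is hard-clipped
print(np.round(sdr, 3))
```

A learned degradation network, by contrast, can reproduce the clipping, color bias, and local adjustments that such analytic priors cannot.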
Open-set semi-supervised learning (OSSL) has attracted growing interest, as it investigates a more practical scenario where out-of-distribution (OOD) samples are contained only in the unlabeled data. Existing OSSL methods (e.g., OpenMatch) learn an OOD detector to identify outliers, and typically update all model parameters (i.e., full fine-tuning) to propagate class information from labeled data to unlabeled data. Recently, prompt learning has been developed to bridge the gap between pre-training and fine-tuning, and has shown higher computational efficiency on several downstream tasks. In this paper, we propose a prompt-driven efficient OSSL framework, called OpenPrompt, which can propagate class information from labeled to unlabeled data with only a small number of trainable parameters. We propose a prompt-driven joint-space learning mechanism to detect OOD data by maximizing the distribution gap between ID and OOD samples in the unlabeled data, thereby allowing our method to detect outliers in a new way. Experimental results on three public datasets show that OpenPrompt outperforms state-of-the-art methods with less than 1% of trainable parameters. More importantly, OpenPrompt achieves a 4% improvement in AUROC for outlier detection over a fully supervised model on CIFAR10.
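The abstract does not spell out the gap-maximization objective, so the following is a hypothetical hinge-style formulation, just to make the idea of "maximizing the distribution gap between ID and OOD scores" concrete:

```python
import numpy as np

def gap_loss(id_scores, ood_scores, margin=1.0):
    """Hinge on the mean-score gap: zero once ID scores sit above OOD
    scores by at least `margin` (a hypothetical objective, not the
    paper's actual loss)."""
    gap = id_scores.mean() - ood_scores.mean()
    return max(0.0, margin - gap)

# Detector scores on unlabeled data (higher = more in-distribution).
id_scores  = np.array([2.1, 1.8, 2.4])
ood_scores = np.array([0.3, 0.5, 0.1])

print(gap_loss(id_scores, ood_scores))  # gap 1.8 > margin → loss 0.0
```

Minimizing such a loss pushes the two score distributions apart, which is what lets a simple threshold then separate outliers from in-distribution samples.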
Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which posits that smooth (non-binary) subnetworks exist within a dense network and achieve performance competitive with the dense network, we propose a few-shot class-incremental learning (FSCIL) method, referred to as Soft-SubNetworks (SoftNet). Our objective is to incrementally learn a sequence of sessions, where each session includes only a few training instances per class, while preserving previously learned knowledge. SoftNet jointly learns the model weights and adaptive non-binary soft masks at the base training session, where each mask consists of a major and a minor subnetwork; the former aims to minimize catastrophic forgetting during training, while the latter aims to avoid overfitting to the few samples in each new training session. We provide comprehensive empirical validation demonstrating that SoftNet effectively tackles the few-shot incremental learning problem by surpassing the performance of state-of-the-art baselines on benchmark datasets.
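The major/minor split over a soft mask can be sketched in a few lines. The threshold and shapes below are hypothetical (the paper learns the masks jointly with the weights; here they are just random arrays for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(6, 6))

# Adaptive non-binary soft mask in [0, 1], learned jointly with weights.
soft_mask = rng.random(size=(6, 6))

# Major subnetwork: positions with high mask values, kept stable across
# sessions to limit catastrophic forgetting. Minor subnetwork: the rest,
# updated in new sessions to avoid overfitting the few-shot data.
tau = 0.5  # hypothetical split threshold
major = soft_mask >= tau
minor = ~major

# The forward pass uses softly-masked weights rather than hard pruning,
# which is what makes the subnetwork "smooth" (non-binary).
effective = weights * soft_mask

assert major.sum() + minor.sum() == soft_mask.size
print(effective.shape)
```

Keeping the mask continuous lets the minor part contribute with small but nonzero weight, instead of being zeroed out as in binary lottery tickets.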
We propose opPINN, a hybrid framework of physics-informed neural networks (PINNs) with operator learning, to approximate the solution of the Fokker-Planck-Landau (FPL) equation. The opPINN framework is divided into two steps: Step 1 and Step 2. After an operator surrogate model is trained during Step 1, the PINN can efficiently approximate the solution of the FPL equation during Step 2 using the pre-trained surrogate model. The operator surrogate model greatly reduces the computational cost and boosts the PINN by approximating the complex Landau collision integral in the FPL equation. The operator surrogate model can also be combined with traditional numerical schemes, providing high efficiency in computation time as the number of velocity modes grows. Using the opPINN framework, we provide neural network solutions of the FPL equation under various types of initial conditions and interaction models in two and three dimensions. Furthermore, based on the theoretical properties of the FPL equation, we show that the approximated neural network solution converges to the a priori classical solution of the FPL equation as the pre-defined loss function decreases.
Mixup schemes suggest mixing a pair of samples to create an augmented training sample, and have recently gained considerable attention for improving the generalizability of neural networks. A simple and widely used extension of mixup is to combine it with regional dropout-like methods: removing random patches from a sample and replacing them with features from another sample. Despite their simplicity and effectiveness, these methods are prone to generating harmful samples due to their randomness. To address this issue, "maximum saliency" strategies were recently proposed: they select only the most informative features in order to prevent this phenomenon. However, these in turn suffer from a lack of sample diversification, as they always deterministically select the regions with maximal saliency, injecting bias into the augmented data. In this paper, we present a novel, yet simple mixup variant that captures the best of both worlds. Our idea is two-fold. By stochastically sampling the features and "grafting" them onto another sample, our method effectively generates diverse yet meaningful samples. Its second ingredient is to produce the labels of the grafted samples by mixing the labels in a saliency-calibrated manner, which rectifies the supervision misguidance introduced by the random sampling procedure. Our experiments on the CIFAR, Tiny-ImageNet, and ImageNet datasets show that our scheme not only outperforms the current state-of-the-art augmentation strategies in classification accuracy, but is also superior under stress conditions such as data corruption and object occlusion.
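The saliency-calibrated label mixing can be sketched concretely. The saliency maps, patch mask, and mixing rule below are a toy illustration of the stated idea (weight each label by the saliency mass its image contributes), not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-pixel saliency maps of the two images (toy 4x4 grids).
sal_a = rng.random((4, 4))
sal_b = rng.random((4, 4))

# Stochastically sample a patch mask (True = take pixel from image B),
# rather than deterministically grafting only the most salient region.
mask = rng.random((4, 4)) < 0.3

# Saliency-calibrated label mixing: weight each label by how much
# saliency mass each image actually contributes to the mixed sample.
contrib_b = (sal_b * mask).sum()
contrib_a = (sal_a * ~mask).sum()
lam = contrib_b / (contrib_a + contrib_b)

y_a = np.array([1.0, 0.0])  # one-hot labels for a 2-class toy problem
y_b = np.array([0.0, 1.0])
y_mixed = (1 - lam) * y_a + lam * y_b

assert np.isclose(y_mixed.sum(), 1.0)
print(round(float(lam), 3))
```

Calibrating the label by contributed saliency, instead of by patch area alone, is what corrects the supervision when the random patch happens to carry little or much information.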