Denoising diffusion models are a new class of generative models that rest on solid probabilistic principles while marking a milestone in high-quality image generation. This makes them promising candidates for neural image compression. This paper outlines an end-to-end optimized framework based on a conditional diffusion model. In addition to the latent variables inherent to the diffusion process, the model introduces an extra "content" latent variable to condition the denoising process. Upon decoding, the diffusion process conditionally generates/reconstructs the image through ancestral sampling. Our experiments show that this approach outperforms one of the best-performing traditional image codecs (BPG) as well as a neural codec on two compression benchmarks, where we focus on the rate-perception trade-off. Qualitatively, our method exhibits fewer decompression artifacts than classical approaches.
translated by 谷歌翻译
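The decoding step the abstract describes can be illustrated with a minimal DDPM-style ancestral sampling loop conditioned on a decoded content latent. This is a generic sketch, not the paper's implementation: `eps_model(x_t, t, z)` stands in for the trained conditional noise predictor, and the noise schedule, shapes, and conditioning interface are all assumptions.

```python
import numpy as np

def ancestral_sample(eps_model, z_content, betas, shape, rng=None):
    """DDPM-style ancestral sampling, conditioned on a decoded
    "content" latent z_content. Starting from pure noise x_T, each
    step removes the predicted noise and re-injects a smaller amount,
    ending at a reconstruction x_0."""
    rng = np.random.default_rng(rng)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.normal(size=shape)  # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(x, t, z_content)
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) \
            / np.sqrt(alphas[t])
        if t > 0:  # no noise added at the final step
            x = x + np.sqrt(betas[t]) * rng.normal(size=shape)
    return x

# Dummy noise predictor that ignores its inputs, just to run the loop.
betas = np.linspace(1e-4, 0.02, num=10)
x0 = ancestral_sample(lambda x, t, z: np.zeros_like(x),
                      z_content=None, betas=betas, shape=(4, 4), rng=0)
```

In the compression setting, `z_content` would be the quantized latent transmitted in the bitstream, so the sampler reconstructs an image consistent with it.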
Differentially private (DP) data release is a promising technique for disseminating data without compromising the privacy of data subjects. However, most prior work has focused on scenarios where a single party owns all the data. In this paper, we focus on the multi-party setting, in which different stakeholders own disjoint sets of attributes belonging to the same group of data subjects. In the context of linear regression, which allows all parties to train a model on the complete data without being able to infer individuals' private attributes or identities, we first directly apply the Gaussian mechanism and show that it suffers from a small-eigenvalue problem. We then propose our novel method and prove that it asymptotically converges to the optimal (non-private) solution as the dataset size increases. We substantiate the theoretical results through experiments on artificial and real-world datasets.
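The direct Gaussian-mechanism baseline the abstract critiques can be sketched as follows: perturb the sufficient statistics of ordinary least squares and solve the noisy normal equations. This is an illustrative sketch, not the paper's proposed method; the noise scale `sigma` is assumed to come from the desired (ε, δ) guarantee and the data's sensitivity, which a real deployment must calibrate.

```python
import numpy as np

def dp_linear_regression(X, y, sigma, rng=None):
    """Directly apply the Gaussian mechanism to the OLS sufficient
    statistics: add Gaussian noise to X^T X and X^T y, then solve
    the noisy normal equations."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Symmetrized Gaussian noise keeps the perturbed X^T X symmetric.
    noise = rng.normal(scale=sigma, size=(d, d))
    xtx = X.T @ X + (noise + noise.T) / 2
    xty = X.T @ y + rng.normal(scale=sigma, size=d)
    # The noisy X^T X can have tiny (even negative) eigenvalues --
    # the "small eigenvalue problem" -- making this solve unstable.
    return np.linalg.solve(xtx, xty)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)
w_hat = dp_linear_regression(X, y, sigma=1.0, rng=1)
```

With plenty of data the noise is negligible; the instability appears when `X.T @ X` has small eigenvalues relative to `sigma`, which is precisely the failure mode that motivates the paper's alternative.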
Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation. This paper showcases their ability to sequentially generate video, surpassing prior methods in perceptual and probabilistic forecasting metrics. We propose an autoregressive, end-to-end optimized video diffusion model inspired by recent advances in neural video compression. The model successively generates future frames by correcting a deterministic next-frame prediction using a stochastic residual generated by an inverse diffusion process. We compare this approach against five baselines on four datasets involving natural and simulation-based videos. We find significant improvements in terms of perceptual quality for all datasets. Furthermore, by introducing a scalable version of the Continuous Ranked Probability Score (CRPS) applicable to video, we show that our model also outperforms existing approaches in their probabilistic frame forecasting ability.
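The CRPS the abstract builds on has a standard empirical form that can be computed from an ensemble of model samples. The sketch below shows that scalar estimator; how the paper extends it scalably to video (e.g., per pixel or per feature) is not reproduced here.

```python
import numpy as np

def crps_ensemble(samples, observed):
    """Empirical Continuous Ranked Probability Score for a scalar
    forecast: CRPS(F, y) = E|X - y| - 0.5 * E|X - X'| with X, X' ~ F,
    estimated from ensemble samples. Lower is better."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - observed))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

# A sharp, well-centered ensemble scores better (lower) than a biased one.
good = crps_ensemble([0.9, 1.0, 1.1], observed=1.0)
bad = crps_ensemble([1.9, 2.0, 2.1], observed=1.0)
```

The score rewards both calibration and sharpness, which is why it suits probabilistic frame forecasting better than a point-prediction metric.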
Developing robust vision-guided controllers for quadrupedal robots in complex environments, with various obstacles, dynamic surroundings, and uneven terrain, is very challenging. While reinforcement learning (RL) provides a promising paradigm for agile locomotion skills with visual inputs in simulation, deploying RL policies in the real world remains very challenging. Our key insight is that, beyond the domain gap in visual appearance between simulation and the real world, the latency of the control pipeline is also a major source of difficulty. In this paper, we propose to address this issue while training the RL agent. Specifically, we simulate the latency of real hardware by using past observations, sampled at random time intervals, for both proprioception and vision. We train the RL policy end-to-end in a physics simulator without any predefined controller or reference motion, and deploy it directly on a real A1 quadruped robot running in the wild. We evaluate our method in different outdoor environments with complex terrain and obstacles. We demonstrate that the robot can maneuver at high speed, avoid obstacles, and achieve significant improvements over baselines. Our project page with videos is at https://mehooz.github.io/mmdr-wild/.
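The latency-simulation idea can be sketched as a small wrapper that feeds the policy a past observation with a randomly resampled delay at each control step. This is a minimal illustration of the mechanism, not the paper's code; in the multi-modal setting one would keep a separate buffer (with its own delay range) per modality, which is an assumption here.

```python
import random
from collections import deque

class DelayRandomizedObs:
    """Simulate real-hardware control latency during training by
    returning a past observation instead of the current one, with the
    delay (in control steps) resampled uniformly at random per step."""

    def __init__(self, max_delay, seed=None):
        self.buffer = deque(maxlen=max_delay + 1)  # most recent obs last
        self.max_delay = max_delay
        self.rng = random.Random(seed)

    def observe(self, obs):
        """Record the current true observation, return a delayed one."""
        self.buffer.append(obs)
        delay = self.rng.randint(0, self.max_delay)
        delay = min(delay, len(self.buffer) - 1)  # short history early on
        return self.buffer[-1 - delay]

delayed = DelayRandomizedObs(max_delay=3, seed=0)
outs = [delayed.observe(t) for t in range(10)]
```

Randomizing rather than fixing the delay forces the policy to be robust to the variable, unknown latency of the real control pipeline.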
While significant progress has been made on understanding hand-object interaction in computer vision, it remains very challenging for robots to perform complex dexterous manipulation. In this paper, we propose a new platform and pipeline, DexMV (Dexterous Manipulation from Videos), for imitation learning. We design a platform with (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand performing the same tasks. In our novel pipeline, we extract 3D hand and object poses from the videos and propose a novel demonstration translation method to convert human motion into robot demonstrations. We then apply and benchmark multiple imitation learning algorithms with the demonstrations. We show that the demonstrations can indeed improve robot learning by a large margin and solve complex tasks that reinforcement learning alone cannot solve. Project page with videos: https://yzqin.github.io/dexmv
A growing body of research in natural language processing (NLP) and natural language understanding (NLU) is investigating human-like knowledge learned or encoded in the word embeddings of large language models. This is a step toward understanding what knowledge language models capture that resembles human understanding of language and communication. Here, we investigate the affect of words (i.e., valence, arousal, dominance) and how it is encoded in word embeddings pre-trained in large neural networks. We use human-labeled datasets as ground truth and perform a variety of correlation and classification tests on four types of word embeddings. The embeddings differ in being static or contextualized, and in how much they prioritize specific information during their training and fine-tuning phases. Our analyses show that word embeddings from the vanilla BERT model do not saliently encode the affect information of English words. Only when the BERT model is fine-tuned on emotion-related tasks, or when additional contextualized information from emotion-rich contexts is included, do the corresponding embeddings encode more relevant affect information.
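One common way to run the kind of correlation test the abstract mentions is to fit a linear probe from embeddings to a human affect rating and correlate its held-out predictions with the ground truth. The sketch below uses synthetic data purely for illustration; the probe design (ridge regression) is a generic choice, not necessarily the paper's protocol.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def affect_probe_correlation(emb_train, v_train, emb_test, v_test, l2=1.0):
    """Fit a ridge-regression probe from word embeddings to a human
    affect rating (e.g. valence), then report the Pearson correlation
    between predicted and true ratings on held-out words. A high
    correlation suggests the embeddings encode that affect dimension."""
    d = emb_train.shape[1]
    w = np.linalg.solve(emb_train.T @ emb_train + l2 * np.eye(d),
                        emb_train.T @ v_train)
    return pearson(emb_test @ w, v_test)

# Synthetic check: ratings that are (noisily) linear in the embeddings
# should be recovered almost perfectly. Illustrative data, not BERT.
rng = np.random.default_rng(0)
E = rng.normal(size=(200, 16))
v = E @ rng.normal(size=16) + 0.05 * rng.normal(size=200)
r = affect_probe_correlation(E[:150], v[:150], E[150:], v[150:])
```

The abstract's finding corresponds to this correlation being low for vanilla BERT embeddings and higher after emotion-related fine-tuning.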
Transformers have made remarkable progress towards modeling long-range dependencies within the medical image analysis domain. However, current transformer-based models suffer from several disadvantages: (1) existing methods fail to capture the important features of the images due to the naive tokenization scheme; (2) the models suffer from information loss because they only consider single-scale feature representations; and (3) the segmentation label maps generated by the models are not accurate enough without considering rich semantic contexts and anatomical textures. In this work, we present CASTformer, a novel type of adversarial transformers, for 2D medical image segmentation. First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures. Lastly, we utilize an adversarial training strategy that boosts segmentation accuracy and correspondingly allows a transformer-based discriminator to capture high-level semantically correlated contents and low-level anatomical features. Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model's inner workings, shed light on the challenges in improved transparency, and demonstrate that transfer learning can greatly improve performance and reduce the size of medical image datasets in training, making CASTformer a strong starting point for downstream medical image analysis tasks.
Many practical applications of reinforcement learning require agents to learn from sparse and delayed rewards. This challenges the agent's ability to attribute its actions to future outcomes. In this paper, we consider the problem formulation of episodic reinforcement learning with trajectory feedback. It refers to an extreme delay of the reward signal, in which the agent can only obtain one reward signal at the end of each trajectory. A popular paradigm for this problem setting is to learn with a designed auxiliary dense reward function, namely a proxy reward, instead of the sparse environmental signal. Based on this framework, this paper proposes a novel reward redistribution algorithm, randomized return decomposition (RRD), to learn a proxy reward function for episodic reinforcement learning. We establish a surrogate problem through Monte-Carlo sampling that scales least-squares-based reward redistribution to long-horizon problems. We analyze our surrogate loss function by connecting it with existing methods in the literature, which illustrates the algorithmic properties of our approach. In experiments, we extensively evaluate our method on a variety of benchmark tasks with episodic rewards and demonstrate substantial improvement over baseline algorithms.
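The Monte-Carlo surrogate can be sketched as follows: instead of summing a learned per-step proxy reward over the entire (possibly very long) trajectory, sample a small subset of steps, scale their mean prediction up to a return estimate, and regress it onto the observed episode return. This is a sketch of the idea under assumed interfaces (`predict_reward` stands in for the learned proxy-reward model), not the paper's implementation.

```python
import numpy as np

def rrd_loss(predict_reward, states, episode_return, k, rng=None):
    """Monte-Carlo surrogate for least-squares return decomposition:
    sample k of the T steps, scale their mean predicted reward by T
    to get an unbiased estimate of the predicted return, and take the
    squared error against the observed episode return."""
    rng = np.random.default_rng(rng)
    T = len(states)
    idx = rng.choice(T, size=min(k, T), replace=False)
    est_return = T * np.mean([predict_reward(states[i]) for i in idx])
    return (est_return - episode_return) ** 2

# Toy check: a proxy reward that already decomposes the return exactly
# and uniformly gives zero loss regardless of which steps are sampled.
states = list(range(100))
loss = rrd_loss(lambda s: 0.5, states, episode_return=50.0, k=10, rng=0)
```

Because the subset size `k` is fixed, the cost per update does not grow with the horizon `T`, which is what makes the least-squares redistribution scale to long trajectories.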
Space-time video super-resolution (STVSR) aims to construct a high space-time resolution video sequence from the corresponding low-frame-rate, low-resolution video sequence. Inspired by the recent success of considering spatial-temporal information for space-time super-resolution, our main goal in this work is to fully exploit spatial and temporal correlations in video sequences of fast dynamic events. To this end, we propose a novel one-stage memory-enhanced graph attention network (MEGAN) for space-time video super-resolution. Specifically, we build a novel long-range memory graph aggregation (LMGA) module to dynamically capture correlations along the channel dimension of feature maps and adaptively aggregate channel features to enhance feature representations. We introduce a non-local residual block that enables each channel-wise feature to attend to global spatial hierarchical features. In addition, we adopt a progressive fusion module to further improve the representation ability by extensively exploiting spatial-temporal correlations from multiple frames. Experimental results demonstrate that our method achieves better results compared to state-of-the-art methods both quantitatively and visually.
Adversarial training is a method for enhancing neural networks to improve the robustness against adversarial examples. Besides the security concerns of potential adversarial examples, adversarial training can also improve the generalization ability of neural networks, train robust neural networks, and provide interpretability for neural networks. In this work, we introduce adversarial training in time series analysis to enhance the neural networks for better generalization ability by taking the finance field as an example. Rethinking existing research on adversarial training, we propose the adaptively scaled adversarial training (ASAT) in time series analysis, by rescaling data at different time slots with adaptive scales. Experimental results show that the proposed ASAT can improve both the generalization ability and the adversarial robustness of neural networks compared to the baselines. Compared to the traditional adversarial training algorithm, ASAT can achieve better generalization ability and similar adversarial robustness.
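The "rescaling data at different time slots with adaptive scales" idea can be sketched as a sign-gradient (FGSM-style) perturbation whose magnitude varies per time slot. This is an illustrative sketch under stated assumptions: `grad` is taken as the precomputed loss gradient w.r.t. the input series, and the per-slot scale vector stands in for whatever adaptive scheme ASAT actually learns.

```python
import numpy as np

def asat_perturbation(grad, base_eps, slot_scales):
    """Adversarial perturbation for a (time, features) series where
    each time slot t gets its own magnitude base_eps * slot_scales[t],
    so more informative slots can receive larger perturbations during
    adversarial training."""
    grad = np.asarray(grad, dtype=float)
    scales = np.asarray(slot_scales, dtype=float)[:, None]
    return base_eps * scales * np.sign(grad)

# Example: linearly increasing scales emphasize the latest time slots.
g = np.array([[0.3, -0.7], [-0.2, 0.9], [0.1, -0.4]])  # (time, features)
scales = np.linspace(0.5, 1.5, num=3)
delta = asat_perturbation(g, base_eps=0.01, slot_scales=scales)
```

Training on `x + delta` in place of a uniformly scaled perturbation is what distinguishes this setup from plain adversarial training.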