The synthesis of high-resolution remote sensing images from text descriptions has great potential in many practical application scenarios. Although deep neural networks have achieved great success in many important remote sensing tasks, generating realistic remote sensing images from text descriptions remains very difficult. To address this challenge, we propose a novel text-to-image modern Hopfield network (Txt2Img-MHN). The main idea of Txt2Img-MHN is to conduct hierarchical prototype learning on both text and image embeddings with modern Hopfield layers. Rather than directly learning concrete but highly diverse text-image joint feature representations, Txt2Img-MHN aims to learn the most representative prototypes from the text-image embeddings, achieving a coarse-to-fine learning strategy. These learned prototypes can then be utilized to represent more complex semantics in the text-to-image generation task. To better evaluate the realism and semantic consistency of the generated images, we further conduct zero-shot classification on real remote sensing data using a classification model trained on the synthesized images. Despite its simplicity, we find that the overall accuracy of this zero-shot classification can serve as a good metric for evaluating the ability to generate images from text. Extensive experiments on the benchmark remote sensing text-image dataset demonstrate that the proposed Txt2Img-MHN can generate more realistic remote sensing images than existing methods. Code and pre-trained models are available online (https://github.com/yonghaoxu/txt2img-mhn).
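As a rough sketch of the retrieval step behind such a Hopfield layer (function names, dimensions, and the prototype count below are illustrative, not the paper's implementation): a modern Hopfield network replaces each query embedding with a softmax-weighted combination of stored prototype patterns.

    import numpy as np

    def hopfield_retrieve(queries, prototypes, beta=8.0):
        # One update step of a modern (continuous) Hopfield network: each
        # query embedding is replaced by a softmax-weighted combination of
        # the stored prototype patterns (attention-style retrieval).
        scores = beta * queries @ prototypes.T           # (n, k) similarities
        scores -= scores.max(axis=1, keepdims=True)      # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)    # softmax over prototypes
        return weights @ prototypes                      # retrieved embeddings

    rng = np.random.default_rng(0)
    prototypes = rng.normal(size=(16, 64))   # learned text/image prototypes
    queries = rng.normal(size=(4, 64))       # text or image token embeddings
    print(hopfield_retrieve(queries, prototypes).shape)  # (4, 64)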
In lifelong learning, an agent learns throughout its entire life without resets, just as humans do, in a constantly changing environment. Lifelong learning therefore comes with a plethora of research problems such as continual domain shifts, which result in non-stationary rewards and environment dynamics. Due to their continuous nature, these non-stationarities are difficult to detect and cope with. Exploration strategies and learning methods are thus required that can track steady domain shifts and adapt to them. We propose Reactive Exploration to track and react to continual domain shifts in lifelong reinforcement learning, and to update the policy accordingly. To this end, we conduct experiments to investigate different exploration strategies. We empirically show that representatives of the policy-gradient family are better suited for lifelong learning, as they adapt more quickly to distribution shifts than Q-learning. Consequently, policy-gradient methods profit the most from Reactive Exploration and show good results in lifelong learning with continual domain shifts. Our code is available at: https://github.com/ml-jku/reactive-exploration.
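The abstract does not detail the tracking mechanism; the following is only a caricature of the idea, with made-up names and thresholds: raise an exploration bonus whenever the agent's prediction error spikes relative to its running mean, taking the spike as a sign of a domain shift.

    class ReactiveExplorer:
        def __init__(self, momentum=0.99, base_bonus=0.01, shift_factor=3.0):
            self.momentum = momentum
            self.base_bonus = base_bonus
            self.shift_factor = shift_factor
            self.mean_err = None

        def exploration_bonus(self, prediction_error):
            # compare the current error to its running mean; a large ratio
            # suggests the environment has shifted
            if self.mean_err is None:
                self.mean_err = prediction_error
            drift = prediction_error / (self.mean_err + 1e-8)
            self.mean_err = (self.momentum * self.mean_err
                             + (1.0 - self.momentum) * prediction_error)
            if drift > self.shift_factor:
                return self.base_bonus * drift   # explore more after a shift
            return self.base_bonus

    explorer = ReactiveExplorer()
    for err in [0.1, 0.1, 0.1, 0.9, 0.1]:        # the spike mimics a domain shift
        print(explorer.exploration_bonus(err))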
We introduce SubGD, a novel few-shot learning method based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows the training error to be reduced by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, which have varying properties described by one or a few parameters of the analytical system description. Such systems are ubiquitous among real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical systems problem settings, significantly outperforming popular few-shot learning methods both in terms of sample efficiency and performance.
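A minimal sketch of the subspace identification described above, assuming a flat parameter vector and illustrative dimensions (the helper names are not from the paper):

    import numpy as np

    def subspace_from_updates(updates, k):
        # Auto-correlation matrix of past SGD update directions; its top-k
        # eigenvectors span the low-dimensional fine-tuning subspace.
        corr = updates.T @ updates
        _, eigvecs = np.linalg.eigh(corr)          # eigenvalues ascending
        return eigvecs[:, -k:]                     # (num_params, k) basis

    def project_update(grad, basis):
        # restrict a gradient step to the identified subspace
        return basis @ (basis.T @ grad)

    rng = np.random.default_rng(0)
    past_updates = rng.normal(size=(200, 50))      # updates from source tasks
    basis = subspace_from_updates(past_updates, k=5)
    new_grad = rng.normal(size=50)                 # gradient on a new task
    restricted_step = project_update(new_grad, basis)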
In partially observable Markov decision processes (POMDPs), agents typically use representations of the past to approximate the underlying MDP. We propose to utilize a frozen pretrained language Transformer (PLT) for history representation and compression to improve sample efficiency. To avoid training the Transformer, we introduce FrozenHopfield, which automatically associates observations with pretrained token embeddings. To form these associations, a modern Hopfield network stores the token embeddings, which are retrieved by queries obtained through a random but fixed projection of observations. Our new method, HELM, enables actor-critic network architectures that contain a pretrained language Transformer as a memory module for history representation. Since a representation of the past need not be learned, HELM is much more sample efficient than competitors. On Minigrid and Procgen environments, HELM achieves new state-of-the-art results. Our code is available at https://github.com/ml-jku/helm.
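A minimal sketch of the FrozenHopfield association described above, with illustrative shapes and names (the actual method operates on a pretrained language Transformer's token embeddings):

    import numpy as np

    def frozen_hopfield(obs, token_embeddings, projection, beta=1.0):
        # A random but fixed projection maps the observation into the token
        # embedding space; one Hopfield retrieval step returns a convex
        # combination of the frozen token embeddings.
        query = projection @ obs                   # (d_model,)
        scores = beta * token_embeddings @ query   # similarity to each token
        scores -= scores.max()                     # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum()                   # softmax over the vocabulary
        return weights @ token_embeddings          # retrieved representation

    rng = np.random.default_rng(0)
    vocab, d_model, d_obs = 100, 32, 12
    token_embeddings = rng.normal(size=(vocab, d_model))  # frozen, from the PLT
    projection = rng.normal(size=(d_model, d_obs))        # random, never trained
    compressed = frozen_hopfield(rng.normal(size=d_obs), token_embeddings, projection)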
In the real world, affecting the environment with a poor policy can be expensive or very risky, which hampers real-world applications of reinforcement learning. Offline reinforcement learning (RL) can learn policies from a given dataset without interacting with the environment. However, the dataset is the only source of information for an offline RL algorithm and determines the performance of the learned policy. We still lack studies on how dataset characteristics influence different offline RL algorithms. Therefore, we conduct a comprehensive empirical analysis of how dataset characteristics affect the performance of offline RL algorithms in discrete-action environments. A dataset is characterized by two metrics: (1) the average dataset return, measured by the Trajectory Quality (TQ), and (2) the coverage, measured by the State-Action Coverage (SACo). We find that variants of the off-policy Deep Q-Network family require datasets with high SACo to perform well. Algorithms that constrain the learned policy towards the given dataset perform well on datasets with high TQ or SACo. For datasets with high TQ, behavior cloning outperforms or performs similarly to the best offline RL algorithms.
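A simplified reading of the two metrics, with made-up normalizations (the paper's exact definitions may differ):

    import numpy as np

    def trajectory_quality(returns, random_return, expert_return):
        # average dataset return, normalized between random and expert returns
        return (np.mean(returns) - random_return) / (expert_return - random_return)

    def state_action_coverage(dataset, reference_count):
        # unique state-action pairs, relative to a reference dataset's count
        unique_pairs = {(s, a) for s, a in dataset}
        return len(unique_pairs) / reference_count

    returns = [10.0, 12.0, 8.0]
    print(trajectory_quality(returns, random_return=0.0, expert_return=20.0))
    dataset = [(0, 1), (0, 1), (1, 0), (2, 1)]    # (state, action) pairs
    print(state_action_coverage(dataset, reference_count=10))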
CLIP yielded impressive results on zero-shot transfer learning tasks and is considered a foundation model like BERT or GPT-3. CLIP vision models, which have rich representations, are pre-trained using the InfoNCE objective and natural language supervision before being fine-tuned on particular tasks. Although CLIP excels at zero-shot transfer learning, it suffers from an explaining-away problem, that is, it focuses on one or a few features while neglecting other relevant features. This problem is caused by insufficiently extracting the covariance structure of the original multi-modal data. We suggest using modern Hopfield networks to tackle the explaining-away problem. Their retrieved embeddings have an enriched covariance structure derived from co-occurrences of features in the stored embeddings. However, modern Hopfield networks increase the saturation effect of the InfoNCE objective, which hampers learning. We propose using the InfoLOOB objective to mitigate this saturation effect. We introduce the novel "Contrastive Leave One Out Boost" (CLOOB), which uses modern Hopfield networks for covariance enrichment together with the InfoLOOB objective. In experiments, we compare CLOOB to CLIP after pre-training on the Conceptual Captions and YFCC datasets with respect to their zero-shot transfer learning performance on other datasets. CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets.
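A minimal sketch of the InfoLOOB ("leave one out bound") objective as described above, for one direction (image-to-text) and assuming L2-normalized embeddings; unlike InfoNCE, the matching pair is excluded from the denominator, which avoids the positive term dominating and saturating the loss. In CLOOB the similarities are computed on Hopfield-retrieved embeddings, which this sketch omits.

    import numpy as np

    def info_loob(image_emb, text_emb, tau=0.07):
        # Similarities between all image-text pairs; embeddings are assumed
        # L2-normalized, so this is a cosine similarity scaled by 1/tau.
        sims = image_emb @ text_emb.T / tau
        pos = np.diag(sims)                     # matching pairs
        off = sims.copy()
        np.fill_diagonal(off, -np.inf)          # leave the positive out
        neg = np.logaddexp.reduce(off, axis=1)  # log-sum-exp over negatives only
        return float(np.mean(neg - pos))

    rng = np.random.default_rng(0)
    img = rng.normal(size=(8, 64))
    txt = rng.normal(size=(8, 64))
    img /= np.linalg.norm(img, axis=1, keepdims=True)
    txt /= np.linalg.norm(txt, axis=1, keepdims=True)
    print(info_loob(img, txt))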
The abundance of data has given machine learning considerable momentum in natural sciences and engineering, though modeling of physical processes is often difficult. A particularly tough problem is the efficient representation of geometric boundaries. Triangularized geometric boundaries are well understood and ubiquitous in engineering applications. However, it is notoriously difficult to integrate them into machine learning approaches due to their heterogeneity with respect to size and orientation. In this work, we introduce an effective theory to model particle-boundary interactions, which leads to our new Boundary Graph Neural Networks (BGNNs) that dynamically modify graph structures to obey boundary conditions. The new BGNNs are tested on complex 3D granular flow processes of hoppers, rotating drums and mixers, which are all standard components of modern industrial machinery but still have complicated geometry. BGNNs are evaluated in terms of computational efficiency as well as prediction accuracy of particle flows and mixing entropies. BGNNs are able to accurately reproduce 3D granular flows within simulation uncertainties over hundreds of thousands of simulation timesteps. Most notably, in our experiments, particles stay within the geometric objects without using handcrafted conditions or restrictions.
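A toy version of the dynamic graph modification described above, crudely representing triangles by their centroids (the paper works with the actual triangulated surfaces): boundary geometry enters the graph only where particles come close to it.

    import numpy as np

    def boundary_edges(particles, tri_centroids, cutoff):
        # Pairwise particle-to-boundary distances; an edge is created only
        # where a particle comes within the cutoff of a boundary element.
        diff = particles[:, None, :] - tri_centroids[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)      # (num_particles, num_tris)
        src, dst = np.nonzero(dist < cutoff)      # particle i <-> triangle j
        return src, dst

    rng = np.random.default_rng(0)
    particles = rng.uniform(size=(1000, 3))       # granular particles
    tris = rng.uniform(size=(200, 3))             # boundary element centroids
    src, dst = boundary_edges(particles, tris, cutoff=0.05)
    print(len(src), "boundary edges this timestep")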
Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with completing sub-tasks. However, often only a few episodes with high rewards are available as demonstrations, since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution obtained from multiple sequence alignment of demonstrations. Consequently, Align-RUDDER employs reward redistribution effectively and thereby drastically improves learning from few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at https://github.com/ml-jku/align-rudder. YouTube: https://youtu.be/ho-_8zul-uy
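A toy caricature of reward redistribution from a demonstration profile, with made-up event names and scores (the actual method builds the profile by multiple sequence alignment of demonstrations and scores new episodes against it): each step of a new episode is credited in proportion to how strongly the profile weights its event.

    import numpy as np

    def redistribute_reward(episode_events, profile_scores, episode_return):
        # score each step by how strongly the demonstration profile weights
        # its event, then hand out the episode return in proportion
        gains = np.array([profile_scores.get(e, 0.0) for e in episode_events])
        if gains.sum() <= 0:
            return np.zeros_like(gains)
        return episode_return * gains / gains.sum()

    # event scores a demonstration profile might assign (values are made up)
    profile = {"get_wood": 1.0, "make_pickaxe": 2.0, "mine_diamond": 5.0}
    episode = ["wander", "get_wood", "wander", "make_pickaxe", "mine_diamond"]
    print(redistribute_reward(episode, profile, episode_return=10.0))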
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.
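The FID between two Gaussians fitted to Inception activations of real and generated images has the closed form FID = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2(C_1 C_2)^(1/2)); a small sketch of that formula, using scipy for the matrix square root and random stand-ins for the activations:

    import numpy as np
    from scipy import linalg

    def fid(mu1, cov1, mu2, cov2):
        # FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2))
        covmean = linalg.sqrtm(cov1 @ cov2).real  # drop tiny imaginary parts
        diff = mu1 - mu2
        return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

    rng = np.random.default_rng(0)
    real = rng.normal(size=(500, 8))              # stand-in activations
    fake = rng.normal(loc=0.5, size=(500, 8))
    print(fid(real.mean(axis=0), np.cov(real, rowvar=False),
              fake.mean(axis=0), np.cov(fake, rowvar=False)))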
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100, ELU networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
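The ELU itself is the simple piecewise function f(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise; a minimal sketch:

    import numpy as np

    def elu(x, alpha=1.0):
        # identity for positive inputs, smooth saturation to -alpha otherwise
        return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

    x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
    print(elu(x))  # strongly negative inputs saturate toward -1.0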