Mixup is a popular data augmentation technique for training deep neural networks, where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. Based on this insight, we then propose a new method to improve Mixup. To demonstrate the effectiveness of the proposed method, we conduct experiments across domains including images, tabular data, speech, and graphs. Our results show that the proposed method improves upon Mixup across various datasets and architectures, for instance improving over Mixup by 0.8% in ImageNet top-1 accuracy.
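For reference, here is a minimal sketch of the standard Mixup operation the abstract builds on (not the proposed improvement, which the abstract does not detail); the Beta(α, α)-distributed coefficient and random in-batch pairing follow the original Mixup recipe.

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Standard Mixup: convex combination of a batch with a shuffled copy.

    x: (B, ...) input tensor; y: (B, C) one-hot (or soft) label tensor.
    """
    lam = np.random.beta(alpha, alpha)       # interpolation coefficient
    perm = torch.randperm(x.size(0))         # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]  # interpolate inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]  # interpolate labels identically
    return x_mix, y_mix
```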
Inferring accurate posteriors for high-dimensional representations of the brightness of gravitationally lensed sources is a major challenge, in part due to the difficulty of accurately quantifying the priors. Here, we report the use of a score-based model to encode the prior for the inference of undistorted images of background galaxies. This model is trained on a set of high-resolution images of undistorted galaxies. By adding the likelihood score to the prior score and using a reverse-time stochastic differential equation solver, we obtain samples from the posterior. Our method produces independent posterior samples and models the data almost down to the noise level. We show how the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
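A minimal sketch of the central step, under the simplifying assumption of a linear forward operator and Gaussian noise (the names `score_net`, `A`, `AT`, and `sigma_y` are illustrative stand-ins, not the paper's code): at each reverse-SDE step, the posterior score is the learned prior score plus the analytic likelihood score.

```python
import torch

def posterior_score(x, t, y, score_net, A, AT, sigma_y):
    """Posterior score = prior score + likelihood score (sketch).

    score_net(x, t): trained score model encoding the galaxy-image prior.
    A / AT: linear lensing operator and its adjoint; y: observed image.
    """
    prior = score_net(x, t)                 # learned prior score
    residual = y - A(x)                     # data residual
    # Score of a Gaussian likelihood; this sketch ignores the t-dependent
    # perturbation of the likelihood, which a careful treatment handles.
    likelihood = AT(residual) / sigma_y**2
    return prior + likelihood               # drives the reverse-time SDE solver
```

A reverse-time SDE solver then integrates from noise back to an image using this combined score in place of the prior score alone, which is what yields posterior rather than prior samples.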
Bayesian causal structure learning aims to learn a posterior distribution over directed acyclic graphs (DAGs), and the mechanisms that define the relationship between parent and child variables. By taking a Bayesian approach, it is possible to reason about the uncertainty of the causal model. The notion of modelling the uncertainty over models is particularly crucial for causal structure learning since the model could be unidentifiable when given only a finite amount of observational data. In this paper, we introduce a novel method to jointly learn the structure and mechanisms of the causal model using Variational Bayes, which we call Variational Bayes-DAG-GFlowNet (VBG). We extend the method of Bayesian causal structure learning using GFlowNets to learn not only the posterior distribution over the structure, but also the parameters of a linear-Gaussian model. Our results on simulated data suggest that VBG is competitive against several baselines in modelling the posterior over DAGs and mechanisms, while offering several advantages over existing methods, including the guarantee to sample acyclic graphs, and the flexibility to generalize to non-linear causal mechanisms.
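For concreteness, the linear-Gaussian mechanism the abstract refers to has the standard form (this is the textbook definition; the abstract does not spell out VBG's variational factorization): given a DAG $G$ with edge weights $W$,

$$x_j = \sum_{i \in \mathrm{pa}_G(j)} W_{ij}\, x_i + \epsilon_j, \qquad \epsilon_j \sim \mathcal{N}(0, \sigma_j^2),$$

so that VBG targets a joint posterior $p(G, W \mid \mathcal{D})$ over both the graph and the mechanism parameters, rather than a point estimate of either.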
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications. However, collaboration between these actors is difficult due to the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images of various resolutions, time series, weather data) and the diversity of tasks (e.g., regressing human activity indicators or detecting forest fires). In this work, we present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled earth observation data in a self-supervised manner. We envision how such a model may facilitate cooperation between members of the community. We show preliminary results on the first step of the roadmap, where we instantiate an architecture that can process a wide variety of geospatial data modalities and demonstrate that it can achieve competitive performance with domain-specific architectures on tasks relating to the U.N.'s Sustainable Development Goals.
Bayesian inference offers principled tools to tackle many critical problems with modern neural networks, such as poor calibration, poor generalization, and data inefficiency. However, scaling Bayesian inference to large architectures is challenging and requires restrictive approximations. Monte Carlo dropout has been widely used as a relatively cheap way to perform approximate inference and estimate uncertainty with deep neural networks. Traditionally, the dropout mask is sampled independently from a fixed distribution. Recent works show that the dropout mask can be viewed as a latent variable, which can be inferred with variational inference. These methods face two important challenges: (a) the posterior distribution over masks can be highly multi-modal, which can be difficult to approximate with standard variational inference, and (b) it is not trivial to fully utilize sample-dependent information and correlation among dropout masks to improve posterior estimation. In this work, we propose GFlowOut to address these issues. GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks. We empirically demonstrate that GFlowOut yields predictive distributions that generalize better to out-of-distribution data and provides uncertainty estimates that lead to better performance in downstream tasks.
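As a rough sketch of the mask-as-latent-variable idea (illustrative code, not the paper's implementation; GFlowOut additionally trains the mask sampler as a GFlowNet so that sampled masks follow the learned posterior, which is omitted here):

```python
import torch
import torch.nn as nn

class LearnedDropout(nn.Module):
    """Sample-dependent dropout sketch: a small network proposes per-unit
    Bernoulli keep-probabilities from the layer input, and a binary mask
    is drawn per forward pass instead of from a fixed distribution."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mask_logits = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, h):
        probs = torch.sigmoid(self.mask_logits(h))  # sample-dependent keep-probabilities
        mask = torch.bernoulli(probs)               # one mask draw per input
        return h * mask, mask                       # masked activations; mask kept for the loss
```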
Reinforcement learning (RL) algorithms have achieved notable success in recent years, but still struggle with fundamental issues in long-term credit assignment. It remains difficult to learn in situations where success is contingent upon multiple critical steps that are distant in time from each other and from a sparse reward, as is often the case in real life. Moreover, how RL algorithms assign credit in these difficult situations is typically not coded in a way that can rapidly generalize to new situations. Here, we present an approach using offline contrastive learning, which we call contrastive introspection (ConSpec), that can be added to any existing RL algorithm and addresses both issues. In ConSpec, a contrastive loss is used during offline replay to identify invariances among successful episodes. This takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon than it is to prospectively predict reward at every step taken in the environment. ConSpec stores this knowledge in a collection of prototypes summarizing the intermediate states required for success. During training, arrival at any state that matches these prototypes generates an intrinsic reward that is added to any external rewards. In addition, the reward shaping provided by ConSpec can be made to preserve the optimal policy of the underlying RL agent. The prototypes in ConSpec provide two key benefits for credit assignment: (1) they enable rapid identification of all the critical states, and (2) they do so in a readily interpretable manner, enabling out-of-distribution generalization when sensory features are altered. In summary, ConSpec is a modular system that can be added to any existing RL algorithm to improve its long-term credit assignment.
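A hedged sketch of the intrinsic-reward step described above (names and hyperparameters are illustrative; the contrastive training of the prototypes themselves is omitted):

```python
import torch
import torch.nn.functional as F

def conspec_intrinsic_reward(state_enc, prototypes, threshold=0.6, scale=1.0):
    """If the current state's encoding closely matches any success prototype,
    emit an intrinsic bonus that is added to the external reward.

    state_enc: (D,) encoding of the current state.
    prototypes: (K, D) prototypes distilled from successful episodes.
    """
    sims = F.cosine_similarity(state_enc.unsqueeze(0), prototypes, dim=1)  # (K,)
    best = sims.max()
    return scale * best if best > threshold else torch.tensor(0.0)
```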
Generative Flow Networks (GFlowNets) are a family of algorithms for training sequential samplers of discrete objects under an unnormalized target density, and have been successfully used for various probabilistic modelling tasks. Existing training objectives for GFlowNets are either local to states or transitions, or propagate a reward signal over an entire sampling trajectory. We argue that these alternatives represent opposite ends of a gradient bias-variance tradeoff, and propose a way to exploit this tradeoff to mitigate its harmful effects. Inspired by the TD($\lambda$) algorithm in reinforcement learning, we introduce subtrajectory balance, or SubTB($\lambda$), a GFlowNet training objective that can learn from partial action subsequences of varying lengths. We show that SubTB($\lambda$) accelerates sampler convergence in previously studied and new environments, and enables training GFlowNets in environments with longer action sequences and sparser rewards than what was possible before. We also perform a comparative analysis of stochastic gradient dynamics, shedding light on the bias-variance tradeoff in GFlowNet training and the advantages of subtrajectory balance.
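As a sketch of the objective (our restatement; $F$ denotes the learned state-flow function and $P_F$, $P_B$ the forward and backward policies), each subtrajectory $s_i \to \dots \to s_j$ of a sampled trajectory $s_0 \to \dots \to s_n$ contributes a squared log-ratio term, and SubTB($\lambda$) averages these with geometric weights:

$$\mathcal{L}_{i,j} = \left( \log \frac{F(s_i) \prod_{k=i}^{j-1} P_F(s_{k+1} \mid s_k)}{F(s_j) \prod_{k=i}^{j-1} P_B(s_k \mid s_{k+1})} \right)^2, \qquad \mathcal{L}_{\mathrm{SubTB}(\lambda)} = \frac{\sum_{0 \le i < j \le n} \lambda^{j-i}\, \mathcal{L}_{i,j}}{\sum_{0 \le i < j \le n} \lambda^{j-i}}.$$

Small $\lambda$ emphasizes short, transition-local segments (lower variance, higher bias), while large $\lambda$ approaches the full-trajectory objective (lower bias, higher variance), which is exactly the tradeoff the method interpolates across.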
The theory of identifiable representation learning aims to build general-purpose methods that extract high-level latent (causal) factors from low-level sensory data. Most existing works focus on identifiable representation learning with observational data and rely on distributional assumptions on the latent (causal) factors. In practice, however, we often also have access to interventional data for representation learning. How can we leverage interventional data to help identify the high-level latents? To this end, we explore the role of interventional data in identifiable representation learning in this work. We study the identifiability of latent causal factors under minimal distributional assumptions, both with and without interventional data. We prove that, if the true latent variables map to the observed high-dimensional data via a polynomial function, then representation learning via minimizing the standard reconstruction loss of autoencoders identifies the true latents up to affine transformation. If we further have access to interventional data generated by hard $do$ interventions, then we can identify these intervened latents.
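A compact restatement of the observational claim above (our paraphrase, with illustrative notation): if the data is generated as $x = g(z)$ for a polynomial decoder $g$, then any encoder-decoder pair attaining zero reconstruction loss recovers the latents up to an affine map,

$$\hat{z} = A z + c \quad \text{for some invertible matrix } A \text{ and offset } c.$$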
The ability to accelerate the design of biological sequences can have a substantial impact on progress in the medical field. The problem can be framed as a global optimization problem in which the objective is an expensive black-box function that we can query in large batches, but only over a limited number of rounds. Bayesian optimization is a principled approach to this problem; however, the astronomically large state space of biological sequences makes searching over all possible sequences infeasible. In this paper, we propose MetaRLBO, in which we train an autoregressive generative model via meta-reinforcement learning to propose promising sequences for selection via Bayesian optimization. We pose this problem as that of finding an optimal policy over a distribution of MDPs induced by sampling subsets of the data acquired in previous rounds. Our in-silico experiments show that meta-learning over such ensembles provides robustness against reward misspecification and achieves competitive results compared to existing strong baselines.
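The meta-objective described above can be written compactly (our paraphrase; the notation is illustrative): with $\mathcal{D}_t$ the data acquired up to round $t$ and $\mathcal{M}$ an MDP induced by a sampled subset of it,

$$\pi^\ast = \arg\max_{\pi} \; \mathbb{E}_{\mathcal{M} \sim p(\mathcal{M} \mid \mathcal{D}_t)} \, \mathbb{E}_{\tau \sim \pi,\, \mathcal{M}} \left[ R_{\mathcal{M}}(\tau) \right],$$

where the reward $R_{\mathcal{M}}$ comes from a surrogate model fit to the sampled subset, so the policy is trained to perform well across an ensemble of surrogates rather than against any single, possibly misspecified, reward.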
There are many frameworks for deep generative modelling, each often presented with its own specific training algorithms and inference methods. We present a short note on the connections between existing deep generative models and the GFlowNet framework, shedding light on their overlapping traits and providing a unifying viewpoint through the lens of learning with Markovian trajectories. This offers a means of unifying training and inference algorithms, and provides a route to construct an agglomeration of generative models.