Motivated by the remarkable ability of deep neural networks to generate realistic synthetic data, we revisit the classical signal-to-symbol barrier. DeepFakes highlight the connection between physical reality and its abstract representations, whether learned by a digital computer or a biological agent. Starting from a broadly applicable definition of abstract concept, we show that standard feed-forward architectures cannot capture even trivial concepts, regardless of the number of weights and the amount of training data, despite being highly effective classifiers. Architectures that incorporate recursion, on the other hand, can represent a larger class of concepts, but may still be unable to learn them from a finite dataset. We qualitatively describe the class of concepts that modern architectures can "understand", using a (free-energy) Lagrangian to measure information complexity. However, even once a concept has been understood, a network has no means of communicating its understanding to an external agent, except through continuous interaction and validation. We then characterize physical objects as abstract concepts and use the preceding analysis to show that physical objects can be encoded by finite architectures. However, to understand physical concepts, sensors must provide persistently exciting observations, making control of the data-acquisition process essential (active perception). The importance of control depends on the modality, affecting visual sensing more than acoustic or chemical sensing. Finally, we conclude that binding physical entities to digital identities is possible in finite time with finite resources, resolving in principle the signal-to-symbol barrier problem, but we highlight the need for continuous validation.
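To make the feed-forward limitation concrete, consider a concept defined over inputs of unbounded length, such as the parity of a binary string: a fixed-size feed-forward network only ever sees strings of one bounded length, while a recurrent unit with a single hidden state represents the concept for strings of any length. The sketch below uses parity as an assumed stand-in for a "trivial concept"; the paper's formal definition differs.

```python
# Minimal illustration of the representational gap discussed above: a fixed
# two-state recurrent unit represents the parity "concept" over binary strings
# of ANY length, whereas a feed-forward network with a fixed input size can
# only ever handle strings of one bounded length. Parity is an assumed
# stand-in for a "trivial concept"; the paper's formal definition differs.

def parity_rnn(bits):
    """Recurrent computation: one hidden state, updated once per symbol."""
    h = 0            # hidden state
    for b in bits:
        h = h ^ b    # recurrence: XOR update
    return h

print(parity_rnn([1, 0, 1, 1]))  # 1 (odd number of ones)
print(parity_rnn([1, 1]))        # 0; works for any input length
```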
We derive theoretical generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI). In contrast to other black-box bounds that do not exploit the structure of the problem and may be hard to evaluate in practice, our loo-CMI bounds can be computed easily and can be interpreted in connection with other notions such as classical leave-one-out cross-validation, the stability of the optimization algorithm, and the geometry of the loss landscape. They apply both to the output of the training algorithm and to its predictions. We empirically validate the quality of the bounds by evaluating the generalization gap they predict in the setting of deep learning. In particular, our bounds are non-vacuous on large-scale image-classification tasks.
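Since the bound is interpreted through classical leave-one-out cross-validation, the following sketch computes the LOO error that the loo-CMI quantity connects to. It is a generic illustration using scikit-learn; the model, data, and scoring below are placeholders, not the paper's setup.

```python
# Classical leave-one-out cross-validation: train on n-1 samples, test on the
# held-out one, and average over all n held-out choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=50) > 0).astype(int)  # toy labels

scores = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut())
print("LOO error:", 1.0 - scores.mean())
```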
We introduce a method to compute the derivative of a learning task with respect to a dataset. A learning task is a function from a training set to the validation error, and it can be represented by a trained deep neural network (DNN). The "dataset derivative" is a linear operator, computed around the trained model, that informs how perturbations of the weight of each training sample affect the validation error, usually computed on a separate validation dataset. Our method, DIVA (differentiable validation), hinges on a closed-form differentiable expression of the leave-one-out cross-validation error around a pre-trained DNN. Such an expression constitutes the dataset derivative. DIVA can be used for dataset auto-curation, for example removing samples with faulty annotations, augmenting a dataset with additional relevant samples, or rebalancing it. More generally, DIVA can be used to optimize the dataset, along with the parameters of the model, as part of the training process, without the need for a separate validation dataset, unlike the bi-level optimization approaches customary in AutoML. To illustrate DIVA's flexibility, we report experiments on sample auto-curation tasks such as outlier rejection, dataset extension, and automatic aggregation of multi-modal data.
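As a rough illustration of a dataset derivative (not the paper's closed-form leave-one-out expression), one can attach a weight to each training sample, unroll a single gradient step, and differentiate the validation loss with respect to those weights. The one-step approximation and all names below are assumptions.

```python
# Toy "dataset derivative": differentiate the validation loss w.r.t.
# per-sample training weights through one unrolled SGD step.
import torch

torch.manual_seed(0)
X_tr, y_tr = torch.randn(20, 3), torch.randn(20, 1)
X_va, y_va = torch.randn(10, 3), torch.randn(10, 1)

w = torch.zeros(3, 1, requires_grad=True)   # linear model parameters
s = torch.ones(20, requires_grad=True)      # one weight per training sample
lr = 0.1

# Weighted training loss and a single unrolled SGD step.
residuals = ((X_tr @ w - y_tr) ** 2).squeeze(1)
train_loss = (s * residuals).mean()
g, = torch.autograd.grad(train_loss, w, create_graph=True)
w_new = w - lr * g

# Gradient of the validation loss w.r.t. the sample weights: how perturbing
# each sample's weight moves the validation error (a crude dataset derivative).
val_loss = ((X_va @ w_new - y_va) ** 2).mean()
dataset_derivative, = torch.autograd.grad(val_loss, s)
print(dataset_derivative)  # negative entries: up-weighting that sample helps
```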
We propose a new framework, Translation between Augmented Natural Languages (TANL), that solves many structured prediction language tasks, including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling these problems by training task-specific discriminative classifiers, we frame them as translation tasks between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular achieves new state-of-the-art results on joint entity and relation extraction (the CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks, and even when training a single model to solve all the tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve performance in a low-resource regime, thanks to better use of label semantics.
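To illustrate what an "augmented natural language" output might look like, and how task-relevant information can be extracted from it, the sketch below decodes a bracket-annotated string into entity and relation tuples. The markup only loosely follows the paper's format and should be read as an assumption, not its exact grammar.

```python
# Decode an augmented-natural-language string of the (assumed) form
# "[ span | type | relation = target ]" into structured tuples.
import re

augmented = ("[ Tolkien | person | author = The Lord of the Rings ] wrote "
             "[ The Lord of the Rings | book ].")

entities, relations = [], []
for span, annot in re.findall(r"\[ (.+?) \| (.+?) \]", augmented):
    parts = [p.strip() for p in annot.split("|")]
    entities.append((span, parts[0]))            # first field: entity type
    for rel in parts[1:]:                        # remaining fields: relations
        name, target = [x.strip() for x in rel.split("=")]
        relations.append((span, name, target))

print(entities)   # [('Tolkien', 'person'), ('The Lord of the Rings', 'book')]
print(relations)  # [('Tolkien', 'author', 'The Lord of the Rings')]
```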
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of the downstream neurons uses its copy of this signal as one of many dendritic inputs, integrates them all, and fires an output if the result exceeds some threshold. In the artificial neural network, this translates to the nonlinear filtering of the signal being performed in the upstream neuron, so that in practice the same activation is shared by all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model of the biological neuron, in which dendrites play an active role: the activation at the output of the upstream neuron becomes optional, and instead the signals passing through each dendrite undergo independent nonlinear filterings before the linear combination. We implement this new model as a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate the resulting change in FLOPs and number of weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements of up to 1.73% over standard ResNets. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
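The abstract mentions a Keras implementation; the following is an independent minimal sketch of the idea, assuming a per-connection (scale, bias) pair so that each dendrite applies its own ReLU before the linear combination. The layer name and the exact parametrization are assumptions, not the authors' code.

```python
# "Active dendrite" dense layer: each (input, unit) connection gets its own
# ReLU filtering before the linear combination, instead of one shared
# activation on the upstream neuron.
import tensorflow as tf

class DendriticDense(tf.keras.layers.Layer):  # hypothetical name
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        d = input_shape[-1]
        # One (scale, bias, weight) triple per dendrite, i.e., per connection.
        self.a = self.add_weight(shape=(d, self.units), initializer="ones")
        self.b = self.add_weight(shape=(d, self.units), initializer="zeros")
        self.w = self.add_weight(shape=(d, self.units),
                                 initializer="glorot_uniform")

    def call(self, x):
        # x: (batch, d) -> (batch, d, units); independent ReLU per dendrite,
        # then the linear combination over the d inputs.
        z = tf.nn.relu(self.a * x[..., None] + self.b)
        return tf.reduce_sum(self.w * z, axis=1)

layer = DendriticDense(8)
print(layer(tf.random.normal((4, 16))).shape)  # (4, 8)
```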
Fruit is a key crop in worldwide agriculture, feeding millions of people. The standard supply chain of fruit products involves quality checks to guarantee freshness, taste, and, most of all, safety. An important factor that determines fruit quality is its stage of ripening. This is usually classified manually by experts in the field, which makes it a labor-intensive and error-prone process. Thus, there is a growing need for automation of fruit ripeness classification. Many automatic methods have been proposed that employ a variety of feature descriptors of the food item to be graded. Machine learning and deep learning techniques dominate the top-performing methods. Furthermore, deep learning can operate on raw data and thus relieve users of the need to compute complex engineered features, which are often crop-specific. In this survey, we review the latest methods proposed in the literature to automate fruit ripeness classification, highlighting the most common feature descriptors they operate on.
Artificial neural networks can learn complex, salient data features to achieve a given task. On the opposite end of the spectrum, mathematically grounded methods such as topological data analysis allow users to design analysis pipelines fully aware of data constraints and symmetries. We introduce a class of persistence-based neural network layers. Persistence-based layers allow users to easily inject knowledge about symmetries (equivariance) respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
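One way such a layer can respect the permutation symmetry of a persistence diagram is a DeepSets-style construction: a shared learnable map on each (birth, death) point followed by invariant pooling. The sketch below is an illustrative instance of this idea, not the specific layers proposed in the paper.

```python
# Permutation-invariant layer over persistence diagrams: shared per-point
# embedding with learnable weights, then sum pooling.
import torch
import torch.nn as nn

class PersistenceLayer(nn.Module):  # hypothetical name
    def __init__(self, hidden=32, out=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out))

    def forward(self, diagram):
        # diagram: (n_points, 2) tensor of (birth, death) pairs.
        # Shared per-point map + sum pooling => invariant to point order.
        return self.phi(diagram).sum(dim=0)

diagram = torch.tensor([[0.0, 1.2], [0.3, 0.9], [0.1, 2.0]])
layer = PersistenceLayer()
out1 = layer(diagram)
out2 = layer(diagram[[2, 0, 1]])   # same diagram, permuted points
print(torch.allclose(out1, out2))  # True: permutation invariance holds
```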
In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems. These are problems involving multiple reward signals, where the goal is to learn a policy that maximises the first reward signal and, subject to this constraint, also maximises the second reward signal, and so on. We present a family of both action-value and policy gradient algorithms that can be used to solve such problems, and prove that they converge to policies that are lexicographically optimal. We evaluate the scalability and performance of these algorithms empirically, demonstrating their practical applicability. As a more specific application, we show how our algorithms can be used to impose safety constraints on the behaviour of an agent, and compare their performance in this context with that of other constrained reinforcement learning algorithms.
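The lexicographic criterion can be made concrete with a simple action-selection rule over per-objective value estimates: keep the actions that are (near-)optimal for the first objective, then filter the survivors by the second, and so on. The tolerance-based tie relaxation below is an assumption, not necessarily the paper's construction.

```python
# Lexicographic action selection over tabular per-objective Q-values.
import numpy as np

def lexicographic_actions(q_values, tol=1e-6):
    """q_values: list of per-objective arrays, each of shape (n_actions,),
    ordered by priority. Returns the surviving action indices."""
    candidates = np.arange(len(q_values[0]))
    for q in q_values:                   # objectives in priority order
        best = q[candidates].max()
        # Keep only the actions (near-)optimal for this objective.
        candidates = candidates[q[candidates] >= best - tol]
    return candidates

q1 = np.array([1.0, 1.0, 0.2])   # primary objective: actions 0 and 1 tie
q2 = np.array([0.0, 0.7, 0.9])   # secondary objective breaks the tie
print(lexicographic_actions([q1, q2]))  # [1]
```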
In contextual linear bandits, the reward function is assumed to be a linear combination of an unknown reward vector and a given embedding of context-arm pairs. In practice, the embedding is often learned at the same time as the reward vector, thus leading to an online representation learning problem. Existing approaches to representation learning in contextual bandits are either very generic (e.g., model-selection techniques or algorithms for learning with arbitrary function classes) or specialized to particular structures (e.g., nested features or representations with certain spectral properties). As a result, the understanding of the cost of representation learning in contextual linear bandits is still limited. In this paper, we take a systematic approach to the problem and provide a comprehensive study through an instance-dependent perspective. We show that representation learning is fundamentally more complex than linear bandits (i.e., learning with a given representation). In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set, while we show cases where it can be arbitrarily harder. We complement this result with an extensive discussion of how it relates to the existing literature, and we illustrate positive instances where representation learning is as complex as learning with a fixed representation and where sub-logarithmic regret is achievable.
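For reference, "learning with a given representation" is the standard linear-bandit setting, sketched below as LinUCB over a fixed embedding phi; representation learning, as discussed above, replaces this fixed phi with a learned map. The exploration constant and problem sizes are arbitrary placeholders.

```python
# LinUCB with a given (fixed) embedding: the baseline problem that
# representation learning generalizes.
import numpy as np

d, n_arms, T = 5, 4, 2000
rng = np.random.default_rng(0)
theta = rng.normal(size=d)             # unknown reward vector
phi = rng.normal(size=(n_arms, d))     # given embedding of the arms

A = np.eye(d)                          # ridge-regularized design matrix
b = np.zeros(d)
theta_hat = np.zeros(d)
for t in range(T):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # Optimistic score: estimated reward plus an exploration bonus.
    bonus = np.sqrt(np.einsum("ad,de,ae->a", phi, A_inv, phi))
    a = int(np.argmax(phi @ theta_hat + 0.5 * bonus))
    r = phi[a] @ theta + 0.1 * rng.normal()  # noisy linear reward
    A += np.outer(phi[a], phi[a])
    b += r * phi[a]

print("estimation error:", np.linalg.norm(theta_hat - theta))
```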
Relation extraction (RE) is a sub-discipline of information extraction (IE) which focuses on the prediction of a relational predicate from a natural-language input unit (such as a sentence, a clause, or even a short paragraph consisting of multiple sentences and/or clauses). Together with named-entity recognition (NER) and disambiguation (NED), RE forms the basis for many advanced IE tasks such as knowledge-base (KB) population and verification. In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE by encoding structured information about the sentences' principal units, such as subjects, objects, verbal phrases, and adverbials, into various forms of vectorized (and hence unstructured) representations of the sentences. Our main conjecture is that the decomposition of long and possibly convoluted sentences into multiple smaller clauses via OpenIE even helps to fine-tune context-sensitive language models such as BERT (and its plethora of variants) for RE. Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models compared to existing RE approaches. Our best results reach F1 scores of 92% and 71% on KnowledgeNet and FewRel, respectively, proving the effectiveness of our approach on competitive benchmarks.
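A minimal sketch of the enrichment idea, under the assumption that one simply concatenates OpenIE-style clauses to the input before encoding with BERT: the clause strings below are hand-written placeholders standing in for the output of an actual OpenIE system, and the five relation labels are arbitrary.

```python
# Encode a sentence together with its clause decomposition so a BERT-based
# relation classifier sees the structured units explicitly.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)  # 5 relation classes, assumed

sentence = "Tolkien, who taught at Oxford, wrote The Lord of the Rings."
clauses = ["Tolkien taught at Oxford",          # placeholder OpenIE output
           "Tolkien wrote The Lord of the Rings"]

# Pack sentence and clauses as one pair: sentence vs. its decomposition.
inputs = tok(sentence, " [SEP] ".join(clauses),
             return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, 5) relation scores
```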