An approach to evolutionary ensemble learning for classification is proposed in which boosting is used to construct a stack of programs. Each application of boosting identifies a single champion and a residual dataset, i.e. the training records that have not yet been correctly classified. The next program is trained only on this residual, and the process iterates until a maximum ensemble size is reached or no residual remains. Training against a residual dataset actively reduces the cost of training. Deploying the ensemble as a stack also means that a single classifier may be sufficient to make a prediction, thereby improving interpretability. Benchmarking studies illustrate competitiveness with the prediction accuracy of current state-of-the-art evolutionary ensemble learning algorithms, while providing solutions that are orders of magnitude simpler. Further benchmarking on a high-cardinality dataset indicates that the proposed method is also more accurate and efficient than XGBoost.
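A minimal sketch of the residual-driven training loop and stacked deployment described above; a scikit-learn decision tree stands in for an evolved program, and the confidence threshold used to route predictions through the stack is an assumption rather than the paper's rule.

```python
# Hedged sketch of the residual-based boosting stack; a DecisionTreeClassifier
# stands in for an evolved program (the GP champion selection is not reproduced).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_stack(X, y, max_members=10):
    """Train members sequentially, each only on the residual (still-misclassified) records."""
    stack = []
    idx = np.arange(len(y))                  # indices of the current residual
    while len(idx) > 0 and len(stack) < max_members:
        member = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        correct = member.predict(X[idx]) == y[idx]
        stack.append(member)
        idx = idx[~correct]                  # the next member only sees what is still wrong
    return stack

def predict_stack(stack, x):
    """Walk the stack; the first member confident about x makes the prediction."""
    x = x.reshape(1, -1)
    for member in stack[:-1]:
        proba = member.predict_proba(x)[0]
        if proba.max() >= 0.9:               # assumed confidence threshold, not from the paper
            return member.classes_[proba.argmax()]
    return stack[-1].predict(x)[0]           # fall through to the last member
```

In this reading, most test records are resolved by the first one or two members, which is what makes the stacked deployment cheaper and easier to inspect than evaluating a full ensemble.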
Planning is an extraordinary ability in which the brain imagines and then enacts evaluated possible futures. Using traditional planning models, computer scientists have attempted to replicate this capacity with some level of success but ultimately face a recurring limitation: as the plan grows in steps, the number of different possible futures makes it intractable to determine the right sequence of actions to reach a goal state. Based on prior theoretical work on how the ecology of an animal governs the value of spatial planning, we developed a more efficient biologically-inspired planning algorithm, TLPPO. This algorithm allows us to achieve mouse-level predator evasion performance with orders of magnitude less computation than a widespread algorithm for planning under the partial observability that typifies predator-prey interactions. We compared the performance of a real-time agent using TLPPO against the performance of live mice, all tasked with evading a robot predator. We anticipate these results will be helpful to planning algorithm users and developers, as well as to areas of neuroscience where robot-animal interaction can provide a useful approach to studying the basis of complex behaviors.
Collaborative robots (cobots) that work alongside people must be able to quickly learn new skills and adapt to new task configurations. Learning from demonstration (LfD) enables cobots to learn and adapt to different operating conditions. However, state-of-the-art LfD methods require manual tuning of intrinsic parameters and are rarely used in industrial settings without experts. In this paper, the development and implementation of an LfD approach for industrial applications with naive users is presented. We propose a parameter-free method based on probabilistic movement primitives, in which all parameters are pre-determined using Jensen-Shannon divergence and Bayesian optimization; the user therefore does not have to perform manual parameter tuning. The method learns motions from a small dataset of user demonstrations and generalizes them to a variety of scenarios and conditions. We evaluated the method extensively in two field tests: one in which the cobot works on elevator door maintenance, and one in which three Schindler workers taught the cobot tasks useful to their workflow. Errors between the cobot end-effector and target positions range from $0$ to $1.48 \pm 0.35$ mm. No task failures were reported in any of the tests. Questionnaires completed by the Schindler workers highlighted the method's ease of use, perceived safety, and the repeatability of the motions. Our code and the recorded trajectories are available online for reproduction.
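As a rough illustration of the movement-primitive machinery involved, the sketch below fits a one-degree-of-freedom probabilistic movement primitive to a handful of demonstrations; the basis count, basis width, and ridge term are illustrative assumptions, not the values the paper selects via Jensen-Shannon divergence and Bayesian optimization.

```python
# Hedged sketch: fit a 1-DoF probabilistic movement primitive (ProMP) to a few
# demonstrated trajectories and reproduce the mean motion.
import numpy as np

def rbf_features(phase, n_basis=15, width=0.02):
    """Normalised radial-basis features over a phase variable in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(phase[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=15, ridge=1e-6):
    """Return mean and covariance of the basis weights fitted to each demonstration."""
    weights = []
    for traj in demos:                        # traj: positions over time, shape (T,)
        phase = np.linspace(0, 1, len(traj))
        phi = rbf_features(phase, n_basis)
        w = np.linalg.solve(phi.T @ phi + ridge * np.eye(n_basis), phi.T @ traj)
        weights.append(w)
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W, rowvar=False)

def mean_trajectory(w_mean, horizon=100, n_basis=15):
    """Reproduce the mean motion over a fresh time horizon."""
    phi = rbf_features(np.linspace(0, 1, horizon), n_basis)
    return phi @ w_mean
```

The weight covariance returned by `fit_promp` is what carries the variability across demonstrations and allows the motion to be conditioned on new start or target positions.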
We present FOLIO, a human-annotated, open-domain, logically complex and diverse dataset for natural language (NL) reasoning, equipped with first-order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises, which serve as rules to deductively reason about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the main NL reasoning task, the NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset using FOL as the logical form. With extensive experiments, we systematically evaluate the FOL reasoning ability of medium-sized language models (BERT, RoBERTa) under fine-tuning and of large language models (GPT-NeoX, OPT, GPT-3, Codex) under few-shot prompting. For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable publicly available large language models (LLMs), GPT-3 davinci, performs only slightly better than random on a subset of FOLIO, and is especially poor at predicting the correct truth values for False and Unknown conclusions. Our dataset and code are available at https://github.com/yale-lily/folio.
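A hedged sketch of what a FOLIO-style record and a three-way scoring function might look like; the field names and the example content are illustrative assumptions for exposition, not the dataset's actual schema or data.

```python
# Hypothetical record layout: NL premises and conclusion, their parallel FOL
# annotations, and a three-way validity label. Not the released schema.
example = {
    "premises_nl": ["All birds can fly.", "A penguin is a bird."],
    "premises_fol": ["∀x (Bird(x) → CanFly(x))", "Bird(penguin)"],
    "conclusion_nl": "A penguin can fly.",
    "conclusion_fol": "CanFly(penguin)",
    "label": "True",        # one of {"True", "False", "Unknown"}, relative to the premises
}

def accuracy(predictions, gold):
    """Three-way accuracy over True / False / Unknown labels."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)
```

Note that the label reflects deductive validity given the stated premises, not real-world truth, which is why a conclusion such as "A penguin can fly." is labeled True here.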
Neural networks and related deep learning methods are currently at the forefront of techniques used for classifying objects. However, they generally require large amounts of time and data for model training, and the models they learn can be hard to interpret. In this paper, we advance FastMapSVM, an interpretable machine learning framework for classifying complex objects, as an advantageous alternative to neural networks for general classification tasks. FastMapSVM extends the applicability of support vector machines (SVMs) to domains with complex objects by combining the complementary strengths of FastMap and SVMs. FastMap is an efficient linear-time algorithm that maps complex objects to points in a Euclidean space while preserving pairwise domain-specific distances between them. We demonstrate the efficiency and effectiveness of FastMapSVM in the context of classifying seismograms. We show that its performance, in terms of precision, recall, and accuracy, is comparable to that of other state-of-the-art methods. However, compared to other methods, FastMapSVM requires significantly smaller amounts of time and data for model training. It also provides a perspicuous visualization of the objects and the classification boundaries between them. We expect FastMapSVM to be viable for classification tasks in many other real-world domains.
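The FastMap-then-SVM pipeline can be sketched as follows; the pivot-selection heuristic, the number of embedding dimensions, and the placeholder names in the usage note (train_objects, my_distance, train_labels) are assumptions for illustration, not the paper's exact recipe.

```python
# Hedged sketch: FastMap embeds complex objects into a low-dimensional Euclidean
# space using only a pairwise distance function, after which an ordinary SVM is trained.
import numpy as np
from sklearn.svm import SVC

def fastmap(objects, dist, n_dims=4):
    """Embed objects so Euclidean distances roughly preserve the domain distance `dist`."""
    n = len(objects)
    coords = np.zeros((n, n_dims))

    def residual(i, j, k):
        # distance remaining after accounting for the first k coordinates
        d2 = dist(objects[i], objects[j]) ** 2 - np.sum((coords[i, :k] - coords[j, :k]) ** 2)
        return np.sqrt(max(d2, 0.0))

    for k in range(n_dims):
        a = 0
        b = max(range(n), key=lambda j: residual(a, j, k))   # heuristic pivot pair:
        a = max(range(n), key=lambda j: residual(b, j, k))   # two far-apart objects
        d_ab = residual(a, b, k)
        if d_ab == 0:
            break
        for i in range(n):
            # cosine-law projection of object i onto the pivot line (a, b)
            coords[i, k] = (residual(a, i, k) ** 2 + d_ab ** 2 - residual(b, i, k) ** 2) / (2 * d_ab)
    return coords

# Usage (hypothetical names): embed training objects with a domain distance,
# e.g. a waveform cross-correlation distance for seismograms, then fit an SVM.
# X = fastmap(train_objects, my_distance); clf = SVC(kernel="rbf").fit(X, train_labels)
```

Because only the pairwise distance function is domain-specific, the same pipeline transfers to any object type for which such a distance can be defined.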
Convolutional Neural Networks (CNNs) have proven very effective in image classification and show promise for audio. We use various CNN architectures to classify the soundtracks of a dataset of 70M training videos (5.24 million hours) with 30,871 video-level labels. We examine fully connected Deep Neural Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We investigate varying the size of both training set and label vocabulary, finding that analogs of the CNNs used in image classification do well on our audio classification task, and larger training and label sets help up to a point. A model using embeddings from these classifiers does much better than raw features on the Audio Set [5] Acoustic Event Detection (AED) classification task.
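A hedged sketch of the embeddings-versus-raw-features comparison mentioned above: the same shallow classifier is fit once on raw frame-level features and once on embeddings taken from a pretrained audio CNN. Array names and dimensionalities are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: compare a shallow classifier on raw audio features vs. on
# embeddings from a pretrained audio CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

def shallow_classifier(features, labels):
    """Fit a simple logistic-regression classifier on precomputed features."""
    return LogisticRegression(max_iter=1000).fit(features, labels)

# raw_features:   e.g. flattened log-mel spectrogram frames, shape (N, 6400)   (assumed)
# cnn_embeddings: bottleneck activations from a pretrained audio CNN, shape (N, 128)  (assumed)
# clf_raw = shallow_classifier(raw_features, labels)
# clf_emb = shallow_classifier(cnn_embeddings, labels)   # expected to score higher on AED
```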