Many everyday activities and psychophysics experiments involve maintaining multiple items in working memory. When the items take continuous values (e.g., orientation, contrast, length, loudness), they must be stored in a continuous structure of appropriate dimensionality. We investigate how this could be done in a neural circuit by training recurrent networks to report two previously shown stimulus orientations. We find that the activity manifold for the two orientations resembles a Clifford torus. Although the Clifford torus and the standard torus (the surface of a donut) are topologically equivalent, they have important functional differences: the Clifford torus treats the two orientations equally and keeps them in orthogonal subspaces, as demanded by the task, whereas the standard torus does not. We find and characterize the connectivity patterns that support the Clifford torus. Moreover, in addition to attractors that store information via persistent activity, our networks also use a dynamic code in which units change their tuning to prevent new sensory input from overwriting the previously stored one. We argue that such a dynamic code is generally required whenever multiple inputs enter a memory system through shared connections. Finally, we apply our framework to a human psychophysics experiment in which subjects reported two memorized orientations. By varying the training conditions of the RNNs, we test and support the hypothesis that human behavior is a product of both neural noise and a more stable, behaviorally relevant memory of the ordinal relation between the two orientations. This suggests that suitable inductive biases in RNNs are important for uncovering how the human brain implements working memory. Together, these results offer an understanding of the neural computations underlying a class of visual decoding tasks, reaching from human behavior down to synaptic connectivity.
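As a rough illustration of this kind of setup (not the authors' actual model; the architecture, sizes, trial structure, and loss below are assumptions), a recurrent network can be trained to report two orientations, each encoded as a point on a circle via (cos 2θ, sin 2θ), after a delay:

```python
# Minimal sketch (assumed architecture, sizes, and loss) of training an RNN
# to remember two stimulus orientations presented through a shared input line.
import torch
import torch.nn as nn

def encode(theta):                          # orientation -> (cos 2θ, sin 2θ)
    return torch.stack([torch.cos(2 * theta), torch.sin(2 * theta)], dim=-1)

class MemoryRNN(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 4)  # (cos, sin) for each of the two items

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h[:, -1])        # decode at the end of the delay

def make_batch(batch=64, stim=5, delay=20):
    th1 = torch.rand(batch) * torch.pi
    th2 = torch.rand(batch) * torch.pi
    seq = torch.zeros(batch, 2 * stim + delay, 2)
    seq[:, :stim] = encode(th1)[:, None]             # first stimulus
    seq[:, stim:2 * stim] = encode(th2)[:, None]     # second stimulus, same input line
    target = torch.cat([encode(th1), encode(th2)], dim=-1)
    return seq, target

model = MemoryRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                              # toy training loop
    x, y = make_batch()
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Inspecting the hidden states of such a network across all (θ1, θ2) pairs is what would reveal whether the resulting activity manifold is closer to a Clifford torus or to a standard torus.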
Transformer variants dominate the state of the art in natural language processing tasks such as translation, reading comprehension, and summarization. Our paper focuses on adding general memory slots to the inputs and studying the effects of these slots. It is a follow-up study of the role of the general memory slots that were added to the input of the proposed model in previous work. We consider two main tasks: (1) a pretraining task using masked language modeling and (2) a fine-tuning task using HotpotQA. This study aims to verify the ability of the proposed model to handle multiple chunks as if they were a single chunk, compared with the base model. As a baseline we use the T5 transformer. We study the role of the memory slots appended to each input chunk and examine the model's performance without the selector. We find that adding memory to input chunks helps the proposed model outperform the baseline on the masked language modeling task under specific training parameters. An ablation study shows that compressed input chunks can be used, at the cost of some degradation in performance.
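The abstract does not spell out the implementation, but one generic way to augment each input chunk with memory slots is to prepend learnable slot embeddings before a transformer encoder, as in the sketch below (the names, sizes, and plain PyTorch encoder are assumptions; a real setup would build on the T5 tokenizer and model):

```python
# Generic sketch (not the paper's code) of augmenting each input chunk with
# learnable memory slots before a transformer encoder.
import torch
import torch.nn as nn

class MemorySlotEncoder(nn.Module):
    def __init__(self, vocab=32128, d_model=512, n_slots=8, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.slots = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, chunk_ids):                       # (batch, chunk_len)
        tok = self.embed(chunk_ids)                     # (batch, chunk_len, d)
        mem = self.slots.expand(chunk_ids.size(0), -1, -1)
        x = torch.cat([mem, tok], dim=1)                # prepend memory slots
        out = self.encoder(x)
        # The slot positions now hold a compressed summary of the chunk.
        return out[:, : self.slots.size(0)]             # (batch, n_slots, d)

enc = MemorySlotEncoder()
summary = enc(torch.randint(0, 32128, (4, 128)))        # 4 chunks of 128 tokens
print(summary.shape)                                    # torch.Size([4, 8, 512])
```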
As social media grows, harassment becomes more prevalent, which has made fake-account detection a fascinating field among researchers. The graph nature of the data, with its large number of nodes, raises several obstacles, including a considerable amount of unrelated features in the matrices, high dispersion, and imbalanced classes in the dataset. To deal with these issues, we use autoencoders together with a combination of semi-supervised learning and the GAN algorithm, known as SGAN. This paper deploys a small number of labels and applies SGAN as a classifier. Our results show that the accuracy reaches 91\% in detecting fake accounts using only 100 labeled samples.
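A common way to realize an SGAN classifier with few labels (the details here are assumptions, not the paper's code) is to give the discriminator the K real classes plus one extra "generated" class and train it on labeled, unlabeled, and generated samples:

```python
# Minimal SGAN-style discriminator sketch (assumed architecture and losses):
# K real classes (here 2: genuine / fake account) plus one "generated" class.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, FEAT, NOISE = 2, 64, 32

D = nn.Sequential(nn.Linear(FEAT, 128), nn.ReLU(), nn.Linear(128, K + 1))
G = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, FEAT))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def d_step(x_lab, y_lab, x_unlab):
    """x_lab/y_lab: the few labeled accounts; x_unlab: unlabeled account features."""
    fake = G(torch.randn(x_unlab.size(0), NOISE)).detach()
    fake_target = torch.full((fake.size(0),), K, dtype=torch.long)
    logits_unlab = D(x_unlab)
    loss = (
        F.cross_entropy(D(x_lab), y_lab)              # supervised term (100 labels)
        + F.cross_entropy(D(fake), fake_target)       # generated samples -> class K
        # unlabeled samples should land in *some* real class, not the fake class:
        - torch.logsumexp(logits_unlab[:, :K], dim=1).mean()
        + torch.logsumexp(logits_unlab, dim=1).mean()
    )
    opt_d.zero_grad(); loss.backward(); opt_d.step()
    # (the generator update, which pushes fakes toward the real classes, is omitted)
    return loss.item()
```

In such a setup, the 100 labeled accounts drive only the supervised term, while the bulk of the graph-derived features enters through the unlabeled and generated terms.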
Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is that these networks cannot be easily evaluated for robustness issues with respect to specific scene variations. For example, it is hard to study the robustness of these networks to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations of sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that allows us to study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to pose counterfactual questions to the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers, with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io
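At evaluation time, the framework reduces to comparing a model's prediction on a base rendering with its predictions on counterfactual re-renderings of the same scene. A schematic of that comparison (the data layout and names below are assumptions, not the NVD format) might look like:

```python
# Schematic counterfactual-robustness evaluation loop (dataset layout assumed).
from dataclasses import dataclass
from typing import Callable, Dict, List
import numpy as np

@dataclass
class ScenePair:
    base_image: np.ndarray      # rendering under canonical conditions
    variant_image: np.ndarray   # same scene with exactly one factor changed
    factor: str                 # e.g. "pose", "scale", "lighting", "occlusion"
    label: int

def counterfactual_accuracy(model: Callable, pairs: List[ScenePair]) -> Dict[str, float]:
    """For each variation factor: of the scenes the model classifies correctly in
    the base rendering, what fraction stays correct in the counterfactual one?"""
    kept, total = {}, {}
    for p in pairs:
        if model(p.base_image) != p.label:
            continue                               # condition on base-correct scenes
        total[p.factor] = total.get(p.factor, 0) + 1
        if model(p.variant_image) == p.label:
            kept[p.factor] = kept.get(p.factor, 0) + 1
    return {f: kept.get(f, 0) / n for f, n in total.items()}
```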
Continual Learning is a step towards lifelong intelligence where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup with clear task boundaries and unlimited computational budget. This work explores Online Domain-Incremental Continual Segmentation (ODICS), a real-world problem that arises in many applications, e.g., autonomous driving. In ODICS, the model is continually presented with batches of densely labeled images from different domains; computation is limited and no information about the task boundaries is available. In autonomous driving, this may correspond to the realistic scenario of training a segmentation model over time on a sequence of cities. We analyze several existing continual learning methods and show that they do not perform well in this setting despite working well in class-incremental segmentation. We propose SimCS, a parameter-free method complementary to existing ones that leverages simulated data as a continual learning regularizer. Extensive experiments show consistent improvements over different types of continual learning methods that use regularizers and even replay.
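One natural reading of using simulated data as a continual-learning regularizer (the exact mixing rule is an assumption, not SimCS itself) is that every online step on the current real domain is paired with a batch drawn from a fixed simulated dataset:

```python
# Sketch of simulated data as a continual-learning regularizer: each online step
# mixes the current real batch with a simulated batch. The mixing weight and
# loop structure are assumptions, not the SimCS method itself.
import torch

def online_step(model, opt, loss_fn, real_batch, sim_iter, sim_weight=0.5):
    x_r, y_r = real_batch               # densely labeled images from the current city
    x_s, y_s = next(sim_iter)           # batch from the fixed simulated dataset
    loss = loss_fn(model(x_r), y_r) + sim_weight * loss_fn(model(x_s), y_s)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because the simulated stream never changes, it acts as a stable anchor across domain shifts, which is the regularizing role the abstract refers to.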
Distributing machine learning predictors enables the collection of large-scale datasets while leaving sensitive raw data at trustworthy sites. We show that locally training support vector machines (SVMs) and computing their averages leads to a learning technique that is scalable to a large number of users, satisfies differential privacy, and is applicable to non-trivial tasks, such as CIFAR-10. For a large number of participants, communication cost is one of the main challenges. We achieve a low communication cost by requiring only a single invocation of an efficient secure multiparty summation protocol. By relying on state-of-the-art feature extractors (SimCLR), we are able to utilize differentially private convex learners for non-trivial tasks such as CIFAR-10. Our experimental results illustrate that for $1{,}000$ users with $50$ data points each, our scheme outperforms state-of-the-art scalable distributed learning methods (differentially private federated learning, DP-FL for short) while requiring around $500$ times less communication: For CIFAR-10, we achieve a classification accuracy of $79.7\,\%$ for an $\varepsilon = 0.59$ while DP-FL achieves $57.6\,\%$. More generally, we prove learnability properties for the average of such locally trained models: convergence and uniform stability. By only requiring strongly convex, smooth, and Lipschitz-continuous objective functions, locally trained via stochastic gradient descent (SGD), we achieve a strong utility-privacy tradeoff.
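The core aggregation step, averaging locally trained linear SVM weight vectors and privatizing the result, can be sketched as follows; the clipping bound and noise scale are illustrative rather than calibrated to a specific $(\varepsilon, \delta)$, and the summation would really take place inside the secure multiparty protocol:

```python
# Simplified sketch of averaging locally trained linear SVM weight vectors with
# Gaussian-mechanism noise. In the real scheme the sum is computed inside a
# secure multiparty summation protocol; here it is done in the clear.
import numpy as np

def private_average(local_weights, clip_norm=1.0, noise_std=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = []
    for w in local_weights:                          # one weight vector per user
        norm = np.linalg.norm(w)
        clipped.append(w * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    total += rng.normal(scale=noise_std * clip_norm, size=total.shape)
    return total / len(local_weights)                # noisy average of the SVMs

# e.g. 1,000 users, each contributing an SVM trained on SimCLR features
users = [np.random.randn(2048) for _ in range(1000)]
w_avg = private_average(users)
```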
Deep learning models for vision tasks are trained on large datasets under the assumption that there exists a universal representation that can be used to make predictions for all samples. Although high-capacity models have been shown to be able to learn such representations, a mixture of experts, each trained on a specific subset of the data, could infer the labels more efficiently. However, using a mixture of experts poses two new problems: (i) assigning the correct expert when presented with a new, unseen sample, and (ii) finding the optimal partition of the training data so that the experts rely the least on common features. Dynamic Routing (DR) proposed a novel architecture in which each layer consists of a set of experts, but without addressing these two challenges, and we show that such a model can collapse back to using the same subset of experts. In our method, Diversified Dynamic Routing (DivDR), the model is explicitly trained to solve the challenge of finding a data-dependent partition and assigning the correct experts in an unsupervised fashion. We conduct several experiments on semantic segmentation on Cityscapes and on object detection and instance segmentation on MS-COCO, showing improved performance over several baselines.
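For context, a generic soft mixture-of-experts layer (this is background, not DivDR's diversity objective) pairs a set of experts with a gate that assigns per-sample weights; DivDR's contribution is to additionally push the gate toward a diverse, data-dependent partition:

```python
# Generic soft mixture-of-experts layer (not DivDR itself): a gate produces
# per-sample expert weights; DivDR additionally encourages diverse assignments.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim=256, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):                              # x: (batch, dim)
        w = torch.softmax(self.gate(x), dim=-1)        # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        return (w.unsqueeze(-1) * outs).sum(dim=1), w

layer = MoELayer()
y, gate_weights = layer(torch.randn(8, 256))
# A collapse to "the same subset of experts" shows up as gate_weights putting
# nearly all of its mass on one column for every sample.
```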
We propose a new sensitivity analysis model that combines copulas and normalizing flows for causal inference under unobserved confounding. We refer to the new model as $\rho$-GNF ($\rho$-graphical normalizing flow), where $\rho \in [-1,+1]$ is a bounded sensitivity parameter representing the back-door non-causal association due to unobserved confounding, modeled using the well-studied and widely popular Gaussian copula. Specifically, $\rho$-GNF enables us to estimate and analyze the front-door causal effect, or average causal effect (ACE), as a function of $\rho$, which we call the $\rho_{curve}$. The $\rho_{curve}$ allows us to specify the confounding strength required to nullify the ACE, which we call the $\rho_{value}$. Moreover, the $\rho_{curve}$ also enables us to provide bounds on the ACE for an interval of $\rho$ values. We illustrate the benefits of $\rho$-GNF with experiments showing that our empirical ACE bounds are narrower than other popular ACE bounds.
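The $\rho_{curve}$, $\rho_{value}$, and interval bounds can be read off once an ACE estimate is available for each $\rho$; the sketch below treats the $\rho$-GNF estimator as a black-box function (a placeholder assumption, not the actual flow-based likelihood):

```python
# Schematic rho-curve utilities: evaluate an ACE estimator on a grid of the
# sensitivity parameter rho in [-1, 1]. `ace_given_rho` stands in for the
# rho-GNF estimate and is an assumed black box here.
import numpy as np

def rho_curve(ace_given_rho, grid=np.linspace(-1.0, 1.0, 41)):
    return grid, np.array([ace_given_rho(r) for r in grid])

def rho_value(grid, ace_values):
    """Smallest |rho| at which the estimated ACE changes sign (is nullified)."""
    signs = np.sign(ace_values)
    crossings = [abs(grid[i]) for i in range(1, len(grid)) if signs[i] != signs[i - 1]]
    return min(crossings) if crossings else None

def ace_bounds(grid, ace_values, lo=-0.5, hi=0.5):
    """Bounds on the ACE over an assumed interval of rho values."""
    mask = (grid >= lo) & (grid <= hi)
    return ace_values[mask].min(), ace_values[mask].max()
```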
This work introduces a formulation of model predictive control (MPC) that adapts the complexity of the model used for a task while maintaining feasibility and stability guarantees. Existing MPC implementations often handle computational complexity by shortening the prediction horizon or simplifying the model, both of which can lead to instability. Inspired by related approaches in behavioral economics, motion planning, and biomechanics, our method solves the MPC problem with a simple model of the dynamics and constraints over regions of the horizon where such a model is feasible, and with a complex model where it is not. The approach leverages an interleaving of planning and execution to iteratively identify these regions, which can be safely simplified if they satisfy an exact template/anchor relationship. We show that the method does not compromise the stability and feasibility properties of the system, and we measure its performance in simulation experiments of a quadrupedal robot executing agile behaviors. We find that, compared to fixed-complexity implementations, this adaptive approach enables more agile motion and expands the range of executable tasks.
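At a high level, the interleaving of planning and execution could be organized as in the loop below; the region test, the simple and complex models, and the solver are placeholders, and the actual template/anchor condition is more involved than the boolean check used here:

```python
# High-level sketch of adaptive-complexity MPC: horizon regions that satisfy a
# template/anchor check are planned with the simple model, the rest with the
# complex one. All model, region-test, and solver details are placeholders.
def adaptive_mpc_step(x0, horizon, simple_ok, solve, simple_model, complex_model):
    # Choose a per-step model based on previously identified safe regions.
    models = [simple_model if simple_ok(k) else complex_model for k in range(horizon)]
    plan = solve(x0, models)          # one MPC problem with mixed-fidelity dynamics
    return plan[0]                    # apply only the first control, then replan

def run(x_init, steps, horizon, simple_ok, solve, simple_model, complex_model, step_env):
    x = x_init
    for _ in range(steps):
        u = adaptive_mpc_step(x, horizon, simple_ok, solve, simple_model, complex_model)
        x = step_env(x, u)            # execute, then interleave the next plan
    return x
```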
Crop diseases significantly affect the quantity and quality of agricultural production. With the goal in precision agriculture of minimizing or even avoiding the use of pesticides, weather and remote sensing data combined with deep learning can play a pivotal role in detecting crop diseases, allowing localized treatment of crops. However, combining heterogeneous data such as weather and images remains a hot topic and a challenging task. Recent developments in transformer architectures have shown the possibility of fusing data from different domains (e.g., text and images). The current trend is to customize a single transformer to create a multimodal fusion model. In contrast, we propose a new approach that achieves data fusion using three transformers. In this paper, we first address the problem of missing satellite images by interpolating them with a ConvLSTM model. Then, a multimodal fusion architecture is proposed that jointly learns to process visual and weather information. The architecture is built from three main components, a Vision Transformer and two transformer encoders, allowing the fusion of both the image and weather modalities. The results of the proposed method are promising, reaching an overall accuracy of 97\%.
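A minimal sketch of the three-transformer layout described above (a ViT-like image encoder, a transformer encoder for the weather series, and a third encoder that fuses the two token streams); the dimensions, patching, and fusion-by-concatenation are assumptions rather than the paper's exact architecture, and the ConvLSTM interpolation of missing images is not shown:

```python
# Minimal three-transformer fusion sketch (dimensions and fusion scheme assumed):
# a ViT-like image encoder, a transformer encoder for the weather time series,
# and a third transformer encoder that fuses the two token sequences.
import torch
import torch.nn as nn

def encoder(d_model, n_layers):
    layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

class CropDiseaseFusion(nn.Module):
    def __init__(self, d=256, n_classes=5, n_weather_feats=7):
        super().__init__()
        self.patchify = nn.Conv2d(3, d, kernel_size=16, stride=16)  # 16x16 patches
        self.vision = encoder(d, 4)                  # transformer #1: image tokens
        self.weather_proj = nn.Linear(n_weather_feats, d)
        self.weather = encoder(d, 2)                 # transformer #2: weather series
        self.fusion = encoder(d, 2)                  # transformer #3: joint tokens
        self.head = nn.Linear(d, n_classes)

    def forward(self, image, weather):               # (B,3,224,224), (B,T,7)
        img_tok = self.patchify(image).flatten(2).transpose(1, 2)   # (B,196,d)
        img_tok = self.vision(img_tok)
        wx_tok = self.weather(self.weather_proj(weather))           # (B,T,d)
        fused = self.fusion(torch.cat([img_tok, wx_tok], dim=1))
        return self.head(fused.mean(dim=1))          # disease-class logits

model = CropDiseaseFusion()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 30, 7))
```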