We demonstrate that deep learning models, and in particular architectures such as the Transformer originally developed for natural language, can be trained on randomly generated datasets to predict both the qualitative and quantitative features of metabolic networks with very high accuracy. Using standard mathematical techniques, we create large collections (40 million elements) of random networks that can be used to train our models. These trained models predict the network equilibria of random graphs in more than 99% of cases. They also generalize to graphs with structures different from those encountered at training time. Finally, they can predict the equilibria of a small set of known biological networks. Our approach is very economical in experimental data and uses only small and shallow deep learning models, far from the large architectures commonly used in machine translation. These results pave the way for wider use of deep learning models on problems relevant to quantitative systems pharmacology, systems biology, and synthetic biology.
Previous work has shown that a neural network with the rectified linear unit (ReLU) activation function leads to a convex polyhedral decomposition of the input space. These decompositions can be represented by a dual graph with vertices corresponding to polyhedra and edges corresponding to polyhedra sharing a facet, which is a subgraph of a Hamming graph. This paper illustrates how one can utilize the dual graph to detect and analyze adversarial attacks in the context of digital images. When an image passes through a network containing ReLU nodes, the firing or non-firing at a node can be encoded as a bit ($1$ for ReLU activation, $0$ for ReLU non-activation). The sequence of all bit activations identifies the image with a bit vector, which identifies it with a polyhedron in the decomposition and, in turn, identifies it with a vertex in the dual graph. We identify ReLU bits that are discriminators between non-adversarial and adversarial images and examine how well collections of these discriminators can ensemble vote to build an adversarial image detector. Specifically, we examine the similarities and differences of ReLU bit vectors for adversarial images, and their non-adversarial counterparts, using a pre-trained ResNet-50 architecture. While this paper focuses on adversarial digital images, ResNet-50 architecture, and the ReLU activation function, our methods extend to other network architectures, activation functions, and types of datasets.
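The bit-vector encoding described above can be illustrated with a short sketch (not the authors' code): register forward hooks on the ReLU modules of a pre-trained torchvision ResNet-50 and record which units fire for a given input. The variable names and the random stand-in image are our assumptions.

```python
# Minimal sketch (not the paper's implementation): collect the ReLU firing
# pattern of a ResNet-50 forward pass as a single bit vector.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()

bits = []  # one entry per ReLU call, filled during the forward pass

def record_bits(module, inputs, output):
    # 1 where the ReLU fired (pre-activation > 0), 0 otherwise.
    bits.append((output > 0).flatten().to(torch.uint8))

for module in model.modules():
    if isinstance(module, torch.nn.ReLU):
        module.register_forward_hook(record_bits)

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
with torch.no_grad():
    model(image)

bit_vector = torch.cat(bits)  # identifies the polyhedron / dual-graph vertex
print(bit_vector.shape, bit_vector.sum().item())
```

Comparing such bit vectors between an image and its adversarially perturbed counterpart is the kind of analysis the paper builds its discriminators on.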
Multilingual transfer techniques often improve low-resource machine translation (MT). Many of these techniques are applied without regard to the characteristics of the data. We show, in the context of Haitian-to-English translation, that transfer effectiveness is correlated with the amount of training data and the relationship between the knowledge-sharing languages. Our experiments suggest that for some languages beyond a threshold of authentic data, back-translation augmentation methods are counterproductive, while cross-lingual transfer from a sufficiently related language is preferable. We complement this finding by contributing a rule-based French-Haitian orthographic and syntactic engine and a novel method for phonological embedding. When used together with multilingual techniques, orthographic transformation yields statistically significant improvements over conventional methods. In very low-resource Jamaican MT, code-switching with a transfer language for orthographically similar words yields a 6.63 BLEU point advantage.
We present a keypoint-based object-level SLAM framework that can provide globally consistent 6DoF pose estimates for symmetric and asymmetric objects. To the best of our knowledge, our system is among the first to utilize camera pose information from SLAM to provide prior knowledge for tracking keypoints on symmetric objects, ensuring that new measurements are consistent with the current 3D scene. Moreover, our semantic keypoint network is trained to predict a Gaussian covariance for each keypoint that captures the true error of the prediction, and which therefore serves not only as a weight for the residuals in the system's optimization problem but also as a means of detecting harmful statistical outliers without choosing a manual threshold. Experiments show that our method achieves performance competitive with the state of the art in 6DoF object pose estimation at real-time speed. Our code, pre-trained models, and keypoint labels are available at https://github.com/rpng/suo_slam.
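The general idea of using a predicted per-keypoint covariance both as a residual weight and as an outlier gate can be sketched as follows; this is a hedged illustration of the principle, not code from the suo_slam repository, and the 95% chi-square gate is our assumption.

```python
# Hedged sketch: weight a 2D keypoint residual by the network-predicted
# covariance and flag statistical outliers without a hand-tuned pixel threshold.
import numpy as np
from scipy.stats import chi2

def whiten_residual(observed_px, projected_px, cov_2x2):
    """Return the covariance-whitened residual and its squared Mahalanobis distance."""
    r = observed_px - projected_px                # raw reprojection error (2,)
    info = np.linalg.inv(cov_2x2)                 # information matrix
    L = np.linalg.cholesky(info)
    r_whitened = L.T @ r                          # residual fed to the optimizer
    m2 = float(r @ info @ r)                      # squared Mahalanobis distance
    return r_whitened, m2

# 95% gate for a 2-DoF chi-square: measurements beyond this are treated as outliers.
GATE = chi2.ppf(0.95, df=2)

obs = np.array([310.2, 188.7])
proj = np.array([308.9, 190.1])
cov = np.array([[4.0, 0.5],
                [0.5, 9.0]])                      # predicted per-keypoint covariance

r_w, m2 = whiten_residual(obs, proj, cov)
print("whitened residual:", r_w, "outlier:", m2 > GATE)
```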
In low-income settings, the most critical piece of information for an electric utility is a customer's expected consumption. Assessing electricity consumption is difficult in settings where a large fraction of households do not yet have an electricity connection. In such settings, the absolute level of expected consumption can range from 5-100 kWh/month, resulting in high variability among these customers. Precious resources are at stake if investment is directed toward those expected to have higher consumption. This is the first study of its kind in a low-income setting that attempts to predict a building's consumption rather than that of an aggregate administrative area. We train a convolutional neural network (CNN) on pre-electrification daytime satellite imagery with a sample of 20,000 geo-referenced electricity customers in Kenya (0.01% of Kenya's residential customers). This is made possible by a two-stage approach that uses a novel building segmentation method to leverage much larger volumes of no-cost satellite imagery, making the most of scarce and expensive customer data. Our method shows that competitive accuracy can be achieved at the building level, addressing the challenge of consumption variability. This work shows that a building's characteristics and its surrounding context are both important in predicting consumption levels. We also evaluate the addition of lower-resolution geospatial datasets to the training process, including nighttime lights and census data. The results are already helping to inform site selection and distribution-level planning through granular predictions at the level of individual structures in Kenya, and there is no reason the approach cannot be extended to other countries.
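As a rough illustration of the second stage, a small CNN regressor over daytime image patches cropped around segmented buildings might look like the sketch below. The architecture, patch size, and channel count are our assumptions, not the authors' model.

```python
# Illustrative sketch only: a small CNN that regresses monthly consumption
# (kWh/month) from a daytime satellite patch centered on a segmented building.
import torch
import torch.nn as nn

class ConsumptionCNN(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)  # predicted kWh/month

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ConsumptionCNN()
patch = torch.randn(8, 3, 64, 64)      # batch of 64x64 RGB building patches
pred_kwh = model(patch)
print(pred_kwh.shape)                   # torch.Size([8, 1])
```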
Autonomous driving consists of multiple interacting modules, where each module must cope with the outputs of the others. Typically, the motion prediction module depends on a robust tracking system to capture each agent's past movement. In this work, we systematically explore the importance of the tracking module for the motion prediction task and ultimately conclude that overall motion prediction performance is highly sensitive to imperfections in the tracking module. We explicitly compare models that use tracking information against models that do not, across multiple scenarios and conditions. We find that tracking information plays an essential role and improves motion prediction performance under noise-free conditions. In the presence of tracking noise, however, it can harm overall performance if not studied thoroughly. We should therefore account for noise when developing and testing motion prediction and tracking modules, or otherwise consider tracking-free alternatives.
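One simple way to probe this sensitivity is to perturb the tracked history before feeding it to the predictor, for example with positional noise and simulated missed detections. The sketch below is illustrative only; the noise levels and drop rate are our assumptions, not the paper's protocol.

```python
# Illustrative sketch: perturb tracked history to probe a motion predictor's
# sensitivity to tracking noise.
import numpy as np

def perturb_tracks(history, pos_sigma=0.3, drop_prob=0.1, rng=None):
    """history: (num_agents, num_past_steps, 2) array of x/y positions."""
    rng = rng or np.random.default_rng(0)
    noisy = history + rng.normal(0.0, pos_sigma, size=history.shape)
    # Simulate missed detections by holding the previous position.
    dropped = rng.random(history.shape[:2]) < drop_prob
    for a, t in zip(*np.nonzero(dropped)):
        if t > 0:
            noisy[a, t] = noisy[a, t - 1]
    return noisy

clean = np.cumsum(np.ones((4, 10, 2)) * 0.5, axis=1)  # toy constant-velocity tracks
noisy = perturb_tracks(clean)
print(np.abs(noisy - clean).mean())
```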
Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays [1]. NumPy is the primary array programming library for the Python language [2,3,4,5]. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, material science, engineering, finance, and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [6] and the first imaging of a black hole [7]. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries.
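A small example of the array-programming paradigm described here, combining vectorized arithmetic, broadcasting, and boolean-mask selection without explicit loops (the data and names are illustrative):

```python
# Vectorized arithmetic, broadcasting, and aggregation without explicit loops.
import numpy as np

signal = np.random.default_rng(42).normal(size=(1000, 3))   # 1000 samples, 3 channels
centered = signal - signal.mean(axis=0)                      # broadcasting over rows
rms = np.sqrt((centered ** 2).mean(axis=0))                  # per-channel RMS
loud = signal[(np.abs(signal) > 2 * rms).any(axis=1)]        # boolean-mask selection
print(rms.round(3), loud.shape)
```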
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
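The kind of relevance-classification prompt described above can be sketched as follows. This is a hedged illustration using the legacy (pre-1.0) openai Python client and the now-deprecated text-davinci-003 model; the prompt wording, field names, and function name are our assumptions, not the authors' prompt.

```python
# Hedged sketch of an LLM bill-relevance query, in the spirit of the setup
# described in the abstract. Not the authors' prompt or code.
import openai

PROMPT = """You are a lobbyist analyzing Congressional bills.
Company: {company}
Company description: {description}
Bill title: {title}
Bill summary: {summary}

Is this bill relevant to the company? Answer YES or NO, give a confidence
level from 0-100, and briefly explain your reasoning."""

def assess_relevance(company, description, title, summary):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(company=company, description=description,
                             title=title, summary=summary),
        max_tokens=256,
        temperature=0.0,
    )
    return response["choices"][0]["text"].strip()
```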
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
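As a concrete point of reference for one of the compared methods, Monte-Carlo Dropout inference can be sketched as below: dropout stays active at test time, the softmax is averaged over several stochastic passes, and predictive entropy serves as the uncertainty score for rejecting tiles. The placeholder classifier, tile size, and number of passes are our assumptions.

```python
# Minimal sketch of Monte-Carlo Dropout inference with entropy-based rejection.
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, tiles, passes=20):
    model.eval()
    # Re-enable dropout layers only, leaving batch-norm statistics frozen.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(tiles), dim=-1) for _ in range(passes)])
    mean_probs = probs.mean(0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)
    return mean_probs, entropy  # reject tiles whose entropy exceeds a chosen cutoff

# Example with a placeholder classifier (2 classes, e.g. tumor vs. normal tile).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Dropout(0.5),
                            torch.nn.Linear(3 * 96 * 96, 2))
tiles = torch.randn(16, 3, 96, 96)
mean_probs, entropy = mc_dropout_predict(model, tiles)
print(mean_probs.shape, entropy.shape)
```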
In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck. These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays. Perhaps surprisingly, despite the surge of interest in large-scale, multi-agent reinforcement learning, almost nothing is known about the analogous question: Are common reinforcement learning (RL) algorithms also robust to similar perturbations? In this paper, we investigate this question by studying a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction, where a general compression operator is used to model the perturbation. Our main technical contribution is to show that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic theoretical guarantees as their SGD counterparts. We then extend our results significantly to nonlinear stochastic approximation algorithms and multi-agent settings. In particular, we prove that for multi-agent TD learning, one can achieve linear convergence speedups in the number of agents while communicating just $\tilde{O}(1)$ bits per agent at each time step. Our work is the first to provide finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling. Our analysis hinges on studying the drift of a novel Lyapunov function that captures the dynamics of a memory variable introduced by error feedback.
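The error-feedback mechanism referred to above can be illustrated with a toy sketch: linear TD(0) whose update direction is top-k sparsified, with the compression error stored in a memory variable and re-injected at the next step. The random MDP, features, and step size below are illustrative assumptions, not the paper's setting.

```python
# Toy sketch of compressed TD(0) with error feedback and linear function approximation.
import numpy as np

rng = np.random.default_rng(0)
n_states, d, k, gamma, alpha = 20, 8, 2, 0.95, 0.05
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition matrix
r = rng.normal(size=n_states)                          # rewards
phi = rng.normal(size=(n_states, d))                   # state features

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

theta = np.zeros(d)
e = np.zeros(d)        # error-feedback memory
s = 0
for t in range(20000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    g = alpha * td_error * phi[s]      # uncompressed TD update direction
    h = top_k(g + e, k)                # compress the update plus carried-over error
    e = g + e - h                      # remember what the compressor discarded
    theta += h
    s = s_next
print(theta)
```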