We consider a sequential decision-making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint. A common approach is to place a Gaussian process (GP) prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter. In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate. Our approach is naturally applicable to continuous domains and does not require additional hyperparameters. We theoretically analyze the method and show that we do not violate the safety constraint with high probability and that we explore by learning about the constraint up to arbitrary precision. Empirical evaluations demonstrate improved data-efficiency and scalability.
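As a rough illustration of the general idea (not the paper's specific information-theoretic criterion), the sketch below uses a GP posterior over the unknown constraint and keeps only candidate parameters whose lower confidence bound is above the safety threshold; kernel, threshold, and the confidence parameter beta are placeholder assumptions.

```python
# A minimal sketch: certify safety with high probability via a GP lower confidence bound.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def safe_candidates(X_obs, y_obs, X_cand, beta=2.0, threshold=0.0):
    """Return candidates whose constraint value is >= threshold with high probability."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
    gp.fit(X_obs, y_obs)                      # posterior over the safety constraint
    mu, std = gp.predict(X_cand, return_std=True)
    lcb = mu - beta * std                     # pessimistic estimate of the constraint
    return X_cand[lcb >= threshold]           # evaluate only points deemed safe

# toy usage: constraint g(x) = 1 - x^2 is safe (>= 0) on [-1, 1]
X_obs = np.array([[0.0], [0.3], [-0.2]])
y_obs = 1.0 - X_obs[:, 0] ** 2
X_cand = np.linspace(-2, 2, 101).reshape(-1, 1)
print(safe_candidates(X_obs, y_obs, X_cand)[:5])
```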
A set of cooperating aerial robots can be deployed to patrol a terrain efficiently, with each robot flying over an assigned area and periodically sharing information with its neighbours in order to protect or supervise it. To ensure robustness, previous work on these synchronized systems proposed sending a robot to a neighbouring area whenever it detects a failure. To cope with unpredictability and to improve on the efficiency of deterministic patrolling schedules, this paper proposes stochastic strategies for covering the areas distributed among the agents. First, a theoretical study of the stochastic process is carried out for two metrics: the idle time, the expected time between two consecutive observations of any point of the terrain, and the isolation time, the expected time during which a robot does not communicate with any other robot. The stochastic strategies are then compared with deterministic ones using an additional metric: the broadcast time, the expected time from the moment a robot emits a message until every other robot of the team receives it. Simulations show that the theoretical results closely match the simulated behaviour of the stochastic strategies, which perform comparably to the deterministic protocols proposed in the literature.
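To make the idle-time metric concrete, the following hedged sketch estimates it by Monte Carlo for a single robot that patrols its assigned area as a uniform random walk on a grid; this toy movement model is an assumption for illustration, not the protocol studied in the paper.

```python
# Monte Carlo estimate of the "idle time": the expected time between two
# consecutive visits to a cell of the patrolled area under a toy random walk.
import random

def estimate_idle_time(grid_size=5, steps=200_000, seed=0):
    rng = random.Random(seed)
    pos = (0, 0)
    last_visit = {}
    gaps = []
    for t in range(steps):
        if pos in last_visit:
            gaps.append(t - last_visit[pos])   # time since this cell was last observed
        last_visit[pos] = t
        moves = [(dx, dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if 0 <= pos[0] + dx < grid_size and 0 <= pos[1] + dy < grid_size]
        dx, dy = rng.choice(moves)
        pos = (pos[0] + dx, pos[1] + dy)
    return sum(gaps) / len(gaps)

print(f"estimated idle time: {estimate_idle_time():.1f} steps")
```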
Recently, there has been growing interest in improving the resources available to Intrusion Detection System (IDS) techniques. Several cybersecurity studies show that network intrusions and data hijacking are increasingly frequent and complex. The criticality of business operations that rely on computing resources leaves no room for vulnerable information. Cybersecurity has become an indispensable concern for corporations, and the prevention of intrusion risks is handled daily by security teams. Thus, the main objective of this study was to investigate the Ensemble Learning technique using the Stacking method, supported by the Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) algorithms, aiming to optimize the results of DDoS attack detection. For this, the Intrusion Detection System concept was applied with the Orange data mining and machine learning tool to obtain better results.
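A minimal sketch of the stacking ensemble described above, written with scikit-learn: SVM and kNN serve as base learners and a logistic-regression meta-learner combines their predictions. The synthetic data and the choice of meta-learner are assumptions for illustration, not details taken from the study.

```python
# Stacking ensemble with SVM and kNN base learners; synthetic data stands in
# for network-flow features used in DDoS detection.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ],
    final_estimator=LogisticRegression(),   # meta-learner combines base predictions
)
stack.fit(X_tr, y_tr)
print(f"test accuracy: {stack.score(X_te, y_te):.3f}")
```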
Machine Learning models capable of handling the large datasets collected in the financial world can often become black boxes that are expensive to run. The quantum computing paradigm suggests new optimization techniques that, combined with classical algorithms, may deliver competitive, faster, and more interpretable models. In this work we propose a quantum-enhanced machine learning solution for the prediction of credit rating downgrades, also known as fallen-angels forecasting in the financial risk management field. We implement this solution on a neutral-atom Quantum Processing Unit with up to 60 qubits on a real-life dataset. We report competitive performance against the state-of-the-art Random Forest benchmark, while our model achieves better interpretability and comparable training times. We examine how to improve performance in the near term, validating our ideas with tensor-network-based numerical simulations.
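The quantum component requires a neutral-atom QPU or emulator stack, so the sketch below only reproduces the classical Random Forest benchmark mentioned above; the synthetic, imbalanced data stands in for credit-risk features and is an assumption, not the paper's dataset.

```python
# Classical Random Forest benchmark for downgrade ("fallen angel") prediction,
# evaluated with ROC-AUC on synthetic placeholder data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.9], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

rf = RandomForestClassifier(n_estimators=300, random_state=1)
rf.fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]        # predicted probability of a downgrade
print(f"benchmark ROC-AUC: {roc_auc_score(y_te, proba):.3f}")
```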
Histopathology imaging is crucial for the diagnosis and treatment of skin diseases. For this reason, computer-assisted approaches have gained popularity and shown promising results in tasks such as segmentation and classification of skin disorders. However, collecting essential data and sufficiently high-quality annotations is a challenge. This work describes a pipeline that uses suspected melanoma samples that have been characterized using Multi-Epitope-Ligand Cartography (MELC). This cellular-level tissue characterisation is then represented as a graph and used to train a graph neural network. This imaging technology, combined with the methodology proposed in this work, achieves a classification accuracy of 87%, outperforming existing approaches by 10%.
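A hedged sketch (assuming PyTorch Geometric) of graph-level classification over a cell graph: node features stand in for MELC marker expressions and edges for spatial neighbourhood. The two-layer GCN architecture and all dimensions are illustrative assumptions, not the exact model proposed in the work.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

class CellGraphGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))   # one prediction per tissue graph

# toy graph: 10 "cells" with 16 marker features and a few neighbourhood edges
data = Data(x=torch.randn(10, 16),
            edge_index=torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]], dtype=torch.long))
model = CellGraphGCN(in_dim=16)
logits = model(data.x, data.edge_index, torch.zeros(10, dtype=torch.long))
print(logits.shape)   # torch.Size([1, 2])
```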
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
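A minimal usage sketch with the Hugging Face transformers library, loading the small publicly released bigscience/bloom-560m checkpoint; the full 176B model requires far more memory and is typically served through dedicated inference infrastructure. The prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```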
The library scikit-fda is a Python package for Functional Data Analysis (FDA). It provides a comprehensive set of tools for representation, preprocessing, and exploratory analysis of functional data. The library is built upon and integrated in Python's scientific ecosystem. In particular, it conforms to the scikit-learn application programming interface so as to take advantage of the functionality for machine learning provided by this package: pipelines, model selection, and hyperparameter tuning, among others. The scikit-fda package has been released as free and open-source software under a 3-Clause BSD license and is open to contributions from the FDA community. The library's extensive documentation includes step-by-step tutorials and detailed examples of use.
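A minimal sketch of the scikit-fda representation layer: discretised curves are wrapped in an FDataGrid and their cross-sectional mean is inspected. Argument and attribute names follow recent skfda releases and may differ slightly across versions; the simulated curves are placeholders.

```python
import numpy as np
from skfda import FDataGrid

grid_points = np.linspace(0, 1, 50)    # common evaluation grid for all curves
curves = (np.sin(2 * np.pi * grid_points)
          + 0.1 * np.random.default_rng(0).normal(size=(20, 50)))

fd = FDataGrid(data_matrix=curves, grid_points=grid_points)
print(fd.n_samples)   # 20 functional observations
print(fd.mean())      # cross-sectional mean function, itself an FDataGrid
```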
The design of a safe and reliable autonomous driving stack (AD) is one of the most challenging tasks of our time. These ADs are expected to drive with full autonomy in highly dynamic environments and with greater reliability than humans. In this sense, to navigate arbitrarily complex traffic scenarios efficiently and safely, an AD must be able to predict the future trajectories of the surrounding actors. Current state-of-the-art models are typically based on recurrent, graph, and convolutional networks, achieving remarkable results in the context of vehicle prediction. In this paper, we explore the influence of attention in generative models for motion prediction, considering both the physical and social context in order to compute the most plausible trajectories. We first encode the past trajectories using an LSTM network, which serves as input to a multi-head self-attention module that computes the social context. In addition, we formulate a weighted interpolation to compute the velocity and orientation in the last observation frame in order to compute acceptable target points, extracted from the drivable area of the HD map information, which represents our physical context. Finally, the input of our generator is a white-noise vector sampled from a multivariate normal distribution, while the social and physical context are its conditions, in order to predict feasible trajectories. We validate our method using the Argoverse Motion Forecasting Benchmark 1.1, achieving competitive unimodal results.
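A hedged sketch of the encoder described above: an LSTM summarises each agent's past trajectory, and multi-head self-attention across agents provides the social context. Tensor shapes, hidden sizes, and wiring are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class SocialEncoder(nn.Module):
    def __init__(self, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=heads, batch_first=True)

    def forward(self, past_xy):
        # past_xy: (num_agents, obs_len, 2) observed (x, y) positions per agent
        _, (h, _) = self.lstm(past_xy)                  # h: (1, num_agents, hidden)
        agents = h[-1].unsqueeze(0)                     # treat the agents as one "scene" sequence
        social, _ = self.attn(agents, agents, agents)   # each agent attends to the others
        return social.squeeze(0)                        # (num_agents, hidden) social context

encoder = SocialEncoder()
context = encoder(torch.randn(8, 20, 2))   # 8 agents, 20 observed timesteps each
print(context.shape)                       # torch.Size([8, 64])
```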
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the Vascular Lesions Detection and Segmentation (Where is VALDO?) challenge, which was run as a satellite event of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - microbleeds, and 6 for Task 3 - lacunes). Multi-site data were used for both training and evaluation. Results showed a large variability in performance across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - microbleeds, and no practically useful results yet for Task 3 - lacunes. The challenge also highlighted performance inconsistencies that may preclude use at the individual level, while still proving useful at the population level.
Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), although it remains unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed three types of face-image pairs: same-identity pairs, general impostor pairs (different identities from similar demographic groups), and twin impostor pairs (identical twin siblings). The task was to determine whether the pair showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45 degrees, and frontal to 90 degrees. Accuracy for discriminating matched-identity pairs from twin impostors and from general impostors was assessed in each viewpoint-disparity condition. Humans were more accurate for general impostor pairs than for twin impostor pairs, and accuracy declined as the viewpoint disparity between the images in a pair increased. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but was at or above human performance in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of the nine image-pair types [range r = 0.38 to r = 0.63], suggesting general agreement between human perception of face similarity and the DCNN. These findings also advance our understanding of DCNN performance in discriminating highly similar faces, show that DCNNs perform at or above human level, and suggest a degree of parity between the features used by humans and DCNNs.
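A hedged sketch of the kind of item-level analysis described above: a DCNN-style similarity score is computed for each image pair as the cosine similarity of face embeddings and then correlated with human similarity ratings. The embeddings and ratings below are random placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(50, 512))      # embeddings of the first image in each pair
emb_b = rng.normal(size=(50, 512))      # embeddings of the second image in each pair
human_ratings = rng.normal(size=50)     # placeholder per-pair human similarity ratings

cosine = np.sum(emb_a * emb_b, axis=1) / (
    np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
)
r, p = pearsonr(cosine, human_ratings)  # item-level human-machine agreement
print(f"item-level correlation: r = {r:.2f} (p = {p:.3f})")
```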