There is a growing interest in the use of deep neural networks (DNNs) to solve partial differential equations (PDEs). Despite the promise that such approaches hold, there are various aspects in which they could be improved. Two such shortcomings are (i) their computational inefficiency relative to classical numerical methods, and (ii) the non-interpretability of a trained DNN model. In this work we present ASPINN, an anisotropic extension of our earlier work called SPINN — Sparse, Physics-informed, and Interpretable Neural Networks — for solving PDEs that addresses both these issues. ASPINNs generalize radial basis function networks. We demonstrate, using a variety of examples involving elliptic and hyperbolic PDEs, that the special architecture we propose is more efficient than generic DNNs while at the same time being directly interpretable. Further, owing to the anisotropy of the local zones of influence of each node, ASPINNs improve upon our earlier SPINN models by requiring fewer nodes to capture the solution. The interpretability of ASPINNs translates into a ready visualization of their weights and biases, thereby yielding more insight into the nature of the trained model. This in turn provides a systematic procedure to improve the architecture based on the quality of the computed solution. ASPINNs thus serve as an effective bridge between classical numerical algorithms and modern DNN-based methods for solving PDEs. In the process, we also streamline the training of ASPINNs into a form that is closer to that of supervised learning algorithms.
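To make the anisotropy concrete, here is a minimal sketch of the kind of anisotropic Gaussian RBF node that ASPINNs generalize; the function names and the per-node precision-matrix parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def anisotropic_rbf(x, center, precision):
    """One anisotropic Gaussian RBF node: its zone of influence is the
    ellipsoid shaped by the symmetric positive-definite `precision` matrix."""
    r = x - center
    return np.exp(-0.5 * r @ precision @ r)

# An isotropic (SPINN-like) node is the special case precision = I / sigma**2;
# letting the precision vary per node means fewer nodes are needed to cover
# directional features of the solution.
x = np.array([0.5, 0.2])
center = np.array([0.0, 0.0])
precision = np.array([[4.0, 0.0],
                      [0.0, 0.25]])  # narrow along x, wide along y
print(anisotropic_rbf(x, center, precision))
```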
Search and Rescue (SAR) missions in remote environments often employ autonomous multi-robot systems that learn, plan, and execute a combination of local single-robot control actions, group primitives, and global mission-oriented coordination and collaboration. Often, SAR coordination strategies are manually designed by human experts who can remotely control the multi-robot system and enable semi-autonomous operations. However, in remote environments where connectivity is limited and human intervention is often not possible, decentralized collaboration strategies are needed for fully-autonomous operations. Nevertheless, decentralized coordination may be ineffective in adversarial environments due to sensor noise, actuation faults, or manipulation of inter-agent communication data. In this paper, we propose an algorithmic approach based on adversarial multi-agent reinforcement learning (MARL) that allows robots to efficiently coordinate their strategies in the presence of adversarial inter-agent communications. In our setup, the objective of the multi-robot team is to discover targets strategically in an obstacle-strewn geographical area by minimizing the average time needed to find the targets. It is assumed that the robots have no prior knowledge of the target locations, and they can interact with only a subset of neighboring robots at any time. Based on the centralized training with decentralized execution (CTDE) paradigm in MARL, we utilize a hierarchical meta-learning framework to learn dynamic team-coordination modalities and discover emergent team behavior under complex cooperative-competitive scenarios. The effectiveness of our approach is demonstrated on a collection of prototype grid-world environments with different specifications of benign and adversarial agents, target locations, and agent rewards.
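For readers unfamiliar with CTDE, the sketch below shows the generic pattern in PyTorch: per-robot actors that act only on local observations, plus a critic that sees the joint observation during training. This is a minimal illustration of the paradigm with assumed layer sizes, not the paper's hierarchical meta-learning framework.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized policy: each robot acts on its own local observation."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    """Centralized value function: sees all agents' observations, but only
    during training; it is discarded at execution time."""
    def __init__(self, joint_obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs)

# Training updates the actors using the critic evaluated on the concatenated
# joint observation; decentralized execution uses each actor alone.
```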
In this paper, we propose and showcase, for the first time, monocular multi-view layout estimation for warehouse racks and shelves. Unlike typical layout estimation methods, MVRackLay estimates multi-layered layouts, wherein each layer corresponds to the layout of a shelf within a rack. Given a sequence of images of a warehouse scene, a dual-headed Convolutional-LSTM architecture outputs segmented racks, and the front and top view layouts of each shelf within a rack. With minimal effort, such an output is transformed into a 3D rendering of all racks, shelves and objects on the shelves, giving an accurate 3D depiction of the entire warehouse scene in terms of racks, shelves and the number of objects on each shelf. MVRackLay generalizes to a diverse set of warehouse scenes with a varying number of objects on each shelf, a varying number of shelves, and other such racks present in the background. Further, MVRackLay shows superior performance vis-a-vis its single-view counterpart, RackLay, in layout accuracy, quantified in terms of the mean IoU and mAP metrics. We also showcase multi-view stitching of the 3D layouts, resulting in a representation of the warehouse scene with respect to a global reference frame, akin to a rendering of the scene from a SLAM pipeline. To the best of our knowledge, this is the first such work to portray a 3D rendering of a warehouse scene in terms of its semantic components - Racks, Shelves and Objects - all from a single monocular camera.
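As a rough illustration of the dual-headed recurrent design, the sketch below uses a shared convolutional encoder, a recurrent layer over the image sequence, and two heads for the front-view and top-view layouts. Layer sizes, shapes, and the use of a plain LSTM (rather than the paper's Convolutional-LSTM) are assumptions.

```python
import torch
import torch.nn as nn

class DualHeadRackNet(nn.Module):
    """Sketch: shared CNN encoder -> LSTM over the frame sequence ->
    two heads decoding per-shelf front-view and top-view layout logits."""
    def __init__(self, feat=128, layout_dim=64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat))
        self.temporal = nn.LSTM(feat, feat, batch_first=True)
        self.front_head = nn.Linear(feat, layout_dim)  # front-view layout
        self.top_head = nn.Linear(feat, layout_dim)    # top-view layout

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        f = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        h, _ = self.temporal(f)
        return self.front_head(h), self.top_head(h)
```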
Next-generation sequencing technologies have enhanced the scope of Internet-of-Things (IoT) to include genomics for personalized medicine through the increased availability of an abundance of genome data collected from heterogeneous sources at a reduced cost. Given the sheer magnitude of the collected data and the significant challenges posed by the presence of highly similar genomic structures across species, there is a need for robust, scalable analysis platforms to extract actionable knowledge such as the presence of potentially zoonotic pathogens. The emergence of zoonotic diseases from novel pathogens that can jump species barriers and lead to pandemics, such as the influenza virus in 1918 and SARS-CoV-2 in 2019, underscores the need for scalable metagenome analysis. In this work, we propose MG2Vec, a deep learning-based solution that uses the transformer network as its backbone to learn robust features from raw metagenome sequences for downstream biomedical tasks such as targeted and generalized pathogen detection. Extensive experiments on four increasingly challenging, yet realistic diagnostic settings show that the proposed approach can help detect pathogens from uncurated, real-world clinical samples with minimal human supervision in the form of labels. Further, we demonstrate that the learned representations can generalize to completely unrelated pathogens across diseases and species for large-scale metagenome analysis. We provide a comprehensive evaluation of a novel representation learning framework for metagenome-based disease diagnostics with deep learning, and provide a way forward for extracting and using robust vector representations from low-cost next-generation sequencing to develop generalizable diagnostic tools.
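As a toy illustration of feeding raw reads to a transformer encoder, one common scheme tokenizes a read into overlapping k-mers; MG2Vec's actual tokenization and architecture may differ, so treat everything below as an assumed setup.

```python
from itertools import product
import torch
import torch.nn as nn

K = 4
# Fixed vocabulary of all 4^K nucleotide k-mers.
VOCAB = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_tokens(read, k=K):
    """Overlapping k-mer tokenization of a raw nucleotide read."""
    return [VOCAB[read[i:i + k]] for i in range(len(read) - k + 1)]

embed = nn.Embedding(len(VOCAB), 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2)

ids = torch.tensor([kmer_tokens("ACGTACGTGGCA")])
features = encoder(embed(ids)).mean(dim=1)  # pooled read representation
print(features.shape)  # downstream pathogen-detection heads would consume this
```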
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
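Since the models are publicly released, they can be loaded through the Hugging Face transformers library. The snippet below uses the small 560M-parameter checkpoint, as the full 176B model requires multi-GPU hardware; both follow the same interface.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # swap in "bigscience/bloom" given enough GPUs
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```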
We present the design, development, and evaluation of HREyes: biomimetic communication devices which use light to communicate information and, for the first time, gaze direction from AUVs to humans. First, we introduce two types of information displays using the HREye devices: active lucemes and ocular lucemes. Active lucemes communicate information explicitly through animations, while ocular lucemes communicate gaze direction implicitly by mimicking human eyes. We present a human study in which our system is compared to the use of an embedded digital display that explicitly communicates information to a diver by displaying text. Our results demonstrate accurate recognition of active lucemes for trained interactants, limited intuitive understanding of these lucemes for untrained interactants, and relatively accurate perception of gaze direction for all interactants. The results on active luceme recognition demonstrate more accurate recognition than previous light-based communication systems for AUVs (albeit with different phrase sets). Additionally, the ocular lucemes we introduce in this work represent the first method for communicating gaze direction from an AUV, a critical aspect of nonverbal communication used in collaborative work. With readily available hardware as well as open-source and easily re-configurable programming, HREyes can be easily integrated into any AUV with the physical space for the devices and used to communicate effectively with divers in any underwater environment with appropriate visibility.
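Purely as an illustration of what an "active luceme" might look like in software: the sketch below plays a named animation on an LED ring. The HREye hardware interface and luceme vocabulary are not specified here, so every name and value is hypothetical.

```python
import time

RING_SIZE = 12  # hypothetical number of LEDs in one HREye ring

def ascend_luceme():
    """Sweep a green dot around the ring, e.g. to signal 'ascending'.
    Each frame is a list of per-LED (R, G, B) tuples."""
    off = (0, 0, 0)
    for i in range(RING_SIZE):
        frame = [off] * RING_SIZE
        frame[i] = (0, 255, 0)
        yield frame

for frame in ascend_luceme():
    # a real device would push `frame` to its LED driver here
    time.sleep(0.05)
```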
Traditional approaches to active mapping focus on building geometric maps. For most real-world applications, however, actionable information is related to semantically meaningful objects in the environment. We propose an approach to the active metric-semantic mapping problem that enables multiple heterogeneous robots to collaboratively build a map of the environment. The robots actively explore to minimize the uncertainties in both semantic (object classification) and geometric (object modeling) information. We represent the environment using informative but sparse object models, each consisting of a basic shape and a semantic class label, and characterize the uncertainties empirically using a large amount of real-world data. Given a prior map, we use this model to select actions for each robot that minimize uncertainty. The performance of our algorithm is demonstrated through multi-robot experiments in diverse real-world environments. The proposed framework is applicable to a wide range of real-world problems, such as precision agriculture, infrastructure inspection, and asset mapping in factories.
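A greedy, entropy-based sketch of the action-selection idea is given below. The helper `predict_posterior` and the dictionary-based object representation are assumptions for illustration; the paper's uncertainty models are characterized empirically from real-world data.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete class distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def select_action(actions, objects, predict_posterior):
    """Pick the action with the largest expected drop in semantic (class)
    uncertainty, summed over object models. `predict_posterior(obj, a)` is
    an assumed helper returning the predicted class distribution for `obj`
    after executing action `a`."""
    def gain(a):
        return sum(entropy(o["class_probs"]) - entropy(predict_posterior(o, a))
                   for o in objects)
    return max(actions, key=gain)
```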
In this comment, we present a simple alternative derivation of the iteratively reweighted fuzzy c-means (IRW-FCM) algorithm. We show that the iterative steps derived for the IRW-FCM algorithm are nothing but steps of the popular majorization-minimization (MM) algorithm. The derivation presented in this note is simpler and more straightforward: unlike the derivation of IRW-FCM, the derivation here does not involve the introduction of any auxiliary variables. Moreover, by exhibiting the steps of IRW-FCM as an MM algorithm, the inner loop of the IRW-FCM algorithm can be eliminated and the algorithm can effectively be run as a "single-loop" algorithm. More precisely, the new MM-based derivation shows that a single inner-loop iteration of IRW-FCM is sufficient to decrease the fuzzy c-means objective function, thereby speeding up the IRW-FCM algorithm.
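To sketch the MM reading in standard fuzzy c-means notation (the notation below is assumed, since the note's own symbols are not reproduced here):

```latex
% Standard fuzzy c-means objective: data x_i, centers c_j,
% memberships u_{ij}, fuzzifier m > 1.
\begin{align}
  J(U, C) &= \sum_{i=1}^{N} \sum_{j=1}^{c} u_{ij}^{m}\,\lVert x_i - c_j \rVert^2,
  \qquad \sum_{j=1}^{c} u_{ij} = 1 .
\end{align}
% An MM step majorizes the centers-only objective f(C) = \min_U J(U, C)
% by a surrogate that touches it at the current iterate C^{(k)}:
\begin{align}
  g\bigl(C \mid C^{(k)}\bigr) \ge f(C), \qquad
  g\bigl(C^{(k)} \mid C^{(k)}\bigr) = f\bigl(C^{(k)}\bigr),
\end{align}
% so minimizing g is guaranteed to decrease f. One such minimizer is the
% familiar reweighted mean, with weights u_{ij}^m fixed at C^{(k)}:
\begin{align}
  c_j^{(k+1)} = \frac{\sum_{i} u_{ij}^{m}\, x_i}{\sum_{i} u_{ij}^{m}},
\end{align}
% which is why a single inner iteration already suffices for descent.
```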
Accurate and detailed accounts of patient medication changes within a patient's timeline are essential for healthcare providers to deliver appropriate patient care. Changes to a patient's medications may be initiated by healthcare providers or by the patients themselves. Medication changes take many forms, including prescribed medication and related dosage modifications. These changes provide information about a patient's overall health and the rationale that led to the current care. Future care can then build on the resulting state of the patient. This work explores the automatic extraction of medication change information from free-text clinical notes. The Contextual Medication Event Dataset (CMED) is a corpus of clinical notes whose annotations characterize medication changes through multiple change-related attributes, including the type of change (start, stop, increase, etc.), the initiator of the change, temporality, change likelihood, and negation. Using CMED, we identify medication mentions in clinical text and propose three novel BERT-based systems to resolve the annotated medication change characteristics. We demonstrate that our proposed architectures improve medication change classification performance over the initial work on CMED. We identify medication mentions with a high performance of 0.959 F1, and our proposed systems classify medication changes and their attributes at an overall 0.827 F1.
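One of the change-type classifiers could be sketched along the following lines with the Hugging Face transformers library. The label set echoes the change types named above, while the checkpoint choice and the mention-marking scheme are assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative label set based on the change types mentioned in the abstract.
LABELS = ["NoChange", "Start", "Stop", "Increase", "Decrease", "Unknown"]
name = "bert-base-uncased"  # assumed checkpoint; a clinical BERT is plausible
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(LABELS))

# Mark the medication mention so the classifier knows which drug is in focus
# (the marker scheme here is hypothetical).
note = "Patient reports dizziness; [MED] lisinopril [/MED] dose was reduced."
inputs = tokenizer(note, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(LABELS[pred])  # untrained here; fine-tune on CMED before real use
```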
Published studies have shown that gender classification algorithms are biased across gender-race groups. Specifically, females and dark-skinned people obtain unequal accuracy rates. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasian and African-American. Furthermore, these strategies often offer a trade-off between bias and classification accuracy. To further advance the state of the art, we leverage the power of generative views, structured learning, and evidential learning to mitigate gender classification bias. Through extensive experimental validation, we demonstrate the superiority of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-racial groups, achieving state-of-the-art performance in intra- and cross-dataset evaluations.
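For context on the evidential-learning ingredient, below is a minimal Dirichlet-based evidential classification loss in the style of Sensoy et al. (2018); the paper's exact formulation is not given in this abstract and may differ.

```python
import torch
import torch.nn.functional as F

def evidential_mse_loss(logits, targets_onehot):
    """Sketch of an evidential classification loss: evidence
    e = softplus(logits) parameterizes a Dirichlet with alpha = e + 1,
    yielding both expected class probabilities and an uncertainty term."""
    evidence = F.softplus(logits)
    alpha = evidence + 1.0
    S = alpha.sum(dim=-1, keepdim=True)     # Dirichlet strength
    p = alpha / S                           # expected class probabilities
    err = ((targets_onehot - p) ** 2).sum(-1)
    var = (p * (1 - p) / (S + 1.0)).sum(-1) # predictive variance penalty
    return (err + var).mean()
```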