This paper describes the (R)ules (o)f (T)he (R)oad (A)dvisor, an agent that provides recommended and possible actions generated from a set of human-level rules. We describe the architecture and design of RoTRA, both formally and with an example. Specifically, we formalise and implement the UK "Rules of the Road" using RoTRA, and describe how an autonomous vehicle can incorporate it so that it can internally reason about obeying the rules of the road. In addition, the possible actions generated indicate whether the rules state that an action must be taken or is merely recommended, as dictated by the UK Highway Code (the "Rules of the Road"). The benefits of utilising this system include the ability to adapt to different regulations in different jurisdictions; clear traceability from rules to behaviours; and an external, automated accountability mechanism that can check whether the rules were followed in any given situation. A simulation of an autonomous vehicle shows, via concrete examples, how the system can be used by placing the car in a number of scenarios that test its ability to comply with the rules of the road. Autonomous cars that incorporate this system are able to ensure that they obey the rules of the road, and external (legal or regulatory) bodies can verify that the rules are followed transparently, fostering greater trust between car companies, jurisdictions, and the public.
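The rules-to-actions mapping described above can be sketched as a small advisor that separates mandatory actions from recommended ones. This is a minimal illustration in the spirit of the abstract, not RoTRA's actual formalisation; all names and the rule format are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # situation -> applies?
    action: str
    mandatory: bool  # True: the rule says the action MUST be taken

def advise(rules, situation):
    """Return (required, recommended) actions for the current situation,
    mirroring the must-act vs. advised-to-act distinction in the Highway Code."""
    required = [r.action for r in rules if r.mandatory and r.condition(situation)]
    recommended = [r.action for r in rules if not r.mandatory and r.condition(situation)]
    return required, recommended

# Illustrative rules (hypothetical, not from the paper's rule base)
rules = [
    Rule("red_light", lambda s: s.get("light") == "red", "stop", True),
    Rule("pedestrian_near", lambda s: s.get("pedestrian_nearby", False),
         "slow_down", False),
]

req, rec = advise(rules, {"light": "red", "pedestrian_nearby": True})
```

Because each recommendation is traced to a named rule, an external auditor can replay a situation log through `advise` and check compliance rule by rule.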
The challenges associated with analyzing gas chromatography–mass spectrometry (GC-MS) data are numerous. Many of these challenges stem from the fact that electron ionization can make it difficult to recover molecular information, due to the high degree of fragmentation and the accompanying loss of molecular-ion signal. With GC-MS data, many common fragment ions are often shared among closely eluting peaks, necessitating sophisticated methods of analysis. Some of these methods are fully automated, but make assumptions about the data that can introduce artifacts during analysis. Chemometric methods, such as multivariate curve resolution or parallel factor analysis, are particularly attractive because they are flexible and make relatively few assumptions about the data — ideally resulting in fewer artifacts. These methods do, however, require expert user intervention to determine the most relevant regions and the appropriate number of components, $k$, for each region. Automated region-of-interest selection is needed to enable automated batch processing of chromatographic data with advanced signal deconvolution. Here, we propose a new method for automated, untargeted region-of-interest selection that accounts for the multivariate information present in GC-MS data: regions of interest are selected using the ratio of the squared first and second singular values from the singular value decomposition of a window moving along the chromatogram. Assuming that the first singular value accounts primarily for signal and the second accounts primarily for noise, the relationship between these two values can be interpreted as a probability distribution of Fisher ratios. The sensitivity of the algorithm was tested by investigating the concentration at which it no longer selects chromatographic regions known to contain signal.
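The core statistic — the ratio of the squared first and second singular values over a moving window — can be sketched as follows. Window length and step size here are illustrative choices, not the paper's tuned parameters.

```python
import numpy as np

def window_svd_ratio(X, window=20, step=5):
    """Slide a window along the time axis of a (time x m/z) data matrix X
    and compute s1^2 / s2^2 from the SVD of each window.

    Under the assumption that s1 captures mostly chemical signal and s2
    mostly noise, windows dominated by a coherent signal yield large ratios.
    Returns window start indices and the corresponding ratios."""
    starts = np.arange(0, X.shape[0] - window + 1, step)
    ratios = []
    for i in starts:
        s = np.linalg.svd(X[i:i + window], compute_uv=False)
        ratios.append(s[0] ** 2 / s[1] ** 2)
    return starts, np.array(ratios)
```

Thresholding these ratios (interpreted as Fisher-ratio-like scores) then marks the regions of interest handed to the downstream deconvolution step.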
Many real-world recognition problems are characterized by imbalanced or long-tailed label distributions. Such distributions make representation learning more challenging due to limited generalization on tail classes. If the test distribution differs from the training distribution, e.g., uniform versus long-tailed, the problem of distribution shift must also be addressed. To this end, recent works have extended softmax cross-entropy with margin modifications inspired by Bayes' theorem. In this paper, we generalize several of these approaches with a Balanced Product of Experts (BalPoE), which combines a family of models targeting different test-time distributions to tackle the imbalance in the data. The proposed experts are trained in a single stage, either jointly or independently, and fused seamlessly into BalPoE. We show that BalPoE is Fisher-consistent for minimizing the balanced error and perform extensive experiments to validate the effectiveness of our approach. Finally, we investigate the effect of Mixup in this setting, discovering that regularization is a key ingredient for learning calibrated experts. Our experiments show that a regularized BalPoE performs remarkably well on test accuracy and calibration metrics, leading to state-of-the-art results on the CIFAR-100-LT, ImageNet-LT, and iNaturalist-2018 datasets. The code will be made publicly available upon paper acceptance.
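The Bayes-inspired margin modification and expert fusion can be illustrated with a small sketch: each expert shifts the logits by a multiple of the log class priors to target a different test-time distribution, and the experts are fused by averaging log-probabilities (a product of experts up to normalisation). This is a simplified illustration under stated assumptions, not the paper's exact BalPoE formulation; the `tau` values are hypothetical.

```python
import numpy as np

def logit_adjusted_probs(logits, class_priors, tau):
    """Shift logits by -tau * log(prior): tau = 0 keeps the (long-tailed)
    training distribution as target; tau = 1 targets a uniform test
    distribution; larger tau over-compensates toward rare classes."""
    adjusted = logits - tau * np.log(class_priors)
    e = np.exp(adjusted - adjusted.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def balanced_ensemble(logits, class_priors, taus=(0.0, 1.0, 2.0)):
    """Fuse experts with different target distributions by averaging their
    log-probabilities, then renormalising (a product of experts)."""
    logps = [np.log(logit_adjusted_probs(logits, class_priors, t)) for t in taus]
    fused = np.mean(logps, axis=0)
    e = np.exp(fused - fused.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With uniform logits, the adjusted distribution deliberately favours rare classes — the mechanism by which the ensemble counteracts the long tail.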
Transfer learning provides a way of leveraging knowledge from one task when learning another. Performing transfer learning typically involves iteratively updating a model's parameters through gradient descent on a training dataset. In this paper, we introduce a fundamentally different method for transferring knowledge across models: "merging" multiple models into one. Our approach effectively involves computing a weighted average of the models' parameters. We show that this averaging is equivalent to approximately sampling from the posterior over the models' weights. While an isotropic Gaussian approximation works well in some cases, we also demonstrate benefits from approximating the precision matrix via the Fisher information. In sum, our approach makes it possible to combine the "knowledge" in multiple models at an extremely low computational cost compared to standard gradient-based training. We demonstrate that model merging achieves performance comparable to gradient-descent-based transfer learning on intermediate-task training and domain adaptation problems. We also show that our merging procedure makes it possible to combine models in previously unexplored ways. To measure the robustness of our approach, we perform an extensive ablation of our algorithm's design.
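The parameter-averaging idea can be sketched in a few lines: with no weighting the merge is a plain (isotropic) average, and supplying per-parameter diagonal Fisher estimates yields a Fisher-weighted average. A minimal sketch of the idea, not the paper's implementation; how the Fisher values are estimated is left out.

```python
import numpy as np

def merge_models(params_list, fisher_list=None):
    """Merge models by (Fisher-)weighted averaging of their parameters.

    params_list: list of dicts mapping parameter names to arrays.
    fisher_list: optional matching list of per-parameter weights (e.g.
    diagonal Fisher information). None -> uniform weights, i.e. the
    isotropic-Gaussian-posterior case."""
    if fisher_list is None:
        fisher_list = [{k: np.ones_like(v) for k, v in p.items()}
                       for p in params_list]
    merged = {}
    for key in params_list[0]:
        num = sum(f[key] * p[key] for p, f in zip(params_list, fisher_list))
        den = sum(f[key] for f in fisher_list)
        merged[key] = num / den
    return merged
```

Since the merge is a single pass over the parameters, its cost is negligible next to even one epoch of gradient-based fine-tuning.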
The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower-dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor normal models. We show nonasymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model, our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to constant factors, provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure for computing the MLE, known as the flip-flop algorithm, converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
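The flip-flop iteration for the matrix normal case can be sketched directly: each factor is re-estimated in closed form while the other is held fixed. A minimal sketch assuming samples X_i of shape p x q with row covariance A and column covariance B (so vec(X_i) has covariance B ⊗ A); normalisation conventions vary across the literature, and the stopping rule here is a fixed iteration count rather than a convergence test.

```python
import numpy as np

def flip_flop(samples, iters=50):
    """Alternating (flip-flop) updates for the MLE of the Kronecker factors
    of a matrix normal model. samples has shape (n, p, q)."""
    n, p, q = samples.shape
    A = np.eye(p)
    B = np.eye(q)
    for _ in range(iters):
        # Update the row-covariance factor holding B fixed ...
        Binv = np.linalg.inv(B)
        A = sum(X @ Binv @ X.T for X in samples) / (n * q)
        # ... then the column-covariance factor holding A fixed.
        Ainv = np.linalg.inv(A)
        B = sum(X.T @ Ainv @ X for X in samples) / (n * p)
    return A, B
```

Note that A and B are only identifiable up to a reciprocal scalar, so accuracy should be judged on the Kronecker product A ⊗ B rather than on each factor separately.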
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
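The simpler of the two attacks, NAIVEATTACK, amounts to stamping a trigger onto the raw data before distillation begins. The sketch below illustrates that step for image arrays; the patch size, value, and placement are illustrative assumptions, and the paper's actual trigger design (and the iterative DOORPING updates) may differ.

```python
import numpy as np

def add_trigger(images, patch_size=3, value=1.0):
    """Stamp a small bright patch in the bottom-right corner of each image,
    so the trigger is baked into the inputs the distillation procedure sees.
    images: array of shape (n, height, width); returns a poisoned copy."""
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = value
    return poisoned
```

Because the trigger enters before distillation rather than at model-training time, any model later trained on the distilled synthetic set inherits the backdoor — the key difference from prior backdoor attacks.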
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave, time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow, along with the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.