This work applies universal adaptive control to control barrier functions in order to achieve forward invariance of a safe set despite unmatched parametric uncertainty in the dynamics model. The approach combines two ideas. The first is to construct a family of control barrier functions that ensures the system is safe for all possible models. The second is to use online parameter adaptation to select a control barrier function, and a corresponding safe controller, from the admissible set. Although this combination does not necessarily yield forward invariance without additional requirements on the barrier functions, we show that such invariance can be established by simply adjusting the adaptation gain online. As a result, this work represents the first adaptive safety approach that successfully employs the certainty equivalence principle without sacrificing safety guarantees.
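As a toy illustration of the certainty-equivalence idea in this abstract, the sketch below filters a nominal input through a barrier condition evaluated at the current parameter estimate while the estimate adapts online. The scalar plant, barrier, gains, and adaptation law are all hypothetical placeholders, not the paper's construction (which additionally tunes the adaptation gain online to certify invariance).

```python
# Toy 1-D plant: x_dot = theta*x + u with unknown theta; safe set {|x| <= 1};
# barrier h(x) = 1 - x^2. The filter enforces the certainty-equivalent condition
#   dh/dx * (theta_hat*x + u) >= -alpha * h(x).

def safe_input(x, u_nom, theta_hat, alpha=1.0):
    h = 1.0 - x * x
    dhdx = -2.0 * x
    a = dhdx                                  # affine constraint a*u >= b
    b = -alpha * h - dhdx * (theta_hat * x)
    if abs(a) < 1e-9:                         # near x = 0 the constraint is inactive
        return u_nom
    bound = b / a
    return max(u_nom, bound) if a > 0 else min(u_nom, bound)

theta, theta_hat, gamma = 0.8, 0.0, 5.0       # true parameter, estimate, adaptation gain
x, dt, x_max = 0.5, 1e-3, 0.0
for _ in range(5000):
    u = safe_input(x, u_nom=1.5, theta_hat=theta_hat)  # nominal input pushes outward
    theta_hat += dt * gamma * 2.0 * x * x     # inflate the drift estimate, tightening the filter
    x += dt * (theta * x + u)
    x_max = max(x_max, abs(x))
print(f"max |x| over run: {x_max:.3f}")       # remains inside the safe set here
```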
Aggressive motion from agile flight or traversal of irregular terrain induces motion distortion in LiDAR scans, which degrades state estimation and mapping. Methods exist to mitigate this effect, but they remain either too simplistic or too computationally costly for resource-constrained mobile robots. To this end, this paper presents Direct LiDAR-Inertial Odometry (DLIO), a lightweight LiDAR-inertial odometry algorithm with a new coarse-to-fine approach to constructing continuous-time trajectories for precise motion correction. The key to our method is the construction of a set of analytical equations parameterized solely by time, enabling fast and parallelizable point-wise deskewing. This method is feasible only because of the strong convergence properties of our novel nonlinear geometric observer, which provides provably correct state estimates to initialize the sensitive IMU integration step. Moreover, by performing motion correction and prior generation simultaneously, and by registering each scan directly to the map while bypassing scan-to-scan matching, DLIO's condensed architecture is about 20% more computationally efficient than the current state of the art, with a 12% improvement in accuracy. Through extensive tests on multiple public benchmarks and self-collected datasets, we demonstrate DLIO's superior localization accuracy, map quality, and lower computational overhead compared to four state-of-the-art algorithms.
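To convey what per-point, time-parameterized deskewing looks like, here is a minimal sketch in which a constant body-frame velocity and angular rate stand in for the IMU-driven continuous-time trajectory; this is an illustrative simplification, not DLIO's actual equations.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def deskew(points, stamps, v, w):
    """points: (N,3); stamps: (N,) seconds since scan start; v, w: assumed constant."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, stamps)):
        R_t = so3_exp(w * t)       # pose is a closed-form function of t alone:
        p_t = v * t                # no per-point numerical integration needed
        out[i] = R_t @ p + p_t     # re-express the point in the scan-start frame
    return out

pts = np.random.default_rng(0).normal(size=(1000, 3))
stamps = np.linspace(0.0, 0.1, 1000)                   # one 10 Hz sweep
corrected = deskew(pts, stamps, v=np.array([2.0, 0, 0]), w=np.array([0, 0, 0.5]))
print(corrected.shape)
```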
Field robotics in perceptually challenging environments requires fast and accurate state estimation, but modern LiDAR sensors quickly overwhelm current odometry algorithms. To this end, this paper presents a lightweight frontend LiDAR odometry solution with consistent and accurate localization for computationally limited robotic platforms. Our Direct LiDAR Odometry (DLO) method includes several key algorithmic innovations that prioritize computational efficiency and enable the use of dense, minimally preprocessed point clouds to provide accurate pose estimates in real time. This is achieved through a novel keyframing system, which efficiently manages historical map information, in addition to a custom iterative closest point solver for fast point cloud registration with data structure recycling. Our method is more accurate, with lower computational overhead, than the current state of the art, and has been extensively evaluated in several perceptually challenging environments on aerial and legged robots as part of NASA JPL Team CoSTAR's research and development efforts for the DARPA Subterranean Challenge.
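The following sketch shows the general shape of a keyframe-based scan-to-map loop as described above; the registration call is a placeholder for an ICP solver (DLO uses a custom iterative closest point solver), and the distance threshold is an illustrative value.

```python
import numpy as np

KF_DIST = 1.0   # meters between keyframes (hypothetical value)

class KeyframeMap:
    def __init__(self):
        self.poses = []     # keyframe positions, each (3,)
        self.clouds = []    # keyframe point clouds, each (N, 3)

    def submap(self, position, k=5):
        """Concatenate the k nearest keyframe clouds as the registration target."""
        if not self.poses:
            return np.empty((0, 3))
        d = np.linalg.norm(np.asarray(self.poses) - position, axis=1)
        nearest = np.argsort(d)[:k]
        return np.vstack([self.clouds[i] for i in nearest])

    def maybe_add(self, position, cloud):
        """Spawn a new keyframe only when the robot has moved far enough."""
        if not self.poses or min(
            np.linalg.norm(np.asarray(self.poses) - position, axis=1)
        ) > KF_DIST:
            self.poses.append(position.copy())
            self.clouds.append(cloud)
```

In use, each incoming scan would be registered against `submap(pose_estimate)` with the ICP solver, and the resulting pose and cloud passed to `maybe_add(...)`, so the map grows only at keyframes rather than with every scan.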
This paper presents a control-theoretic framework that stably combines online learning with optimal feedback policies for the control of uncertain nonlinear systems. Given unknown parameters within a bounded range, the resulting adaptive control law guarantees convergence of the closed-loop system to the state of zero cost. By adjusting the learning rate online when designing the optimal policy and value function, the proposed framework is able to employ the certainty equivalence principle, a mechanism required to guarantee stable learning and control. The approach is demonstrated on the familiar mountain car problem, where near-optimal behavior is exhibited despite the parametric uncertainty.
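A textbook scalar example makes the certainty-equivalence mechanism concrete: the optimal gain is recomputed from the current parameter estimate at every step while a Lyapunov-based law updates the estimate. This is a standard construction for illustration only, not the paper's general framework (which additionally adjusts the learning rate online).

```python
import numpy as np

# Plant: x_dot = theta*x + u with unknown theta; cost J = integral of (x^2 + u^2).
# Certainty-equivalent policy: plug theta_hat into the scalar Riccati solution.

def lqr_gain(theta_hat):
    # scalar Riccati: 2*theta*p - p^2 + 1 = 0  ->  p = theta + sqrt(theta^2 + 1)
    return theta_hat + np.sqrt(theta_hat**2 + 1.0)

theta, theta_hat, gamma = 1.0, -0.5, 2.0
x, dt = 2.0, 1e-3
for _ in range(10_000):
    u = -lqr_gain(theta_hat) * x       # certainty-equivalent optimal feedback
    theta_hat += dt * gamma * x**2     # law from V = x^2/2 + (theta - theta_hat)^2/(2*gamma)
    x += dt * (theta * x + u)
print(f"x -> {x:.4f}, theta_hat -> {theta_hat:.3f}")
```

With this pairing, V decreases along trajectories (V_dot = -x^2 * sqrt(theta_hat^2 + 1)), so the state converges to the zero-cost origin even though theta_hat need not converge to theta.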
This work develops a new direct adaptive control framework that extends the certainty equivalence principle to general nonlinear systems with unmatched model uncertainty. The approach adjusts the rate of adaptation online to eliminate the effect of parameter-estimation transients on closed-loop stability. If a corresponding model-parameterized Lyapunov function or contraction metric is known, the method can immediately incorporate a previously designed or learned feedback policy. Simulation results for various nonlinear systems with unmatched uncertainty demonstrate the approach.
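The paper's specific online rate-adjustment mechanism is not reproduced here; as a generic illustration of taming estimation transients, the sketch below instead uses a normalized adaptation law, a standard device in adaptive control that caps the effective update rate when the state is large.

```python
# Plant: x_dot = theta*x + u; the policy u = -(theta_hat + 2)*x stabilizes the
# certainty-equivalent model with margin 2. All values are illustrative.

theta, theta_hat = 1.5, 0.0
x, dt, gamma = 3.0, 1e-3, 5.0
for _ in range(10_000):
    u = -(theta_hat + 2.0) * x
    # the 1/(1 + x^2) normalization bounds the update rate during large transients
    theta_hat += dt * gamma * (x**2) / (1.0 + x**2)
    x += dt * (theta * x + u)
print(f"x -> {x:.4f}, theta_hat -> {theta_hat:.3f}")
```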
Continuous deep learning models, known as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the past several years. Despite their rapidly growing impact, formal analysis techniques for these systems are lacking. In this paper, we consider a general class of neural ODEs with varying architectures and layers, and introduce a novel reachability framework that allows for the formal analysis of their behavior. The methods developed for the reachability analysis of neural ODEs are implemented in a new tool called NNVODE. Specifically, our work extends an existing neural network verification tool to support neural ODEs. We demonstrate the capabilities and efficacy of our methods through the analysis of a set of benchmarks that include neural ODEs used for classification as well as in control and dynamical systems, including an evaluation of the efficacy and capabilities of our approach against existing software tools from the continuous-time systems reachability literature, where such comparisons are possible.
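To give a flavor of reachability for a neural ODE, the toy sketch below pushes an interval box through an Euler-discretized neural ODE with interval bound propagation. Real tools such as the NNVODE approach described above use far tighter set representations and account for time-discretization error; none of that is modeled here.

```python
import numpy as np

# Neural ODE: x_dot = W2 @ tanh(W1 @ x + b1) + b2, with random illustrative weights.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(2)

def affine_bounds(W, b, lo, hi):
    """Exact interval image of an affine map, splitting W by sign."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def f_bounds(lo, hi):
    l1, u1 = affine_bounds(W1, b1, lo, hi)
    l1, u1 = np.tanh(l1), np.tanh(u1)          # tanh is elementwise monotone
    return affine_bounds(W2, b2, l1, u1)

lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])   # box of initial states
h = 0.01
for _ in range(100):                                   # reach set over t in [0, 1]
    dlo, dhi = f_bounds(lo, hi)
    lo, hi = lo + h * dlo, hi + h * dhi                # interval Euler step
print("bounding box at t = 1:", lo, hi)
```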
As the number of heterogeneous IP-connected devices and traffic volume increase, so does the potential for security breaches. The undetected exploitation of these breaches can bring severe cybersecurity and privacy risks. Anomaly-based intrusion detection systems (IDSs) play an essential role in network security. In this paper, we present a practical unsupervised anomaly-based deep learning detection system called ARCADE (Adversarially Regularized Convolutional Autoencoder for unsupervised network anomaly DEtection). With a convolutional autoencoder (AE), ARCADE automatically builds a profile of normal traffic using a subset of raw bytes from a few initial packets of network flows, so that potential network anomalies and intrusions can be efficiently detected before they cause further damage to the network. ARCADE is trained exclusively on normal traffic. An adversarial training strategy is proposed to regularize and decrease the AE's capability to reconstruct network flows that lie outside the normal distribution, thereby improving its anomaly detection capabilities. The proposed approach is more effective than state-of-the-art deep learning approaches for network anomaly detection. Even when examining only the two initial packets of a network flow, ARCADE can effectively detect malware infection and network attacks. ARCADE has 20 times fewer parameters than the baselines, achieving significantly faster detection speed and reaction time.
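A minimal sketch of the core detection mechanism follows: a convolutional AE is trained on normal traffic only, and the reconstruction error of a flow's first raw bytes serves as the anomaly score. ARCADE's adversarial regularization is omitted, and the layer sizes and byte count are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

BYTES_PER_FLOW = 200   # e.g. first packets, truncated/padded (hypothetical size)

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, flow_bytes):
    """flow_bytes: (B, 1, BYTES_PER_FLOW) tensor of raw bytes scaled to [0, 1]."""
    with torch.no_grad():
        recon = model(flow_bytes)
    return ((recon - flow_bytes) ** 2).mean(dim=(1, 2))   # per-flow reconstruction MSE

model = ConvAE()
x = torch.rand(4, 1, BYTES_PER_FLOW)    # stand-in for normalized flow bytes
print(anomaly_score(model, x))          # flag flows whose score exceeds a threshold
```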
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
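The parallel decoding mentioned above can be sketched schematically: all image tokens start masked, and at each of a small number of steps the model's most confident predictions are committed while the remainder stay masked under a cosine schedule (in the spirit of MaskGIT-style decoding). `predict_logits` is a stand-in for the actual text-conditioned Transformer, and all sizes are illustrative.

```python
import math
import numpy as np

VOCAB, N_TOKENS, STEPS, MASK = 1024, 256, 12, -1
rng = np.random.default_rng(0)

def predict_logits(tokens, text_embedding):
    # placeholder: a real model returns per-position logits conditioned on the
    # text embedding and the currently committed tokens
    return rng.normal(size=(N_TOKENS, VOCAB))

def parallel_decode(text_embedding):
    tokens = np.full(N_TOKENS, MASK)
    for step in range(STEPS):
        logits = predict_logits(tokens, text_embedding)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred, conf = probs.argmax(-1), probs.max(-1)
        conf[tokens != MASK] = np.inf                       # committed tokens stay fixed
        frac = math.cos(math.pi / 2 * (step + 1) / STEPS)   # cosine masking schedule
        n_masked = int(frac * N_TOKENS)
        tokens = np.where(tokens == MASK, pred, tokens)     # tentatively commit all
        tokens[np.argsort(conf)[:n_masked]] = MASK          # re-mask least confident
    return tokens

print(parallel_decode(text_embedding=None)[:16])
```

Because every position is predicted simultaneously and only a dozen steps are needed, this style of decoding requires far fewer forward passes than generating the token grid one position at a time.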
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data, and more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making and improve patient outcomes, reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
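The degradation model in this abstract admits a simple, equivalent sketch: multiplying the spectral-domain data by a Gaussian window is a low-pass operation that broadens the axial point-spread function after the inverse transform. The window width and reflector positions below are illustrative.

```python
import numpy as np

def simulate_reduced_bandwidth(a_scan, rel_width=0.05):
    """Low-pass a depth profile via a Gaussian window on its spectrum."""
    n = a_scan.shape[-1]
    spectrum = np.fft.fft(a_scan)
    freqs = np.fft.fftfreq(n)                     # cycles/sample, centered at 0
    window = np.exp(-0.5 * (freqs / rel_width) ** 2)
    return np.real(np.fft.ifft(spectrum * window))

depth = np.zeros(512)
depth[[100, 110, 300]] = 1.0                      # idealized point reflectors
blurred = simulate_reduced_bandwidth(depth)
# peaks broaden and lose amplitude; the nearby reflectors at 100 and 110 now overlap
print(f"peak value after windowing: {blurred.max():.3f} (was 1.000)")
```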