Forensic iris recognition, as opposed to live iris recognition, is an emerging research area that leverages the discriminative power of iris biometrics to aid human examiners in identifying deceased subjects. As a machine-learning-based technique applied to a predominantly human-controlled task, forensic recognition serves as a "back-up" to human expertise in post-mortem identification. As such, the machine learning model must be (a) interpretable, and (b) post-mortem-specific, to account for changes in decaying eye tissue. In this work, we propose a method that satisfies both requirements, and that approaches the creation of a post-mortem-specific feature extractor in a novel, human-perception-driven way. We first train a learning-based feature detector on annotations of image regions that humans highlighted as salient for their decisions. In effect, the method learns interpretable features directly from humans, rather than purely data-driven features. Second, regional iris codes (again, built with human-driven filtering kernels) are used to pair the detected iris patches, and these pairings are translated into a patch-based comparison score. In this way, our method offers human examiners human-understandable visual cues to justify the identity decision and the corresponding confidence score. When tested on a dataset of post-mortem iris images from 259 deceased subjects, the proposed method places among the three best iris matchers evaluated, ranking better than the commercial (and non-human-interpretable) VeriEye method. In summary, we propose a unique post-mortem iris recognition method trained with human saliency that offers fully interpretable comparison outcomes in the context of forensic examination, while achieving state-of-the-art recognition performance.
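To make the patch-based scoring concrete, here is a minimal sketch of how paired binary iris-code patches could be turned into a single comparison score via fractional Hamming distances. The function names, the masking scheme, and the mean aggregation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def patch_hamming(code_a: np.ndarray, code_b: np.ndarray, mask: np.ndarray) -> float:
    """Fractional Hamming distance between two binary patch codes,
    counting only bits that are valid in both patches (per the mask)."""
    valid = mask.astype(bool)
    if valid.sum() == 0:
        return 1.0  # no usable bits: treat as maximally dissimilar
    return float(np.count_nonzero(code_a[valid] != code_b[valid]) / valid.sum())

def pairwise_score(patches_a, patches_b) -> float:
    """Aggregate patch-level distances into one comparison score.
    Each element is (binary_code, validity_mask) for one detected patch;
    patches are assumed to have been paired up beforehand."""
    distances = [patch_hamming(ca, cb, ma & mb)
                 for (ca, ma), (cb, mb) in zip(patches_a, patches_b)]
    # Lower mean Hamming distance -> higher similarity score in [0, 1].
    return 1.0 - float(np.mean(distances))
```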
In general, biometry-based control systems may not rely on individuals' expected behavior or cooperation to operate appropriately. Instead, such systems should be aware of malicious procedures for unauthorized access attempts. Some works available in the literature suggest addressing the problem through gait recognition approaches. Such methods aim at identifying human beings through intrinsic perceptible features, regardless of the clothes or accessories worn. Although the problem denotes a relatively long-standing challenge, most of the techniques developed to handle it present several drawbacks related to feature extraction and low classification rates, among other issues. Recent deep learning approaches, however, constitute a powerful set of tools able to handle almost any image- and computer-vision-related problem, and have provided the most significant results for gait recognition as well. Therefore, this work provides a surveyed compilation of recent works on biometric identification through gait recognition, with a focus on deep learning approaches, emphasizing their benefits and exposing their weaknesses. It also presents categorized and characterized descriptions of the datasets, approaches, and architectures employed to tackle the associated constraints.
The widespread diffusion of synthetically generated content is a serious threat that needs urgent countermeasures. The generation of synthetic content is not restricted to multimedia data such as videos, photographs, or audio sequences, but covers a significantly broader area that can include biological images as well, such as western-blot and microscopic images. In this paper, we focus on the detection of synthetically generated western-blot images. Western-blot images are widely explored in the biomedical literature, and it has already been shown how easily these images can be counterfeited, with little hope of spotting the manipulations by visual inspection or by standard forensic detectors. To overcome the lack of a publicly available dataset, we create a new dataset comprising more than 14K original western-blot images and 18K synthetic western-blot images, generated by three different state-of-the-art generation methods. We then investigate different strategies to detect synthetic western blots, exploring binary classification methods as well as one-class detectors. In both scenarios, we never exploit synthetic western-blot images at the training stage. The achieved results show that synthetically generated western-blot images can be spotted with good accuracy, even though the exploited detectors are not optimized for the synthetic versions of these scientific images.
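As a rough illustration of the one-class setting described above, the following sketch fits a detector on features of real western-blot images only, never seeing synthetic ones at training time. The feature matrices are random stand-ins, and OneClassSVM is just one plausible one-class detector, not necessarily the one used in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical feature matrix: one row of forensic features per *real*
# western-blot image (no synthetic images are seen during training).
rng = np.random.default_rng(0)
real_features = rng.normal(size=(500, 64))   # stand-in for real-image features
test_features = rng.normal(size=(10, 64))    # images to screen

detector = OneClassSVM(kernel="rbf", nu=0.05).fit(real_features)
labels = detector.predict(test_features)     # +1 = real-like, -1 = flagged
scores = detector.decision_function(test_features)  # higher = more "real"
```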
The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and bound the inertial solution drift. Platforms in different operational environments may be prevented at some point from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described through the notable works that implemented its concept, use cases, relevant state updates, and the corresponding measurement models. By matching the appropriate constraint to a given scenario, one can improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
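As one concrete example of such aiding, a zero-velocity pseudo-measurement (a common "knowledge of current conditions" update) can be fused through a standard Kalman measurement update. The state layout and noise values below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse measurement z into state x."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# Illustrative 6-state filter: position (3) and velocity (3).
x, P = np.zeros(6), np.eye(6)
# Zero-velocity update (ZUPT): when the platform is known to be static,
# the pseudo-measurement is "velocity = 0", observing the velocity states.
H = np.hstack([np.zeros((3, 3)), np.eye(3)])
z = np.zeros(3)
R = 1e-4 * np.eye(3)  # how strongly we trust the constraint
x, P = kalman_update(x, P, z, H, R)
```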
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
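To illustrate the two-level structure, here is a schematic toy version of the frozen-state idea on a small tabular MDP: the lower level runs finite-horizon value iteration on the fast states with the slow state held fixed. The shapes, horizon, and random kernels are placeholder assumptions, not the paper's algorithm in full.

```python
import numpy as np

# Toy tabular MDP factored into fast states f and slow states s.
n_fast, n_slow, n_actions, gamma, T = 5, 3, 2, 0.95, 20
rng = np.random.default_rng(1)
# P_fast[s] is the fast-state transition kernel with the slow state frozen at s.
P_fast = rng.dirichlet(np.ones(n_fast), size=(n_slow, n_actions, n_fast))
reward = rng.random((n_slow, n_actions, n_fast))

def frozen_value(s: int) -> np.ndarray:
    """Lower level: finite-horizon value iteration on the fast states,
    holding the slow state frozen at s for all T steps."""
    V = np.zeros(n_fast)
    for _ in range(T):
        Q = reward[s] + gamma * P_fast[s] @ V  # shape (n_actions, n_fast)
        V = Q.max(axis=0)
    return V

# Upper level: the frozen-state values feed an auxiliary MDP that
# transitions on the slower timescale (sketched here as simply
# collecting the lower-level solutions for each slow state).
V_lower = np.stack([frozen_value(s) for s in range(n_slow)])
```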
In the present work we propose an unsupervised ensemble method consisting of oblique trees that can address the task of auto-encoding, namely Oblique Forest AutoEncoders (briefly, OF-AE). Our method is a natural extension of the eForest encoder introduced in [1]. More precisely, by employing oblique splits consisting of multivariate linear combinations of features instead of axis-parallel ones, we devise an auto-encoder method through the computation of a sparse solution to a set of linear inequalities derived from feature-value constraints. The code for reproducing our results is available at https://github.com/CDAlecsa/Oblique-Forest-AutoEncoders.
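One way to read the decoding step is as follows: each leaf of an oblique tree corresponds to a set of linear inequalities A x <= b collected along the root-to-leaf path, and a sparse reconstruction can be obtained by minimizing the L1 norm subject to those inequalities. The sketch below is our interpretation of that step, not code from the linked repository.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_decode(A: np.ndarray, b: np.ndarray):
    """Recover a sparse x with A @ x <= b by minimizing ||x||_1,
    using the standard split x_i <= t_i, -x_i <= t_i with t >= 0."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])  # minimize sum(t)
    A_ub = np.block([
        [A,          np.zeros((m, n))],            # A x <= b
        [np.eye(n),  -np.eye(n)],                   # x - t <= 0
        [-np.eye(n), -np.eye(n)],                   # -x - t <= 0
    ])
    b_ub = np.concatenate([b, np.zeros(2 * n)])
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n] if res.success else None
```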
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task -- the task "features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data and fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. In contrast, to learn the representations that people use, and thereby their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
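To make the similarity-query idea concrete, the following sketch shows how a single query answer ("A is more like B than like C") could feed a triplet-style contrastive objective over behavior embeddings. The encoder, embeddings, and margin are placeholder assumptions, not the paper's model.

```python
import numpy as np

def triplet_loss(anchor, similar, dissimilar, margin=1.0):
    """Contrastive-style objective built from a human similarity query:
    the user said `similar` matches `anchor` and `dissimilar` does not."""
    d_pos = np.sum((anchor - similar) ** 2)
    d_neg = np.sum((anchor - dissimilar) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Placeholder embeddings of three behaviors produced by some encoder phi.
rng = np.random.default_rng(0)
phi_a, phi_s, phi_d = rng.normal(size=(3, 16))
loss = triplet_loss(phi_a, phi_s, phi_d)
# Minimizing this loss over the encoder's parameters would pull behaviors
# the user calls similar together and push dissimilar ones apart.
```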
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Deep learning models are known to put the privacy of their training data at risk, which poses challenges for their safe and ethical release to the public. Differentially private stochastic gradient descent is the de facto standard for training neural networks without leaking sensitive information about the training data. However, applying it to models for graph-structured data poses a novel challenge: unlike with i.i.d. data, sensitive information about a node in a graph can leak not only through its own gradients, but also through the gradients of all nodes within a larger neighborhood. In practice, this limits privacy-preserving deep learning on graphs to very shallow graph neural networks. We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph. We develop three random-walk-based methods for generating such disjoint subgraphs and perform a careful analysis of the data-generating distributions to provide strong privacy guarantees. Through extensive experiments, we show that our method greatly outperforms the state-of-the-art baseline on three large graphs, and matches or outperforms it on four smaller ones.
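As an illustration of one plausible variant of the subgraph-generation step (the paper proposes three), the sketch below greedily grows disjoint random walks so that no node appears in more than one subgraph. The walk length and tie-breaking are illustrative assumptions.

```python
import random

def disjoint_random_walk_subgraphs(adj, walk_len=8, seed=0):
    """Partition nodes into disjoint subgraphs by running random walks
    that may only visit nodes not yet assigned to any subgraph.
    `adj` maps each node to a list of its neighbors. This is one of
    several plausible realizations, not the paper's exact method."""
    random.seed(seed)
    unassigned = set(adj)
    subgraphs = []
    while unassigned:
        node = random.choice(sorted(unassigned))  # fresh walk start
        walk = [node]
        unassigned.discard(node)
        for _ in range(walk_len - 1):
            frontier = [v for v in adj[walk[-1]] if v in unassigned]
            if not frontier:
                break  # walk is stuck among already-assigned nodes
            node = random.choice(frontier)
            walk.append(node)
            unassigned.discard(node)
        subgraphs.append(walk)
    return subgraphs

# Example: a small cycle graph on 6 nodes.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(disjoint_random_walk_subgraphs(adj, walk_len=3))
```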
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing that similarity. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies for evaluating and improving model performance are reviewed.
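Since the proposed PGT approximation rests on inter- and intra-rater reliability, here is a small sketch computing both with Cohen's kappa, one standard reliability measure. The labels are made-up illustrative data, and the choice of kappa is our assumption, not necessarily the paper's metric.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Labels from two annotators on the same 8 cases (illustrative data).
rater_1 = np.array([1, 0, 1, 1, 0, 1, 0, 1])
rater_2 = np.array([1, 0, 0, 1, 0, 1, 1, 1])

# Inter-rater reliability: agreement between two different annotators.
inter = cohen_kappa_score(rater_1, rater_2)

# Intra-rater reliability: the same annotator re-labeling the same cases.
rater_1_repeat = np.array([1, 0, 1, 1, 1, 1, 0, 1])
intra = cohen_kappa_score(rater_1, rater_1_repeat)

print(f"inter-rater kappa={inter:.2f}, intra-rater kappa={intra:.2f}")
```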