Every autonomous driving dataset has a different sensor configuration, originates from a distinct geographic region, and covers various driving situations. As a result, 3D detectors tend to overfit the dataset they are trained on, which leads to a drastic drop in accuracy when a detector trained on one dataset is tested on another. We observe that differences in lidar scan patterns form a large component of this degraded performance. We address this by designing a novel viewer-centred surface completion network (VCN) to complete the surfaces of objects of interest within an unsupervised domain adaptation framework, SEE-VCN. With SEE-VCN, we obtain a unified representation of objects across datasets, allowing the network to focus on learning geometry rather than overfitting to scan patterns. By adopting a domain-invariant representation, SEE-VCN can be classed as a multi-target domain adaptation approach in which no annotations or re-training are required to obtain 3D detections for new scan patterns. Through extensive experiments, we show that our approach outperforms previous domain adaptation methods in multiple domain adaptation settings. Our code and data are available at https://github.com/darrenjkt/see-vcn.
Compared to human drivers, autonomous vehicles have the potential to reduce accident rates, which has driven their rapid development over the past few years. At the higher SAE (Society of Automotive Engineers) automation levels, responsibility for the safety of the vehicle and its passengers shifts from the driver to the automated system, so thorough validation of such systems is essential. Recently, academia and industry have adopted scenario-based evaluation as a complementary approach to road testing, reducing the overall testing effort required. A system's flaws must be identified before it is deployed on public roads, as there is no safety driver to guarantee the reliability of such a system. This paper proposes a reinforcement learning (RL) based scenario falsification method to search for high-risk scenarios in pedestrian-crossing traffic situations. We define a scenario as risky when the system under test (SUT) does not satisfy the requirements. The reward function of our RL approach is based on Intel's Responsibility-Sensitive Safety (RSS), the Euclidean distance, and the distance to a potential collision.
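The reward ingredients mentioned in this abstract can be illustrated with a minimal sketch. The safe-distance expression below follows the published RSS longitudinal formulation, but the parameter values and the `risk_reward` shaping are illustrative assumptions, not the paper's actual reward function.

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_accel_max=3.0, b_brake_min=4.0,
                                   b_brake_max=8.0):
    # RSS minimum safe longitudinal gap: the rear vehicle may accelerate at
    # a_accel_max for the response time rho, then brakes with at least
    # b_brake_min, while the front vehicle brakes with at most b_brake_max.
    # Parameter values here are illustrative, not those used in the paper.
    v_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after ** 2 / (2.0 * b_brake_min)
         - v_front ** 2 / (2.0 * b_brake_max))
    return max(d, 0.0)

def risk_reward(gap, v_rear, v_front):
    # Hypothetical reward shaping for the falsification search: the closer
    # the actual gap is to (or below) the RSS-safe gap, the higher the
    # reward, steering the RL agent toward riskier scenarios.
    d_safe = rss_safe_longitudinal_distance(v_rear, v_front)
    return max(0.0, d_safe - gap) / max(d_safe, 1e-9)
```

Rewarding proximity to an RSS violation, rather than only actual collisions, gives the search a dense signal even in episodes where no crash occurs.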
Recent autonomous vehicle (AV) technology includes machine learning and probabilistic techniques that add significant complexity to traditional verification and validation methods. Over the past few years, the research community and industry have widely adopted scenario-based testing; because it focuses directly on the relevant critical road situations, it can reduce the effort required for testing. Encoding the behaviour of real-world traffic participants is essential for effectively evaluating the system under test (SUT) in scenario-based testing. It is therefore necessary to capture, from real-world data, scenario parameters that can be realistically modelled in simulation. The main focus of this paper is to identify a list of meaningful parameters that can adequately model real-world lane-change scenarios. With these parameters, a parameter space can be constructed that is capable of efficiently generating a range of challenging scenarios for AV testing. We validate our approach using the root mean square error (RMSE), comparing scenarios generated with the proposed parameters against real-world trajectory data. In addition, we demonstrate that adding perturbations to some of the scenario parameters can produce different scenarios, and we employ the Responsibility-Sensitive Safety (RSS) metric to measure the risk of these scenarios.
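The RMSE validation step described above reduces to a simple computation; a minimal sketch, assuming trajectories are given as equal-length sequences of scalar samples (the paper's actual trajectory representation may differ):

```python
import math

def rmse(generated, reference):
    # Root mean square error between a generated trajectory and a
    # reference trajectory, both as equal-length lists of scalar samples.
    assert len(generated) == len(reference)
    return math.sqrt(sum((g - r) ** 2 for g, r in zip(generated, reference))
                     / len(generated))
```

A low RMSE against recorded trajectories indicates the parameterised scenarios reproduce real-world behaviour faithfully.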
Sampling discrepancies between lidar sensors from different manufacturers and of different models lead to inconsistent representations of objects. This causes performance degradation when a 3D detector trained on one lidar is tested on other types of lidar. Significant advances in lidar manufacturing have brought mechanical, solid-state, and, recently, adjustable scan pattern lidars. For the latter, existing work often requires fine-tuning the model each time the scan pattern is adjusted, which is infeasible. We explicitly deal with the sampling discrepancy by proposing a novel unsupervised multi-target domain adaptation framework, SEE, for transferring the performance of state-of-the-art 3D detectors across both fixed and flexible scan pattern lidars without requiring fine-tuning of the model by end-users. Our approach interpolates the underlying geometry and normalises the scan pattern of objects from different lidars before passing them to the detection network. We demonstrate the effectiveness of SEE on public datasets, achieving state-of-the-art results, and additionally provide quantitative results on a novel high-resolution lidar to prove the industry applicability of our framework. This dataset and our code will be made publicly available.
The Elo algorithm, due to its simplicity, is widely used for rating in sports competitions as well as in other applications where the rating/ranking is a useful tool for predicting future results. However, despite its widespread use, a detailed understanding of the convergence properties of the Elo algorithm is still lacking. Aiming to fill this gap, this paper presents a comprehensive (stochastic) analysis of the Elo algorithm, considering round-robin (one-on-one) competitions. Specifically, analytical expressions are derived characterizing the behavior/evolution of the skills and of important performance metrics. Then, taking into account the relationship between the behavior of the algorithm and the step-size value, which is a hyperparameter that can be controlled, some design guidelines as well as discussions about the performance of the algorithm are provided. To illustrate the applicability of the theoretical findings, experimental results are shown, corroborating the very good match between analytical predictions and those obtained from the algorithm using real-world data (from the Italian SuperLega, Volleyball League).
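The Elo update analysed in this paper is standard and compact; a minimal sketch, with the step size `K` being the controllable hyperparameter whose influence the analysis characterises:

```python
def elo_expected(ra, rb):
    # Expected score of player A against player B under the Elo model:
    # a logistic curve in the rating difference, scaled by 400.
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def elo_update(ra, rb, score_a, k=32.0):
    # One Elo step: move A's rating toward the observed result.
    # score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    # k is the step size studied in the paper's convergence analysis.
    ea = elo_expected(ra, rb)
    return ra + k * (score_a - ea)
```

Note how a win against a stronger opponent (larger rating gap) yields a larger update, which is the stochastic-approximation behaviour the paper analyses.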
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - São Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
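The core PINN idea of minimising governing-equation residuals at sample points, together with the resampling trick described above, can be sketched on a toy problem. This is not the paper's Navier-Stokes model: it fits a cubic trial solution to the ODE u'(t) = cos(t) with u(0) = 0 on [0, 2], resampling the collocation points at every step; all hyperparameters are illustrative assumptions.

```python
import math
import random

def u(c, t):
    # Cubic trial solution u(t) = c0 + c1*t + c2*t^2 + c3*t^3.
    return c[0] + c[1] * t + c[2] * t ** 2 + c[3] * t ** 3

def du(c, t):
    # Analytic derivative of the trial solution.
    return c[1] + 2 * c[2] * t + 3 * c[3] * t ** 2

def loss_and_grad(c, ts):
    # Physics residual of the toy ODE u'(t) = cos(t) at the collocation
    # points, plus a boundary term enforcing u(0) = 0.
    g = [0.0] * 4
    loss = 0.0
    n = len(ts)
    for t in ts:
        r = du(c, t) - math.cos(t)
        loss += r * r / n
        basis = [0.0, 1.0, 2 * t, 3 * t ** 2]  # d(residual)/dc_k
        for k in range(4):
            g[k] += 2 * r * basis[k] / n
    b = u(c, 0.0)                               # boundary residual
    loss += b * b
    g[0] += 2 * b
    return loss, g

def train(steps=5000, n_points=32, lr=0.01, seed=0):
    rng = random.Random(seed)
    c = [0.0] * 4
    for _ in range(steps):
        # Resample the collocation points each step (near-zero extra cost).
        ts = [rng.uniform(0.0, 2.0) for _ in range(n_points)]
        _, g = loss_and_grad(c, ts)
        c = [ck - lr * gk for ck, gk in zip(c, g)]
    return c
```

Resampling means the residual is driven down over the whole domain rather than only at a fixed grid, which is the benefit the paper evaluates for small batch sizes.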
Can we leverage the audiovisual information already present in video to improve self-supervised representation learning? To answer this question, we study various pretraining architectures and objectives within the masked autoencoding framework, motivated by the success of similar methods in natural language and image understanding. We show that we can achieve significant improvements on audiovisual downstream classification tasks, surpassing the state-of-the-art on VGGSound and AudioSet. Furthermore, we can leverage our audiovisual pretraining scheme for multiple unimodal downstream tasks using a single audiovisual pretrained model. We additionally demonstrate the transferability of our representations, achieving state-of-the-art audiovisual results on Epic Kitchens without pretraining specifically for this dataset.
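The masked autoencoding framework referenced above starts from random token masking; a minimal sketch of that step, with the 75% mask ratio and 196-token count being common illustrative values from the image-MAE literature, not necessarily this paper's configuration:

```python
import random

def random_masking(n_tokens, mask_ratio, rng):
    # MAE-style random masking: keep a random subset of token indices and
    # mask the rest. The encoder sees only the kept tokens; the decoder
    # reconstructs the masked ones. Returns (kept, masked) index lists.
    ids = list(range(n_tokens))
    rng.shuffle(ids)
    n_keep = int(n_tokens * (1 - mask_ratio))
    return sorted(ids[:n_keep]), sorted(ids[n_keep:])
```

For audiovisual pretraining, the same mechanism would be applied to the audio and video token streams, which is what makes a single pretrained model reusable for unimodal downstream tasks.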
Integrated information theory (IIT) is a theoretical framework that provides a quantitative measure to estimate when a physical system is conscious, its degree of consciousness, and the complexity of the qualia space that the system is experiencing. Formally, IIT rests on the assumption that if a surrogate physical system can fully embed the phenomenological properties of consciousness, then the system properties must be constrained by the properties of the qualia being experienced. Following this assumption, IIT represents the physical system as a network of interconnected elements that can be thought of as a probabilistic causal graph, $\mathcal{G}$, where each node has an input-output function and all the graph is encoded in a transition probability matrix. Consequently, IIT's quantitative measure of consciousness, $\Phi$, is computed with respect to the transition probability matrix and the present state of the graph. In this paper, we provide a random search algorithm that is able to optimize $\Phi$ in order to investigate, as the number of nodes increases, the structure of the graphs that have higher $\Phi$. We also provide arguments that show the difficulties of applying more complex black-box search algorithms, such as Bayesian optimization or metaheuristics, in this particular problem. Additionally, we suggest specific research lines for these techniques to enhance the search algorithm that guarantees maximal $\Phi$.
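The random search strategy described above can be sketched generically. Computing $\Phi$ itself requires the transition probability matrix machinery of IIT, which is beyond this sketch, so the objective below is a hypothetical stand-in (edge count), clearly not $\Phi$; the loop structure is the point.

```python
import random

def random_search(score, n_nodes, iters=200, seed=0):
    # Generic random search: sample random directed graphs as 0/1
    # adjacency matrices and keep the highest-scoring one. In the paper
    # the score would be IIT's Phi; here `score` is a stand-in callable.
    rng = random.Random(seed)
    best_g, best_s = None, float("-inf")
    for _ in range(iters):
        g = [[rng.randint(0, 1) for _ in range(n_nodes)]
             for _ in range(n_nodes)]
        s = score(g)
        if s > best_s:
            best_g, best_s = g, s
    return best_g, best_s

# Hypothetical stand-in objective: number of edges (NOT Phi).
def edge_count(g):
    return sum(sum(row) for row in g)
```

Because each evaluation is independent, this baseline parallelises trivially, which matters when each $\Phi$ evaluation is expensive and grows quickly with the number of nodes.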
In this paper, we consider the problem where a drone has to collect semantic information to classify multiple moving targets. In particular, we address the challenge of computing control inputs that move the drone to informative viewpoints, in terms of position and orientation, when the information is extracted using a "black-box" classifier, e.g., a deep learning neural network. These algorithms typically lack analytical relationships between the viewpoints and their associated outputs, preventing their use in information-gathering schemes. To fill this gap, we propose a novel attention-based architecture, trained via Reinforcement Learning (RL), that outputs the next viewpoint for the drone favoring the acquisition of evidence from as many unclassified targets as possible while reasoning about their movement, orientation, and occlusions. Then, we use a low-level MPC controller to move the drone to the desired viewpoint taking into account its actual dynamics. We show that our approach not only outperforms a variety of baselines but also generalizes to scenarios unseen during training. Additionally, we show that the network scales to large numbers of targets and generalizes well to different movement dynamics of the targets.
Atrial Fibrillation (AF) is characterized by disorganised electrical activity in the atria and is known to be sustained by the presence of regions of fibrosis (scars) or functional cellular remodeling, both of which may lead to areas of slow conduction. Estimating the effective conductivity of the myocardium and identifying regions of abnormal propagation is therefore crucial for the effective treatment of AF. We hypothesise that the spatial distribution of tissue conductivity can be directly inferred from an array of concurrently acquired contact electrograms (EGMs). We generate a dataset of simulated cardiac AP propagation using randomised scar distributions and a phenomenological cardiac model and calculate contact electrograms at various positions on the field. A deep neural network, based on a modified U-net architecture, is trained to estimate the location of the scar and quantify conductivity of the tissue with a Jaccard index of $91$%. We adapt a wavelet-based surrogate testing analysis to confirm that the inferred conductivity distribution is an accurate representation of the ground truth input to the model. We find that the root mean square error (RMSE) between the ground truth and our predictions is significantly smaller ($p_{val}=0.007$) than the RMSE between the ground truth and surrogate samples.
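The Jaccard index used above to score scar localisation has a simple definition; a minimal sketch, assuming the predicted and ground-truth scar maps are flattened binary masks (the paper's masks are 2D fields, but the metric is the same):

```python
def jaccard(pred, truth):
    # Jaccard index (intersection over union) between two binary masks,
    # given as equal-length flat lists of 0/1 values. Two empty masks are
    # treated as a perfect match.
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0
```

A Jaccard index of 91%, as reported, means the predicted and true scar regions overlap in 91% of their combined area.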