Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms towards practical applications. This is largely because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid the double-sampling issue in computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing state-of-the-art algorithms.
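To make the double-sampling issue concrete: a variance penalty contains the square of an expectation, and differentiating (E[X])^2 requires the product of two independent sample averages. A standard way around this (a generic sketch of the duality trick, not necessarily the paper's exact formulation) applies Fenchel duality to the square function, introducing a scalar dual variable nu:

```latex
\mathrm{Var}(X)
  = \mathbb{E}[X^{2}] - \bigl(\mathbb{E}[X]\bigr)^{2}
  = \mathbb{E}[X^{2}] - \max_{\nu \in \mathbb{R}} \bigl(2\nu\,\mathbb{E}[X] - \nu^{2}\bigr)
  = \min_{\nu \in \mathbb{R}} \mathbb{E}\bigl[(X-\nu)^{2}\bigr].
```

The final expression is a single expectation that is linear in the distribution of X, so one sample yields an unbiased gradient with respect to both nu and the policy parameters; alternating updates on the two then optimize the regularized objective without double sampling.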
Camera pose estimation is a key step in standard 3D reconstruction pipelines that operate on a dense set of images of a single object or scene. However, methods for pose estimation often fail when only a few images are available because they rely on the ability to robustly identify and match visual features between image pairs. While these methods can work robustly with dense camera views, capturing a large set of images can be time-consuming or impractical. We propose SparsePose for recovering accurate camera poses given a sparse set of wide-baseline images (fewer than 10). The method learns to regress initial camera poses and then iteratively refine them after training on a large-scale dataset of objects (Co3D: Common Objects in 3D). SparsePose significantly outperforms conventional and learning-based baselines in recovering accurate camera rotations and translations. We also demonstrate our pipeline for high-fidelity 3D reconstruction using only 5-9 images of an object.
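The regress-then-refine recipe in the abstract is easy to sketch. The following is a hypothetical outline only: the module names, the 6-D pose parameterization, and the fixed number of refinement steps are assumptions, not SparsePose's actual architecture.

```python
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Sketch of coarse pose regression followed by learned iterative refinement."""
    def __init__(self, feat_dim=256, pose_dim=6, steps=3):
        super().__init__()
        self.init_head = nn.Linear(feat_dim, pose_dim)               # coarse per-view pose
        self.refine_head = nn.Linear(feat_dim + pose_dim, pose_dim)  # predicts a pose delta
        self.steps = steps

    def forward(self, feats):            # feats: (num_views, feat_dim) image features
        pose = self.init_head(feats)     # initial pose estimates
        for _ in range(self.steps):      # iterative refinement
            delta = self.refine_head(torch.cat([feats, pose], dim=-1))
            pose = pose + delta          # step toward the correct pose
        return pose

poses = IterativeRefiner()(torch.randn(7, 256))  # e.g. 7 wide-baseline views
```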
Obtaining photorealistic reconstructions of objects from sparse views is inherently ambiguous and can only be achieved by learning suitable reconstruction priors. Earlier works on sparse rigid object reconstruction successfully learned such priors from large datasets such as CO3D. In this paper, we extend this approach to dynamic objects. We use cats and dogs as a representative example and introduce Common Pets in 3D (CoP3D), a collection of crowd-sourced videos showing around 4,200 distinct pets. CoP3D is one of the first large-scale datasets for benchmarking non-rigid 3D reconstruction "in the wild". We also propose Tracker-NeRF, a method for learning 4D reconstruction from our dataset. At test time, given a small number of video frames of an unseen object, Tracker-NeRF predicts the trajectories of its 3D points and generates new views, interpolating viewpoint and time. Results on CoP3D reveal significantly better non-rigid new-view synthesis performance than existing baselines.
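As the abstract describes it, Tracker-NeRF couples a radiance field with predicted 3D point trajectories so that viewpoint and time can be interpolated jointly. A minimal sketch of that coupling follows; the layer sizes and the displacement parameterization are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class TrajectoryField(nn.Module):
    """Sketch: displace a canonical 3D point to its position at time t,
    then query a radiance field at the displaced location."""
    def __init__(self, hidden=128):
        super().__init__()
        self.deform = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.radiance = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, x, t):                              # x: (N, 3) points, t: (N, 1) times
        x_t = x + self.deform(torch.cat([x, t], dim=-1))  # point moved along its trajectory
        return self.radiance(x_t)                         # (N, 4): RGB + density

out = TrajectoryField()(torch.randn(1024, 3), torch.rand(1024, 1))
```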
Despite their recent success, deep neural networks continue to perform poorly when they encounter a distribution shift at test time. Many recently proposed approaches try to counter this by aligning the model to the new distribution prior to inference. With no labels available, this requires an unsupervised objective to adapt the model to the observed test data. In this paper, we propose Test-time Self-Training (TeST): a technique that takes as input a model trained on some source data and a novel data distribution at test time, and uses a student-teacher framework to learn invariant and robust representations. We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms. TeST achieves performance competitive with modern domain adaptation algorithms while accessing 5-10x less data at adaptation time. We evaluate a variety of benchmarks on two tasks, object detection and image segmentation, and find that TeST sets a new state of the art for test-time domain adaptation algorithms.
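A minimal sketch of a student-teacher test-time adaptation loop of the kind the abstract describes; the EMA teacher, the KL consistency objective, and all hyperparameters here are assumptions, not TeST's actual recipe.

```python
import copy
import torch
import torch.nn.functional as F

def test_time_self_train(model, test_loader, steps=1, lr=1e-4, ema=0.999):
    """Adapt `model` on unlabeled test batches against an EMA teacher (sketch)."""
    student = model
    teacher = copy.deepcopy(model).eval()          # teacher updated only by EMA
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for x in test_loader:                          # unlabeled test data
        for _ in range(steps):
            with torch.no_grad():
                target = teacher(x).softmax(-1)    # soft pseudo-labels
            loss = F.kl_div(student(x).log_softmax(-1), target, reduction="batchmean")
            opt.zero_grad(); loss.backward(); opt.step()
            for pt, ps in zip(teacher.parameters(), student.parameters()):
                pt.mul_(ema).add_(ps.detach(), alpha=1 - ema)  # EMA update
    return student
```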
Offline reinforcement learning leverages large datasets to train policies without any interaction with the environment. The learned policies can then be deployed in real-world settings where interaction is expensive or dangerous. Current algorithms overfit to the training dataset and, as a consequence, perform poorly when deployed to out-of-distribution generalizations of the environment. We aim to address these limitations by learning a Koopman latent representation, which allows us to infer symmetries of the system's underlying dynamics. These symmetries are then exploited to augment the otherwise static offline dataset during training; this constitutes a novel data augmentation framework that reflects the system's dynamics and can therefore be interpreted as an exploration of the environment's phase space. To obtain the symmetries we employ Koopman theory, in which nonlinear dynamics are represented in terms of a linear operator acting on the space of measurement functions of the system, so that symmetries of the dynamics can be inferred directly. We provide novel theoretical results on the existence and nature of symmetries relevant for control systems such as reinforcement learning settings. Moreover, we empirically evaluate our method on several benchmark offline reinforcement learning tasks and datasets, including D4RL, MetaWorld, and RoboSuite, and find that by using our framework we consistently improve the state of the art of Q-learning methods.
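A rough sketch of the pipeline the abstract outlines: learn a latent space in which the dynamics act linearly, then map stored transitions through an operator that commutes with the Koopman matrix, producing new dynamics-consistent transitions. All module names, losses, and the way T is obtained are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KoopmanAug(nn.Module):
    """Sketch: encoder phi, decoder psi, and a linear Koopman operator K with
    phi(s') ~= K phi(s); any T with T K = K T maps valid latent transitions
    to new, approximately valid ones."""
    def __init__(self, state_dim, latent_dim=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.psi = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
        self.K = nn.Parameter(torch.eye(latent_dim))

    def koopman_loss(self, s, s_next):
        z, z_next = self.phi(s), self.phi(s_next)
        dyn = ((z @ self.K.T - z_next) ** 2).mean()   # linear latent dynamics
        rec = ((self.psi(z) - s) ** 2).mean()         # decoder consistency
        return dyn + rec

    def augment(self, s, s_next, T):
        # T: (latent_dim, latent_dim) symmetry with T @ K == K @ T, e.g. built
        # from eigenspaces of K; the mapped pair is again dynamics-consistent.
        return self.psi(self.phi(s) @ T.T), self.psi(self.phi(s_next) @ T.T)
```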
Deep metric learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluation should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe generalization performance over much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML. Code available here: https://github.com/compvis/characterizing_generalization_in_dml.
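One naive way to realize "train-test splits of increasing difficulty" is to score candidate class-level splits by how far apart the train and test embeddings sit. This is purely illustrative; ooDML's actual construction is more sophisticated (e.g. guided by distances such as FID between class sets).

```python
import numpy as np

def splits_by_difficulty(class_means, n_candidates=200, n_splits=5, test_frac=0.5, seed=0):
    """class_means: (C, D) array of per-class mean embeddings.
    Returns n_splits (train_classes, test_classes) pairs, ordered easy -> hard."""
    rng = np.random.default_rng(seed)
    n_classes = len(class_means)
    n_test = int(n_classes * test_frac)
    scored = []
    for _ in range(n_candidates):
        perm = rng.permutation(n_classes)
        test, train = perm[:n_test], perm[n_test:]
        gap = np.linalg.norm(class_means[train].mean(0) - class_means[test].mean(0))
        scored.append((gap, train, test))
    scored.sort(key=lambda s: s[0])                     # small gap = easy split
    keep = np.linspace(0, n_candidates - 1, n_splits).astype(int)
    return [(scored[i][1], scored[i][2]) for i in keep]
```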
Learning from visual data opens up the possibility of acquiring a multitude of manipulation behaviors by leveraging human demonstrations, without specifying each of them mathematically but instead through natural task specification. In this paper, we present Learning by Watching (LbW), an algorithmic framework for policy learning through imitation of a single video of a task. The key insights of our method are twofold. First, since human arms may not have the same morphology as robot arms, our framework learns unsupervised human-to-robot translation to overcome the morphology mismatch problem. Second, to capture the details in salient regions that are crucial for learning state representations, our model performs unsupervised keypoint detection on the translated robot videos. The detected keypoints form a structured representation that contains semantically meaningful information and can be used directly for computing rewards and policy learning. We evaluate the effectiveness of our LbW framework on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing. Extensive experimental evaluation demonstrates that our method performs favorably against state-of-the-art approaches.
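The reward computation from detected keypoints can be sketched directly: compare the keypoints of the current observation against the temporally aligned keypoints from the translated demonstration video. The Gaussian shaping and averaging below are assumptions, not LbW's exact reward.

```python
import numpy as np

def keypoint_reward(current_kp, demo_kp_t, sigma=1.0):
    """current_kp, demo_kp_t: (K, 2) keypoints for the current frame and the
    matching demonstration frame; returns a reward in (0, 1], peaked at a match."""
    dist = np.linalg.norm(current_kp - demo_kp_t, axis=-1).mean()
    return float(np.exp(-dist ** 2 / (2 * sigma ** 2)))
```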
Active learning aims to develop label-efficient algorithms by sampling the most representative queries to be labeled by an oracle. We describe a pool-based semi-supervised active learning algorithm that implicitly learns this sampling mechanism in an adversarial manner. Unlike conventional active learning algorithms, our approach is task-agnostic, i.e., it does not depend on the performance of the task for which we are trying to acquire labeled data. Our method learns a latent space using a variational autoencoder (VAE) and an adversarial network trained to discriminate between unlabeled and labeled data. The minimax game between the VAE and the adversarial network is played such that while the VAE tries to trick the adversarial network into predicting that all data points are from the labeled pool, the adversarial network learns how to discriminate between dissimilarities in the latent space. We extensively evaluate our method on various image classification and semantic segmentation benchmark datasets and establish a new state of the art on CIFAR10/100, Caltech-256, ImageNet, Cityscapes, and BDD100K. Our results demonstrate that our adversarial approach learns an effective low-dimensional latent space in large-scale settings and provides a computationally efficient sampling method.
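One training step of the minimax game described above might look like the following hypothetical sketch; the loss weightings, the use of the posterior mean as the latent code, and the sigmoid-output discriminator are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def vaal_step(vae, disc, x_lab, x_unlab, opt_vae, opt_disc, beta=1.0, adv=1.0):
    """vae(x) -> (recon, mu, logvar); disc(z) -> probability z is from the labeled pool."""
    # VAE: reconstruct both pools and fool the discriminator.
    loss_vae = 0.0
    for x in (x_lab, x_unlab):
        recon, mu, logvar = vae(x)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        ones = torch.ones(len(x), 1)          # "pretend every latent is labeled"
        loss_vae = loss_vae + F.mse_loss(recon, x) + beta * kld \
                 + adv * F.binary_cross_entropy(disc(mu), ones)
    opt_vae.zero_grad(); loss_vae.backward(); opt_vae.step()
    # Discriminator: labeled -> 1, unlabeled -> 0.
    with torch.no_grad():
        z_lab, z_unlab = vae(x_lab)[1], vae(x_unlab)[1]
    loss_disc = F.binary_cross_entropy(disc(z_lab), torch.ones(len(x_lab), 1)) \
              + F.binary_cross_entropy(disc(z_unlab), torch.zeros(len(x_unlab), 1))
    opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()
    return loss_vae.item(), loss_disc.item()
```

At acquisition time, the unlabeled points the discriminator scores as least likely to belong to the labeled pool are the ones sent to the oracle.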
The information transfer rate (ITR) is a popular and widely used information measurement metric, particularly popularized for SSVEP-based brain-computer interface (BCI) systems. By combining speed and accuracy into a single-valued parameter, this metric aids in the evaluation and comparison of various target identification algorithms across different BCI communities. To accurately depict performance and inspire an end-to-end design for futuristic BCI designs, a more thorough examination and definition of ITR is therefore required. We model the symbiotic communication medium, hosted by the retinogeniculate visual pathway, as a discrete memoryless channel and use the modified capacity expressions to redefine the ITR. We use graph theory to characterize the relationship between the asymmetry of the transition statistics and the ITR gain under the new definition, leading to potential bounds on data rate performance. On two well-known SSVEP datasets, we compared two cutting-edge target identification methods. Results indicate that the induced DM channel asymmetry has a greater impact on the actual perceived ITR than the change in input distribution. Moreover, it is demonstrated that the ITR gain under the new definition is inversely correlated with the asymmetry in the channel transition statistics. Individual input customizations are further shown to yield perceived ITR performance improvements. An algorithm is proposed to find the capacity of binary classification, and further discussions are given to extend such results to ensemble techniques. We anticipate that the results of our study will contribute to the characterization of the highly dynamic BCI channel capacities, performance thresholds, and improved BCI stimulus designs for a tighter symbiosis between the human brain and computer systems, while enhancing the efficiency of the underlying communication resources.
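Two quantities discussed above are standard enough to sketch: the classic symmetric-channel ITR formula, and the capacity of a general discrete memoryless channel via the Blahut-Arimoto iteration, which accounts for the transition-matrix asymmetry the classic formula ignores. This is a sketch of the ingredients of the redefinition, not the paper's modified expressions.

```python
import numpy as np

def wolpaw_itr(n_targets, acc, t_select):
    """Classic (Wolpaw) ITR in bits/min: N equally likely targets, accuracy P,
    one selection every t_select seconds, errors spread uniformly."""
    p = np.clip(acc, 1e-12, 1 - 1e-12)
    bits = (np.log2(n_targets) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_targets - 1)))
    return 60.0 / t_select * bits

def dm_capacity(P, iters=1000, eps=1e-15):
    """Capacity (bits/use) of a discrete memoryless channel with transition
    matrix P[x, y] = p(y|x), via the Blahut-Arimoto iteration."""
    P = np.asarray(P, dtype=float)
    r = np.full(P.shape[0], 1.0 / P.shape[0])        # input distribution
    for _ in range(iters):
        q = r[:, None] * P
        q /= q.sum(axis=0, keepdims=True) + eps      # posterior p(x|y)
        r = np.exp(np.sum(np.where(P > 0, P * np.log(q + eps), 0.0), axis=1))
        r /= r.sum()
    q = r[:, None] * P
    q /= q.sum(axis=0, keepdims=True) + eps
    mi = np.where(P > 0, P * np.log2((q + eps) / (r[:, None] + eps)), 0.0)
    return float((r[:, None] * mi).sum())

# e.g. a 40-target speller at 90% accuracy, one selection per 2 s:
# wolpaw_itr(40, 0.9, 2.0) vs. 60 / 2.0 * dm_capacity(confusion_matrix)
```

Dividing the capacity by the selection time gives one natural capacity-based ITR analogue; for an asymmetric confusion matrix the two views diverge, which is the effect the abstract quantifies.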
When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.