In this work, we study the problems of subspace tracking with missing data (ST-miss) and with outliers (robust ST-miss). We propose a novel algorithm and provide guarantees for both problems. Unlike past work on this topic, the current work does not impose the piecewise-constant subspace-change assumption. Moreover, the proposed algorithm is significantly simpler (uses fewer parameters) than our previous work. Second, we extend our approach and its analysis to provably solving these problems when the data is federated and information is exchanged between the $K$ peer nodes and a central server. We validate our theoretical claims with extensive numerical experiments.
Understanding the ambient scene is imperative for several applications such as autonomous driving and navigation. While obtaining real-world image data with per-pixel labels is challenging, existing accurate synthetic image datasets primarily focus on indoor spaces with fixed lighting and scene participants, thereby severely limiting their application to outdoor scenarios. In this work we introduce OmniHorizon, a synthetic dataset with 24,335 omnidirectional views comprising a broad range of indoor and outdoor spaces, including buildings, streets, and diverse vegetation. Our dataset also accounts for dynamic scene components such as lighting, different times of day, pedestrians, and vehicles. Furthermore, we demonstrate a learned synthetic-to-real cross-domain inference method for in-the-wild 3D scene depth and normal estimation using our dataset. To this end, we propose UBotNet, an architecture based on a UNet and a Bottleneck Transformer, to estimate scene-consistent normals. We show that UBotNet achieves significantly improved depth accuracy (4.6%) and normal estimation (5.75%) compared to several existing networks, such as U-Net with skip connections. Finally, we demonstrate in-the-wild depth and normal estimation on real-world images with UBotNet trained purely on our OmniHorizon dataset, showing the promise of the proposed dataset and network for scene understanding.
A large portion of today's world population suffers from vision impairments and wears prescription eyeglasses. However, eyeglasses cause additional bulk and discomfort when used with augmented and virtual reality headsets, thereby negatively impacting the viewer's visual experience. In this work, we remedy the use of prescription eyeglasses in Virtual Reality (VR) headsets by shifting the optical complexity completely into software, and propose a prescription-aware rendering approach that provides sharper and more immersive VR imagery. To this end, we develop a differentiable display and visual perception model encapsulating display-specific parameters, the color and visual acuity of the human visual system, and the user-specific refractive errors. Using this differentiable visual perception model, we optimize the rendered imagery in the display using stochastic gradient-descent solvers. This way, we provide prescription-glasses-free sharper images for a person with vision impairments. We evaluate our approach on various displays, including desktops and VR headsets, and show significant quality and contrast improvements for users with vision impairments.
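The optimization loop at the heart of this approach can be illustrated with a toy model. A minimal sketch, assuming the user's refractive error reduces to a known convolutional blur on a 1-D signal; the paper's actual model also covers display parameters, color, and visual acuity, and all names here are illustrative:

```python
import numpy as np

def precorrect_image(target, blur_kernel, lr=0.5, steps=500):
    """Optimize the displayed signal by gradient descent so that its
    *blurred* version (a stand-in for the perceived image) matches
    the intended target."""
    k = np.asarray(blur_kernel, dtype=float)
    x = np.array(target, dtype=float)   # initialize with the target
    k_flip = k[::-1]
    for _ in range(steps):
        perceived = np.convolve(x, k, mode="same")
        err = perceived - target
        # Gradient of 0.5*||conv(x, k) - target||^2 w.r.t. x is the
        # correlation of the error with the kernel.
        grad = np.convolve(err, k_flip, mode="same")
        x -= lr * grad
    return x
```

Viewing the pre-corrected signal through the same blur then yields a result closer to the target than viewing the unmodified target would.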
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
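As a concrete illustration of the flavor of these reductions, weighted vertex sampling only needs (approximate) kernel-matrix row sums, which is exactly what a KDE oracle returns. A minimal sketch, with the KDE oracle abstracted into precomputed row-sum estimates and all names illustrative:

```python
import numpy as np

def sample_vertices(row_sum_estimates, n_samples, rng=None):
    """Sample vertices of a kernel graph with probability proportional
    to their weighted degree (kernel-matrix row sum). The estimates
    would come from a KDE oracle rather than the full matrix."""
    rng = np.random.default_rng(0) if rng is None else rng
    degrees = np.asarray(row_sum_estimates, dtype=float)
    probs = degrees / degrees.sum()
    return rng.choice(len(degrees), size=n_samples, p=probs)
```

The point of the actual reductions is that these estimates are obtained without ever materializing the $n \times n$ matrix.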
This paper presents a simple, efficient learning algorithm for general sequential decision making. The algorithm combines optimism for exploration with maximum likelihood estimation for model estimation, and is thus named OMLE. We prove that OMLE learns near-optimal policies for an exceptionally rich class of sequential decision-making problems with a polynomial number of samples. This rich class includes not only most known tractable model-based reinforcement learning (RL) problems (such as tabular MDPs, factored MDPs, low-witness-rank problems, tabular weakly-revealing/observable POMDPs, and multi-step decodable POMDPs), but also many new challenging RL problems, especially in the partially observable setting, that were previously not known to be tractable. Notably, the new problems addressed by this paper include (1) observable POMDPs with continuous observations and function approximation, where we achieve the first sample complexity that is completely independent of the size of the observation space; (2) well-conditioned low-rank sequential decision problems (also known as predictive state representations (PSRs)), which include and generalize all known tractable POMDP examples under a more intrinsic representation; (3) general sequential decision problems under the SAIL condition, which unifies our existing understanding of model-based RL in both fully observable and partially observable settings. The SAIL condition is identified by this paper and can be viewed as a natural generalization of Bellman/witness rank to address partial observability.
Gastrointestinal cancer is considered a fatal malignant condition of the organs in the GI tract. Because of its mortality, there is an urgent need for medical image segmentation techniques that segment the organs, so as to reduce treatment time and enhance the treatment. Traditional segmentation techniques rely on hand-crafted features and are computationally expensive and inefficient. Vision Transformers have gained immense popularity in many image classification and segmentation tasks. To address the problem from a transformer perspective, we introduce a hybrid CNN-Transformer architecture to segment the different organs from an image. The proposed solution is robust, scalable and computationally efficient, with Dice and Jaccard coefficients of 0.79 and 0.72 respectively. The proposed solution also illustrates the essence of deep-learning-based automation for improving the effectiveness of the treatment.
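The two reported metrics compare a predicted mask against the ground truth. A minimal sketch of how they are typically computed for binary masks (illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (IoU): |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```

The small `eps` guards against division by zero when both masks are empty; for multi-organ segmentation the scores are usually averaged per class.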
We consider the problem of OOD generalization, where the goal is to train a model that performs well on test distributions that differ from the training distribution. Deep learning models are known to be fragile under such shifts, and can suffer large accuracy drops even on slightly different test distributions. We propose a new method, DAFT, based on the intuition that an adversarial combination of a large number of rich features should provide robustness. Our method carefully distills the knowledge from a powerful teacher that learns several discriminative features using standard training, while combining them using adversarial training. The standard adversarial training procedure is modified to produce teachers that can better guide the student. We evaluate DAFT on standard benchmarks in the DomainBed framework, and demonstrate that DAFT achieves significant improvements over current state-of-the-art OOD generalization methods. DAFT consistently outperforms well-tuned ERM and distillation baselines by up to 6%, with larger gains for smaller networks.
Differential privacy is crucial for the real-world deployment of statistical and machine learning algorithms with rigorous privacy guarantees. The earliest statistical queries for which differentially private mechanisms were developed were for releasing the sample mean. In geometric statistics, the sample Fréchet mean represents one of the most fundamental statistical summaries, as it generalizes the sample mean for data belonging to nonlinear manifolds. In this spirit, the only geometric statistical query for which a differentially private mechanism has been developed so far is for releasing the sample Fréchet mean: the Riemannian Laplace mechanism was recently proposed to privatize the Fréchet mean on complete Riemannian manifolds. In many fields, the manifold of symmetric positive definite (SPD) matrices is used to model data spaces, including in medical imaging where privacy requirements are key. We propose a novel, simple and fast mechanism, the tangent Gaussian mechanism, to compute a differentially private Fréchet mean on the SPD manifold endowed with the log-Euclidean Riemannian metric. We show that our new mechanism obtains a quadratic utility improvement in terms of the data dimension over the current and only available baseline. Our mechanism is also simpler in practice, as it does not require any expensive Markov chain Monte Carlo (MCMC) sampling, and it is computationally faster by multiple orders of magnitude, as confirmed by extensive experiments.
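A minimal sketch of the mechanism's structure under the log-Euclidean metric, where the Fréchet mean reduces to a Euclidean mean of matrix logarithms; the calibration of the noise to a concrete privacy budget and sensitivity is abstracted into a hypothetical `noise_scale` parameter:

```python
import numpy as np
from scipy.linalg import expm, logm

def tangent_gaussian_frechet_mean(spd_matrices, noise_scale, rng=None):
    """Map each SPD matrix to the tangent space via the matrix
    logarithm, average, perturb with symmetric Gaussian noise,
    and map back via the matrix exponential."""
    rng = np.random.default_rng() if rng is None else rng
    # Log-Euclidean Fréchet mean = Euclidean mean of matrix logs.
    logs = [logm(m).real for m in spd_matrices]
    mean_log = np.mean(logs, axis=0)
    # Symmetrized Gaussian noise in the tangent space.
    d = mean_log.shape[0]
    noise = rng.normal(scale=noise_scale, size=(d, d))
    noise = (noise + noise.T) / 2.0
    return expm(mean_log + noise)
```

Because the output is the exponential of a symmetric matrix, it is guaranteed to remain SPD, which a naive Gaussian perturbation of the mean itself would not ensure.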
Semantic similarity analysis and modeling is a fundamentally acclaimed task in many of today's pioneering natural language processing applications. Owing to their sense of sequential pattern recognition, many neural networks such as RNNs and LSTMs have achieved satisfactory results in semantic similarity modeling. However, these solutions are considered inefficient due to their inability to process information in a non-sequential manner, leading to improper context extraction. Transformers have become the state-of-the-art architecture due to advantages such as non-sequential data processing and self-attention. In this paper, we perform semantic similarity analysis and modeling on the U.S. Patent Phrase to Phrase Matching dataset using both traditional and transformer-based techniques. We experiment with four different variants of Decoding-Enhanced BERT (DeBERTa) and enhance their performance by performing K-fold cross-validation. Experimental results demonstrate the improved performance of our methodology over traditional techniques, with an average Pearson correlation score of 0.79.
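The reported metric is the Pearson correlation between predicted and gold similarity scores. A minimal sketch of the metric itself (illustrative, not the authors' evaluation code):

```python
import numpy as np

def pearson_correlation(predictions, targets):
    """Pearson correlation coefficient between two score vectors:
    the cosine of the angle between the mean-centered vectors."""
    p = np.asarray(predictions, dtype=float)
    t = np.asarray(targets, dtype=float)
    p_centered = p - p.mean()
    t_centered = t - t.mean()
    return (p_centered @ t_centered) / (
        np.linalg.norm(p_centered) * np.linalg.norm(t_centered)
    )
```

A score of 1.0 indicates perfect linear agreement; under K-fold cross-validation the per-fold scores are averaged.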
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that performs well simultaneously on all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to the optimization challenges posed by non-convexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to +36% on FMNIST and +37% on CIFAR10 when clients have dissimilar data.
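The "convexify" step can be illustrated abstractly: once the network is linearized around its learned weights, fitting the weight update reduces to a convex least-squares problem over Jacobian (empirical-NTK) features. A minimal sketch under that assumption, with illustrative names rather than the authors' implementation:

```python
import numpy as np

def convexify_and_fit(jacobian_features, residual_targets, ridge=1e-3):
    """Solve the convex problem min_dw ||J dw - r||^2 + ridge*||dw||^2,
    where J holds the per-example Jacobian features of the linearized
    network and r the residual targets. Being convex, this stage avoids
    the distortion that non-convex federated optimization inflicts on
    the final layers."""
    J = np.asarray(jacobian_features, dtype=float)   # (n_samples, n_params)
    r = np.asarray(residual_targets, dtype=float)    # (n_samples,)
    n_params = J.shape[1]
    return np.linalg.solve(J.T @ J + ridge * np.eye(n_params), J.T @ r)
```

In the federated setting this convex subproblem can then be solved with standard distributed convex solvers across clients.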