We provide the first sub-linear space and sub-linear regret algorithm for online learning with expert advice (against oblivious adversaries), addressing an open question recently raised by Srinivas, Woodruff, Xu and Zhou (STOC 2022). We also demonstrate a separation between oblivious and (strong) adaptive adversaries by proving a linear memory lower bound for any sub-linear regret algorithm against adaptive adversaries. Our algorithm is based on a novel pool selection procedure that bypasses the traditional wisdom of leader selection in online learning, together with a generic reduction that transforms any weak sub-linear regret $o(T)$ algorithm into a $T^{1-\alpha}$ regret algorithm, which may be of independent interest. Our lower bound exploits the connection between no-regret learning and equilibrium computation in zero-sum games, leading to a strong lower bound against adaptive adversaries.
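For context, the classical full-memory baseline for this setting is the multiplicative-weights (Hedge) algorithm, which stores one weight per expert and attains $O(\sqrt{T \log n})$ regret against an oblivious adversary; the contribution above is to obtain sub-linear regret with far less than one weight per expert. The following is a minimal sketch of that standard baseline only, not of the pool-selection algorithm from the abstract; the loss matrix and learning-rate tuning are illustrative.

```python
import numpy as np

def hedge(losses, eta=None):
    """Standard multiplicative-weights (Hedge) baseline: O(n) memory,
    O(sqrt(T log n)) regret against an oblivious adversary.
    losses: (T, n) array with losses[t, i] = loss of expert i in round t, in [0, 1]."""
    T, n = losses.shape
    if eta is None:
        eta = np.sqrt(np.log(n) / T)        # standard tuning for a known horizon
    w = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        p = w / w.sum()                     # distribution over experts
        total_loss += p @ losses[t]         # expected loss this round
        w *= np.exp(-eta * losses[t])       # multiplicative update
    best_expert = losses.sum(axis=0).min()
    return total_loss - best_expert         # regret vs. the best fixed expert

# Toy run on random losses (illustrative only).
rng = np.random.default_rng(0)
print(hedge(rng.random((1000, 50))))
```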
A common challenge in large-scale supervised learning is how to exploit new incremental data with a pre-trained model, without re-training the model from scratch. Motivated by this problem, we revisit the canonical problem of dynamic least-squares regression (LSR), where the goal is to learn a linear model over incremental training data. In this setting, data and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)}) \in \mathbb{R}^{t \times d} \times \mathbb{R}^{t}$ evolve in an online fashion ($t \gg d$), and the goal is to efficiently maintain an (approximate) solution to $\min_{\mathbf{x}^{(t)}} \|\mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)}\|_2$ for all $t \in [T]$. Our main result is a dynamic data structure that maintains an arbitrarily small constant approximate solution with amortized update time $O(d^{1+o(1)})$, almost matching the running time of the static (sketching-based) solution. By contrast, for exact (or even $1/\mathrm{poly}(n)$-accurate) solutions, we show a separation between the static and dynamic settings, namely that dynamic LSR requires $\Omega(d^{2-o(1)})$ amortized update time under the OMV conjecture (Henzinger et al., STOC '15). Our data structures are conceptually simple, easy to implement, and fast both in theory and in practice, as corroborated by experiments on synthetic and real-world datasets.
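As a rough illustration of the interface such a dynamic LSR structure exposes (not the paper's actual construction), one can maintain the Gram matrix $\mathbf{A}^\top\mathbf{A}$ and the vector $\mathbf{A}^\top\mathbf{b}$ under row insertions and solve the $d \times d$ normal equations on demand. This naive baseline costs $O(d^2)$ per update rather than the $O(d^{1+o(1)})$ amortized time claimed above; class and method names are hypothetical.

```python
import numpy as np

class DynamicLSR:
    """Naive dynamic least-squares baseline (not the paper's data structure):
    maintains A^T A and A^T b under row insertions and solves the d x d
    normal equations on demand, at O(d^2) cost per inserted row."""

    def __init__(self, d, reg=1e-8):
        self.AtA = reg * np.eye(d)           # small ridge term for numerical stability
        self.Atb = np.zeros(d)

    def insert(self, a_row, b_val):
        self.AtA += np.outer(a_row, a_row)   # rank-1 update of the Gram matrix
        self.Atb += b_val * a_row

    def solve(self):
        return np.linalg.solve(self.AtA, self.Atb)

# Toy usage: stream rows of a random regression instance.
rng = np.random.default_rng(1)
d = 5
x_true = rng.standard_normal(d)
lsr = DynamicLSR(d)
for _ in range(1000):
    a = rng.standard_normal(d)
    lsr.insert(a, a @ x_true + 0.01 * rng.standard_normal())
print(np.linalg.norm(lsr.solve() - x_true))
```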
We study dynamic algorithms for the problem of maximizing a monotone submodular function over a stream of $n$ insertions and deletions. We show that any algorithm that maintains a $(0.5+\epsilon)$-approximate solution under a cardinality constraint, for any constant $\epsilon > 0$, must have an amortized query complexity that is $\mathit{polynomial}$ in $n$. Moreover, linear amortized query complexity is needed in order to maintain a $0.584$-approximate solution. This is in sharp contrast with recent dynamic algorithms [LMNF+20, Mon20] that achieve a $(0.5-\epsilon)$-approximation with $\mathsf{poly}\log(n)$ amortized query complexity. On the positive side, when the stream is insertion-only, we present efficient algorithms for the problem under a cardinality constraint and under a matroid constraint, with approximation guarantee $1-1/e-\epsilon$ and amortized query complexities $\smash{O(\log(k/\epsilon)/\epsilon^2)}$ and $\smash{k^{\tilde{O}(1/\epsilon^2)} \log n}$, respectively, where $k$ denotes the cardinality parameter or the rank of the matroid.
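For intuition about the insertion-only regime, a simple baseline under a cardinality constraint is the classical thresholding idea in the spirit of sieve-streaming algorithms (not the algorithm of this paper): accept an arriving element only if its marginal gain clears a threshold tied to a guess of the optimum value. The coverage objective, the single fixed guess, and the function names below are illustrative assumptions.

```python
import numpy as np

def marginal_gain(covered, elem_set):
    """Marginal coverage gain of adding one set to the current solution."""
    return len(elem_set - covered)

def threshold_stream(stream, k, opt_guess):
    """Toy insertion-only thresholding baseline (not the paper's algorithm)
    for monotone submodular maximization under a cardinality constraint,
    shown on a set-coverage objective: keep an arriving set if its marginal
    gain is at least (opt_guess/2 - f(S)) / (k - |S|)."""
    S, covered = [], set()
    for elem_set in stream:
        if len(S) == k:
            break
        gain = marginal_gain(covered, elem_set)
        if gain >= (opt_guess / 2 - len(covered)) / (k - len(S)):
            S.append(elem_set)
            covered |= elem_set
    return S, len(covered)

# Toy usage: random subsets of a 100-element universe arrive one by one.
rng = np.random.default_rng(2)
stream = [set(rng.choice(100, size=8, replace=False)) for _ in range(200)]
print(threshold_stream(stream, k=10, opt_guess=70)[1])
```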
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the stylized result is expected to retain the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features: when an image is stylized with sufficient style patterns, the content details may be damaged, and sometimes the objects in the image can no longer be distinguished clearly. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances the content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
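The abstract does not spell out how the edge loss is defined; one common way to realize such a term, offered here only as a hedged sketch rather than STT's actual formulation, is to penalize the distance between Sobel edge maps of the stylized output and the content image. The kernel choice, distance, and weight below are assumptions.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Sobel edge magnitude of a (B, C, H, W) image, computed per channel."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = img.shape[1]
    gx = F.conv2d(img, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_loss(stylized, content, weight=1.0):
    """Hedged sketch of an edge-preservation term: L1 distance between the
    Sobel edge maps of the stylized output and the content image."""
    return weight * F.l1_loss(sobel_edges(stylized), sobel_edges(content))

# Toy usage with random tensors standing in for images.
stylized, content = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(edge_loss(stylized, content).item())
```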
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and show that MSTAT achieves state-of-the-art accuracy on various standard benchmarks.
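Of the components listed above, temporal patch shuffling is the easiest to illustrate compactly. The sketch below shows one plausible realization as a token-level augmentation; the tensor layout, shuffling granularity, and probability are assumptions rather than details taken from the paper.

```python
import torch

def temporal_patch_shuffle(tokens, prob=0.5):
    """Hedged sketch of temporal patch shuffling for video person re-ID.
    tokens: (B, T, N, D) patch embeddings with T frames and N spatial patches.
    With probability `prob` per clip, every spatial patch position has its
    frames permuted independently, so the model cannot rely on frame order
    when recognizing a local attribute."""
    B, T, N, D = tokens.shape
    out = tokens.clone()
    for b in range(B):
        if torch.rand(1).item() < prob:
            for n in range(N):
                perm = torch.randperm(T, device=tokens.device)
                out[b, :, n] = tokens[b, perm, n]
    return out

# Toy usage: 2 clips, 8 frames, 49 patches, 256-dim embeddings.
x = torch.randn(2, 8, 49, 256)
print(temporal_patch_shuffle(x).shape)
```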
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with them. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
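As one concrete way to obtain the reliability numbers that such a PGT approximation relies on, inter-rater reliability for segmentation-style annotations can be estimated as the mean pairwise Dice overlap between raters. This is a hedged sketch; the metric choice and aggregation are assumptions, not the paper's prescribed procedure.

```python
import numpy as np
from itertools import combinations

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (2 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def inter_rater_reliability(masks):
    """Mean pairwise Dice across several raters' masks of the same image --
    one plausible reliability estimate for approximating Peak Ground Truth."""
    return float(np.mean([dice(a, b) for a, b in combinations(masks, 2)]))

# Toy usage: three simulated raters annotating the same 64x64 image.
rng = np.random.default_rng(3)
base = rng.random((64, 64)) > 0.7
raters = [np.logical_xor(base, rng.random((64, 64)) > 0.95) for _ in range(3)]
print(inter_rater_reliability(raters))
```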
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering. Motivated by the fact that informative point cloud features should be able to encode rich geometry and appearance cues and render realistic images, we train a point-cloud encoder within a devised point-based neural renderer by comparing the rendered images with real images on massive RGB-D data. The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image synthesis. Extensive experiments on various tasks demonstrate the superiority of our approach compared to existing pre-training methods.
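The pre-training objective described above reduces to "render from the encoded point cloud, then compare against the paired real image." The sketch below only shows the shape of that loop: the encoder and renderer are deliberately trivial stand-ins (a PointNet-style pooling MLP and a linear decoder), not the paper's point-based neural renderer, and all module names are hypothetical.

```python
import torch
import torch.nn as nn

class ToyPointEncoder(nn.Module):
    """Stand-in point-cloud encoder (PointNet-style max pooling), not the paper's."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, pts):                       # pts: (B, N, 6) = xyz + rgb
        return self.mlp(pts).max(dim=1).values    # (B, dim) global feature

class ToyRenderer(nn.Module):
    """Trivial stand-in for the point-based neural renderer: decodes the
    global feature into a small RGB image so the loop is runnable."""
    def __init__(self, dim=128, res=32):
        super().__init__()
        self.res, self.dec = res, nn.Linear(dim, 3 * res * res)
    def forward(self, feat):
        return self.dec(feat).view(-1, 3, self.res, self.res)

encoder, renderer = ToyPointEncoder(), ToyRenderer()
opt = torch.optim.Adam(list(encoder.parameters()) + list(renderer.parameters()), lr=1e-3)
for step in range(5):
    points = torch.randn(4, 1024, 6)              # toy colored point clouds
    real_img = torch.rand(4, 3, 32, 32)           # paired real images (RGB-D style)
    loss = nn.functional.mse_loss(renderer(encoder(points)), real_img)
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, loss.item())
```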
Collaboration among industrial Internet of Things (IoT) devices and edge networks is essential to support computation-intensive deep neural network (DNN) inference services which require low delay and high accuracy. Sampling rate adaptation, which dynamically configures the sampling rates of industrial IoT devices according to network conditions, is key to minimizing the service delay. In this paper, we investigate the collaborative DNN inference problem in industrial IoT networks. To capture the channel variation and task arrival randomness, we formulate the problem as a constrained Markov decision process (CMDP). Specifically, sampling rate adaptation, inference task offloading and edge computing resource allocation are jointly considered to minimize the average service delay while guaranteeing the long-term accuracy requirements of different inference services. Since the CMDP cannot be directly solved by general reinforcement learning (RL) algorithms due to the intractable long-term constraints, we first transform the CMDP into an MDP by leveraging the Lyapunov optimization technique. Then, a deep RL-based algorithm is proposed to solve the MDP. To expedite the training process, an optimization subroutine is embedded in the proposed algorithm to directly obtain the optimal edge computing resource allocation. Extensive simulation results are provided to demonstrate that the proposed RL-based algorithm can significantly reduce the average service delay while preserving long-term inference accuracy with a high probability.
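The CMDP-to-MDP step via Lyapunov optimization typically introduces a virtual queue per long-term constraint and folds the queue-weighted constraint violation into the per-step reward (drift-plus-penalty). The sketch below shows that bookkeeping in its generic form, with made-up accuracy thresholds and weights; it illustrates the general technique, not the paper's exact reward formulation.

```python
import numpy as np

class LyapunovQueue:
    """Virtual queue for a long-term constraint E[accuracy] >= threshold.
    The backlog grows when the instantaneous accuracy falls short and shrinks
    otherwise; it is folded into the RL reward via drift-plus-penalty."""

    def __init__(self, threshold):
        self.threshold, self.q = threshold, 0.0

    def update(self, accuracy):
        self.q = max(self.q + self.threshold - accuracy, 0.0)

    def shaped_reward(self, delay, accuracy, V=10.0):
        # Trade off the delay objective against the queue-weighted accuracy
        # shortfall; V balances optimality of delay vs. constraint satisfaction.
        return -V * delay - self.q * (self.threshold - accuracy)

# Toy usage with simulated per-step delay/accuracy observations.
rng = np.random.default_rng(4)
queue = LyapunovQueue(threshold=0.9)
for _ in range(5):
    delay, acc = rng.uniform(5, 20), rng.uniform(0.8, 1.0)
    reward = queue.shaped_reward(delay, acc)
    queue.update(acc)
    print(round(reward, 2), round(queue.q, 3))
```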
Traditional statistical inference is static, in the sense that the estimate of the quantity of interest does not affect the future evolution of the quantity. In some sequential estimation problems, however, the future values of the quantity to be estimated depend on the estimate of its current value. This type of estimation problem has been formulated as the dynamic inference problem. In this work, we formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn according to a random model parameter. We derive the optimal Bayesian learning rules, both offline and online, to minimize the inference loss. Moreover, learning for dynamic inference can serve as a meta problem, such that all familiar machine learning problems, including supervised learning, imitation learning and reinforcement learning, can be cast as its special cases or variants. Gaining a good understanding of this unifying meta problem thus sheds light on a broad spectrum of machine learning problems as well.
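To make the notion of an "optimal Bayesian learning rule" concrete, the generic form such an online rule takes is to minimize the posterior expected inference loss at each step, with the posterior over the model parameter conditioned on both past observations and the learner's own past estimates, since the latter influence the dynamics. This is a hedged sketch of the standard Bayes-optimal decision with illustrative notation, not necessarily the paper's exact statement.

```latex
% Hedged sketch; notation is illustrative.
% History h_{t-1} = (x_1, \hat{x}_1, \dots, x_{t-1}, \hat{x}_{t-1});
% random model parameter \Theta with posterior p(\theta \mid h_{t-1}).
\hat{x}_t \;=\; \arg\min_{a}\;
  \mathbb{E}\!\left[\,\ell\big(X_t, a\big) \,\middle|\, h_{t-1}\right]
\;=\; \arg\min_{a} \int \ell(x, a)\,
  \underbrace{\int p_\theta(x \mid h_{t-1})\, p(\theta \mid h_{t-1})\, d\theta}_{\text{posterior predictive of } X_t}\, dx .
```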
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships. However, this fundamental assumption cannot always be satisfied, as real-world graphs are often incomplete, noisy, or redundant. How to reveal the inherent graph structure in a unified way remains under-explored. We propose PRI-GSL, a Graph Structure Learning framework guided by the Principle of Relevant Information, which provides a simple and unified framework for identifying self-organization and revealing the hidden structure. PRI-GSL learns a structure that contains the most relevant yet least redundant information, quantified by von Neumann entropy and Quantum Jensen-Shannon divergence. PRI-GSL incorporates the evolution of a quantum continuous walk with graph wavelets to encode node structural roles, showing how the nodes interplay and self-organize with the graph structure. Extensive experiments demonstrate the superior effectiveness and robustness of PRI-GSL.
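Of the quantities named above, the von Neumann graph entropy is simple to compute directly: treat the trace-normalized graph Laplacian as a density matrix and take its spectral entropy. The sketch below computes only this quantity in isolation; it is not PRI-GSL itself, which uses the entropy inside a larger learning objective, and the function name is hypothetical.

```python
import numpy as np

def von_neumann_entropy(adj, eps=1e-12):
    """Von Neumann entropy of a graph: spectral entropy of the trace-normalized
    Laplacian rho = L / tr(L), i.e. -sum_i lambda_i log(lambda_i)."""
    L = np.diag(adj.sum(axis=1)) - adj   # combinatorial Laplacian
    rho = L / np.trace(L)                # density-matrix normalization
    evals = np.linalg.eigvalsh(rho)      # eigenvalues are >= 0 and sum to 1
    evals = np.clip(evals, eps, None)    # avoid log(0) for zero eigenvalues
    return float(-(evals * np.log(evals)).sum())

# Toy usage: a 5-cycle graph.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
print(von_neumann_entropy(A))
```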