How can we detect anomalies: that is, samples that are significantly different from a given set of high-dimensional data, such as images or sensor data? This is a practical problem with numerous applications, and it is also relevant to the goal of making learning algorithms more robust to unexpected inputs. Autoencoders are a popular approach, partly due to their simplicity and their ability to perform dimension reduction. However, their anomaly scoring function is not adaptive to the natural variation in reconstruction error across the range of normal samples, which hinders their ability to detect real anomalies. In this paper, we empirically demonstrate the importance of local adaptivity for anomaly scoring in experiments with real data. We then propose our novel adaptive reconstruction error-based scoring approach, which adapts its scoring based on the local behaviour of reconstruction error over the latent space. We show that this improves anomaly detection performance over relevant baselines on a wide variety of benchmark datasets.
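To make the idea concrete, here is a minimal sketch of a locally adaptive reconstruction-error score, not the paper's exact procedure: each test point's autoencoder reconstruction error is standardized against the errors of its nearest neighbours in latent space, so regions where normal data naturally reconstructs poorly are not flagged. The neighbourhood size `k` and the z-score normalization are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def locally_adaptive_scores(z_train, err_train, z_test, err_test, k=10):
    """Score each test point by how anomalous its reconstruction error is
    relative to the errors of its k nearest training neighbours in latent space."""
    nn = NearestNeighbors(n_neighbors=k).fit(z_train)
    _, idx = nn.kneighbors(z_test)              # (n_test, k) neighbour indices
    local_err = err_train[idx]                  # neighbours' reconstruction errors
    mu = local_err.mean(axis=1)
    sigma = local_err.std(axis=1) + 1e-8        # avoid division by zero
    return (err_test - mu) / sigma              # locally standardized score

# toy usage: latent codes and errors would come from a trained autoencoder
rng = np.random.default_rng(0)
z_train = rng.normal(size=(500, 8))
err_train = rng.gamma(2.0, 1.0, size=500)
z_test = rng.normal(size=(5, 8))
err_test = np.array([1.0, 2.0, 5.0, 0.5, 10.0])
print(locally_adaptive_scores(z_train, err_train, z_test, err_test))
```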
Many well-established anomaly detection methods use the distance of a sample to those in its local neighbourhood: so-called "local outlier methods", such as LOF and DBSCAN. They are popular in many practical applications for their simple principles and strong performance on unstructured, feature-based data. However, because they lack trainable parameters, they cannot learn to adapt to a particular dataset. In this paper, we first unify local outlier methods by showing that they are particular cases of the more general message passing framework used in graph neural networks. This allows us to introduce learnability into local outlier methods, in the form of a neural network, for greater flexibility and expressivity: specifically, we propose LUNAR, a novel graph neural network-based anomaly detection method. LUNAR learns, in a trainable way, to use information from the nearest neighbours of each node to find anomalies. We show that our method performs significantly better than existing local outlier methods, as well as state-of-the-art deep baselines. We also show that the performance of our method is much more robust to different settings of the local neighbourhood size.
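A minimal sketch of the core idea, assuming plain PyTorch rather than a GNN library: each point's feature is the vector of distances to its k nearest neighbours (the "messages" from its neighbours), and a small trainable network maps that vector to an anomaly score. Training against uniform negative samples labelled 1 is an illustrative setup, not LUNAR's exact recipe.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

def knn_distance_features(x, k=10):
    """Each node's feature = distances to its k nearest neighbours."""
    index = NearestNeighbors(n_neighbors=k + 1).fit(x)
    dist, _ = index.kneighbors(x)
    return torch.tensor(dist[:, 1:], dtype=torch.float32)  # drop self-distance

scorer = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

# toy training: normal data labelled 0, uniform negative samples labelled 1
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(size=(500, 4)), rng.uniform(-4, 4, size=(500, 4))])
y = torch.tensor([0.0] * 500 + [1.0] * 500).unsqueeze(1)
feats = knn_distance_features(x)

opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(scorer(feats), y)
    loss.backward()
    opt.step()
print(scorer(feats[:5]).detach())  # learned anomaly scores for the first points
```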
We address the problem of object-centric learning on point clouds, which is crucial for high-level relational reasoning and scalable machine intelligence. In particular, we introduce a framework, SPAIR3D, that factorizes a 3D point cloud into a spatial mixture model in which each component corresponds to one object. To model the spatial mixture model on point clouds, we derive the Chamfer mixture loss, which fits naturally into our variational training pipeline. We further adopt an object-specification scheme that describes each object's location relative to its local voxel grid cell; such a scheme allows SPAIR3D to model scenes with an arbitrary number of objects. We evaluate our method on the task of unsupervised scene decomposition. Experimental results demonstrate that SPAIR3D has strong scalability and is capable of detecting and segmenting an unknown number of objects from a point cloud in an unsupervised manner.
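The Chamfer mixture loss itself is derived in the paper; as background, here is a minimal numpy sketch of the standard Chamfer distance between two point sets, the reconstruction term that such point-cloud losses build on.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (n,3) and b (m,3):
    each point is matched to its nearest point in the other set."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (n, m) squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(128, 3))
cloud_b = cloud_a + rng.normal(scale=0.01, size=(128, 3))  # slightly perturbed copy
print(chamfer_distance(cloud_a, cloud_b))  # small value for similar clouds
```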
This paper studies the quantization of heavy-tailed data in some fundamental statistical estimation problems, where the underlying distributions have bounded moments of some order. We propose to truncate and properly dither the data prior to a uniform quantization. Our central finding is that (near) minimax rates of estimation error are achievable merely from the quantized data produced by the proposed scheme. In particular, concrete results are worked out for covariance estimation, compressed sensing, and matrix completion, all agreeing that the quantization only slightly worsens the multiplicative factor. Besides, we study compressed sensing where both the covariates (i.e., sensing vectors) and responses are quantized. Under covariate quantization, although our recovery program is non-convex because the covariance matrix estimator lacks positive semi-definiteness, all local minimizers are proved to enjoy a near-optimal error bound. Moreover, by a concentration inequality for product processes and a covering argument, we establish a near-minimax uniform recovery guarantee for quantized compressed sensing with heavy-tailed noise.
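A minimal sketch of the quantization scheme as described: truncate each sample to a bounded range to tame the heavy tails, add uniform dither, then apply a uniform quantizer. The truncation threshold and quantization step below are illustrative choices, not the paper's tuned parameters.

```python
import numpy as np

def truncate_dither_quantize(x, tau, delta, rng):
    """Truncate to [-tau, tau], add uniform dither on [-delta/2, delta/2],
    then round onto the uniform grid with step delta."""
    x_t = np.clip(x, -tau, tau)                      # truncation tames heavy tails
    dither = rng.uniform(-delta / 2, delta / 2, x.shape)
    return delta * np.round((x_t + dither) / delta)  # uniform quantizer

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=100_000)               # heavy-tailed samples
q = truncate_dither_quantize(x, tau=5.0, delta=0.5, rng=rng)
# uniform dithering keeps the quantizer unbiased on the truncated data:
print(np.mean(q - np.clip(x, -5, 5)))                # close to 0
```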
We present Self Meta Pseudo Labels, a novel semi-supervised learning method similar to Meta Pseudo Labels but without the teacher model. We introduce a novel way to use a single model for both generating pseudo labels and classification, allowing us to store only one model in memory instead of two. Our method attains similar performance to the Meta Pseudo Labels method while drastically reducing memory usage.
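As a rough illustration of the single-model idea, and not the paper's exact meta-update, here is a self-training step in PyTorch where one network both produces pseudo labels on unlabeled data and is trained on them alongside labeled data; the confidence threshold is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(x_lab, y_lab, x_unlab, threshold=0.95):
    # 1) the same model generates pseudo labels for the unlabeled batch
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > threshold                  # keep only confident pseudo labels
    # 2) one update on labeled plus confidently pseudo-labeled data
    loss = F.cross_entropy(model(x_lab), y_lab)
    if mask.any():
        loss = loss + F.cross_entropy(model(x_unlab[mask]), pseudo[mask])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x_lab, y_lab = torch.randn(32, 20), torch.randint(0, 10, (32,))
x_unlab = torch.randn(128, 20)
print(train_step(x_lab, y_lab, x_unlab))
```

Only one set of parameters lives in memory, which is the stated advantage over the teacher-student setup of Meta Pseudo Labels.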
A challenge in spoken language translation is that plenty of spoken content is long-form, but short units are necessary for obtaining high-quality translations. To address this mismatch, we fine-tune a general-purpose, large language model to split long ASR transcripts into segments that can be independently translated so as to maximize the overall translation quality. Compared with several segmentation strategies, our approach improves the BLEU score on three languages by an average of 2.7 BLEU over an automatic punctuation baseline. Further, we demonstrate the effectiveness of two constrained decoding strategies to improve the well-formedness of the model output from above 99% to 100%.
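One way to picture the well-formedness constraint: a segmented output is well-formed only if it is the input transcript with nothing but break symbols inserted. Below is a minimal sketch of a validator and of the token-level constraint a constrained decoder would enforce; the break token `<brk>` and word-level tokenization are illustrative assumptions, not the paper's format.

```python
def is_well_formed(transcript: str, segmented: str, brk: str = "<brk>") -> bool:
    """Output is well-formed iff removing the break tokens recovers the input."""
    return segmented.replace(brk, "").split() == transcript.split()

def allowed_next_tokens(transcript_tokens, emitted_tokens, brk="<brk>"):
    """During constrained decoding only two continuations are ever legal:
    the next transcript token, or a break (never two breaks in a row)."""
    consumed = [t for t in emitted_tokens if t != brk]
    allowed = []
    if len(consumed) < len(transcript_tokens):
        allowed.append(transcript_tokens[len(consumed)])
    if emitted_tokens and emitted_tokens[-1] != brk:
        allowed.append(brk)
    return allowed

ts = "thank you all for coming today we begin with safety".split()
out = ["thank", "you", "all", "for", "coming", "<brk>", "today"]
print(allowed_next_tokens(ts, out))  # ['we', '<brk>']
print(is_well_formed(" ".join(ts),
                     "thank you all for coming <brk> today we begin with safety"))
```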
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
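A minimal sketch of schema-distance-weighted column sampling, the idea that columns from tables nearer (in the foreign-key graph) to an already-chosen anchor table should be sampled more often, so synthesized queries avoid arbitrary joins. The toy schema and the weighting w = 1/(1+d) are illustrative assumptions.

```python
import random
from collections import deque

# toy schema: tables connected by foreign-key edges
fk_edges = {"orders": ["customers", "products"],
            "customers": ["orders"],
            "products": ["orders", "suppliers"],
            "suppliers": ["products"]}
columns = {"orders": ["id", "date"], "customers": ["id", "name"],
           "products": ["id", "price"], "suppliers": ["id", "city"]}

def schema_distances(anchor):
    """BFS hop counts from the anchor table over foreign-key edges."""
    dist, queue = {anchor: 0}, deque([anchor])
    while queue:
        t = queue.popleft()
        for u in fk_edges[t]:
            if u not in dist:
                dist[u] = dist[t] + 1
                queue.append(u)
    return dist

def sample_column(anchor, rng=random):
    dist = schema_distances(anchor)
    pool = [(t, c) for t in columns for c in columns[t]]
    weights = [1.0 / (1 + dist[t]) for t, _ in pool]   # nearer tables weigh more
    return rng.choices(pool, weights=weights, k=1)[0]

random.seed(0)
print(sample_column("orders"))  # biased toward orders and its adjacent tables
```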
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multi-task learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. We demonstrate via experiments over ten table-to-text tasks that our method outperforms the UnifiedSKG baseline by noticeable margins in both in-domain and zero-shot settings, with average improvements of +0.5 and +12.6, respectively, using a T5-large backbone.
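A minimal sketch of the compositional-configuration idea: instead of a dataset-name prefix, the encoder input is prepended with explicit task, input, and output type tags, which can be recombined at inference time into configurations never seen during training. The tag format and example strings are illustrative assumptions.

```python
def build_config_prefix(task_type: str, input_type: str, output_type: str) -> str:
    """Compose an explicit task configuration instead of a dataset-name prefix."""
    return f"[task: {task_type}] [input: {input_type}] [output: {output_type}]"

def encode_example(config: str, linearized_table: str, question: str = "") -> str:
    parts = [config, linearized_table]
    if question:
        parts.append(f"question: {question}")
    return " ".join(parts)

# seen at training: table question answering
qa = build_config_prefix("question answering", "table", "answer")
# zero-shot: recombine the tags into a configuration unseen at training
zs = build_config_prefix("summarization", "table", "text")
print(encode_example(qa, "col: city | pop row 1: Oslo | 700k", "which city?"))
print(encode_example(zs, "col: city | pop row 1: Oslo | 700k"))
```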
Semantic communication (SemCom) and edge computing are two disruptive solutions to address the emerging requirements of massive data communication, bandwidth efficiency and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, so it is essential to design appealing incentive mechanisms for the provision of these limited resources. A deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of a DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
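A minimal MyersonNet-style sketch of such a DL-based auction for a single resource unit: a learned monotone transform maps bids to "virtual bids", the highest positive virtual bid wins, and the payment is computed through the inverse transform, which is what preserves incentive compatibility. The max-min-of-increasing-linear-pieces parametrization has a closed-form inverse; here the parameters are random placeholders, whereas in the paper's setting they would be trained to maximize revenue.

```python
import numpy as np

rng = np.random.default_rng(0)
J, K = 3, 3                                 # groups x units of linear pieces
w = np.exp(rng.normal(size=(J, K)))         # positive slopes -> monotone transform
b0 = rng.normal(size=(J, K))

def phi(b):
    """Learned monotone 'virtual value' transform:
    max over groups of min over units of increasing linear pieces."""
    return np.max(np.min(w * b + b0, axis=1))

def phi_inv(y):
    """Closed-form inverse of the max-min of increasing linear pieces."""
    return np.min(np.max((y - b0) / w, axis=1))

def auction(bids):
    """Allocate to the highest virtual bid (if positive); charge the winner
    the smallest bid that would still have won."""
    v = np.array([phi(b) for b in bids])
    winner = int(np.argmax(v))
    if v[winner] <= 0:
        return None, 0.0                    # reserve not met: no sale
    rival = np.partition(v, -2)[-2]         # second-highest virtual bid
    return winner, phi_inv(max(rival, 0.0))

print(auction(np.array([0.9, 0.5, 0.7])))   # (winner index, payment)
```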
Edge-assisted vehicle-to-everything (V2X) motion planning is an emerging paradigm to achieve safe and efficient autonomous driving, since it leverages the global position information shared among multiple vehicles. However, due to the imperfect channel state information (CSI), the position information of vehicles may become outdated and inaccurate. Conventional methods ignoring the communication delays could severely jeopardize driving safety. To fill this gap, this paper proposes a robust V2X motion planning policy that adapts between competitive driving under a low communication delay and conservative driving under a high communication delay, and guarantees small communication delays at key waypoints via power control. This is achieved by integrating the vehicle mobility and communication delay models and solving a joint design of motion planning and power control problem via the block coordinate descent framework. Simulation results show that the proposed driving policy achieves the smallest collision ratio compared with other benchmark policies.
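A toy sketch of the block coordinate descent structure, with delay and cost models that are illustrative stand-ins for the paper's: driving speed is optimized with the transmit power fixed, then power with the speed fixed, trading travel time and energy against the risk of acting on positions that are stale by the communication delay.

```python
import numpy as np

# transmission delay of an L-bit position update at power p (Shannon rate),
# plus a driving cost trading travel time against risk from stale information
L, B, g, N0 = 1e5, 1e5, 1e-7, 1e-13            # bits, Hz, channel gain, noise
def delay(p):
    return L / (B * np.log2(1 + p * g / N0))   # seconds

def cost(v, p, dist=100.0, lam=5.0, mu=0.1):
    risk = lam * (v * delay(p)) ** 2           # distance driven on outdated positions
    return dist / v + risk + mu * p            # travel time + risk + energy

# block coordinate descent: alternate over speed and transmit-power grids
v_grid = np.linspace(5, 30, 200)               # m/s
p_grid = np.linspace(0.01, 1.0, 200)           # W
v, p = 10.0, 0.1
for _ in range(20):
    v = v_grid[np.argmin([cost(vi, p) for vi in v_grid])]  # plan motion, power fixed
    p = p_grid[np.argmin([cost(v, pi) for pi in p_grid])]  # set power, motion fixed
print(f"speed={v:.1f} m/s, power={p:.2f} W, delay={delay(p)*1e3:.1f} ms")
```

A high power buys a short delay and permits competitive (fast) driving; when power is expensive, the loop settles on a slower, conservative policy, which mirrors the adaptive behaviour described above.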