Graph convolutional networks (GCNs) have achieved great success in graph representation learning by extracting high-level features from nodes and their topology. Since GCNs generally follow a message-passing mechanism, each node aggregates information from its first-order neighbours to update its representation. As a result, the representations of nodes with edges between them should be positively correlated and can thus be considered positive samples. However, there are many more non-neighbouring nodes in the whole graph, which provide diverse and useful information for the representation update. Two non-adjacent nodes usually have different representations, which can be seen as negative samples. Besides the node representations, the structural information of the graph is also crucial for learning. In this paper, we use quality-diversity decomposition in determinantal point processes (DPPs) to obtain diverse negative samples. When defining a distribution over diverse subsets of all non-neighbouring nodes, we incorporate both graph structure information and node representations. Since the DPP sampling process requires matrix eigenvalue decomposition, we propose a new shortest-path-based method to improve computational efficiency. Finally, we incorporate the obtained negative samples into the graph convolution operation. The ideas are evaluated empirically in experiments on node classification tasks. These experiments show that the newly proposed methods not only improve the overall performance of standard representation learning but also significantly alleviate over-smoothing problems.
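To illustrate the quality-diversity idea behind DPP-based negative sampling, here is a minimal sketch that greedily selects a diverse subset by maximising the determinant of a quality-diversity kernel. This is a greedy MAP-style stand-in, not the authors' shortest-path-based sampler, and all names in it are illustrative.

```python
import numpy as np

def greedy_diverse_subset(features, quality, k):
    """Greedily pick k candidates maximising det of the quality-diversity
    kernel L = diag(q) S diag(q), where S is the cosine similarity between
    L2-normalised candidate feature vectors."""
    phi = features / np.linalg.norm(features, axis=1, keepdims=True)
    L = np.outer(quality, quality) * (phi @ phi.T)
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```

With two near-duplicate candidates and one orthogonal candidate, the determinant objective prefers the orthogonal pair, which is exactly the diversity behaviour the abstract describes.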
Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, our Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22\% of its parameters and 6.66\% of the mult-adds, achieves a 0.6\% higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.
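The model-driven decoder described above is essentially classical Fourier-domain upsampling. The following 2D NumPy sketch (our illustration, not the released code, which operates on 3D volumes inside a network) shows the two parameter-free steps: zero-padding the band-limited coefficients and applying an inverse FFT.

```python
import numpy as np

def decode_displacement(band_limited, full_shape):
    """Zero-pad a centred block of low-frequency Fourier coefficients to
    the full resolution, then inverse-FFT back to a dense spatial field."""
    H, W = full_shape
    h, w = band_limited.shape
    padded = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    # Low frequencies sit at the centre of the fftshifted spectrum.
    padded[top:top + h, left:left + w] = band_limited
    # ifftshift moves DC back to index 0 before inverting.
    return np.fft.ifft2(np.fft.ifftshift(padded)).real
```

If the true field is already band-limited, cropping its shifted spectrum and decoding reproduces it exactly, which is why only the low-frequency block needs to be predicted by the network.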
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and invite the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 500 FPS rate and 0.2 [Watt / 30 FPS] power consumption. A detailed description of all models developed in the challenge is provided in this paper.
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and invite the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 60 FPS rate when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Generating a melody from lyrics is an interesting yet challenging task in the fields of artificial intelligence and music. However, the difficulty of keeping the input lyrics and the generated melody consistent limits the generation quality of previous works. In this demonstration, we present our proposed interpretable lyrics-to-melody generation system, which allows users to interact with the generation process to understand it and to re-create a desired song. To improve the reliability of generating melodies that match the lyrics, the lyrics and the generated melody are mutually exploited to strengthen the consistency between them. Gumbel-Softmax is utilized to solve the non-differentiability problem of generating discrete music attributes with generative adversarial networks (GANs). In addition, the predicted probability outputs of the generator are used to recommend music attributes. Interacting with our lyrics-to-melody generation system, users can listen to the generated AI songs and re-create new songs by selecting from the recommended music attributes.
With the rapid development of multimedia technology, augmented reality (AR) has become a promising next-generation mobile platform. A fundamental premise of AR is human visual confusion, which allows users to perceive the real-world scene and the augmented content (virtual-world scene) simultaneously by superimposing one on the other. To achieve a good quality of experience (QoE), it is important to understand the interaction between the two scenes and display the AR content harmoniously. However, there is little research on how this superimposition affects human visual attention. Therefore, in this paper, we mainly analyze the interaction effects between background (BG) scenes and AR content, and study the saliency prediction problem in AR. Specifically, we first construct a Saliency in AR Dataset (SARD), which contains 450 BG images, 450 AR images, and 1350 superimposed images generated by superimposing BG and AR images in pairs at three mixing levels. A large-scale eye-tracking experiment with 60 subjects was conducted to collect eye-movement data. To better predict saliency in AR, we propose a quantized saliency prediction method and generalize it to AR saliency prediction. For comparison, three benchmark methods are proposed and evaluated together with our proposed method on SARD. The experimental results demonstrate the superiority of our proposed method over the benchmark methods on both the common saliency prediction problem and the AR saliency prediction problem. Our dataset and code are available at: https://github.com/duanhuiyu/arsality.
Convolutional neural networks (CNNs), such as the time-delay neural network (TDNN), have shown remarkable capability in learning speaker embeddings. However, they also bring large computational costs in storage size, processing, and memory. Discovering specialized CNNs that meet specific constraints requires substantial effort from human experts. Compared with hand-designed approaches, neural architecture search (NAS) has emerged as a practical technique for automating the manual architecture design process and has attracted increasing interest in spoken language processing tasks such as speaker recognition. In this paper, we propose an efficient architecture search framework consisting of a TDNN-based supernet and a TDNN-NAS algorithm. The proposed supernet introduces temporal convolutions of various resolutions and ranges from different layers into the TDNN. On top of it, the TDNN-NAS algorithm quickly searches for the desired TDNN architectures via weight-sharing subnets, which surprisingly reduces the computation required to handle the broad range of devices with various resource requirements. Experimental results on the VoxCeleb dataset show the efficiency of the proposed framework, which can approximate about $10^{13}$ architectures with respect to depth, kernel size, and width. Considering different computational constraints, it achieves a 2.20% equal error rate (EER) with 204M multiply-accumulate operations (MACs), 1.41% EER with 571M MACs, and 0.94% EER with 1.45G MACs. A comprehensive investigation suggests that the trained supernet generalizes to subnets not sampled during training and obtains a favorable trade-off between accuracy and efficiency.
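The weight-sharing idea behind such a supernet can be sketched in a few lines: every candidate sub-network's convolution kernel is a slice of the largest kernel, so all architectures in the search space share one weight tensor. This is our illustration of the general technique, not the authors' implementation, and the function name and shapes are assumptions.

```python
import numpy as np

def subnet_kernel(super_weight, kernel_size, width):
    """Slice a (max_out, max_in, max_k) supernet weight tensor down to a
    candidate sub-network with `width` channels and a centred temporal
    window of `kernel_size` taps."""
    max_k = super_weight.shape[-1]
    start = (max_k - kernel_size) // 2
    return super_weight[:width, :width, start:start + kernel_size]
```

Because every sampled subnet trains the same underlying tensor, evaluating many depth/kernel/width combinations does not require training each architecture from scratch.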
Most existing point cloud instance and semantic segmentation methods rely heavily on strong supervision signals, which require point-level labels for every point in the scene. However, such strong supervision suffers from a large annotation cost, arousing the need to study efficient annotation. In this paper, we discover that the locations of instances matter for both instance and semantic 3D scene segmentation. By fully taking advantage of locations, we design a weakly supervised point cloud segmentation algorithm that only requires clicking on one point per instance to indicate its location for annotation. With over-segmentation as pre-processing, we extend these location annotations into segment-level labels. We further design a segment grouping network (SegGroup) to generate point-level pseudo labels under segment-level labels by hierarchically grouping unlabeled segments into relevant nearby labeled segments, so that existing point-level supervised segmentation models can directly consume these pseudo labels for training. Experimental results show that our segment-level supervised method (SegGroup) achieves comparable results with fully annotated point-level supervised methods. Moreover, it outperforms recent weakly supervised methods given a fixed annotation budget.
With advances in sensing technology, multivariate time series classification (MTSC) has recently received considerable attention. Existing deep-learning-based MTSC techniques, which mostly rely on convolutional or recurrent neural networks, primarily consider the temporal dependency within a single time series. As a result, they struggle to express pairwise dependencies among the multiple variables directly. Moreover, current spatial-temporal modeling methods based on graph neural networks (GNNs), e.g., graph classification, are inherently flat and cannot aggregate hub data in a hierarchical manner. To address these limitations, we propose a novel graph-pooling-based framework, MTPool, to obtain an expressive global representation of MTS. We first convert MTS slices into graphs by capturing the interactions among variables through a graph structure learning module, and obtain spatial-temporal graph node features through a temporal convolution module. To obtain a global graph-level representation, we design an encoder-decoder-based variational graph pooling module that creates adaptive centroids for cluster assignment. We then combine GNNs with our proposed variational graph pooling layers for joint graph representation learning and graph coarsening, after which the graph is progressively coarsened to a single node. Finally, a differentiable classifier takes this coarsened representation to obtain the final predicted class. Experiments on 10 benchmark datasets show that MTPool outperforms state-of-the-art strategies on the MTSC task.
From the time we are babies, we intuitively develop the ability to associate inputs from different cognitive sensors such as vision, audio, and text. However, in machine learning, such cross-modal learning is a nontrivial task because different modalities have no homogeneous properties. Previous works discover that there should be bridges among different modalities. From the perspective of neurology and psychology, humans have the capacity to link one modality with another; for example, associating a picture of a bird with the sound of its singing alone, and vice versa. Is it possible for machine learning algorithms to recover the scene given an audio signal? In this paper, we propose a novel cascade attention-guided residue GAN (CAR-GAN), which aims to reconstruct the scene given the corresponding audio signal. In particular, we introduce a residue module to reduce the gap between different modalities progressively. Moreover, a cascade attention-guided network with a novel classification loss function is designed to tackle the cross-modal learning task. Our model keeps consistency in the high-level semantic label domain and is able to balance the two different modalities. Experimental results show that our model achieves state-of-the-art cross-modal audio-visual generation on the challenging Sub-URMP dataset. Code will be available at https://github.com/tuffr5/car-gan.