Sign Language Production (SLP) aims to translate expressions in spoken language into their sign-language counterparts, e.g., skeleton-based sign poses or videos. Existing SLP models are either autoregressive (AR) or non-autoregressive (NAR). AR-SLP models, however, suffer from regression to the mean and error propagation during decoding. NSLP-G, a NAR-based model, resolves these issues to some extent but introduces problems of its own: it does not consider the target sign length, and it suffers from false decoding initiation. We propose a novel NAR-SLP model with knowledge distillation (KD) to address these problems. First, we design a length regulator to predict the end of the generated sign pose sequence. We then employ KD, distilling spatial-linguistic features from a pre-trained pose encoder, to alleviate false decoding initiation. Extensive experiments show that the proposed method significantly outperforms existing SLP models in both Fréchet Gesture Distance and back-translation evaluation.
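A minimal sketch of the two ingredients described above, with hypothetical `kd_loss` and `predict_length` helpers (numpy stand-ins for illustration, not the paper's implementation):

```python
import numpy as np

def kd_loss(student_feats, teacher_feats):
    """Distillation target: match the NAR decoder's features to those of a
    frozen pre-trained pose encoder (a plain MSE here)."""
    s = np.asarray(student_feats, dtype=float)
    t = np.asarray(teacher_feats, dtype=float)
    return float(np.mean((s - t) ** 2))

def predict_length(stop_logits, threshold=0.5):
    """Length-regulator sketch: the first frame whose sigmoid 'stop'
    probability exceeds the threshold ends the generated pose sequence."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(stop_logits, dtype=float)))
    over = np.nonzero(probs > threshold)[0]
    return int(over[0]) + 1 if over.size else len(stop_logits)
```

In this sketch the length regulator falls back to the maximum decoding length when no frame crosses the stop threshold.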
Online trolls raise social costs and inflict psychological harm on individuals. With the proliferation of automated accounts that exploit bots for trolling, it is difficult for targeted individual users to handle the situation both quantitatively and qualitatively. To address this issue, we focus on automating methods of countering trolls, since counter-responses to trolls encourage community users to maintain ongoing discussion without suppressing freedom of expression. To this end, we propose a novel dataset for automatic counter-response generation. In particular, we construct a paired dataset comprising troll comments and counter-responses annotated with labeled response strategies, which enables models trained on our dataset to generate responses by varying the counter-response according to a specified strategy. We performed three tasks to assess the effectiveness of the dataset and evaluated the results through both automatic and human evaluation. In the human evaluation, we demonstrate that models fine-tuned on our dataset show a significant performance improvement in strategy-controlled sentence generation.
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across frames in videos, inspired by standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
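The codec-inspired idea of reusing neighboring frames via flow can be sketched as follows; `warp` and `aggregate` are hypothetical numpy helpers (nearest-neighbour warping for brevity; the actual method uses learned flows and bilinear sampling):

```python
import numpy as np

def warp(frame, flow):
    """Warp a frame (H, W) by an integer flow field (H, W, 2) that points
    to source pixels; nearest-neighbour sampling keeps the sketch short."""
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys + flow[..., 0], 0, H - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, W - 1).astype(int)
    return frame[src_y, src_x]

def aggregate(independent, warped, weights):
    """Blend an independently predicted frame with flow-warped neighbours,
    exploiting temporal redundancy as standard codecs do."""
    out = weights[0] * independent
    for w, f in zip(weights[1:], warped):
        out = out + w * f
    return out
```

With zero flow and equal weights, the aggregate reduces to the frame itself, which makes the blending weights easy to sanity-check.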
Neural radiance fields (NeRF) have demonstrated the potential of coordinate-based neural representation (neural fields or implicit neural representation) in neural rendering. However, using a multi-layer perceptron (MLP) to represent a 3D scene or object requires enormous computational resources and time. There have been recent studies on how to reduce these computational inefficiencies by using additional data structures, such as grids or trees. Despite the promising performance, the explicit data structure necessitates a substantial amount of memory. In this work, we present a method to reduce the size without compromising the advantages of having additional data structures. In detail, we propose using the wavelet transform on grid-based neural fields. Grid-based neural fields provide fast convergence, while the wavelet transform, whose efficiency has been demonstrated in high-performance standard codecs, improves the parameter efficiency of grids. Furthermore, in order to achieve a higher sparsity of grid coefficients while maintaining reconstruction quality, we present a novel trainable masking approach. Experimental results demonstrate that non-spatial grid coefficients, such as wavelet coefficients, are capable of attaining a higher level of sparsity than spatial grid coefficients, resulting in a more compact representation. With our proposed mask and compression pipeline, we achieved state-of-the-art performance within a memory budget of 2 MB. Our code is available at https://github.com/daniel03c1/masked_wavelet_nerf.
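The sparsity argument can be illustrated with a toy Haar transform and a magnitude mask; both helpers below are hypothetical illustrations (the paper trains the mask jointly with the field, whereas here the threshold is fixed):

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1D Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([avg, diff])

def apply_mask(coeffs, threshold):
    """Hard mask on coefficient magnitudes; returns the sparsified
    coefficients and the fraction of coefficients that survive."""
    mask = np.abs(coeffs) > threshold
    return coeffs * mask, float(mask.mean())
```

A constant signal already shows the effect: all detail coefficients vanish, so half the coefficients can be masked away with no reconstruction error, which spatial grid values would not allow.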
Transformer-based models have gained large popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in time domain, recent works also explore learning attention in frequency domains (e.g., Fourier domain, wavelet domain), given that seasonal patterns can be better captured in these domains. In this work, we seek to understand the relationships between attention models in different time and frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., linear kernel to attention scores). Empirically, we analyze how attention models of different domains show different behaviors through various synthetic experiments with seasonality, trend and noise, with emphasis on the role of softmax operation therein. Both these theoretical and empirical analyses motivate us to propose a new method: TDformer (Trend Decomposition Transformer), that first applies seasonal-trend decomposition, and then additively combines an MLP which predicts the trend component with Fourier attention which predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance against existing attention-based models.
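The first stage of the described pipeline, moving-average seasonal-trend decomposition, can be sketched with a hypothetical `decompose` helper (the MLP trend head and Fourier attention are omitted):

```python
import numpy as np

def decompose(x, window=3):
    """Moving-average seasonal-trend decomposition (odd window): the trend
    goes to an MLP and the seasonal residual to Fourier attention in the
    TDformer-style pipeline."""
    assert window % 2 == 1, "use an odd window so the output aligns with x"
    x = np.asarray(x, dtype=float)
    pad = window // 2
    padded = np.pad(x, (pad, pad), mode='edge')
    trend = np.convolve(padded, np.ones(window) / window, mode='valid')
    seasonal = x - trend
    return trend, seasonal
```

The decomposition is additive by construction, so `trend + seasonal` always reconstructs the input exactly.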
Treatment decisions in cancer care are guided by treatment-effect estimates from randomized controlled trials (RCTs). RCTs estimate the average effect of one treatment versus another in a certain population. A treatment, however, may not be equally effective for every patient in that population. Knowing the effectiveness of treatments tailored to specific patient and tumor characteristics would enable individualized treatment decisions. Obtaining tailored treatment effects by averaging outcomes across different patient subgroups in RCTs requires a large number of patients, with sufficient statistical power in all relevant subgroups, for all possible treatments. The American Joint Committee on Cancer (AJCC) recommends that researchers develop outcome prediction models (OPMs) to enable individualized treatment decisions. OPMs, sometimes called risk models or prognostic models, use patient and tumor characteristics to predict a patient's outcome, such as overall survival. The assumption is that these predictions are useful for treatment decisions, via rules such as "only prescribe chemotherapy if the OPM predicts the patient has a high risk of recurrence." Recognizing the importance of reliable predictions, the AJCC published a checklist for OPMs to ensure dependable prediction accuracy in the patient population for which the OPM was designed. However, accurate outcome predictions do not imply that those predictions yield good treatment decisions. In this perspective, we show that OPMs rely on a fixed treatment policy, which implies that an OPM found to accurately predict outcomes in validation studies can still lead to patient harm when used for treatment decision-making. We then provide guidance on how to develop models that are useful for individualized treatment decisions and how to evaluate whether a model has decision-making value.
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false-positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN offers a more than factor-of-8 reduction in FPR (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared with current maximum-likelihood techniques. When run on a GPU, the GNN is capable of processing IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
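Representing an event as a point-cloud graph and running one message-passing step can be sketched as follows; `knn_edges` and `gnn_layer` are hypothetical numpy helpers (a GraphSAGE-style mean update, not the collaboration's actual architecture):

```python
import numpy as np

def knn_edges(points, k):
    """k-nearest-neighbour indices for each sensor hit, turning an
    event's hits into a point-cloud graph."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]  # (N, k) neighbour indices

def gnn_layer(feats, nbrs, w_self, w_nbr):
    """One message-passing step: each node combines its own features with
    the mean of its neighbours' features, followed by a ReLU."""
    agg = feats[nbrs].mean(axis=1)
    return np.maximum(feats @ w_self + agg @ w_nbr, 0.0)
```

Stacking such layers lets per-hit features (position, time, charge) flow across the graph before a readout head classifies the event or regresses energy, direction, and vertex.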
Recent advances in vision-language pre-training have shown surprising performance across diverse vision tasks, shedding light on the long-standing problem in artificial-intelligence research of comprehensively understanding visual and textual concepts. In the medical domain, however, the limited quantity and diversity of data available for vision-language pre-training have hindered the successful learning of joint vision-language concepts. In this study, we introduce MAX-VL, a model tailored for effective vision-language pre-training in the medical domain. We experimentally demonstrate that the pre-trained MAX-VL model outperforms current state-of-the-art vision-language models on a variety of vision-language tasks. We also suggest its clinical utility for diagnosing newly emerging diseases and detecting human error, and show the model's wide applicability to data from different domains.
Neural fields have emerged as a new paradigm for data representation and have shown remarkable success in representing various signals. Because they store a signal in network parameters, transmitting the data requires sending and receiving the entire set of model parameters, which prevents the use of this emerging technology in many practical scenarios. We propose streamable neural fields: a single model composed of executable sub-networks of various widths. The proposed architecture and training technique enable one network to be streamed over time and to reconstruct the signal at different qualities and over different portions. For example, a smaller sub-network produces smooth, low-frequency signals, while a larger sub-network can represent fine details. Experimental results show the effectiveness of our method in various domains, such as 2D images, video, and 3D signed distance functions. Finally, we demonstrate that our proposed method improves training stability by exploiting parameter sharing.
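The width-sliced evaluation can be illustrated with a hypothetical `sub_forward` helper on a toy 2-layer MLP (an illustration of the sub-network idea, not the paper's training scheme):

```python
import numpy as np

def sub_forward(x, W1, W2, width):
    """Evaluate only the first `width` hidden units of a 2-layer MLP field;
    streaming more parameters widens the network and adds detail."""
    h = np.maximum(x @ W1[:, :width], 0.0)  # slice the hidden layer
    return h @ W2[:width, :]                # matching slice of layer 2
```

Because every narrower sub-network is a prefix of the full weight matrices, a receiver can start decoding a coarse signal from the first streamed chunk and refine it as more columns arrive.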
Neural network quantization aims to convert the high-precision weights and activations of a given neural network into low-precision weights and activations, reducing memory usage and computation while preserving the performance of the original model. However, extreme quantization (1-bit weights / 1-bit activations) of compactly designed backbone architectures (e.g., MobileNets), which are often used for edge-device deployment, leads to severe performance degeneration. This paper proposes a novel quantization-aware training (QAT) method that can effectively alleviate performance degeneration even under extreme quantization by focusing on the inter-weight dependencies among the weights across layers. To minimize the quantization impact of each weight on the others, we apply an orthogonal transformation to the weights of each layer, obtained by training an input-dependent correlation matrix and importance vector, so that each weight is disentangled from the others. We then quantize the weights according to their importance, minimizing the loss of information from the original weights and activations. We further perform progressive layer-wise quantization from the bottom layer to the top, so that the quantization of each layer reflects the quantized distributions of the weights and activations of the preceding layers. We validate the effectiveness of our method on various benchmark datasets against strong neural-quantization baselines, demonstrating that it alleviates performance degeneration on ImageNet and successfully preserves the full-precision model performance on CIFAR-100 with compact backbone networks.
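A minimal sketch of the two building blocks, with hypothetical `binarize` and `rotate` helpers (the rotation here is random for illustration; the paper learns the transform from a trained correlation matrix and importance vector):

```python
import numpy as np

def binarize(w):
    """1-bit weight quantization: sign times a per-tensor scale (mean |w|),
    the standard baseline that extreme-quantization QAT builds on."""
    return np.sign(w) * np.mean(np.abs(w))

def rotate(w, seed=0):
    """Apply an orthogonal rotation to a layer's weights before
    quantization; orthogonality preserves the weights exactly under the
    inverse rotation, so no information is lost by the transform itself."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((w.shape[1], w.shape[1])))
    return w @ q, q
```

Because `q` is orthogonal, `(w @ q) @ q.T` recovers `w` exactly; the method's gain comes from choosing the rotation so that quantization error in one transformed weight perturbs the others as little as possible.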