In this paper, we consider the problem of bit allocation in neural video compression (NVC). Due to the frame reference structure, current NVC methods that use the same rate-distortion (R-D) trade-off parameter $\lambda$ for all frames are sub-optimal, which raises the need for bit allocation. Unlike previous methods based on heuristic and empirical R-D models, we propose to solve this problem through gradient-based optimization. Specifically, we first propose a continuous-bit implementation method based on Semi-Amortized Variational Inference (SAVI). Then we propose a pixel-level implicit bit allocation method using iterative optimization, obtained by changing the SAVI target. Furthermore, we derive a precise R-D model based on the differentiable nature of NVC. We show the optimality of our method by proving its equivalence to bit allocation with the precise R-D model. Experimental results show that our approach significantly improves NVC methods and outperforms existing bit allocation methods. Our approach is plug-and-play for all differentiable NVC methods, and it can be adopted directly on existing pre-trained models.
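To make the idea concrete, here is a minimal sketch of what SAVI-style gradient-based bit allocation could look like on a pretrained differentiable NVC model: the encoder's amortized latents are refined by descending the total R-D loss of the whole group of pictures, so bits naturally shift toward frames that later frames reference. The `model.encode`/`model.decode` interface and all names are illustrative assumptions, not the paper's actual API.

```python
# A hedged sketch of SAVI-style bit allocation, assuming a hypothetical
# differentiable NVC model API. Latents of all frames are optimized jointly
# so the frame reference structure couples their R-D losses.
import torch

def savi_bit_allocation(model, frames, lam=0.01, steps=100, lr=1e-3):
    # Amortized inference: initial latents from the pretrained encoder.
    latents = [model.encode(f).detach().requires_grad_(True) for f in frames]
    opt = torch.optim.Adam(latents, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss, ref = 0.0, None
        for f, y in zip(frames, latents):
            rate, recon = model.decode(y, ref)   # rate estimate + reconstruction
            loss = loss + rate + lam * torch.mean((recon - f) ** 2)
            ref = recon                          # reference structure couples frames
        loss.backward()                          # gradients flow through references
        opt.step()
    return latents
```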
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent, a setting that has attracted growing interest in industry and has proven useful in a wide range of applications including finance and medical research. We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source federated learning frameworks, where the protocols may differ but the procedure for obtaining local gradients is shared. We first consider the honest-but-curious threat model, in which the detailed implementation of the protocols is ignored and only the shared procedure is assumed, which we abstract as an oracle. We find that even under this general setting, single-dimensional features and labels can still be recovered by the other party under suitable batch-size constraints, demonstrating the potential vulnerability of all frameworks following the same philosophy. We then study a popular instantiation of the protocols based on homomorphic encryption (HE). We propose an active attack that significantly weakens the batch-size constraints of the previous analysis by generating and compressing auxiliary ciphertexts. To address the privacy leakage in HE-based protocols, we develop a simple countermeasure based on differential privacy (DP) and provide both utility and privacy guarantees for the updated algorithm. Finally, we empirically verify the effectiveness of our attacks and defenses on benchmark datasets. Taken together, our findings suggest that all vertical federated learning frameworks relying solely on HE may contain severe privacy risks, and that DP, which has already demonstrated its power in horizontal federated learning, can also play a crucial role in the vertical setting, especially when coupled with HE or secure multi-party computation (MPC) techniques.
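For illustration, a minimal sketch of the kind of DP countermeasure described, under the standard assumption that it follows the Gaussian mechanism used in DP-SGD: clip each per-example gradient and add calibrated Gaussian noise before sharing. The clip norm `C` and noise multiplier `sigma` are illustrative choices, not values from the paper.

```python
# A minimal Gaussian-mechanism sketch (our assumption, not the paper's exact
# algorithm): per-example L2 clipping followed by Gaussian noise, applied
# before the averaged gradient leaves the party.
import numpy as np

def dp_sanitize(per_example_grads, C=1.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / (norm + 1e-12)))  # L2 clipping to norm C
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * C, size=total.shape)  # calibrated Gaussian noise
    return (total + noise) / len(per_example_grads)
```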
Handling clustering problems is important in data statistics, pattern recognition, and image processing. The mean-shift algorithm, a common unsupervised algorithm, is widely used to solve clustering problems. However, the mean-shift algorithm is limited by its huge computational resource cost. In previous research [10], we proposed a novel GPU-accelerated Faster Mean-Shift algorithm, which greatly speeds up clustering with cosine embeddings. In this study, we extend and improve the previous algorithm to handle the Euclidean distance metric. Unlike conventional GPU-based mean-shift algorithms, our algorithm adopts novel seed selection and early stopping approaches, which greatly increase computing speed and reduce GPU memory consumption. In simulation testing, when processing a 200K-point clustering problem, our algorithm achieved a roughly 3x speedup compared to the state-of-the-art GPU-based mean-shift algorithms, with optimized GPU memory consumption. Moreover, in this study we implemented a plug-and-play model of the Faster Mean-Shift algorithm, which can be deployed easily. (The plug-and-play model is available at: https://github.com/masqm/faster-mean-shift-euc)
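As an illustration of the early-stopping idea (though not of the GPU implementation itself), below is a minimal CPU sketch of mean shift under the Euclidean metric, where each seed stops iterating as soon as its shift falls below a tolerance. The flat-kernel choice and all names are our assumptions, not the repository's API.

```python
# Illustrative mean-shift iteration with per-seed early stopping: a seed is
# deactivated once its update moves less than `tol`. Flat (uniform) kernel,
# Euclidean distance; `points` and `seeds` are NumPy arrays of shape (N, d).
import numpy as np

def mean_shift_euclidean(points, seeds, bandwidth=1.0, tol=1e-3, max_iter=300):
    modes = seeds.copy()
    active = np.ones(len(modes), dtype=bool)          # early-stopping mask
    for _ in range(max_iter):
        if not active.any():
            break
        for i in np.where(active)[0]:
            d = np.linalg.norm(points - modes[i], axis=1)
            nbrs = points[d < bandwidth]              # flat-kernel window
            if len(nbrs) == 0:
                active[i] = False
                continue
            new_mode = nbrs.mean(axis=0)
            if np.linalg.norm(new_mode - modes[i]) < tol:
                active[i] = False                     # converged: stop early
            modes[i] = new_mode
    return modes
```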
Despite its importance for federated learning, continuous learning and many other applications, on-device training remains an open problem for EdgeAI. The problem stems from the large number of operations (e.g., floating point multiplications and additions) and memory consumption required during training by the back-propagation algorithm. Consequently, in this paper, we propose a new gradient filtering approach that enables on-device DNN model training. More precisely, our approach creates a special structure with fewer unique elements in the gradient map, thus significantly reducing the computational complexity and memory consumption of back-propagation during training. Extensive experiments on image classification and semantic segmentation with multiple DNN models (e.g., MobileNet, DeepLabV3, UPerNet) and devices (e.g., Raspberry Pi and Jetson Nano) demonstrate the effectiveness and wide applicability of our approach. For example, compared to SOTA, we achieve up to 19$\times$ speedup and 77.1% memory savings on ImageNet classification with only 0.1% accuracy loss. Finally, our method is easy to implement and deploy; over 20$\times$ speedup and 90% energy savings have been observed compared to highly optimized baselines in MKLDNN and CUDNN on NVIDIA Jetson Nano. Overall, our approach opens up a new direction of research with a huge potential for on-device training.
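One plausible way to realize "fewer unique elements in the gradient map", sketched below under our own assumptions (the paper's exact operator may differ), is to average the output gradient over r x r spatial patches and broadcast it back, so the convolution's backward pass effectively works on a much smaller map.

```python
# A hedged sketch of gradient filtering via patch averaging; this is our
# illustration of the idea, not the paper's exact operator.
import torch
import torch.nn.functional as F

def filter_gradient(grad_out, r=4):
    # grad_out: (N, C, H, W) gradient w.r.t. a conv layer's output;
    # H and W are assumed divisible by r for simplicity.
    pooled = F.avg_pool2d(grad_out, kernel_size=r)    # one value per r x r patch
    return F.interpolate(pooled, scale_factor=float(r), mode="nearest")  # broadcast back

# Hypothetical usage: attach to a layer's output during the forward pass so
# its gradient is filtered before the convolution's backward pass runs:
# out = conv(x); out.register_hook(filter_gradient)
```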
Cybercriminals are moving towards zero-day attacks affecting resource-constrained devices such as single-board computers (SBC). Assuming that perfect security is unrealistic, Moving Target Defense (MTD) is a promising approach to mitigate attacks by dynamically altering target attack surfaces. Still, selecting suitable MTD techniques for zero-day attacks is an open challenge. Reinforcement Learning (RL) could be an effective approach to optimize the MTD selection through trial and error, but the literature falls short when it comes to i) evaluating the performance of RL and MTD solutions in real-world scenarios, ii) studying whether behavioral fingerprinting is suitable for representing SBC's states, and iii) calculating the consumption of resources in SBC. To address these limitations, the work at hand proposes an online RL-based framework to learn the correct MTD mechanisms mitigating heterogeneous zero-day attacks in SBC. The framework uses behavioral fingerprinting to represent SBCs' states and RL to learn MTD techniques that mitigate each malicious state. It has been deployed in a real IoT crowdsensing scenario with a Raspberry Pi acting as a spectrum sensor. In more detail, the Raspberry Pi has been infected with different samples of command-and-control malware, rootkits, and ransomware, after which the framework selects among four existing MTD techniques. A set of experiments demonstrated the suitability of the framework to learn proper MTD techniques that mitigate all attacks (except a harmful rootkit) while consuming <1 MB of storage and utilizing <55% CPU and <80% RAM.
Model Predictive Control (MPC) is widely used for controlling cyber-physical systems. It is an iterative process of optimizing the prediction of a robot's future states over a fixed time horizon. MPCs are effective in practice, but because they are computationally expensive and slow, they are not well suited for real-time applications. This flaw can be overcome by approximating an MPC's functionality. Neural networks are very good function approximators and are much faster than an MPC. However, applying neural networks to control-based applications can be challenging since the data does not match the i.i.d. assumption. This study investigates various imitation learning methods for using a neural network in a control-based environment and evaluates their benefits and shortcomings.
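As one concrete example of the imitation-learning methods such a study would cover, here is a hedged sketch of DAgger, which directly targets the i.i.d. mismatch by letting the learner's own rollouts be labeled by the expert (here, the slow MPC). `env`, `mpc_expert`, and `policy` are placeholders, not identifiers from the study.

```python
# A hedged DAgger sketch: the learner drives the rollout, the MPC expert
# labels every visited state, and the policy is retrained on all data so far.
def dagger(env, mpc_expert, policy, rounds=10, horizon=200):
    states, actions = [], []
    for _ in range(rounds):
        s = env.reset()
        for _ in range(horizon):
            states.append(s)
            actions.append(mpc_expert(s))        # expert label on learner's state
            s, _, done, _ = env.step(policy(s))  # learner's action drives the rollout
            if done:
                break
        policy.fit(states, actions)              # retrain on the aggregated dataset
    return policy
```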
The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables. Motivated by this, we propose the Adversarial Table Perturbation (ATP) as a new attacking paradigm to measure the robustness of Text-to-SQL models. Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs. All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing models' vulnerability in real-world practices. To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data. Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations. We release our benchmark and code at: https://github.com/microsoft/ContextualSP.
A practical issue of edge AI systems is that the data distributions of the training dataset and the deployed environment may differ due to noise and environmental changes over time. Such a phenomenon is known as concept drift, and this gap degrades the performance of edge AI systems and may introduce system failures. To address this gap, retraining neural network models triggered by concept drift detection is a practical approach. However, since available compute resources are strictly limited in edge devices, in this paper we propose a lightweight concept drift detection method that works in cooperation with a recently proposed on-device learning technique for neural networks. In this case, both the neural network retraining and the proposed concept drift detection are done by sequential computation only, to reduce computation cost and memory utilization. Evaluation results of the proposed approach show that, while the accuracy is decreased by 3.8%-4.3% compared to existing batch-based detection methods, it decreases the memory size by 88.9%-96.4% and the execution time by 1.3%-83.8%. As a result, the combination of the neural network retraining and the proposed concept drift detection method is demonstrated on a Raspberry Pi Pico, which has 264 kB of memory.
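Below is a minimal sketch of what a sequential, memory-light drift detector can look like: a DDM-style rule, chosen by us for illustration rather than taken from the paper, which tracks the running error rate one sample at a time and flags drift when it rises several standard deviations above its best observed level.

```python
# A hedged sketch of sequential drift detection (DDM-style rule, our
# illustration): constant memory, one update per sample, no stored batches.
import math

class SequentialDriftDetector:
    def __init__(self, warmup=30, k=3.0):
        self.n, self.p = 0, 0.0            # sample count, running error rate
        self.best = float("inf")           # best (p + s) level observed so far
        self.warmup, self.k = warmup, k

    def update(self, error):               # error: 1 if misprediction else 0
        self.n += 1
        self.p += (error - self.p) / self.n            # incremental mean
        s = math.sqrt(self.p * (1 - self.p) / self.n)  # std of the error rate
        self.best = min(self.best, self.p + s)
        return self.n > self.warmup and self.p + s > self.best + self.k * s
```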
Accurate uncertainty quantification is necessary to enhance the reliability of deep learning models in real-world applications. In the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of deep learning models. Such PIs are useful or "high-quality" as long as they are sufficiently narrow and capture most of the probability density. In this paper, we present a method to learn prediction intervals for regression-based neural networks automatically in addition to the conventional target predictions. In particular, we train two companion neural networks: one that uses one output, the target estimate, and another that uses two outputs, the upper and lower bounds of the corresponding PI. Our main contribution is the design of a loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean prediction interval width and ensuring the PI integrity using constraints that maximize the prediction interval probability coverage implicitly. Both objectives are balanced within the loss function using a self-adaptive coefficient. Furthermore, we apply a Monte Carlo-based approach that evaluates the model uncertainty in the learned PIs. Experiments using a synthetic dataset, six benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage and produce narrower PIs without detriment to its target estimation accuracy when compared to those PIs generated by three state-of-the-art neural-network-based methods.
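To illustrate the shape of such a loss (a sketch under our own assumptions, not the paper's exact formulation), the snippet below penalizes mean PI width while a hinge-style term pushes the bounds to enclose both the label and the companion network's point estimate; `beta` stands in for the self-adaptive coefficient and is held fixed here.

```python
# A hedged sketch of a two-objective PI loss: width minimization plus a
# soft coverage/integrity penalty. All names are illustrative.
import torch

def pi_loss(lower, upper, y, y_hat, beta=1.0):
    width = torch.mean(upper - lower)                             # objective 1: narrow PIs
    slack_y = torch.relu(lower - y) + torch.relu(y - upper)       # label falls outside PI
    slack_t = torch.relu(lower - y_hat) + torch.relu(y_hat - upper)  # point estimate outside PI
    coverage = torch.mean(slack_y + slack_t)                      # objective 2: PI integrity
    return width + beta * coverage
```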
Recent studies on semi-supervised semantic segmentation (SSS) have seen fast progress. Despite their promising performance, current state-of-the-art methods tend toward increasingly complex designs at the cost of introducing more network components and additional training procedures. Differently, in this work, we follow a standard teacher-student framework and propose AugSeg, a simple and clean approach that focuses mainly on data perturbations to boost the SSS performance. We argue that various data augmentations should be adjusted to better adapt to the semi-supervised scenarios instead of directly applying these techniques from supervised learning. Specifically, we adopt a simplified intensity-based augmentation that selects a random number of data transformations with distortion strengths uniformly sampled from a continuous space. Based on the estimated confidence of the model on different unlabeled samples, we also randomly inject labeled information to augment the unlabeled samples in an adaptive manner. Without bells and whistles, our simple AugSeg readily achieves new state-of-the-art performance on SSS benchmarks under different partition protocols.
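A minimal sketch of the described intensity-based augmentation: pick a random number of transforms and sample each distortion strength uniformly from a continuous range. The transform set and strength ranges below are illustrative assumptions, not AugSeg's actual configuration.

```python
# A hedged sketch of random intensity-based augmentation: a random subset of
# photometric transforms, each with a continuously sampled strength.
import random
from PIL import ImageEnhance

TRANSFORMS = {
    "brightness": (0.5, 1.5),
    "contrast":   (0.5, 1.5),
    "sharpness":  (0.5, 1.5),
    "color":      (0.5, 1.5),
}

def random_intensity_aug(img, max_ops=3):
    ops = random.sample(list(TRANSFORMS), k=random.randint(1, max_ops))
    for name in ops:
        lo, hi = TRANSFORMS[name]
        strength = random.uniform(lo, hi)                     # continuous strength
        enhancer = getattr(ImageEnhance, name.capitalize())(img)
        img = enhancer.enhance(strength)                      # apply the transform
    return img
```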