We present RecD (Recommendation Deduplication), a suite of end-to-end infrastructure optimizations across the Deep Learning Recommendation Model (DLRM) training pipeline. RecD addresses immense storage, preprocessing, and training overheads caused by feature duplication inherent in industry-scale DLRM training datasets. Feature duplication arises because DLRM datasets are generated from interactions. While each user session can generate multiple training samples, many features' values do not change across these samples. We demonstrate how RecD exploits this property, end-to-end, across a deployed training pipeline. RecD optimizes data generation pipelines to decrease dataset storage and preprocessing resource demands and to maximize duplication within a training batch. RecD introduces a new tensor format, InverseKeyedJaggedTensors (IKJTs), to deduplicate feature values in each batch. We show how DLRM model architectures can leverage IKJTs to drastically increase training throughput. RecD improves the training and preprocessing throughput and storage efficiency by up to 2.49x, 1.79x, and 3.71x, respectively, in an industry-scale DLRM training system.
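For intuition only, here is a minimal sketch of the batch-level deduplication idea behind IKJTs, not the actual RecD or torchrec implementation. The function name, tensor shapes, and the use of a plain embedding table are assumptions chosen to illustrate "look up each duplicated feature value once, then map results back via inverse indices."

```python
# Illustrative sketch of feature deduplication within a training batch,
# in the spirit of RecD's InverseKeyedJaggedTensors (IKJTs).
# The real IKJT format and model integration are not described in the abstract;
# `dedup_lookup` and its signature are hypothetical.
import torch

def dedup_lookup(feature_ids: torch.Tensor, embedding: torch.nn.Embedding) -> torch.Tensor:
    """Look up embeddings once per unique feature value, then expand back.

    feature_ids: (batch_size,) int64 IDs for one sparse feature; many entries
    repeat because samples from the same user session share feature values.
    """
    # Deduplicate: `unique_ids` holds each distinct ID once; `inverse` maps
    # every original batch position to its slot in `unique_ids`.
    unique_ids, inverse = torch.unique(feature_ids, return_inverse=True)
    # The expensive work (an embedding lookup here; preprocessing and feature
    # extraction in the full pipeline) runs only on deduplicated values.
    unique_emb = embedding(unique_ids)
    # Re-expand to per-sample order for the rest of the model.
    return unique_emb[inverse]

# Toy usage: a batch of 8 samples drawn from 3 user sessions.
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=4)
ids = torch.tensor([7, 7, 7, 42, 42, 42, 42, 13])  # heavy duplication
out = dedup_lookup(ids, emb)  # shape (8, 4), but only 3 lookups performed
```

The benefit grows with the duplication factor: if a session contributes many samples whose features are unchanged, the per-batch cost of storage, preprocessing, and lookup scales with the number of unique values rather than the batch size.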
We contribute to a better understanding of the class of functions representable by neural networks with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterpoint to the universal approximation theorems, which suggest that a single hidden layer suffices for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases as more layers are added (with no restriction on their size). Because of the insight it provides into the class of functions represented by neural hypothesis classes, this question has potential impact on both algorithmic and statistical aspects. To the best of our knowledge, however, it has not previously been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes.
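For intuition (not stated in the abstract), a sketch of the function classes being compared: a ReLU network with one hidden layer of width $k$ computes exactly the functions of the first form below, while a second hidden layer allows nested maxima such as $g$; the concrete example $g$ is an illustration chosen here, and whether such nested functions can also be written exactly in the one-hidden-layer form is the kind of depth question the paper investigates.
\[
  f(x) \;=\; c + \sum_{i=1}^{k} a_i \,\max\bigl(0,\; w_i^{\top} x + b_i\bigr),
  \qquad
  g(x) \;=\; \max(0,\, x_1,\, x_2) \;=\; \max\bigl(0,\; x_2 + \max(0,\, x_1 - x_2)\bigr).
\]
Both forms are continuous piecewise linear, so the question is not about approximation power but about which piecewise linear functions each depth can represent exactly.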