2019-06-26
The phenomenon of benign overfitting is one of the key mysteries uncovered by deep learning methodology: deep neural networks seem to predict well even with a perfect fit to noisy training data. Motivated by this phenomenon, we consider when a perfect fit to training data in linear regression is compatible with accurate prediction. We give a characterization of linear regression problems for which the minimum-norm interpolating prediction rule has near-optimal prediction accuracy. The characterization is in terms of two notions of the effective rank of the data covariance. It shows that overparameterization is essential for benign overfitting in this setting: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. By studying examples of data covariance properties that this characterization shows are required for benign overfitting, we find an important role for finite-dimensional data: when the data lies in an infinite-dimensional space, the accuracy of the minimum-norm interpolating prediction rule approaches the best possible accuracy only for a much narrower range of data-distribution properties than when the data lies in a finite-dimensional space whose dimension grows faster than the sample size.
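The setting the abstract describes can be illustrated numerically. The sketch below (not from the paper; all dimensions and noise levels are illustrative choices) constructs an overparameterized linear regression problem with noisy labels and computes the minimum-norm interpolator via the pseudoinverse, verifying that it fits the noisy training data exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                      # overparameterized regime: d >> n
theta_star = np.zeros(d)
theta_star[:5] = 1.0                # only a few directions matter for prediction
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.5 * rng.standard_normal(n)   # noisy labels

# Minimum-norm solution among all interpolators of (X, y):
# theta_hat = X^T (X X^T)^{-1} y, computed here via the pseudoinverse.
theta_hat = np.linalg.pinv(X) @ y

train_err = np.max(np.abs(X @ theta_hat - y))
print(train_err)    # essentially zero: a perfect fit to the noisy data
```

Since `d > n` and the Gaussian design matrix has full row rank almost surely, the pseudoinverse solution interpolates the training labels up to floating-point error; whether such an interpolator also predicts well on new data is exactly the question the paper's effective-rank conditions on the covariance answer.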
translated by Google Translate
