Large width limits of neural networks

{{Short description|Feature of artificial neural networks}}

[[File:Infinitely wide neural network.webm|thumb|Behavior of a neural network simplifies as it becomes infinitely wide. Left: a Bayesian neural network with two hidden layers, transforming a 3-dimensional input (bottom) into a two-dimensional output <math>(y_1, y_2)</math> (top). Right: output probability density function <math>p(y_1, y_2)</math> induced by the random weights of the network. Video: as the width of the network increases, the output distribution simplifies, ultimately converging to a Neural network Gaussian process in the infinite width limit.]]

Artificial neural networks are a class of models used in machine learning, inspired by biological neural networks. They are the core component of modern deep learning algorithms. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. Theoretical analysis of artificial neural networks sometimes considers the limiting case in which layer width becomes large or infinite. This limit enables simple analytic statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. This wide layer limit is also of practical interest, since finite width neural networks often perform strictly better as layer width is increased.

{{Cite journal|last1=Novak|first1=Roman|last2=Bahri|first2=Yasaman|last3=Abolafia|first3=Daniel A.|last4=Pennington|first4=Jeffrey|last5=Sohl-Dickstein|first5=Jascha|date=2018-02-15|title=Sensitivity and Generalization in Neural Networks: an Empirical Study|url=https://openreview.net/forum?id=HJC2SzZCW|journal=International Conference on Learning Representations|arxiv=1802.08760|bibcode=2018arXiv180208760N}}

{{Cite journal|last1=Canziani|first1=Alfredo|last2=Paszke|first2=Adam|last3=Culurciello|first3=Eugenio|date=2016-11-04|title=An Analysis of Deep Neural Network Models for Practical Applications|url=https://openreview.net/forum?id=Bygq-H9eg|arxiv=1605.07678|bibcode=2016arXiv160507678C}}

{{cite journal |last1=Novak |first1=Roman |last2=Xiao |first2=Lechao |last3=Lee |first3=Jaehoon |last4=Bahri |first4=Yasaman |last5=Yang | first5=Greg |last6=Abolafia | first6=Dan | last7= Pennington |first7=Jeffrey |last8=Sohl-Dickstein |first8=Jascha |date=2018 |title=Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes |journal=International Conference on Learning Representations |arxiv=1810.05148 |bibcode=2018arXiv181005148N }}

{{Cite journal|last1=Neyshabur|first1=Behnam|last2=Li|first2=Zhiyuan|last3=Bhojanapalli|first3=Srinadh|last4=LeCun|first4=Yann|last5=Srebro|first5=Nathan|date=2019|title=Towards understanding the role of over-parametrization in generalization of neural networks|journal=International Conference on Learning Representations|arxiv=1805.12076|bibcode=2018arXiv180512076N}}

{{Cite journal|last1=Lawrence|first1=Steve|last2=Giles|first2=C. Lee|last3=Tsoi|first3=Ah Chung|date=1996|title=What size neural network gives optimal generalization? convergence properties of backpropagation|citeseerx=10.1.1.125.6019}}{{Cite journal|last=Bartlett|first=P.L.|date=1998|title=The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network|url=https://ieeexplore.ieee.org/document/661502|journal=IEEE Transactions on Information Theory|volume=44|issue=2|pages=525–536|doi=10.1109/18.661502|issn=1557-9654}}

__TOC__

Theoretical approaches based on a large width limit

  • The Neural Network Gaussian Process (NNGP) corresponds to the infinite width limit of Bayesian neural networks, and to the distribution over functions realized by non-Bayesian neural networks after random initialization (see the numerical sketch below).

{{Citation|last=Neal|first=Radford M.|chapter=Priors for Infinite Networks|date=1996|title=Bayesian Learning for Neural Networks|series=Lecture Notes in Statistics|volume=118|pages=29–53|publisher=Springer New York|doi=10.1007/978-1-4612-0745-0_2|isbn=978-0-387-94724-2}}

{{cite journal|last1=Lee|first1=Jaehoon|last2=Bahri|first2=Yasaman|last3=Novak|first3=Roman|last4=Schoenholz|first4=Samuel S.|last5=Pennington|first5=Jeffrey|last6=Sohl-Dickstein|first6=Jascha|date=2017|title=Deep Neural Networks as Gaussian Processes|journal=International Conference on Learning Representations|arxiv=1711.00165|bibcode=2017arXiv171100165L}}

{{cite journal |last1=G. de G. Matthews |first1=Alexander |last2=Rowland |first2=Mark |last3=Hron |first3=Jiri |last4=Turner |first4=Richard E. |last5=Ghahramani | first5=Zoubin |date=2017 |title=Gaussian Process Behaviour in Wide Deep Neural Networks |journal=International Conference on Learning Representations |arxiv=1804.11271 |bibcode=2018arXiv180411271M }}

{{cite journal |last1=Hron |first1=Jiri |last2=Bahri |first2=Yasaman |last3=Novak |first3=Roman |last4=Pennington |first4=Jeffrey |last5=Sohl-Dickstein | first5=Jascha |date=2020 |title=Exact posterior distributions of wide Bayesian neural networks |journal=ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning |arxiv=2006.10541}}
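As a minimal numerical sketch of this correspondence (the parameterization, inputs, and hyperparameter values below are illustrative assumptions, not taken from the cited sources), the following compares the analytic NNGP kernel of a one-hidden-layer ReLU network against a Monte Carlo estimate of the output covariance over random initializations; the agreement improves as the width grows.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative NNGP check for a one-hidden-layer ReLU network.
# Assumed parameterization: weights ~ N(0, sigma_w^2 / fan_in), biases ~ N(0, sigma_b^2).
sigma_w2, sigma_b2 = 2.0, 0.1
x1 = np.array([1.0, -0.5, 0.3])
x2 = np.array([0.2, 0.7, -1.0])
d = x1.shape[0]

def nngp_relu_kernel(x, y):
    """Analytic NNGP covariance of the network outputs at x and y (arc-cosine kernel)."""
    kxx = sigma_b2 + sigma_w2 * (x @ x) / d   # input-layer kernel
    kyy = sigma_b2 + sigma_w2 * (y @ y) / d
    kxy = sigma_b2 + sigma_w2 * (x @ y) / d
    theta = np.arccos(np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0))
    e_relu = np.sqrt(kxx * kyy) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return sigma_b2 + sigma_w2 * e_relu       # output-layer kernel

def empirical_kernel(width, n_samples=20000, seed=0):
    """Monte Carlo estimate of the output covariance over random initializations."""
    rng = np.random.default_rng(seed)
    outs = np.zeros((n_samples, 2))
    for s in range(n_samples):
        W1 = rng.normal(0.0, np.sqrt(sigma_w2 / d), size=(width, d))
        b1 = rng.normal(0.0, np.sqrt(sigma_b2), size=width)
        W2 = rng.normal(0.0, np.sqrt(sigma_w2 / width), size=width)
        b2 = rng.normal(0.0, np.sqrt(sigma_b2))
        h1 = np.maximum(W1 @ x1 + b1, 0.0)    # ReLU hidden layer for each input
        h2 = np.maximum(W1 @ x2 + b1, 0.0)
        outs[s] = [W2 @ h1 + b2, W2 @ h2 + b2]
    return np.cov(outs.T, bias=True)[0, 1]

print("analytic NNGP kernel:", nngp_relu_kernel(x1, x2))
for width in (10, 100, 1000):
    print(f"empirical, width={width:4d}:", empirical_kernel(width))
</syntaxhighlight>

The same recursion extends to deeper networks by applying the arc-cosine step repeatedly, layer by layer, to the kernel of the previous layer.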

  • The same underlying computations that are used to derive the NNGP kernel are also used in deep information propagation to characterize the propagation of information about gradients and inputs through a deep network.

{{Cite journal|last1=Schoenholz|first1=Samuel S.|last2=Gilmer|first2=Justin|last3=Ganguli|first3=Surya|last4=Sohl-Dickstein|first4=Jascha|date=2016|title=Deep information propagation|journal=International Conference on Learning Representations|arxiv=1611.01232}}

This characterization is used to predict how model trainability depends on architecture and initialization hyperparameters; a sketch of the underlying recursion is given below.
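In outline (the notation below is chosen here for illustration and may differ from that of the cited sources): for a fully connected network with activation function <math>\phi</math>, weights of variance <math>\sigma_w^2/\text{width}</math> and biases of variance <math>\sigma_b^2</math>, the preactivation variance <math>q^{l}</math> of a single input and the correlation <math>c^{l}</math> between two inputs of equal norm evolve with depth <math>l</math> as

<math display="block">q^{l+1} = \sigma_b^2 + \sigma_w^2 \, \mathbb{E}_{z \sim \mathcal{N}(0,\, q^{l})}\!\left[\phi(z)^2\right], \qquad c^{l+1} = \frac{1}{q^{l+1}}\left(\sigma_b^2 + \sigma_w^2 \, \mathbb{E}\!\left[\phi(z_1)\,\phi(z_2)\right]\right),</math>

where <math>z_1, z_2</math> are jointly Gaussian with variance <math>q^{l}</math> and correlation <math>c^{l}</math>. Whether, and how quickly, <math>c^{l}</math> converges to its fixed point determines how far information about the inputs (and, via a related recursion, about gradients) propagates through depth, which underlies the trainability predictions above.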

  • The Neural Tangent Kernel describes the evolution of neural network predictions during gradient descent training. In the infinite width limit the NTK usually becomes constant, often allowing closed form expressions for the function computed by a wide neural network throughout gradient descent training.

{{Cite journal|last1=Jacot|first1=Arthur|last2=Gabriel|first2=Franck|last3=Hongler|first3=Clement|title=Neural tangent kernel: Convergence and generalization in neural networks|date=2018|journal=Advances in Neural Information Processing Systems|arxiv=1806.07572}} In this limit, the training dynamics become those of a model that is linear in its parameters, as sketched below.{{Cite journal|last1=Lee|first1=Jaehoon|last2=Xiao|first2=Lechao|last3=Schoenholz|first3=Samuel S.|last4=Bahri|first4=Yasaman|last5=Novak|first5=Roman|last6=Sohl-Dickstein|first6=Jascha|last7=Pennington|first7=Jeffrey|title=Wide neural networks of any depth evolve as linear models under gradient descent|journal=Journal of Statistical Mechanics: Theory and Experiment|year=2020|volume=2020|issue=12|page=124002|doi=10.1088/1742-5468/abc62b|arxiv=1902.06720|bibcode=2020JSMTE2020l4002L|s2cid=62841516}}
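As an illustrative summary (notation assumed here rather than quoted from the references): for a network function <math>f(x;\theta)</math> with parameters <math>\theta</math>, the neural tangent kernel is <math>\Theta(x, x') = \nabla_\theta f(x;\theta)^{\top} \nabla_\theta f(x';\theta)</math>. If <math>\Theta</math> stays at its initial value throughout training, then gradient flow with learning rate <math>\eta</math> on a mean squared error over training inputs <math>X</math> with targets <math>y</math> has the closed-form solution

<math display="block">f_t(x) = f_0(x) + \Theta(x, X)\,\Theta(X, X)^{-1}\left(I - e^{-\eta\, \Theta(X, X)\, t}\right)\left(y - f_0(X)\right),</math>

so the fully trained wide network behaves like kernel regression with the NTK, up to an initialization-dependent term.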

  • Mean-field limit analysis applies to neural networks with weight scaling of <math>\sim 1/h</math> (where <math>h</math> is the layer width) instead of <math>\sim 1/\sqrt{h}</math>, together with correspondingly large learning rates. In this limit the training dynamics remain nonlinear, in contrast to the fixed, linearized dynamics described by a constant neural tangent kernel, providing an alternative description of infinite-width networks (see the sketch below).{{Cite book|last1=Mei|first1=Song|last2=Montanari|first2=Andrea|last3=Nguyen|first3=Phan-Minh|title=A Mean Field View of the Landscape of Two-Layers Neural Networks|date=2018-04-18|oclc=1106295873}}

{{Cite arXiv|last1=Nguyen| first1=Phan-Minh| last2=Pham| first2=Huy Tuan|title=A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks|date=2020| class=cs.LG|eprint=2001.11443}}
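For a single hidden layer of width <math>h</math>, the difference between the two scalings can be written explicitly (an illustrative sketch; the symbols <math>a_i</math>, <math>w_i</math>, <math>\phi</math> are notation chosen here):

<math display="block">f_{\text{NTK}}(x) = \frac{1}{\sqrt{h}} \sum_{i=1}^{h} a_i\, \phi(w_i \cdot x), \qquad f_{\text{MF}}(x) = \frac{1}{h} \sum_{i=1}^{h} a_i\, \phi(w_i \cdot x).</math>

Under the <math>1/h</math> (mean-field) scaling, the output depends on the parameters only through the empirical distribution of the neurons <math>(a_i, w_i)</math>, and as <math>h \to \infty</math> training corresponds to an evolution of that distribution, rather than to the fixed-kernel regime associated with the <math>1/\sqrt{h}</math> scaling.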

  • Catapult dynamics describe neural network training in the case that logits diverge to infinity as the layer width is taken to infinity, and characterize qualitative properties of early training dynamics.

{{cite arXiv|last1=Lewkowycz|first1=Aitor|last2=Bahri|first2=Yasaman|last3=Dyer|first3=Ethan|last4=Sohl-Dickstein|first4=Jascha|last5=Gur-Ari|first5=Guy|date=2020|title=The large learning rate phase of deep learning: the catapult mechanism|class=stat.ML|eprint=2003.02218}}

References