Weight initialization

{{Short description|Technique for setting initial values of trainable parameters in a neural network}}

{{Machine learning}}

In deep learning, weight initialization or parameter initialization describes the initial step in creating a neural network. A neural network contains trainable parameters that are modified during training: weight initialization is the pre-training step of assigning initial values to these parameters.

The choice of weight initialization method affects the speed of convergence, the scale of neural activation within the network, the scale of gradient signals during backpropagation, and the quality of the final model. Proper initialization is necessary for avoiding issues such as vanishing and exploding gradients and activation function saturation.

Although this article is titled "weight initialization", a neural network's trainable parameters include both weights and biases, and this article describes how both are initialized. Similarly, the trainable parameters of convolutional neural networks (CNNs), called kernels and biases, are also covered here.

Constant initialization

We discuss the main methods of initialization in the context of a multilayer perceptron (MLP). Specific strategies for initializing other network architectures are discussed in later sections.

For an MLP, there are only two kinds of trainable parameters, called weights and biases. Each layer l contains a weight matrix W^{(l)} \in \R^{n_{l-1}\times n_l} and a bias vector b^{(l)} \in \R^{n_l}, where n_l is the number of neurons in that layer. A weight initialization method is an algorithm for setting the initial values of W^{(l)} and b^{(l)} for each layer l.

The simplest form is zero initialization: W^{(l)} = 0, \quad b^{(l)} = 0. Zero initialization is usually used for initializing biases, but it is not used for initializing weights: with all-zero weights, every neuron in a layer computes the same output and receives the same gradient update, so the symmetry is never broken and all neurons learn the same features.
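As a concrete illustration of this parameter layout, the following NumPy sketch builds an MLP's parameters with zero-initialized biases and small non-zero random weights to break the symmetry; the function name, layer sizes, and the 0.01 scale are illustrative placeholders rather than a recommended method (principled scales are given in the sections below).

<syntaxhighlight lang="python">
import numpy as np

def init_mlp(layer_sizes, rng=None):
    """Create parameters for an MLP with the given layer sizes.
    Biases are zero-initialized; weights get small random values
    (a placeholder scale -- later sections give principled choices)."""
    rng = np.random.default_rng() if rng is None else rng
    weights, biases = [], []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        weights.append(0.01 * rng.standard_normal((n_in, n_out)))  # not all-zero, to break symmetry
        biases.append(np.zeros(n_out))                             # zero initialization for biases
    return weights, biases

weights, biases = init_mlp([784, 256, 10])
</syntaxhighlight>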

In this article, we assume b^{(l)} = 0 unless otherwise stated.

Recurrent neural networks typically use activation functions with bounded range, such as sigmoid and tanh, since unbounded activations may cause exploding values. (Le, Jaitly, Hinton, 2015){{cite arXiv |eprint=1504.00941 |first1=Quoc V. |last1=Le |first2=Navdeep |last2=Jaitly |title=A Simple Way to Initialize Recurrent Networks of Rectified Linear Units |date=2015 |last3=Hinton |first3=Geoffrey E.|class=cs.NE }} suggested initializing the weights in the recurrent parts of the network to the identity matrix and the biases to zero, similar in spirit to residual connections and to an LSTM with no forget gate.
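The identity-plus-zero-bias scheme for the recurrent part can be sketched as follows; the function name and the small scale used for the input-to-hidden weights are illustrative assumptions, not values prescribed by the paper.

<syntaxhighlight lang="python">
import numpy as np

def init_irnn(n_hidden, n_input, rng=None):
    """Sketch of the identity recurrent initialization:
    recurrent weights = identity, recurrent bias = 0; input weights small and random."""
    rng = np.random.default_rng() if rng is None else rng
    W_hh = np.eye(n_hidden)                                   # recurrent weights: identity
    b_h = np.zeros(n_hidden)                                  # recurrent bias: zero
    W_xh = 0.001 * rng.standard_normal((n_input, n_hidden))   # input weights (illustrative scale)
    return W_hh, W_xh, b_h
</syntaxhighlight>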

In most cases, the biases are initialized to zero, though some situations can use a nonzero initialization. For example, in multiplicative units, such as the forget gate of LSTM, the bias can be initialized to 1 to allow good gradient signal through the gate.{{Cite journal |last1=Jozefowicz |first1=Rafal |last2=Zaremba |first2=Wojciech |last3=Sutskever |first3=Ilya |date=2015-06-01 |title=An Empirical Exploration of Recurrent Network Architectures |url=https://proceedings.mlr.press/v37/jozefowicz15.html |journal=Proceedings of the 32nd International Conference on Machine Learning |language=en |publisher=PMLR |pages=2342–2350}} For neurons with ReLU activation, one can initialize the bias to a small positive value like 0.1, so that the gradient is likely nonzero at initialization, avoiding the dying ReLU problem.{{Cite book |last1=Goodfellow |first1=Ian |title=Deep learning |last2=Bengio |first2=Yoshua |last3=Courville |first3=Aaron |date=2016 |publisher=The MIT Press |isbn=978-0-262-03561-3 |series=Adaptive computation and machine learning |location=Cambridge, Massachusetts}}{{Pg|page=305}}{{cite journal |arxiv=1903.06733 |first1=Lu |last1=Lu |first2=Yeonjong |last2=Shin |title=Dying ReLU and Initialization: Theory and Numerical Examples |date=2019 |last3=Su |first3=Yanhui |last4=Karniadakis |first4=George Em|journal=Communications in Computational Physics |volume=28 |issue=5 |pages=1671–1706 |doi=10.4208/cicp.OA-2020-0165 }}
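A brief NumPy sketch of these two bias conventions; the gate ordering [input, forget, cell, output] for the LSTM bias vector is just one common storage convention and is assumed here.

<syntaxhighlight lang="python">
import numpy as np

n_hidden = 128

# LSTM biases stored as one vector with the assumed gate order [input, forget, cell, output]:
b_lstm = np.zeros(4 * n_hidden)
b_lstm[n_hidden:2 * n_hidden] = 1.0   # forget-gate bias initialized to 1

# ReLU layer: a small positive bias keeps most units active at initialization:
b_relu = np.full(n_hidden, 0.1)
</syntaxhighlight>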

Random initialization

Random initialization means sampling the weights from a normal distribution or a uniform distribution, usually independently.

= LeCun initialization =

LeCun initialization, popularized in (LeCun et al., 1998), is designed to preserve the variance of neural activations during the forward pass.

It samples each entry in W^{(l)} independently from a distribution with mean 0 and variance 1/n_{l-1}. For example, if the distribution is continuous uniform, then it is \mathcal U(\pm \sqrt{3 / n_{l-1}}).
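A NumPy sketch covering both the normal and the uniform variant (the helper name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def lecun_init(fan_in, fan_out, uniform=False, rng=None):
    """LeCun initialization: zero mean, variance 1/fan_in."""
    rng = np.random.default_rng() if rng is None else rng
    if uniform:
        limit = np.sqrt(3.0 / fan_in)   # U(-limit, +limit) has variance limit^2/3 = 1/fan_in
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(1.0 / fan_in)
</syntaxhighlight>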

= Glorot initialization =

Glorot initialization (or Xavier initialization) was proposed by Xavier Glorot and Yoshua Bengio.{{Cite journal |last1=Glorot |first1=Xavier |last2=Bengio |first2=Yoshua |date=2010-03-31 |title=Understanding the difficulty of training deep feedforward neural networks |url=https://proceedings.mlr.press/v9/glorot10a |journal=Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics |language=en |publisher=JMLR Workshop and Conference Proceedings |pages=249–256}} It was designed as a compromise between two goals: to preserve activation variance during the forward pass and to preserve gradient variance during the backward pass.

For uniform initialization, it samples each entry in W^{(l)} independently and identically from \mathcal U(\pm \sqrt{6 / (n_{l-1} + n_l)}). In this context, n_{l-1} is also called the "fan-in" and n_l the "fan-out". When the fan-in and fan-out are equal, Glorot initialization coincides with LeCun initialization.
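A NumPy sketch of the uniform variant (the helper name is illustrative); the resulting entries have variance 2/(fan_in + fan_out):

<syntaxhighlight lang="python">
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    """Glorot (Xavier) uniform initialization: variance 2/(fan_in + fan_out)."""
    rng = np.random.default_rng() if rng is None else rng
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))
</syntaxhighlight>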

= He initialization =

As Glorot initialization performs poorly for ReLU activation,{{cite arXiv |eprint=1704.08863 |first1=Siddharth Krishna |last1=Kumar |title=On weight initialization in deep neural networks |date=2017|class=cs.LG }} He initialization (or Kaiming initialization) was proposed by Kaiming He et al.{{cite arXiv |last1=He |first1=Kaiming |last2=Zhang |first2=Xiangyu |last3=Ren |first3=Shaoqing |last4=Sun |first4=Jian |date=2015 |title=Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification |class=cs.CV |eprint=1502.01852}} for networks with ReLU activation. It samples each entry in W^{(l)} from a normal distribution with mean 0 and variance 2/n_{l-1}, that is, \mathcal N(0, 2 / n_{l-1}).
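A NumPy sketch (the helper name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def he_normal(fan_in, fan_out, rng=None):
    """He (Kaiming) initialization for ReLU layers: zero mean, variance 2/fan_in."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)

W = he_normal(512, 256)
print(W.var())   # close to 2/512 ~= 0.0039
</syntaxhighlight>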

= Orthogonal initialization =

(Saxe et al. 2013){{cite arXiv |last1=Saxe |first1=Andrew M. |last2=McClelland |first2=James L. |last3=Ganguli |first3=Surya |date=2013 |title=Exact solutions to the nonlinear dynamics of learning in deep linear neural networks |class=cs.NE |eprint=1312.6120}} proposed orthogonal initialization: initializing weight matrices as uniformly random (according to the Haar measure) semi-orthogonal matrices, multiplied by a factor that depends on the activation function of the layer. It was designed so that if one initializes a deep linear network this way, then its training time until convergence is independent of depth.{{cite arXiv |eprint=2001.05992 |first1=Wei |last1=Hu |first2=Lechao |last2=Xiao |title=Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks |date=2020 |last3=Pennington |first3=Jeffrey|class=cs.LG }}

Sampling a uniformly random semi-orthogonal matrix can be done by filling a matrix X with IID samples from a standard normal distribution, then computing \left(X X^{\top}\right)^{-1 / 2} X or its transpose, depending on whether X is tall or wide.{{cite arXiv | eprint=2110.01765 | last1=Martens | first1=James | last2=Ballard | first2=Andy | last3=Desjardins | first3=Guillaume | last4=Swirszcz | first4=Grzegorz | last5=Dalibard | first5=Valentin | last6=Sohl-Dickstein | first6=Jascha | last7=Schoenholz | first7=Samuel S. | title=Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping | date=2021 | class=cs.LG }}
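A NumPy sketch of this recipe, computing \left(X X^{\top}\right)^{-1/2} via an eigendecomposition; the helper name is illustrative, and in practice libraries often obtain an orthogonal factor from a QR decomposition of a Gaussian matrix instead.

<syntaxhighlight lang="python">
import numpy as np

def semi_orthogonal(rows, cols, rng=None):
    """Sample a uniformly random semi-orthogonal matrix of shape (rows, cols)
    via the (X X^T)^{-1/2} X construction described above."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = (rows, cols) if rows <= cols else (cols, rows)   # work in the wide orientation
    X = rng.standard_normal((n, m))
    eigval, eigvec = np.linalg.eigh(X @ X.T)                # X X^T is symmetric positive definite a.s.
    inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T  # (X X^T)^{-1/2}
    W = inv_sqrt @ X                                        # rows of W are orthonormal
    return W if rows <= cols else W.T
</syntaxhighlight>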

For CNN kernels with odd widths and heights, orthogonal initialization is done this way: initialize the central point by a semi-orthogonal matrix, and fill the other entries with zero. As an illustration, a kernel K of shape 3 \times 3 \times c \times c' is initialized by filling K[2,2,:,:] with the entries of a random semi-orthogonal matrix of shape c \times c', and the other entries with zero. (Balduzzi et al., 2017){{Cite journal |last1=Balduzzi |first1=David |last2=Frean |first2=Marcus |last3=Leary |first3=Lennox |last4=Lewis |first4=J. P. |last5=Ma |first5=Kurt Wan-Duo |last6=McWilliams |first6=Brian |date=2017-07-17 |title=The Shattered Gradients Problem: If resnets are the answer, then what is the question? |url=https://proceedings.mlr.press/v70/balduzzi17b.html |journal=Proceedings of the 34th International Conference on Machine Learning |language=en |publisher=PMLR |pages=342–350}} used it with stride 1 and zero-padding. This is sometimes called the Orthogonal Delta initialization.{{Cite journal |last1=Xiao |first1=Lechao |last2=Bahri |first2=Yasaman |last3=Sohl-Dickstein |first3=Jascha |last4=Schoenholz |first4=Samuel |last5=Pennington |first5=Jeffrey |date=2018-07-03 |title=Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks |url=https://proceedings.mlr.press/v80/xiao18a |journal=Proceedings of the 35th International Conference on Machine Learning |language=en |publisher=PMLR |pages=5393–5402|arxiv=1806.05393 }}
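A sketch of the delta construction for a k \times k kernel with odd k, reusing the semi_orthogonal helper from the previous sketch; note that NumPy arrays are 0-indexed, so the centre of a 3 \times 3 kernel is K[1, 1] rather than K[2, 2].

<syntaxhighlight lang="python">
import numpy as np

def delta_orthogonal(c_in, c_out, k=3, rng=None):
    """Delta (orthogonal) initialization of a k x k x c_in x c_out kernel:
    the spatial centre holds a semi-orthogonal matrix, every other entry is zero."""
    K = np.zeros((k, k, c_in, c_out))
    K[k // 2, k // 2] = semi_orthogonal(c_in, c_out, rng)   # centre tap only
    return K
</syntaxhighlight>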

Related to this approach, unitary initialization proposes to parameterize the weight matrices to be unitary matrices, with the result that at initialization they are random unitary matrices (and throughout training, they remain unitary). This is found to improve long-sequence modelling in LSTM.{{Cite journal |last1=Arjovsky |first1=Martin |last2=Shah |first2=Amar |last3=Bengio |first3=Yoshua |date=2016-06-11 |title=Unitary Evolution Recurrent Neural Networks |url=https://proceedings.mlr.press/v48/arjovsky16.html |journal=Proceedings of the 33rd International Conference on Machine Learning |language=en |publisher=PMLR |pages=1120–1128|arxiv=1511.06464 }}{{cite arXiv |last1=Henaff |first1=Mikael |last2=Szlam |first2=Arthur |last3=LeCun |first3=Yann |title=Recurrent Orthogonal Networks and Long-Memory Tasks |date=2017-03-15 |class=cs.NE |eprint=1602.06662}}

Orthogonal initialization has been generalized to layer-sequential unit-variance (LSUV) initialization. It is a data-dependent initialization method, and can be used in convolutional neural networks. It first initializes weights of each convolution or fully connected layer with orthonormal matrices. Then, proceeding from the first to the last layer, it runs a forward pass on a random minibatch, and divides the layer's weights by the standard deviation of its output, so that its output has variance approximately 1.{{Citation |last1=Mishkin |first1=Dmytro |title=All you need is a good init |date=2016-02-19 |arxiv=1511.06422 |last2=Matas |first2=Jiri}}{{Cite conference|last1=Xie |first1=Di |last2=Xiong |first2=Jiang |last3=Pu |first3=Shiliang |date=2017 |conference=IEEE Conference on Computer Vision and Pattern Recognition (CVPR)|title=All You Need Is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks With Orthonormality and Modulation |url=https://openaccess.thecvf.com/content_cvpr_2017/html/Xie_All_You_Need_CVPR_2017_paper.html |pages=6176–6185}}
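A minimal sketch of the LSUV rescaling loop for a plain MLP, assuming the weight matrices have already been initialized (semi-)orthogonally and that x_batch is a random minibatch; the tolerance, iteration cap, and tanh nonlinearity are illustrative choices, not values prescribed by the paper.

<syntaxhighlight lang="python">
import numpy as np

def lsuv_rescale(weights, biases, x_batch, activation=np.tanh, tol=0.05, max_iter=10):
    """Layer-sequential unit-variance rescaling for a plain MLP.
    weights[l] has shape (n_in, n_out) and is assumed pre-initialized orthogonally."""
    h = x_batch
    for W, b in zip(weights, biases):
        for _ in range(max_iter):
            z = h @ W + b                 # this layer's output on the minibatch
            if abs(z.std() - 1.0) < tol:
                break
            W /= z.std()                  # rescale so the output variance approaches 1
        h = activation(h @ W + b)         # propagate the minibatch to the next layer
    return weights, biases
</syntaxhighlight>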

= Fixup initialization =

In 2015, the introduction of residual connections allowed very deep neural networks to be trained, much deeper than the ~20 layers of the previous state of the art (such as VGG-19). Residual connections gave rise to their own weight initialization problems and strategies. These are sometimes called "normalization-free" methods, since residual connections can stabilize the training of a deep neural network so much that normalization layers become unnecessary.

Fixup initialization is designed specifically for networks with residual connections and without batch normalization, as follows:{{cite arXiv |eprint=1901.09321 |first1=Hongyi |last1=Zhang |first2=Yann N. |last2=Dauphin |title=Fixup Initialization: Residual Learning Without Normalization |date=2019 |last3=Ma |first3=Tengyu|class=cs.LG }}

  1. Initialize the classification layer and the last layer of each residual branch to 0.
  2. Initialize every other layer using a standard method (such as He initialization), and scale only the weight layers inside residual branches by L^{-\frac{1}{2 m-2}}, where L is the number of residual branches and m is the number of layers inside each branch (see the sketch after this list).
  3. Add a scalar multiplier (initialized at 1) in every branch and a scalar bias (initialized at 0) before each convolution, linear, and element-wise activation layer.
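A minimal sketch of steps 1 and 2 for a single residual branch, assuming its m weight arrays (m \ge 2) were already He-initialized; num_branches plays the role of L, and the helper name is illustrative.

<syntaxhighlight lang="python">
import numpy as np

def fixup_rescale_branch(branch_weights, num_branches):
    """Fixup steps 1 and 2 for one residual branch (a list of m >= 2 weight arrays):
    zero the branch's last layer and scale the others by L^(-1/(2m-2))."""
    m = len(branch_weights)
    scale = num_branches ** (-1.0 / (2 * m - 2))
    for W in branch_weights[:-1]:
        W *= scale                    # in-place rescaling of the inner layers
    branch_weights[-1][...] = 0.0     # last layer of the residual branch set to 0
    return branch_weights
</syntaxhighlight>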

Similarly, T-Fixup initialization is designed for Transformers without layer normalization.{{Cite journal |last1=Huang |first1=Xiao Shi |last2=Perez |first2=Felipe |last3=Ba |first3=Jimmy |last4=Volkovs |first4=Maksims |date=2020-11-21 |title=Improving Transformer Optimization Through Better Initialization |url=https://proceedings.mlr.press/v119/huang20f.html |journal=Proceedings of the 37th International Conference on Machine Learning |language=en |publisher=PMLR |pages=4475–4483}}{{Pg|page=9}}

= Others =

Instead of initializing all weights with random values on the order of O(1/\sqrt n), sparse initialization initializes only a small subset of the weights with larger random values and sets the others to zero, so that the total variance is still on the order of O(1).{{Cite journal |last=Martens |first=James |date=2010-06-21 |title=Deep learning via Hessian-free optimization |url=https://dl.acm.org/doi/10.5555/3104322.3104416 |journal=Proceedings of the 27th International Conference on International Conference on Machine Learning |series=ICML'10 |location=Madison, WI, USA |publisher=Omnipress |pages=735–742 |isbn=978-1-60558-907-7}}
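A sketch of sparse initialization; the number of nonzero incoming connections per unit and the weight scale are illustrative parameters rather than prescribed values.

<syntaxhighlight lang="python">
import numpy as np

def sparse_init(fan_in, fan_out, nonzero=15, std=1.0, rng=None):
    """Sparse initialization: each output unit receives `nonzero` randomly chosen
    nonzero incoming weights; all other weights are zero."""
    rng = np.random.default_rng() if rng is None else rng
    W = np.zeros((fan_in, fan_out))
    k = min(nonzero, fan_in)
    for j in range(fan_out):
        idx = rng.choice(fan_in, size=k, replace=False)   # which inputs feed unit j
        W[idx, j] = std * rng.standard_normal(k)          # larger random values on that subset
    return W
</syntaxhighlight>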

Random walk initialization was designed for MLP so that during backpropagation, the L2 norm of gradient at each layer performs an unbiased random walk as one moves from the last layer to the first.{{cite arXiv |eprint=1412.6558 |first1=David |last1=Sussillo |first2=L. F. |last2=Abbott |title=Random Walk Initialization for Training Very Deep Feedforward Networks |date=2014|class=cs.NE }}

Looks linear initialization was designed to allow the neural network to behave like a deep linear network at initialization, since W\; \mathrm{ReLU}(x) - W\; \mathrm{ReLU}(-x) = Wx. It initializes a matrix W of shape \R^{\frac n2 \times m} by any method, such as orthogonal initialization, then lets the \R^{n \times m} weight matrix be the concatenation of W and -W.{{cite arXiv | eprint=1702.08591 | last1=Balduzzi | first1=David | last2=Frean | first2=Marcus | last3=Leary | first3=Lennox | last4=Lewis | first4=JP | author5=Kurt Wan-Duo Ma | last6=McWilliams | first6=Brian | title=The Shattered Gradients Problem: If resnets are the answer, then what is the question? | date=2017 | class=cs.NE }}
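A sketch of the construction, assuming n is even; a Gaussian base method is used here for brevity, though orthogonal initialization works equally well.

<syntaxhighlight lang="python">
import numpy as np

def looks_linear(n, m, rng=None):
    """Looks-linear initialization: build W of shape (n/2, m) by some base method
    (a Gaussian here for brevity), then stack W and -W, so that at initialization
    the mirrored weights satisfy W relu(x) - W relu(-x) = W x."""
    rng = np.random.default_rng() if rng is None else rng
    W = rng.standard_normal((n // 2, m)) * np.sqrt(2.0 / m)   # base method; orthogonal also works
    return np.concatenate([W, -W], axis=0)                    # shape (n, m)
</syntaxhighlight>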

Miscellaneous

For the hyperbolic tangent activation function, a particular scaling is sometimes used: f(x) = 1.7159 \tanh(2x/3). This is sometimes called "LeCun's tanh". It was designed so that f maps the interval [-1, +1] to itself, thus ensuring that the overall gain is around 1 in "normal operating conditions", and so that |f''(x)| is at its maximum at x = -1, +1, which improves convergence at the end of training.{{cite book | last1=LeCun | first1=Y. | chapter=Generalization and network design strategies | chapter-url=https://masters.donntu.ru/2012/fknt/umiarov/library/lecun.pdf | editor-last1=Pfeifer | editor-first1=R. | editor-last2=Schreter | editor-first2=Z. | editor-last3=Fogelman | editor-first3=F. | editor-last4=Steels | editor-first4=L. | title=Connectionism in Perspective: Proceedings of the International Conference Connectionism in Perspective, University of Zurich, 10–13 October 1988 | location=Amsterdam | publisher=Elsevier | date=1989}}{{Citation |last1=LeCun |first1=Yann |title=Efficient BackProp |date=1998 |work=Neural Networks: Tricks of the Trade |pages=9–50 |editor-last=Orr |editor-first=Genevieve B. |url=https://link.springer.com/chapter/10.1007/3-540-49430-8_2 |access-date=2024-10-05 |place=Berlin, Heidelberg |publisher=Springer |language=en |doi=10.1007/3-540-49430-8_2 |isbn=978-3-540-49430-0 |last2=Bottou |first2=Leon |last3=Orr |first3=Genevieve B. |last4=Müller |first4=Klaus-Robert |editor2-last=Müller |editor2-first=Klaus-Robert |url-access=subscription}}
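A quick numerical check of the claim that this scaled tanh fixes the endpoints of [-1, +1]:

<syntaxhighlight lang="python">
import numpy as np

f = lambda x: 1.7159 * np.tanh(2.0 * x / 3.0)
print(f(1.0), f(-1.0))   # approximately +1.0 and -1.0, so x = -1, +1 are (near) fixed points of f
</syntaxhighlight>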

In self-normalizing neural networks, the SELU activation function \mathrm{SELU}(x) = \lambda \begin{cases} x & \text{if } x > 0 \\ \alpha e^x - \alpha & \text{if } x \le 0 \end{cases} with parameters \lambda \approx 1.0507, \alpha \approx 1.6733 makes the mean and variance of each layer's output have (0, 1) as an attracting fixed point. This makes initialization less important, though the authors recommend initializing weights randomly with variance 1/n_{l-1}.{{Cite journal |last1=Klambauer |first1=Günter |last2=Unterthiner |first2=Thomas |last3=Mayr |first3=Andreas |last4=Hochreiter |first4=Sepp |date=2017 |title=Self-Normalizing Neural Networks |url=https://proceedings.neurips.cc/paper_files/paper/2017/hash/5d44ee6f2c3f71b73125876103c8f6c4-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=30}}
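A small NumPy experiment (sizes and seed are arbitrary) illustrating the attracting fixed point: with roughly standardized inputs and weights of variance 1/n_{l-1}, the mean and standard deviation of the SELU outputs come out close to 0 and 1.

<syntaxhighlight lang="python">
import numpy as np

def selu(x, lam=1.0507, alpha=1.6733):
    return lam * np.where(x > 0, x, alpha * np.expm1(x))   # alpha*(e^x - 1) = alpha*e^x - alpha

rng = np.random.default_rng(0)
fan_in, fan_out, batch = 1024, 1024, 4096
x = rng.standard_normal((batch, fan_in))                             # standardized inputs
W = rng.standard_normal((fan_in, fan_out)) * np.sqrt(1.0 / fan_in)   # variance 1/n_{l-1}
h = selu(x @ W)
print(h.mean(), h.std())   # close to 0 and 1, respectively
</syntaxhighlight>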

History

Random weight initialization has been used since Frank Rosenblatt's perceptrons. An early work that described weight initialization specifically was (LeCun et al., 1998).

Before the 2010s era of deep learning, it was common to initialize models by "generative pre-training" using an unsupervised learning algorithm that is not backpropagation, as it was difficult to directly train deep neural networks by backpropagation.{{Cite journal |last1=Bengio |first1=Y. |year=2009 |title=Learning Deep Architectures for AI |url=http://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf |journal=Foundations and Trends in Machine Learning |volume=2 |pages=1–127 |citeseerx=10.1.1.701.9550 |doi=10.1561/2200000006}}{{Cite journal |last1=Erhan |first1=Dumitru |last2=Courville |first2=Aaron |last3=Bengio |first3=Yoshua |last4=Vincent |first4=Pascal |date=2010-03-31 |title=Why Does Unsupervised Pre-training Help Deep Learning? |url=https://proceedings.mlr.press/v9/erhan10a.html |journal=Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics |language=en |publisher=JMLR Workshop and Conference Proceedings |pages=201–208}} For example, a deep belief network was trained by using contrastive divergence layer by layer, starting from the bottom.{{Cite journal |last1=Bengio |first1=Yoshua |last2=Lamblin |first2=Pascal |last3=Popovici |first3=Dan |last4=Larochelle |first4=Hugo |date=2006 |title=Greedy Layer-Wise Training of Deep Networks |url=https://proceedings.neurips.cc/paper/2006/hash/5da713a690c067105aeb2fae32403405-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=MIT Press |volume=19}}

(Martens, 2010) proposed Hessian-free Optimization, a quasi-Newton method to directly train deep networks. The work generated considerable excitement that initializing networks without a pre-training phase was possible.{{Cite journal |last1=Glorot |first1=Xavier |last2=Bordes |first2=Antoine |last3=Bengio |first3=Yoshua |date=2011-06-14 |title=Deep Sparse Rectifier Neural Networks |url=https://proceedings.mlr.press/v15/glorot11a |journal=Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics |language=en |publisher=JMLR Workshop and Conference Proceedings |pages=315–323}} However, a 2013 paper demonstrated that, with well-chosen hyperparameters and weight initialization, momentum gradient descent was sufficient for training deep neural networks, without needing either a quasi-Newton method or generative pre-training; this combination is still in use as of 2024.{{Cite journal |last1=Sutskever |first1=Ilya |last2=Martens |first2=James |last3=Dahl |first3=George |last4=Hinton |first4=Geoffrey |date=2013-05-26 |title=On the importance of initialization and momentum in deep learning |url=https://www.cs.utoronto.ca/~ilya/pubs/2013/1051_2.pdf |journal=Proceedings of the 30th International Conference on Machine Learning |language=en |publisher=PMLR |pages=1139–1147}}

Since then, the impact of initialization on tuning the variance has become less important, with methods developed to automatically tune variance, like batch normalization tuning the variance of the forward pass,{{Cite journal |last1=Bjorck |first1=Nils |last2=Gomes |first2=Carla P |last3=Selman |first3=Bart |last4=Weinberger |first4=Kilian Q |date=2018 |title=Understanding Batch Normalization |url=https://proceedings.neurips.cc/paper/2018/hash/36072923bfc3cf47745d704feb489480-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=31|arxiv=1806.02375 }} and momentum-based optimizers tuning the variance of the backward pass.{{Cite journal |last1=Balles |first1=Lukas |last2=Hennig |first2=Philipp |date=2018-07-03 |title=Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients |url=https://proceedings.mlr.press/v80/balles18a |journal=Proceedings of the 35th International Conference on Machine Learning |language=en |publisher=PMLR |pages=404–413|arxiv=1705.07774 }}

There is a tension between using careful weight initialization to decrease the need for normalization, and using normalization to decrease the need for careful weight initialization, with each approach having its tradeoffs. For example, batch normalization causes training examples in the minibatch to become dependent, an undesirable trait, while weight initialization is architecture-dependent.{{cite arXiv | eprint=2102.06171 | last1=Brock | first1=Andrew | last2=De | first2=Soham | last3=Smith | first3=Samuel L. | last4=Simonyan | first4=Karen | title=High-Performance Large-Scale Image Recognition Without Normalization | date=2021 | class=cs.CV }}

See also

References

{{Reflist|30em}}

Further reading

  • {{Cite book |last1=Goodfellow |first1=Ian |title=Deep learning |last2=Bengio |first2=Yoshua |last3=Courville |first3=Aaron |date=2016 |publisher=The MIT press |isbn=978-0-262-03561-3 |series=Adaptive computation and machine learning |location=Cambridge, Mass |chapter=8.4 Parameter Initialization Strategies |chapter-url=https://www.deeplearningbook.org/contents/optimization.html}}
  • {{cite journal |last1=Narkhede |first1=Meenal V. |last2=Bartakke |first2=Prashant P. |last3=Sutaone |first3=Mukul S. |date=June 28, 2021 |title=A review on weight initialization strategies for neural networks |journal=Artificial Intelligence Review |publisher=Springer Science and Business Media LLC |volume=55 |issue=1 |pages=291–322 |doi=10.1007/s10462-021-10033-z |issn=0269-2821}}

{{Artificial intelligence navbox}}

Category:Machine learning

Category:Artificial neural networks

Category:Deep learning