Vanishing gradient problem

{{short description|Machine learning model training problem}}

{{machine learning bar}}

In machine learning, the vanishing gradient problem is the problem of greatly diverging gradient magnitudes between earlier and later layers encountered when training neural networks with backpropagation. In such methods, neural network weights are updated proportionally to the partial derivative of the loss function with respect to each weight.{{Cite journal|last1=Basodi|first1=Sunitha|last2=Ji|first2=Chunyan|last3=Zhang|first3=Haiping|last4=Pan|first4=Yi|date=September 2020|title=Gradient amplification: An efficient way to train deep neural networks|journal=Big Data Mining and Analytics|volume=3|issue=3|pages=198|doi=10.26599/BDMA.2020.9020004|s2cid=219792172 |issn=2096-0654|doi-access=free|arxiv=2006.10560}} As the number of forward propagation steps in a network increases, for instance due to greater network depth, the gradients of earlier weights are calculated with increasingly many multiplications. When these multiplications shrink the gradient magnitude, the gradients of earlier weights become exponentially smaller than the gradients of later weights. This difference in gradient magnitude can destabilize the training process, slow it, or halt it entirely. For instance, consider the hyperbolic tangent activation function: its derivative always lies in the interval {{open-closed|0,1}}, so a product of many such factors decreases exponentially with depth. The inverse problem, in which weight gradients at earlier layers grow exponentially larger, is called the exploding gradient problem.
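The exponential decay can be seen in a few lines of code. The following sketch is an illustration only; the layer width, the number of layers, and the weight scale are arbitrary choices not taken from the cited sources. It multiplies together the Jacobians of a chain of hyperbolic-tangent layers and prints the spectral norm of the running product, which shrinks steadily as the chain deepens because each tanh derivative is at most one.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 10                      # width of every layer (arbitrary)
x = rng.normal(size=n)      # input to the chain of layers
product = np.eye(n)         # running product of the layer Jacobians

for layer in range(1, 51):
    W = rng.normal(size=(n, n)) / np.sqrt(n)   # a generic weight matrix
    pre = W @ x
    x = np.tanh(pre)                           # forward pass through one layer
    # Jacobian of this layer: diag(tanh'(pre)) @ W, with tanh' bounded by 1
    J = np.diag(1.0 - np.tanh(pre) ** 2) @ W
    product = J @ product
    if layer % 10 == 0:
        print(layer, np.linalg.norm(product, 2))   # norm shrinks with depth
</syntaxhighlight>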

Backpropagation allowed researchers to train supervised deep artificial neural networks from scratch, initially with little success. Hochreiter's diplom thesis of 1991 formally identified the reason for this failure in the "vanishing gradient problem",{{cite thesis |first=S. |last=Hochreiter |title=Untersuchungen zu dynamischen neuronalen Netzen |type=Diplom thesis |publisher=Institut f. Informatik, Technische Univ. Munich |year=1991 |url=http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf }}{{cite book |last1=Hochreiter |first1=S. |title=A Field Guide to Dynamical Recurrent Neural Networks |last2=Bengio |first2=Y. |last3=Frasconi |first3=P. |last4=Schmidhuber |first4=J. |publisher=IEEE Press |year=2001 |isbn=0-7803-5369-2 |editor-last=Kremer |editor-first=S. C. |chapter=Gradient flow in recurrent nets: the difficulty of learning long-term dependencies |doi=10.1109/9780470544037.ch14 |editor2-last=Kolen |editor2-first=J. F. |chapter-url=https://ieeexplore.ieee.org/document/5264952}} which not only affects many-layered feedforward networks,{{Cite journal|last1=Goh|first1=Garrett B.|last2=Hodas|first2=Nathan O.|last3=Vishnu|first3=Abhinav|date=2017-06-15|title=Deep learning for computational chemistry|journal=Journal of Computational Chemistry|language=en|volume=38|issue=16|pages=1291–1307|doi=10.1002/jcc.24764|pmid=28272810|arxiv=1701.04503|bibcode=2017arXiv170104503G|s2cid=6831636}} but also recurrent networks.{{Cite conference|conference=IEEE International Conference on Neural Networks|last1=Bengio |first1=Y. |last2=Frasconi |first2=P. |last3=Simard |first3=P. |date=1993 |title=The problem of learning long-term dependencies in recurrent networks |url=https://ieeexplore.ieee.org/document/298725 |publisher=IEEE |pages=1183–1188 |doi=10.1109/ICNN.1993.298725 |isbn=978-0-7803-0999-9}}{{cite arXiv|last1=Pascanu|first1=Razvan|last2=Mikolov|first2=Tomas|last3=Bengio|first3=Yoshua|date=2012-11-21|title=On the difficulty of training Recurrent Neural Networks|eprint=1211.5063|class=cs.LG}} The latter are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time-step of an input sequence processed by the network (the combination of unfolding and backpropagation is termed backpropagation through time).

{{toclimit|3}}

== Prototypical models ==

This section is based on the paper ''On the difficulty of training Recurrent Neural Networks'' by Pascanu, Mikolov, and Bengio.

=== Recurrent network model ===

A generic recurrent network has hidden states <math>h_1, h_2, \ldots</math>, inputs <math>u_1, u_2, \ldots</math>, and outputs <math>x_1, x_2, \ldots</math>. Let it be parametrized by <math>\theta</math>, so that the system evolves as
<math display="block">(h_t, x_t) = F(h_{t-1}, u_t, \theta).</math>
Often, the output <math>x_t</math> is a function of <math>h_t</math>, as some <math>x_t = G(h_t)</math>. The vanishing gradient problem already presents itself clearly when <math>x_t = h_t</math>, so we simplify our notation to the special case
<math display="block">x_t = F(x_{t-1}, u_t, \theta).</math>
Now, take its differential:
<math display="block">\begin{align}
dx_t &= \nabla_\theta F(x_{t-1}, u_t, \theta) \, d\theta + \nabla_x F(x_{t-1}, u_t, \theta) \, dx_{t-1} \\
&= \nabla_\theta F(x_{t-1}, u_t, \theta) \, d\theta + \nabla_x F(x_{t-1}, u_t, \theta)\left(\nabla_\theta F(x_{t-2}, u_{t-1}, \theta) \, d\theta + \nabla_x F(x_{t-2}, u_{t-1}, \theta) \, dx_{t-2}\right) \\
&= \cdots \\
&= \left(\nabla_\theta F(x_{t-1}, u_t, \theta) + \nabla_x F(x_{t-1}, u_t, \theta) \nabla_\theta F(x_{t-2}, u_{t-1}, \theta) + \cdots \right) d\theta
\end{align}</math>

Training the network requires us to define a loss function to be minimized. Let it be <math>L(x_T, u_1, \ldots, u_T)</math>.{{NoteTag|note=A more general loss function could depend on the entire sequence of outputs, as <math>L(x_1, \ldots, x_T, u_1, \ldots, u_T) = \sum_{t=1}^{T} \mathcal{E}(x_t, u_1, \ldots, u_t)</math>, for which the problem is the same, just with more complex notation.|name=let it be L}} Minimizing it by gradient descent gives
{{NumBlk|:|<math>dL = \nabla_x L(x_T, u_1, \ldots, u_T)\left(\nabla_\theta F(x_{t-1}, u_t, \theta) + \nabla_x F(x_{t-1}, u_t, \theta) \nabla_\theta F(x_{t-2}, u_{t-1}, \theta) + \cdots \right) d\theta</math>|{{EquationRef|loss differential}}}}
and the corresponding gradient-descent update is
<math display="block">\Delta \theta = -\eta \cdot \left[\nabla_x L(x_T)\left(\nabla_\theta F(x_{t-1}, u_t, \theta) + \nabla_x F(x_{t-1}, u_t, \theta) \nabla_\theta F(x_{t-2}, u_{t-1}, \theta) + \cdots \right)\right]^T</math>
where <math>\eta</math> is the learning rate.

The vanishing/exploding gradient problem appears because there are repeated multiplications of the form
<math display="block">\nabla_x F(x_{t-1}, u_t, \theta)\, \nabla_x F(x_{t-2}, u_{t-1}, \theta)\, \nabla_x F(x_{t-3}, u_{t-2}, \theta) \cdots</math>

==== Example: recurrent network with sigmoid activation ====

For a concrete example, consider a typical recurrent network defined by

<math display="block">x_t = F(x_{t-1}, u_t, \theta) = W_{rec} \sigma(x_{t-1}) + W_{in} u_t + b</math>
where <math>\theta = (W_{rec}, W_{in})</math> is the network parameter, <math>\sigma</math> is the sigmoid activation function{{NoteTag|note=Any activation function works, as long as it is differentiable with bounded derivative.|name=sigmoid activation function}}, applied to each vector coordinate separately, and <math>b</math> is the bias vector.

Then <math>\nabla_x F(x_{t-1}, u_t, \theta) = W_{rec} \operatorname{diag}(\sigma'(x_{t-1}))</math>, and so
<math display="block">\begin{align}
& \nabla_x F(x_{t-1}, u_t, \theta)\, \nabla_x F(x_{t-2}, u_{t-1}, \theta) \cdots \nabla_x F(x_{t-k}, u_{t-k+1}, \theta) \\
={} & W_{rec} \operatorname{diag}(\sigma'(x_{t-1}))\, W_{rec} \operatorname{diag}(\sigma'(x_{t-2})) \cdots W_{rec} \operatorname{diag}(\sigma'(x_{t-k}))
\end{align}</math>
Since <math>|\sigma'| \leq 1</math>, the operator norm of the above product is bounded above by <math>\|W_{rec}\|^k</math>. So if the spectral radius of <math>W_{rec}</math> is <math>\gamma < 1</math>, then at large <math>k</math>, the above product has operator norm bounded above by <math>\gamma^k \to 0</math>. This is the prototypical vanishing gradient problem.
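The bound can be checked numerically. The sketch below is illustrative only; the dimension and the weight scale are arbitrary, chosen so that the spectral radius of <math>W_{rec}</math> is below one. It accumulates the product of transition Jacobians <math>W_{rec} \operatorname{diag}(\sigma'(x_{t-1}))</math> along a trajectory and prints its operator norm next to <math>\gamma^k</math>; both decay exponentially in <math>k</math>.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 20
# Scale chosen (arbitrarily) so that the spectral radius of W_rec is below 1.
W_rec = rng.normal(size=(n, n)) * 0.5 / np.sqrt(n)
W_in = rng.normal(size=(n, n)) / np.sqrt(n)
b = np.zeros(n)
gamma = max(abs(np.linalg.eigvals(W_rec)))      # spectral radius of W_rec

x = rng.normal(size=n)
product = np.eye(n)
for k in range(1, 41):
    u = rng.normal(size=n)                      # a bounded random input
    s = sigmoid(x)
    # Transition Jacobian: d x_t / d x_{t-1} = W_rec @ diag(sigma'(x_{t-1}))
    product = product @ (W_rec @ np.diag(s * (1.0 - s)))
    x = W_rec @ s + W_in @ u + b                # one step of the recurrence
    if k % 10 == 0:
        print(k, np.linalg.norm(product, 2), gamma ** k)
</syntaxhighlight>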

The effect of a vanishing gradient is that the network cannot learn long-range effects. Recall Equation ({{EquationNote|loss differential}}):
<math display="block">\nabla_\theta L = \nabla_x L(x_T, u_1, \ldots, u_T)\left(\nabla_\theta F(x_{t-1}, u_t, \theta) + \nabla_x F(x_{t-1}, u_t, \theta) \nabla_\theta F(x_{t-2}, u_{t-1}, \theta) + \cdots \right)</math>
The components of <math>\nabla_\theta F(x, u, \theta)</math> are just components of <math>\sigma(x)</math> and <math>u</math>, so if <math>u_t, u_{t-1}, \ldots</math> are bounded, then <math>\|\nabla_\theta F(x_{t-k-1}, u_{t-k}, \theta)\|</math> is also bounded by some <math>M > 0</math>, and so the terms in <math>\nabla_\theta L</math> decay as <math>M\gamma^k</math>. This means that, effectively, <math>\nabla_\theta L</math> is affected only by the first <math>O(\gamma^{-1})</math> terms in the sum.

If <math>\gamma \geq 1</math>, the above analysis does not quite work.{{NoteTag|name=not quite work|note=Consider <math>W_{rec} = \begin{bmatrix} 0 & 2 \\ \epsilon & 0 \end{bmatrix}</math> and <math>D = \begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix}</math>, with <math>\epsilon > \frac{1}{2}</math> and <math>c \in (0, 1)</math>. Then <math>W_{rec}</math> has spectral radius <math>\sqrt{2\epsilon} > 1</math>, and <math>(W_{rec}D)^{2N} = (2\epsilon \cdot c^2)^N I_{2\times 2}</math>, which might go to infinity or zero depending on the choice of <math>c</math>.}} For the prototypical exploding gradient problem, the next model is clearer.

=== Dynamical systems model ===

[[File:One-neuron recurrent network bifurcation diagram.png|thumb|Bifurcation diagram of the one-neuron recurrent network. The horizontal axis is <math>b</math>, and the vertical axis is <math>x</math>. The black curve is the set of stable and unstable equilibria. Notice that the system exhibits hysteresis, and can be used as a one-bit memory.]]

Following (Doya, 1992),{{Cite book |last=Doya |first=K. |title=[Proceedings] 1992 IEEE International Symposium on Circuits and Systems |chapter=Bifurcations in the learning of recurrent neural networks |chapter-url=http://dx.doi.org/10.1109/iscas.1992.230622 |year=1992 |volume=6 |pages=2777–2780 |publisher=IEEE |doi=10.1109/iscas.1992.230622|isbn=0-7803-0593-0 |s2cid=15069221 }} consider this one-neuron recurrent network with sigmoid activation:
<math display="block">x_{t+1} = (1-\epsilon) x_t + \epsilon \sigma(w x_t + b) + \epsilon w' u_t</math>
In the small <math>\epsilon</math> limit, the dynamics of the network becomes
<math display="block">\frac{dx}{dt} = -x(t) + \sigma(w x(t) + b) + w' u(t)</math>
Consider first the autonomous case, with <math>u = 0</math>. Set <math>w = 5.0</math> and vary <math>b</math> over <math>[-3, -2]</math>. As <math>b</math> decreases, the system first has one stable point, then has two stable points and one unstable point, and finally has one stable point again. Explicitly, the equilibria lie on the curve <math>(x, b) = \left(x, \ln\left(\frac{x}{1-x}\right) - 5x\right)</math>.
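The bistable region can be reproduced directly from the discrete update rule. In the sketch below (illustrative only; <math>\epsilon = 0.1</math>, the number of steps, and the two starting points are arbitrary choices), the state is iterated until it settles: for <math>b = -2.5</math> the final state depends on where the iteration starts, while for <math>b = -3</math> and <math>b = -2</math> both starting points reach the same fixed point.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def settle(x0, b, w=5.0, eps=0.1, steps=2000):
    """Iterate x_{t+1} = (1-eps)*x_t + eps*sigmoid(w*x_t + b) with u = 0."""
    x = x0
    for _ in range(steps):
        x = (1.0 - eps) * x + eps * sigmoid(w * x + b)
    return x

for b in (-3.0, -2.5, -2.0):
    low = settle(0.0, b)    # start near the lower branch
    high = settle(1.0, b)   # start near the upper branch
    print(f"b = {b:+.1f}:  x(T) = {low:.3f} from x(0)=0,  x(T) = {high:.3f} from x(0)=1")
</syntaxhighlight>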

Now consider <math>\frac{\Delta x(T)}{\Delta x(0)}</math> and <math>\frac{\Delta x(T)}{\Delta b}</math>, where <math>T</math> is large enough that the system has settled into one of the stable points.

If <math>(x(0), b)</math> puts the system very close to an unstable point, then a tiny variation in <math>x(0)</math> or <math>b</math> would make <math>x(T)</math> move from one stable point to the other. This makes <math>\frac{\Delta x(T)}{\Delta x(0)}</math> and <math>\frac{\Delta x(T)}{\Delta b}</math> both very large, a case of the exploding gradient.

If <math>(x(0), b)</math> puts the system far from an unstable point, then a small variation in <math>x(0)</math> has no effect on <math>x(T)</math>, making <math>\frac{\Delta x(T)}{\Delta x(0)} = 0</math>, a case of the vanishing gradient.

Note that in this case, <math>\frac{\Delta x(T)}{\Delta b} \approx \frac{\partial x(T)}{\partial b} = \left(\frac{1}{x(T)(1-x(T))} - 5\right)^{-1}</math> neither decays to zero nor blows up to infinity. Indeed, it is the only well-behaved gradient, which explains why early research focused on learning or designing recurrent network systems that could perform long-range computations (such as outputting the first input seen at the very end of an episode) by shaping their stable attractors.{{Cite journal |last1=Bengio |first1=Y. |last2=Simard |first2=P. |last3=Frasconi |first3=P. |date=March 1994 |title=Learning long-term dependencies with gradient descent is difficult |url=https://ieeexplore.ieee.org/document/279181/ |journal=IEEE Transactions on Neural Networks |volume=5 |issue=2 |pages=157–166 |doi=10.1109/72.279181 |pmid=18267787 |s2cid=206457500 |issn=1941-0093}}

For the general case, the intuition still holds (see Figures 3, 4, and 5 of Pascanu, Mikolov, and Bengio, 2012).

=== Geometric model ===

Continue using the above one-neuron network, fixing <math>w = 5</math>, <math>x(0) = 0.5</math>, <math>u(t) = 0</math>, and consider a loss function defined by <math>L(x(T)) = (0.855 - x(T))^2</math>. This produces a rather pathological loss landscape: as <math>b</math> approaches <math>-2.5</math> from above, the loss approaches zero, but as soon as <math>b</math> crosses <math>-2.5</math>, the attractor basin changes, and the loss jumps to 0.50.{{NoteTag|note=This is because at <math>b = -2.5</math>, the two stable attractors are <math>x = 0.145, 0.855</math>, and the unstable attractor is <math>x = 0.5</math>.|name=attractor}}

Consequently, attempting to train <math>b</math> by gradient descent would "hit a wall in the loss landscape" and cause an exploding gradient. A slightly more complex situation is plotted in Figure 6 of Pascanu, Mikolov, and Bengio (2012).
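The jump in the loss can be reproduced numerically. The sketch below is illustrative only; the relaxation constant <math>\epsilon</math> and the number of iteration steps are arbitrary choices. It runs the one-neuron network from <math>x(0) = 0.5</math> to convergence for several values of <math>b</math> and evaluates <math>L(x(T)) = (0.855 - x(T))^2</math>; the loss is close to zero just above <math>b = -2.5</math> and close to 0.5 just below it.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def x_final(b, x0=0.5, w=5.0, eps=0.1, steps=5000):
    """Run the one-neuron network with u = 0 until it settles into an attractor."""
    x = x0
    for _ in range(steps):
        x = (1.0 - eps) * x + eps * sigmoid(w * x + b)
    return x

for b in (-2.40, -2.49, -2.499, -2.501, -2.51, -2.60):
    loss = (0.855 - x_final(b)) ** 2
    print(f"b = {b:+.3f}   loss = {loss:.4f}")
</syntaxhighlight>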

== Solutions ==

{{Multiple issues|section=y|

{{more science citations needed | section | date= December 2017}}

{{unreliable sources | section | date= December 2017}}

}}

To overcome this problem, several methods have been proposed.

=== RNN ===

For recurrent neural networks, the long short-term memory (LSTM) network was designed to solve the problem (Hochreiter & Schmidhuber, 1997).{{cite journal |last1=Hochreiter |first1=Sepp |author-link=Sepp Hochreiter |last2=Schmidhuber |first2=Jürgen |author-link2=Jürgen Schmidhuber |year=1997 |title=Long Short-Term Memory |journal=Neural Computation |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735 |pmid=9377276 |s2cid=1915014}}

For the exploding gradient problem, Pascanu et al. (2012) recommended gradient clipping: the gradient vector <math>g</math> is divided by <math>\|g\|/g_{max}</math> whenever <math>\|g\| > g_{max}</math>. This restricts the gradient vectors to a ball of radius <math>g_{max}</math>.
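A minimal sketch of this clipping rule (the threshold <math>g_{max} = 1.0</math> is an arbitrary illustrative choice):

<syntaxhighlight lang="python">
import numpy as np

def clip_gradient(g, g_max=1.0):
    """Rescale g so that its Euclidean norm never exceeds g_max."""
    norm = np.linalg.norm(g)
    if norm > g_max:
        g = g * (g_max / norm)   # equivalently, divide g by norm/g_max
    return g

g = np.array([3.0, 4.0])             # norm 5
print(clip_gradient(g, g_max=1.0))   # [0.6 0.8], norm 1
</syntaxhighlight>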

=== Batch normalization ===

Batch normalization is a standard method for solving both the exploding and the vanishing gradient problems.{{Cite journal |last1=Ioffe |first1=Sergey |last2=Szegedy |first2=Christian |date=2015-06-01 |title=Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift |url=https://proceedings.mlr.press/v37/ioffe15.html |journal=International Conference on Machine Learning |language=en |publisher=PMLR |pages=448–456|arxiv=1502.03167 }}{{Cite journal |last1=Santurkar |first1=Shibani |last2=Tsipras |first2=Dimitris |last3=Ilyas |first3=Andrew |last4=Madry |first4=Aleksander |date=2018 |title=How Does Batch Normalization Help Optimization? |url=https://proceedings.neurips.cc/paper/2018/hash/905056c1ac1dad141560467e0a99e1cf-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=31}}
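The following sketch shows the core training-time transformation only (a hand-written illustration, not the full method: the learnable parameters are fixed and the running statistics used at inference time are omitted). Each feature is standardized over the mini-batch and then rescaled and shifted.

<syntaxhighlight lang="python">
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for a mini-batch x of shape (batch, features)."""
    mean = x.mean(axis=0)                      # per-feature mean over the batch
    var = x.var(axis=0)                        # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)    # standardized activations
    return gamma * x_hat + beta                # learnable scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=10.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))   # ~0 mean, ~1 std per feature
</syntaxhighlight>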

=== Multi-level hierarchy ===

Schmidhuber (1992) proposed a multi-level hierarchy of networks, pre-trained one level at a time through unsupervised learning and fine-tuned through backpropagation.{{cite journal |first=J. |last=Schmidhuber |title=Learning complex, extended sequences using the principle of history compression |journal=Neural Computation |volume=4 |pages=234–242 |year=1992}} Here each level learns a compressed representation of the observations that is fed to the next level.

=== Deep belief network ===

Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al. (2006) involves learning the distribution of a high-level representation using successive layers of binary or real-valued latent variables. It uses a restricted Boltzmann machine to model each new layer of higher-level features. Each new layer guarantees an increase in the lower bound of the log likelihood of the data, thus improving the model, if trained properly. Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an "ancestral pass") from the top-level feature activations.{{cite journal|last2=Osindero|first2=S.|last3=Teh|first3=Y.|year=2006|title=A fast learning algorithm for deep belief nets|url=http://www.cs.toronto.edu/~hinton/absps/fastnc.pdf|journal=Neural Computation|volume=18|issue=7|pages=1527–1554|doi=10.1162/neco.2006.18.7.1527|pmid=16764513|last1=Hinton|first1=G. E.|author-link1=Geoffrey Hinton|citeseerx=10.1.1.76.1541|s2cid=2309950}} Hinton reports that his models are effective feature extractors over high-dimensional, structured data.{{Cite journal|year=2009|title=Deep belief networks|journal=Scholarpedia|volume=4|issue=5|pages=5947|doi=10.4249/scholarpedia.5947|last1=Hinton|first1=G.|bibcode=2009SchpJ...4.5947H|doi-access=free}}

=== Faster hardware ===

Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this "is basically what is winning many of the image recognition competitions now", but that it "does not really overcome the problem in a fundamental way"{{cite journal |last=Schmidhuber |first=Jürgen |title=Deep learning in neural networks: An overview |journal=Neural Networks |volume=61 |year=2015 |pages=85–117 |arxiv=1404.7828 |doi=10.1016/j.neunet.2014.09.003|pmid=25462637 |s2cid=11715509 }} since the original models tackling the vanishing gradient problem by Hinton and others were trained on Xeon processors, not GPUs.

=== Residual connection ===

Residual connections, or skip connections, refer to the architectural motif {{awrap|<math>x \mapsto f(x) + x</math>,}} where <math>f</math> is an arbitrary neural network module. The gradient of this map is <math>\nabla f + I</math>, and the identity term does not suffer from vanishing or exploding gradients. During backpropagation, part of the gradient flows through the residual connections.{{Cite conference |last1=He |first1=Kaiming |last2=Zhang |first2=Xiangyu |last3=Ren |first3=Shaoqing |last4=Sun |first4=Jian |date=2016 |title=Deep Residual Learning for Image Recognition |url=https://ieeexplore.ieee.org/document/7780459 |location=Las Vegas, NV, USA |publisher=IEEE |pages=770–778 |arxiv=1512.03385 |doi=10.1109/CVPR.2016.90 |isbn=978-1-4673-8851-1 |journal=2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}}

Concretely, let the neural network (without residual connections) be <math>f_n \circ f_{n-1} \circ \cdots \circ f_1</math>. With residual connections, the gradient of the output with respect to the activations at layer <math>l</math> is <math>I + \nabla f_{l+1} + \nabla f_{l+2} \nabla f_{l+1} + \cdots</math>. The gradient thus does not vanish in arbitrarily deep networks.
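The following sketch shows the point numerically (illustrative only; the module <math>f</math> is taken to be a single tanh layer with deliberately tiny weights). Even when the Jacobian of <math>f</math> is nearly zero, the Jacobian of the residual block <math>f(x) + x</math> stays close to the identity, so gradients can still pass through.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=n)
W = rng.normal(size=(n, n)) * 0.01        # tiny weights make the Jacobian of f almost zero

def f(x):
    """An arbitrary residual branch; here a single tanh layer."""
    return np.tanh(W @ x)

def residual_block(x):
    return f(x) + x                        # skip connection: output = f(x) + x

pre = W @ x
J_f = np.diag(1.0 - np.tanh(pre) ** 2) @ W   # Jacobian of f at x
J_block = np.eye(n) + J_f                    # Jacobian of the residual block
print(np.linalg.norm(J_f, 2))                # ~0: f alone would block the gradient
print(np.linalg.norm(J_block, 2))            # ~1: the skip path keeps it flowing
</syntaxhighlight>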

Feedforward networks with residual connections can be regarded as an ensemble of relatively shallow networks. From this perspective, they resolve the vanishing gradient problem because shallow networks do not suffer from it.{{cite arXiv |eprint=1605.06431 |class=cs.CV |first1=Andreas |last1=Veit |first2=Michael |last2=Wilber |title=Residual Networks Behave Like Ensembles of Relatively Shallow Networks |date=2016-05-20 |last3=Belongie |first3=Serge}}

=== Other activation functions ===

Rectifiers such as ReLU suffer less from the vanishing gradient problem, because they only saturate in one direction.{{Cite journal|last1=Glorot|first1=Xavier|last2=Bordes|first2=Antoine|last3=Bengio|first3=Yoshua|date=2011-06-14|title=Deep Sparse Rectifier Neural Networks|url=http://proceedings.mlr.press/v15/glorot11a.html|journal=PMLR|pages=315–323|language=en}}
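A quick comparison of the derivatives (illustrative; the sample points are arbitrary) shows why: the logistic sigmoid's derivative vanishes for large inputs of either sign, while the rectifier's derivative is exactly one on the whole positive half-line.

<syntaxhighlight lang="python">
import numpy as np

z = np.array([-10.0, -1.0, 0.5, 1.0, 10.0])

sig = 1.0 / (1.0 + np.exp(-z))
d_sigmoid = sig * (1.0 - sig)        # nearly zero whenever |z| is large
d_relu = (z > 0).astype(float)       # exactly 1 for every positive input

print(d_sigmoid)   # about 4.5e-05 at z = -10 and z = 10, roughly 0.2 near zero
print(d_relu)      # [0. 0. 1. 1. 1.]
</syntaxhighlight>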

=== Weight initialization ===

Weight initialization is another approach that has been proposed to reduce the vanishing gradient problem in deep networks.

Kumar suggested that the distribution of initial weights should vary according to the activation function used, and proposed initializing the weights of networks with the logistic activation function using a Gaussian distribution with zero mean and a standard deviation of <math>3.6/\sqrt{N}</math>, where <math>N</math> is the number of neurons in a layer.{{cite arXiv |last=Kumar |first=Siddharth Krishna |title=On weight initialization in deep neural networks |year=2017 |eprint=1704.08863}}
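A sketch of this initialization follows (interpreting <math>N</math> as the number of incoming connections of the layer being initialized; this reading, and the layer sizes below, are assumptions for illustration):

<syntaxhighlight lang="python">
import numpy as np

def kumar_init(n_in, n_out, rng=None):
    """Zero-mean Gaussian weights with standard deviation 3.6/sqrt(N) for logistic layers."""
    if rng is None:
        rng = np.random.default_rng()
    std = 3.6 / np.sqrt(n_in)            # N taken here as the fan-in of the layer
    return rng.normal(loc=0.0, scale=std, size=(n_out, n_in))

W = kumar_init(256, 128, rng=np.random.default_rng(0))
print(W.mean().round(4), W.std().round(4))   # mean ~0, std ~3.6/sqrt(256) = 0.225
</syntaxhighlight>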

Yilmaz and Poli{{Cite journal |last1=Yilmaz |first1=Ahmet |last2=Poli |first2=Riccardo |date=2022-09-01 |title=Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean |url=https://www.sciencedirect.com/science/article/pii/S0893608022002040 |journal=Neural Networks |language=en |volume=153 |pages=87–103 |doi=10.1016/j.neunet.2022.05.030 |pmid=35714424 |s2cid=249487697 |issn=0893-6080}} performed a theoretical analysis of how gradients are affected by the mean of the initial weights in deep neural networks using the logistic activation function, and found that gradients do not vanish if the mean of the initial weights is set according to the formula <math>\max(-1, -8/N)</math>. This simple strategy allows networks with 10 or 15 hidden layers to be trained very efficiently and effectively using standard backpropagation.
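A sketch of the negative-mean initialization follows (the standard deviation and the reading of <math>N</math> as the layer's fan-in are illustrative assumptions, not taken from the paper):

<syntaxhighlight lang="python">
import numpy as np

def negative_mean_init(n_in, n_out, std=0.1, rng=None):
    """Gaussian weights whose mean is set to max(-1, -8/N) for logistic layers."""
    if rng is None:
        rng = np.random.default_rng()
    mean = max(-1.0, -8.0 / n_in)        # N taken here as the fan-in of the layer
    return rng.normal(loc=mean, scale=std, size=(n_out, n_in))

W = negative_mean_init(100, 100, rng=np.random.default_rng(0))
print(W.mean().round(3))                 # close to -8/100 = -0.08
</syntaxhighlight>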

=== Other ===

Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid{{cite book|url=http://www.ais.uni-bonn.de/books/LNCS2766.pdf|title=Hierarchical Neural Networks for Image Interpretation.|publisher=Springer|year=2003|series=Lecture Notes in Computer Science|volume=2766|author=Sven Behnke}} to solve problems like image reconstruction and face localization.{{Citation needed|date=June 2017}}

Neural networks can also be optimized by using a universal search algorithm on the space of neural network weights, e.g., random guessing or, more systematically, a genetic algorithm. This approach is not based on gradients and avoids the vanishing gradient problem.{{Cite web|url=http://people.idsia.ch/~juergen/fundamentaldeeplearningproblem.html|title=Sepp Hochreiter's Fundamental Deep Learning Problem (1991)|website=people.idsia.ch|access-date=2017-01-07}}


== Notes ==

{{Reflist|group=note}}{{Notelist}}

== References ==

{{reflist|30em}}

{{Use dmy dates|date=August 2019}}

[[Category:Artificial neural networks]]