Backpropagation through time

{{short description|Technique for training recurrent neural networks}}

{{Redirect|BPTT|the running events originally known as the Bushy Park Time Trial|parkrun}}

'''Backpropagation through time''' ('''BPTT''') is a gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was independently derived by numerous researchers.{{Cite book |last=Mozer |first=M. C. |year=1995 |chapter=A Focused Backpropagation Algorithm for Temporal Pattern Recognition |chapter-url=https://www.researchgate.net/publication/243781476 |editor1-first=Y. |editor1-last=Chauvin |editor2-first=D. |editor2-last=Rumelhart |title=Backpropagation: Theory, architectures, and applications |publisher=Lawrence Erlbaum Associates |location=Hillsdale, NJ |pages=137–169 |via=ResearchGate |access-date=2017-08-21}}{{cite tech report |author1=Robinson, A. J. |author2=Fallside, F. |name-list-style=amp |title=The utility driven dynamic error propagation network |institution=Cambridge University, Engineering Department |number=CUED/F-INFENG/TR.1 |year=1987 |url=https://www.bibsonomy.org/bibtex/269a88ecbac9a51cbf0b4be189c412820/idsia}}{{Cite journal |last=Werbos |first=Paul J. |year=1988 |title=Generalization of backpropagation with application to a recurrent gas market model |journal=Neural Networks |volume=1 |issue=4 |pages=339–356 |doi=10.1016/0893-6080(88)90007-x |url=https://zenodo.org/record/1258627}}

== Algorithm ==

[[Image:Unfold through time.png|thumb|Unfolding a recurrent neural network through time]]

The training data for a recurrent neural network is an ordered sequence of <math>k</math> input–output pairs, <math>\langle \mathbf{a}_0,\mathbf{y}_0 \rangle, \langle \mathbf{a}_1,\mathbf{y}_1 \rangle, \langle \mathbf{a}_2,\mathbf{y}_2 \rangle, \ldots, \langle \mathbf{a}_{k-1},\mathbf{y}_{k-1} \rangle</math>. An initial value must be specified for the hidden state <math>\mathbf{x}_0</math>, typically chosen to be a zero vector.

BPTT begins by unfolding a recurrent neural network in time. The unfolded network contains <math>k</math> inputs and outputs, but every copy of the network shares the same parameters. Then the backpropagation algorithm is used to find the gradient of the loss function with respect to all the network parameters.

Consider an example of a neural network that contains a recurrent layer <math>f</math> and a feedforward layer <math>g</math>. There are different ways to define the training cost, but the aggregated cost is always the average of the costs of each of the time steps. The cost of each time step can be computed separately. The figure above shows how the cost at time <math>t + 3</math> can be computed, by unfolding the recurrent layer <math>f</math> for three time steps and adding the feedforward layer <math>g</math>. Each instance of <math>f</math> in the unfolded network shares the same parameters. Thus, the weight updates in each instance (<math>f_1, f_2, f_3</math>) are summed together.
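The unfolded computation and the summing of the shared-weight gradients can be illustrated with a short, self-contained sketch (not part of the original presentation). It assumes one simple, illustrative choice for <math>f</math> and <math>g</math>: a tanh recurrent layer, a linear readout, and a squared-error cost at the final unfolded step; the variable names and sizes are placeholders only.

<syntaxhighlight lang="python">
# Minimal sketch of BPTT for the f/g example above (illustrative assumptions:
# f(x, a) = tanh(W_x x + W_a a), g(x) = W_g x, cost C = 0.5*||p - y||^2 at the last step).
import numpy as np

rng = np.random.default_rng(0)
H, I, O, K = 4, 3, 2, 3                    # hidden, input, output sizes; K unfolded steps

W_x = rng.normal(scale=0.1, size=(H, H))   # recurrent weights, shared by every copy of f
W_a = rng.normal(scale=0.1, size=(H, I))   # input weights, shared by every copy of f
W_g = rng.normal(scale=0.1, size=(O, H))   # feedforward layer g

def bptt_gradients(a_seq, y_target, x0):
    """Unfold f for K steps, apply g, and backpropagate the squared error."""
    # Forward pass through the unfolded network.
    xs = [x0]
    for t in range(K):
        xs.append(np.tanh(W_x @ xs[-1] + W_a @ a_seq[t]))
    p = W_g @ xs[-1]                        # prediction after the final unfolded step
    grad_p = p - y_target                   # dC/dp for C = 0.5*||p - y||^2

    # Backward pass: walk back through the copies f_K, ..., f_1.
    dW_g = np.outer(grad_p, xs[-1])
    dW_x = np.zeros_like(W_x)
    dW_a = np.zeros_like(W_a)
    dx = W_g.T @ grad_p                     # gradient flowing into the last hidden state
    for t in reversed(range(K)):
        dz = dx * (1.0 - xs[t + 1] ** 2)    # tanh'(z) = 1 - tanh(z)^2
        dW_x += np.outer(dz, xs[t])         # gradients of the shared weights are summed
        dW_a += np.outer(dz, a_seq[t])      # across all K instances of f
        dx = W_x.T @ dz                     # pass the gradient to the previous time step
    return dW_x, dW_a, dW_g

grads = bptt_gradients(rng.normal(size=(K, I)), rng.normal(size=O), np.zeros(H))
</syntaxhighlight>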

== Pseudocode ==

Below is pseudocode for a truncated version of BPTT, where the training data contains <math>n</math> input–output pairs and the network is unfolded for <math>k</math> time steps:

 Back_Propagation_Through_Time(a, y)   // a[t] is the input at time t. y[t] is the output
     Unfold the network to contain k instances of f
     do until stopping criterion is met:
         x := the zero-magnitude vector   // x is the current context
         for t from 0 to n − k do         // t is time. n is the length of the training sequence
             Set the network inputs to x, a[t], a[t+1], ..., a[t+k−1]
             p := forward-propagate the inputs over the whole unfolded network
             e := y[t+k] − p;             // error = target − prediction
             Back-propagate the error, e, back across the whole unfolded network
             Sum the weight changes in the k instances of f together.
             Update all the weights in f and g.
             x := f(x, a[t]);             // compute the context for the next time-step
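A minimal Python transcription of this pseudocode follows, under the same illustrative assumptions as the sketch in the Algorithm section (tanh recurrent layer <math>f</math>, linear readout <math>g</math>, squared-error cost). The data, learning rate, and fixed epoch count are placeholders for the unspecified training set and stopping criterion.

<syntaxhighlight lang="python">
# Illustrative transcription of the truncated-BPTT pseudocode above.
# Shapes, data, learning rate, and epoch count are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(1)
H, I, O, k, lr = 4, 3, 2, 3, 0.01
n = 20
a = rng.normal(size=(n, I))                  # a[t] is the input at time t
y = rng.normal(size=(n, O))                  # y[t] is the target output at time t

W_x = rng.normal(scale=0.1, size=(H, H))     # recurrent layer f (weights shared by all k instances)
W_a = rng.normal(scale=0.1, size=(H, I))
W_g = rng.normal(scale=0.1, size=(O, H))     # feedforward layer g

for epoch in range(100):                     # stand-in for "until stopping criterion is met"
    x = np.zeros(H)                          # x is the current context
    for t in range(n - k):                   # keeps every index t+k inside the sequence
        # Forward-propagate x, a[t], ..., a[t+k-1] over the k unfolded instances of f.
        xs = [x]
        for j in range(k):
            xs.append(np.tanh(W_x @ xs[-1] + W_a @ a[t + j]))
        p = W_g @ xs[-1]
        e = y[t + k] - p                     # error = target - prediction
        grad_p = -e                          # dC/dp for C = 0.5*||e||^2

        # Back-propagate the error across the unfolded network,
        # summing the weight changes of the k instances of f.
        dW_g = np.outer(grad_p, xs[-1])
        dW_x, dW_a = np.zeros_like(W_x), np.zeros_like(W_a)
        dx = W_g.T @ grad_p
        for j in reversed(range(k)):
            dz = dx * (1.0 - xs[j + 1] ** 2)
            dW_x += np.outer(dz, xs[j])
            dW_a += np.outer(dz, a[t + j])
            dx = W_x.T @ dz

        # Update all the weights in f and g.
        W_x -= lr * dW_x
        W_a -= lr * dW_a
        W_g -= lr * dW_g

        x = np.tanh(W_x @ x + W_a @ a[t])    # context for the next time step
</syntaxhighlight>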

== Advantages ==

BPTT tends to be significantly faster for training recurrent neural networks than general-purpose optimization techniques such as evolutionary optimization.{{Cite journal|last1=Sjöberg|first1=Jonas|last2=Zhang|first2=Qinghua|last3=Ljung|first3=Lennart|last4=Benveniste|first4=Albert|last5=Delyon|first5=Bernard|last6=Glorennec|first6=Pierre-Yves|last7=Hjalmarsson|first7=Håkan|author7-link= Håkan Hjalmarsson |last8=Juditsky|first8=Anatoli|title=Nonlinear black-box modeling in system identification: a unified overview|journal=Automatica|volume=31|issue=12|pages=1691–1724|doi=10.1016/0005-1098(95)00120-8|year=1995|citeseerx=10.1.1.27.81}}

== Disadvantages ==

BPTT has difficulty with local optima. With recurrent neural networks, local optima are a much more significant problem than with feed-forward neural networks.{{Cite book |author=M.P. Cuéllar and M. Delgado and M.C. Pegalajar |year=2006 |chapter=An Application of Non-Linear Programming to Train Recurrent Neural Networks in Time Series Prediction Problems |title=Enterprise Information Systems VII |publisher=Springer Netherlands |pages=95–102 |doi=10.1007/978-1-4020-5347-4_11 |isbn=978-1-4020-5323-8}} The recurrent feedback in such networks tends to create chaotic responses in the error surface, which cause local optima to occur frequently and in poor locations on the error surface.

{{Disputed section|BPTT Disadvantages Accuracy.|date=September 2021}}

== See also ==

== References ==

{{Reflist}}

{{DEFAULTSORT:Backpropagation Through Time}}

[[Category:Artificial neural networks]]