hidden layer

{{Short description|Type of layer in artificial neural networks}}

[[File:Example of a deep neural network.png|thumb|Example of a deep neural network]]

In artificial neural networks, a hidden layer is a layer of artificial neurons that is neither an input layer nor an output layer. The simplest examples appear in multilayer perceptrons (MLP), as illustrated in the diagram.{{Cite book |last=Zhang |first=Aston |url= |title=Dive into deep learning |last2=Lipton |first2=Zachary |last3=Li |first3=Mu |last4=Smola |first4=Alexander J. |date=2024 |publisher=Cambridge University Press |isbn=978-1-009-38943-3 |location=Cambridge New York Port Melbourne New Delhi Singapore |chapter=5.1. Multilayer Perceptrons |chapter-url=https://d2l.ai/chapter_multilayer-perceptrons/mlp.html}}

An MLP with no hidden layer is essentially just a linear model, since a composition of linear layers is itself a single linear map. Adding hidden layers with nonlinear activation functions introduces nonlinearity, allowing the network to represent nonlinear functions of its input.
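This collapse of stacked linear layers, and how a nonlinear activation prevents it, can be sketched numerically (a minimal illustration with NumPy; the layer sizes and the choice of ReLU are arbitrary assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked linear layers without an activation collapse into one:
# W2 @ (W1 @ x) == (W2 @ W1) @ x for every input x.
W1 = rng.standard_normal((4, 3))  # "hidden" layer weights (illustrative sizes)
W2 = rng.standard_normal((2, 4))  # output layer weights
x = rng.standard_normal(3)

two_layer = W2 @ (W1 @ x)
one_layer = (W2 @ W1) @ x
assert np.allclose(two_layer, one_layer)  # identical: still a linear model

# Inserting a nonlinear activation (here ReLU) between the layers breaks
# the collapse, so the composed function is no longer linear in x.
relu = lambda z: np.maximum(z, 0.0)
nonlinear_out = W2 @ relu(W1 @ x)
```

Any nonlinear activation (sigmoid, tanh, ReLU, etc.) serves the same purpose; ReLU is used here only for brevity.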

In typical machine learning practice, the weights and biases of each layer are initialized (often randomly), then iteratively updated during training by gradient descent, with the gradients computed via backpropagation.
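The initialize-then-update loop can be sketched for a one-hidden-layer network trained by plain gradient descent (a toy example with NumPy; the task, layer width, learning rate, and ReLU activation are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: learn y = x1 * x2 (synthetic data for illustration).
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# Initialize weights and biases: one hidden layer of 16 units, one output unit.
W1 = rng.standard_normal((2, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.1
losses = []
for step in range(2000):
    # Forward pass through the hidden layer and output layer.
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)          # ReLU activation
    pred = h @ W2 + b2
    losses.append(np.mean((pred - y) ** 2))

    # Backward pass: backpropagate the mean-squared-error gradient.
    g_pred = 2.0 * (pred - y) / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (h_pre > 0)           # derivative of ReLU
    g_W1 = X.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # Gradient-descent update of every weight and bias.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In practice the update step is handled by an optimizer (e.g. SGD with momentum, or Adam) and the gradients by an automatic-differentiation framework, but the structure is the same: initialize, forward pass, backpropagate, update.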

References