Convolutional deep belief network

In computer science, a convolutional deep belief network (CDBN) is a type of deep artificial neural network composed of multiple layers of convolutional restricted Boltzmann machines stacked together.{{cite journal|last=Lee|first=Honglak|author2=Roger Grosse|author3=Rajesh Ranganath|author4=Andrew Y. Ng|title=Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations|url=http://people.csail.mit.edu/rgrosse/icml09-cdbn.pdf|access-date=2014-04-01|archive-date=2014-04-07|archive-url=https://web.archive.org/web/20140407092135/http://people.csail.mit.edu/rgrosse/icml09-cdbn.pdf|url-status=live}} It can equivalently be viewed as a hierarchical generative model for deep learning. CDBNs have proven highly effective in image processing and object recognition, and they have also been applied in other domains such as audio classification.{{cite web|last=Lee|first=Honglak|author2=Yan Largman|author3=Peter Pham|author4=Andrew Y. Ng|title=Unsupervised feature learning for audio classification using convolutional deep belief networks|url=https://ai.stanford.edu/~ang/papers/nips09-AudioConvolutionalDBN.pdf|access-date=2019-08-25|archive-date=2023-01-28|archive-url=https://web.archive.org/web/20230128134519/https://ai.stanford.edu/~ang/papers/nips09-AudioConvolutionalDBN.pdf|url-status=live}} The model's salient features are that it scales well to high-dimensional images and is translation-invariant.{{cite web|last=Coviello|first=Emanuele|title=Convolutional Deep Belief Networks|url=http://cseweb.ucsd.edu/~dasgupta/254-deep/emanuele.pdf|access-date=2014-04-01|archive-date=2014-04-07|archive-url=https://web.archive.org/web/20140407082615/http://cseweb.ucsd.edu/~dasgupta/254-deep/emanuele.pdf|url-status=live}}
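A minimal sketch of the convolutional restricted Boltzmann machine that forms each layer is given below. It is illustrative only: the filter count, filter size, and binary (logistic) hidden units are assumptions made for the example rather than the configuration of any particular published model; the key point it shows is that one small filter is shared across all positions of the image, which is what makes the layer convolutional and approximately translation-invariant.

<syntaxhighlight lang="python">
# Illustrative NumPy sketch of one convolutional RBM layer (the CDBN building
# block).  Sizes and the logistic hidden units are example assumptions.
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def hidden_activations(visible, filters, hidden_bias):
    """P(h = 1 | v) for each filter: a sigmoid of the shared filter's
    response at every position plus that filter's bias."""
    pre = np.stack([conv2d_valid(visible, w) + b
                    for w, b in zip(filters, hidden_bias)])
    return 1.0 / (1.0 + np.exp(-pre))

# Toy example: a 16x16 "image" and 4 filters of size 5x5.
visible = rng.random((16, 16))
filters = 0.01 * rng.standard_normal((4, 5, 5))
hidden_bias = np.zeros(4)
probs = hidden_activations(visible, filters, hidden_bias)
print(probs.shape)   # (4, 12, 12): one 12x12 feature map per filter
</syntaxhighlight>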

CDBNs use the technique of probabilistic max-pooling to reduce the dimensionality of the representations in higher layers of the network. Training of the network involves a pre-training stage carried out in a greedy, layer-wise manner, as with other deep belief networks. Depending on whether the network is to be used for discriminative or generative tasks, it is then "fine-tuned" with either back-propagation or the up–down algorithm (a contrastive version of the wake–sleep algorithm), respectively. A sketch of probabilistic max-pooling is given below.
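The following sketch illustrates the idea of probabilistic max-pooling over non-overlapping blocks of a hidden feature map: the detection units in each block compete through a softmax that also includes an "all off" state, and the corresponding pooling unit is active exactly when some detection unit in its block is active. The 2×2 block size and the function and variable names are illustrative assumptions for the example.

<syntaxhighlight lang="python">
# Illustrative sketch of probabilistic max-pooling over non-overlapping
# 2x2 blocks of a hidden feature map.  Block size is an example assumption.
import numpy as np

def probabilistic_max_pool(pre_activation, block=2):
    """Return (detection-unit probabilities, pooling-unit probabilities).

    Within each block, P(h_k = 1) = exp(z_k) / (1 + sum_j exp(z_j)),
    where the extra 1 is the "all off" state; the pooling unit's
    probability is the sum of its block's detection probabilities."""
    H, W = pre_activation.shape
    det = np.zeros((H, W))
    pool = np.zeros((H // block, W // block))
    for bi in range(0, H, block):
        for bj in range(0, W, block):
            z = pre_activation[bi:bi + block, bj:bj + block]
            e = np.exp(z - z.max())              # numerically stable softmax
            denom = e.sum() + np.exp(-z.max())   # extra term: "all off" state
            det[bi:bi + block, bj:bj + block] = e / denom
            pool[bi // block, bj // block] = e.sum() / denom
    return det, pool

# Toy usage: a 12x12 feature map pooled down to 6x6.
rng = np.random.default_rng(1)
det, pool = probabilistic_max_pool(rng.standard_normal((12, 12)))
print(det.shape, pool.shape)   # (12, 12) (6, 6)
</syntaxhighlight>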

References