Random subspace method

{{Short description|Method in machine learning}}

In machine learning, the random subspace method,{{cite journal |first=Tin Kam |last=Ho |title=The Random Subspace Method for Constructing Decision Forests |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |year=1998 |volume=20 |issue=8 |pages=832–844 |doi=10.1109/34.709601 |s2cid=206420153 |url=https://pdfs.semanticscholar.org/b41d/0fa5fdaadd47fc882d3db04277d03fb21832.pdf |archive-url=https://web.archive.org/web/20190514121243/https://pdfs.semanticscholar.org/b41d/0fa5fdaadd47fc882d3db04277d03fb21832.pdf |url-status=dead |archive-date=2019-05-14 }} also called attribute bagging{{cite journal |first=R. |last=Bryll |title=Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets |journal=Pattern Recognition | year=2003 | volume=36 | issue=6 | pages=1291–1302 | doi = 10.1016/s0031-3203(02)00121-8 |bibcode=2003PatRe..36.1291B }} or feature bagging, is an ensemble learning method that attempts to reduce the correlation between estimators in an ensemble by training them on random samples of features instead of the entire feature set.

== Motivation ==

In ensemble learning one tries to combine the models produced by several learners into an ensemble that performs better than the original learners. One way of combining learners is bootstrap aggregating or bagging, which shows each learner a randomly sampled subset of the training points so that the learners will produce different models that can be sensibly averaged.{{efn|If each learner follows the same, deterministic, algorithm, the models produced are necessarily all the same.}} In bagging, one samples training points with replacement from the full training set.

The random subspace method is similar to bagging except that the features ("attributes", "predictors", "independent variables") are randomly sampled, with replacement, for each learner. Informally, this keeps individual learners from over-focusing on features that appear highly predictive or descriptive in the training set but fail to be as predictive for points outside that set. For this reason, random subspaces are an attractive choice for high-dimensional problems where the number of features is much larger than the number of training points, such as learning from fMRI data{{cite journal|last=Kuncheva|first=Ludmila|author-link=Ludmila Kuncheva|year=2010|title=Random Subspace Ensembles for fMRI Classification|url=http://pages.bangor.ac.uk/~mas00a/papers/lkjrcpdlsjtmi10.pdf|journal=IEEE Transactions on Medical Imaging|volume=29|issue=2|pages=531–542|doi=10.1109/TMI.2009.2037756|pmid=20129853 |display-authors=etal|citeseerx=10.1.1.157.1178}} or gene expression data.{{cite journal | last1 = Bertoni | first1 = Alberto | last2 = Folgieri | first2 = Raffaella | last3 = Valentini | first3 = Giorgio | year = 2005 | title = Bio-molecular cancer prediction with random subspace ensembles of support vector machines | url = https://air.unimi.it/bitstream/2434/9370/4/rs-nc.pdf| journal = Neurocomputing | volume = 63 | pages = 535–539 | doi=10.1016/j.neucom.2004.07.007| hdl = 2434/9370 | hdl-access = free }}

The random subspace method has been used for decision trees; when combined with "ordinary" bagging of decision trees, the resulting models are called random forests.{{cite conference |first=Tin Kam |last=Ho|title=Random Decision Forest|conference=Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14–16 August 1995|year=1995 |pages=278–282|url=http://ect.bell-labs.com/who/tkh/publications/papers/odt.pdf}} It has also been applied to linear classifiers,{{cite journal |first=Marina |last=Skurichina |title=Bagging, boosting and the random subspace method for linear classifiers |journal=Pattern Analysis and Applications |year=2002 |volume=5 |issue=2 |pages=121–135 |doi=10.1007/s100440200011}} support vector machines,{{cite journal |first=D. |last=Tao |title=Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval | journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=28 |issue=7 |pages=1088–99 | year=2006|doi=10.1109/tpami.2006.134 |pmid=16792098 |url=http://eprints.bbk.ac.uk/443/1/Binder1.pdf }} nearest neighbours{{cite book |first=Tin Kam | last=Ho | title=Advances in Pattern Recognition | chapter=Nearest neighbors in random subspaces | series=Lecture Notes in Computer Science | volume=1451 |year=1998 |pages=640–648 | doi=10.1007/BFb0033288 | isbn=978-3-540-64858-1 }}{{cite conference |first=G. | last=Tremblay |title=Optimizing Nearest Neighbour in Random Subspaces using a Multi-Objective Genetic Algorithm | conference=17th International Conference on Pattern Recognition |year=2004 |pages=208–211 | url=http://nguyendangbinh.org/Proceedings/ICPR/2004/DATA/V12_3_08.PDF| doi=10.1109/ICPR.2004.1334060 | isbn=978-0-7695-2128-2 }} and other types of classifiers. This method is also applicable to one-class classifiers.{{cite journal |first=L. |last=Nanni |title=Experimental comparison of one-class classifiers for online signature verification |journal=Neurocomputing |year=2006 |volume=69 |issue=7 |pages=869–873 |doi=10.1016/j.neucom.2005.06.007}}{{Cite book|title=Multiple Classifier Systems|last1=Cheplygina|first1=Veronika|last2=Tax|first2=David M. J.|chapter=Pruned Random Subspace Method for One-Class Classifiers |date=2011-06-15|publisher=Springer Berlin Heidelberg|isbn=9783642215568|editor-last=Sansone|editor-first=Carlo|series=Lecture Notes in Computer Science|volume=6713 |pages=96–105|language=en|doi=10.1007/978-3-642-21557-5_12|editor-last2=Kittler|editor-first2=Josef|editor-last3=Roli|editor-first3=Fabio}} The random subspace method has also been applied to the portfolio selection problem,{{Cite web|last=Varadi|first=David|date=2013|title=Random Subspace Optimization (RSO)|url=https://cssanalytics.wordpress.com/2013/10/06/random-subspace-optimization-rso/|website=CSS Analytics}}{{Cite web|last=Gillen|first=Ben|date=2016|title=Subset Optimization for Asset Allocation|url=https://authors.library.caltech.edu/79336/|website=CaltechAUTHORS}}{{citation| last1 = Shen | first1 = Weiwei | last2 = Wang | first2 = Jun | journal = Proceedings of the AAAI Conference on Artificial Intelligence | url = http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14443/13945| date = 2017 | title = Portfolio Selection via Subset Resampling| volume = 31 | doi = 10.1609/aaai.v31i1.10728 }}{{citation| last1 = Shen | first1 = Weiwei | last2 = Wang | first2 = Bin | last3 = Pu | first3 = Jian|last4 = Wang | first4 = Jun | journal = Proceedings of the AAAI Conference on Artificial Intelligence | url = https://www.aaai.org/ojs/index.php/AAAI/article/view/3906/3784| date = 2019 | title = The Kelly Growth Optimal Portfolio with Ensemble Learning| volume = 33 | pages = 1134–1141 | doi = 10.1609/aaai.v33i01.33011134 }} where it has been shown to outperform the conventional resampled portfolio, which is essentially based on bagging.

To tackle high-dimensional sparse problems, a framework named [https://www.jmlr.org/papers/v22/20-600.html Random Subspace Ensemble (RaSE)]{{Cite journal|last1=Tian|first1=Ye|last2=Feng|first2=Yang|date=2021|title=RaSE: Random Subspace Ensemble Classification|url=http://jmlr.org/papers/v22/20-600.html|journal=Journal of Machine Learning Research|volume=22|issue=45|pages=1–93|issn=1533-7928}} was developed. RaSE combines weak learners trained in random subspaces using a two-layer structure and an iterative process.{{Cite web|last1=Tian|first1=Ye|last2=Feng|first2=Yang|date=2021|title=R Package "RaSEn": Random Subspace Ensemble Classification and Variable Screening|url=https://cran.r-project.org/web/packages/RaSEn/index.html|website=CRAN}} RaSE has been shown to have appealing theoretical properties and good practical performance.

== Algorithm ==

An ensemble of models employing the random subspace method can be constructed using the following algorithm:

  1. Let the number of training points be N and the number of features in the training data be D.
  2. Let L be the number of individual models in the ensemble.
  3. For each individual model l, choose d{{sub|l}} (d{{sub|l}} < D) to be the number of features used by l. It is common to use a single value of d{{sub|l}} for all the individual models.
  4. For each individual model l, create a training set by choosing d{{sub|l}} of the D features (sampled with replacement) and train the model on it.

To apply the ensemble model to an unseen point, each individual model is evaluated on the features it was trained on, and the outputs of the L individual models are combined by majority voting or by combining the posterior probabilities.
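A minimal sketch of this procedure in Python is shown below. It assumes scikit-learn-style estimators with <code>fit</code> and <code>predict</code> methods and integer class labels; the function names and default values are illustrative only, and each subspace here consists of distinct features.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def fit_random_subspace_ensemble(X, y, base=None, L=10, d=None, seed=None):
    """Train L copies of a base estimator, each on a random subset of d features."""
    rng = np.random.default_rng(seed)
    base = base if base is not None else DecisionTreeClassifier()
    D = X.shape[1]
    d = d if d is not None else max(1, D // 2)  # number of features per model (d < D)
    models = []
    for _ in range(L):
        # Pick d of the D features for this model (distinct features in this sketch).
        feats = rng.choice(D, size=d, replace=False)
        model = clone(base).fit(X[:, feats], y)
        models.append((feats, model))
    return models

def predict_random_subspace_ensemble(models, X):
    """Combine the individual models by majority voting over integer class labels."""
    votes = np.stack([model.predict(X[:, feats]) for feats, model in models])
    # For each point, return the class label that receives the most votes.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes.astype(int))
</syntaxhighlight>

A comparable ensemble can also be built with scikit-learn's <code>BaggingClassifier</code> by setting <code>bootstrap=False</code> and <code>max_features</code> to a value below 1.0, so that only the features, and not the training points, are resampled.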

== Footnotes ==

{{notelist}}

== References ==

{{Reflist|30em}}

{{DEFAULTSORT:Random Subspace Method}}

[[Category:Classification algorithms]]

[[Category:Ensemble learning]]