Stochastic optimization
{{Short description|Optimization method}}
{{about|iterative methods|the modeling (and optimization) of decisions under uncertainty|stochastic programming|the context of control theory|stochastic control}}
Stochastic optimization (SO) is a class of optimization methods that generate and use random variables. In stochastic optimization problems, the objective functions or constraints are themselves random. Stochastic optimization also includes methods with random iterates; some hybrid methods use random iterates to solve stochastic problems, combining both meanings of the term.
{{Cite book
| author = Spall, J. C.
| title = Introduction to Stochastic Search and Optimization
| year = 2003
| publisher = Wiley
| url = http://www.jhuapl.edu/ISSO
| isbn = 978-0-471-33052-3
}}
Stochastic optimization methods generalize deterministic methods for deterministic problems.
Methods for stochastic functions
Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system,
{{cite journal
| author = Fu, M. C.
| title = Optimization for Simulation: Theory vs. Practice
| journal = INFORMS Journal on Computing
| year = 2002
| volume = 14
| pages = 192–227
| doi = 10.1287/ijoc.14.3.192.113
| issue = 3
}}{{cite journal
| author = Campi, M. C.
|author2=Garatti, S.
| title = The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs
| journal = SIAM Journal on Optimization
| year = 2008
| volume = 19
| issue = 3
| pages = 1211–1230
| url = http://epubs.siam.org/siopt/resource/1/sjope8/v19/i3/p1211_s1
}} and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:
- stochastic approximation (SA), by Robbins and Monro (1951)
{{cite journal
| author = Robbins, H.
|author2=Monro, S.
| title = A Stochastic Approximation Method
| journal = Annals of Mathematical Statistics
| year = 1951
| volume = 22
| pages = 400–407
| doi = 10.1214/aoms/1177729586
| issue = 3
| doi-access = free
}}
- finite-difference SA by Kiefer and Wolfowitz (1952)
{{cite journal
| author = J. Kiefer
| author-link = Jack Kiefer (mathematician)
| author2 = J. Wolfowitz
| author2-link = Jacob Wolfowitz
| title = Stochastic Estimation of the Maximum of a Regression Function
| journal = Annals of Mathematical Statistics
| year = 1952
| volume = 23
| pages = 462–466
| doi = 10.1214/aoms/1177729392
| issue = 3
| doi-access = free
}}
- simultaneous perturbation SA by Spall (1992)
{{cite journal
| author = Spall, J. C.
| title = Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation
| journal = IEEE Transactions on Automatic Control
| year = 1992
| volume = 37
| pages = 332–341
| url = http://www.jhuapl.edu/SPSA
| doi = 10.1109/9.119632
| issue = 3
| citeseerx = 10.1.1.19.4562
}}
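As an illustration of the simultaneous perturbation idea, the following is a minimal sketch, not Spall's reference implementation: the `spsa_minimize` name, the gain constants, and the noisy quadratic test function are illustrative assumptions. The key property is that each gradient estimate needs only two noisy loss evaluations, regardless of the dimension of the parameter vector.

```python
import numpy as np

def spsa_minimize(loss, theta, n_iter=1000, a=0.1, c=0.1, seed=0):
    """Minimize a noisy loss with simultaneous perturbation SA (after Spall, 1992).

    All coordinates of the gradient estimate come from just two noisy
    loss evaluations per step, regardless of the dimension of theta.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                # commonly used gain decay exponents
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        # Two-sided difference along the random direction; elementwise division
        # by delta yields the simultaneous perturbation gradient estimate.
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Noisy quadratic: true minimum at (1, -2), measurements corrupted by noise.
rng = np.random.default_rng(1)
noisy = lambda th: np.sum((th - np.array([1.0, -2.0])) ** 2) + 0.01 * rng.standard_normal()
print(spsa_minimize(noisy, np.zeros(2)))  # approaches [1, -2]
```

Despite the crude two-evaluation gradient estimate, the decaying gains average out both the measurement noise and the perturbation cross-terms, which is why SA methods of this family tolerate noisy function values.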
Randomized search methods
{{See also|Metaheuristic}}
On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress. Holger H. Hoos and Thomas Stützle, [http://www.sls-book.net/ Stochastic Local Search: Foundations and Applications], Morgan Kaufmann / Elsevier, 2004. Such randomness can also make the method less sensitive to modeling errors. A further advantage is that randomness in the search process can be used to obtain interval estimates of the minimum of a function via extreme value statistics.
{{cite journal
| author = M. de Carvalho
| title = Confidence intervals for the minimum of a function using extreme value statistics
| journal = International Journal of Mathematical Modelling and Numerical Optimisation
| volume = 2
| year = 2011
| issue = 3
| pages = 288–296
| doi = 10.1504/IJMMNO.2011.040793
| url = https://www.maths.ed.ac.uk/~mdecarv/papers/decarvalho2011.pdf
}}
{{cite journal
| author = M. de Carvalho
| title = A generalization of the Solis-Wets method
| journal = Journal of Statistical Planning and Inference
| volume = 142
| year = 2012
| issue = 3
| pages = 633–644
| doi = 10.1016/j.jspi.2011.08.016
| url = https://www.maths.ed.ac.uk/~mdecarv/papers/decarvalho2012c.pdf
}}
Further, the injected randomness may enable the method to escape a local optimum and eventually to approach a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly across many data sets, for many sorts of problems. Stochastic optimization methods of this kind include:
- simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983)
{{cite journal
| author = S. Kirkpatrick |author2=C. D. Gelatt |author3=M. P. Vecchi
| title = Optimization by Simulated Annealing
| journal = Science
| year = 1983
| volume = 220
| pages = 671–680
| url = http://citeseer.ist.psu.edu/kirkpatrick83optimization.html
| doi = 10.1126/science.220.4598.671
| pmid = 17813860
| issue = 4598
|bibcode = 1983Sci...220..671K |citeseerx=10.1.1.123.7607 |s2cid=205939 }}
- quantum annealing
- Probability Collectives by D.H. Wolpert, S.R. Bieniawski and D.G. Rajnarayan (2011)
{{cite web
| author = D.H. Wolpert |author2=S.R. Bieniawski |author3=D.G. Rajnarayan
| title = Probability Collectives in Optimization
| year = 2011
|website=Santa Fe Institute
| url = http://www.santafe.edu/research/working-papers/abstract/f752fdb9c2b41e4e04947d7531421d61/
}}
- reactive search optimization (RSO) by Roberto Battiti, G. Tecchiolli (1994),
{{cite journal
| last =Battiti
| first =Roberto
|author2=Gianpietro Tecchiolli
| year =1994
| title =The reactive tabu search
| journal =ORSA Journal on Computing
| volume =6
| issue =2
| pages =126–140
| doi =10.1287/ijoc.6.2.126
| url =http://rtm.science.unitn.it/~battiti/archive/TheReactiveTabuSearch.PDF
}} recently reviewed in the reference book
{{cite book
|title=Reactive Search and Intelligent Optimization
|last=Battiti
|first=Roberto
|author2=Mauro Brunato |author3=Franco Mascia
|year=2008
|publisher=Springer Verlag
|isbn=978-0-387-09623-0
}}
- cross-entropy method by Rubinstein and Kroese (2004)
{{Cite book
| author = Rubinstein, R. Y.
| author-link = Reuven Rubinstein
| author2 = Kroese, D. P.
| author2-link = Dirk Kroese
| title = The Cross-Entropy Method
| year = 2004
| publisher = Springer-Verlag
| isbn = 978-0-387-21240-1
}}
- random search by Anatoly Zhigljavsky (1991)
{{Cite book
| author = Zhigljavsky, A. A.
| title = Theory of Global Random Search
| year = 1991
| publisher = Kluwer Academic
| isbn = 978-0-7923-1122-5
}}
- Informational search {{cite journal
|title=A Group-Testing Algorithm with Online Informational Learning
|author= Kagan E. |author2=Ben-Gal I.
|journal=IIE Transactions |volume=46 |issue=2 |pages=164–184
|year=2014
|doi=10.1080/0740817X.2013.803639
|s2cid= 18588494 }}
- stochastic tunneling
{{cite journal
| author = W. Wenzel
|author2=K. Hamacher
| title = Stochastic tunneling approach for global optimization of complex potential energy landscapes
| journal = Phys. Rev. Lett.
| volume = 82
| year = 1999
| pages = 3003
| doi = 10.1103/PhysRevLett.82.3003
| bibcode=1999PhRvL..82.3003W
| issue = 15
|arxiv = physics/9903008 |s2cid=5113626
}}
- parallel tempering a.k.a. replica exchange
{{cite journal
| author = E. Marinari
|author2=G. Parisi
| title = Simulated tempering: A new monte carlo scheme
| journal = Europhys. Lett.
| volume = 19
| year = 1992
| pages = 451–458
| doi = 10.1209/0295-5075/19/6/002
| issue = 6
|arxiv = hep-lat/9205018 |bibcode = 1992EL.....19..451M |s2cid=12321327
}}
- stochastic hill climbing
- swarm algorithms
- evolutionary algorithms
- genetic algorithms by Holland (1975)
{{Cite book
|author = Goldberg, D. E.
|title = Genetic Algorithms in Search, Optimization, and Machine Learning
|year = 1989
|publisher = Addison-Wesley
|url = http://www-illigal.ge.uiuc.edu
|isbn = 978-0-201-15767-3
|archive-url = https://web.archive.org/web/20060719133933/http://www-illigal.ge.uiuc.edu/
|archive-date = 2006-07-19
}}
- evolution strategies
- cascade object optimization & modification algorithm (2016)
{{cite journal
| author = Tavridovich, S. A.
| title = COOMA: an object-oriented stochastic optimization algorithm
| journal = International Journal of Advanced Studies
| volume = 7
| year = 2017
| pages = 26–47
| issue = 2
| url = http://journal-s.org/index.php/ijas/article/view/10121/pdf
| doi=10.12731/2227-930x-2017-2-26-47
| doi-access = free
}}
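The escape-from-local-optima behavior described above is easiest to see in simulated annealing's acceptance rule: an improving neighbor is always accepted, while a worsening one is accepted with probability exp(-Δ/T), so uphill moves remain possible while the temperature T is high. The sketch below is a minimal illustrative example, not taken from the sources above; the function names, the logarithmic cooling schedule, and the 1-D test function are assumptions.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, n_iter=20000, t0=1.0, seed=0):
    """Minimize f by simulated annealing (after Kirkpatrick et al., 1983).

    A random neighbor is always accepted if it improves f, and accepted
    with probability exp(-delta/T) otherwise, so the search can escape
    local minima while the temperature T is still high.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, n_iter + 1):
        t = t0 / math.log(k + 1)            # slowly decreasing temperature
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                   # move, possibly uphill
            if fx < fbest:
                best, fbest = x, fx         # track the best point seen
    return best, fbest

# A 1-D function with many local minima; the global minimum is near x = -0.5.
f = lambda x: x * x + 10 * math.sin(3 * x) + 10
step = lambda x, rng: x + rng.gauss(0.0, 1.0)
print(simulated_annealing(f, 5.0, step))
```

A greedy descent from x = 5 would stall in the nearest local minimum; the stochastic acceptance rule lets the chain cross the intervening barriers while T is large and then settle as T decays.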
In contrast, some authors have argued that randomization can only improve a deterministic algorithm if the deterministic algorithm was poorly designed in the first place.{{cite web | url=http://lesswrong.com/lw/vp/worse_than_random/ | title=Worse Than Random - LessWrong | last1=Yudkowsky | first1=Eliezer | date=11 November 2008 }}
{{cite journal
| author = Glover, F.
| year = 2007
| title = Tabu search—uncharted domains
| journal = Annals of Operations Research
| volume = 149
| pages = 89–98
| doi=10.1007/s10479-006-0113-9
| citeseerx = 10.1.1.417.8223
| s2cid = 6854578
}} argues that reliance on random elements may prevent the development of more intelligent and better deterministic components. The way in which results of stochastic optimization algorithms are usually presented (e.g., presenting only the average, or even the best, out of N runs without any mention of the spread), may also result in a positive bias towards randomness.
See also
References
{{Reflist|30em}}
Further reading
- Michalewicz, Z. and Fogel, D. B. (2000), [https://books.google.com/books?id=MpKqCAAAQBAJ&q=%22stochastic%22 How to Solve It: Modern Heuristics], Springer-Verlag, New York.