Affine scaling
{{Short description|Algorithm for solving linear programming problems}}
[[File:Karmarkar.svg|thumb|Affine scaling plots a trajectory of points strictly inside the feasible region of a linear program (as opposed to the simplex algorithm, which walks the corners of the feasible region).]]
In mathematical optimization, affine scaling is an algorithm for solving linear programming problems. Specifically, it is an interior point method, discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s.
History
Affine scaling has a history of multiple discovery. It was first published by I. I. Dikin at the Siberian Energy Institute of the USSR Academy of Sciences (now the Energy Systems Institute of the Russian Academy of Sciences) in a 1967 paper in Doklady Akademii Nauk SSSR, followed by a proof of its convergence in 1974.{{r|vanderbei90}} Dikin's work went largely unnoticed until the 1984 discovery of Karmarkar's algorithm, the first practical polynomial-time algorithm for linear programming. The importance and complexity of Karmarkar's method prompted mathematicians to search for a simpler version.
Several groups then independently came up with a variant of Karmarkar's algorithm. E. R. Barnes at IBM,{{cite journal |first=Earl R. |last=Barnes |title=A variation on Karmarkar's algorithm for solving linear programming problems |journal=Mathematical Programming |volume=36 |issue=2 |pages=174–182 |year=1986 |doi=10.1007/BF02592024|s2cid=27590019 }} a team led by R. J. Vanderbei at AT&T,{{cite journal |doi=10.1007/BF01840454 |first1=Robert J. |last1=Vanderbei |last2=Meketon |first2=Marc S. |last3=Freedman |first3=Barry A. |year=1986 |title=A Modification of Karmarkar's Linear Programming Algorithm |journal=Algorithmica |volume=1 |issue=1–4 |pages=395–407 |s2cid=779577 |url=https://www.princeton.edu/~rvdb/tex/myPapers/VanderbeiMeketonFreedman.pdf}} and several others replaced the projective transformations that Karmarkar used by affine ones. After a few years, it was realized that the "new" affine scaling algorithms were in fact reinventions of the decades-old results of Dikin.{{cite conference
| last1 = Vanderbei | first1 = R. J.
| last2 = Lagarias | first2 = J. C.
| contribution = I. I. Dikin's convergence result for the affine-scaling algorithm
| doi = 10.1090/conm/114/1097868
| mr = 1097868
| pages = 109–119
| publisher = American Mathematical Society | location = Providence, RI
| series = Contemporary Mathematics
| title = Mathematical developments arising from linear programming (Brunswick, ME, 1988)
| volume = 114
| year = 1990| isbn = 978-0-8218-5121-0
}}{{cite journal |first1=D. A. |last1=Bayer |first2=J. C. |last2=Lagarias |title=The nonlinear geometry of linear programming I: Affine and projective scaling trajectories |journal=Transactions of the American Mathematical Society |volume=314 |issue=2 |year=1989 |url=https://www.ams.org/journals/tran/1989-314-02/S0002-9947-1989-1005525-6/S0002-9947-1989-1005525-6.pdf |doi=10.1090/S0002-9947-1989-1005525-6 |pages=499|doi-access=free }} Of the re-discoverers, only Barnes and Vanderbei et al. managed to produce an analysis of affine scaling's convergence properties. Karmarkar, who had also arrived at affine scaling in this timeframe, mistakenly believed that it converged as quickly as his own algorithm.{{cite book |first=Robert J. |last=Vanderbei |title=Linear Programming: Foundations and Extensions |year=2001 |publisher=Springer Verlag |pages=333–347}}{{rp|346}}
Algorithm
Affine scaling works in two phases, the first of which finds a feasible point from which to start optimizing, while the second does the actual optimization while staying strictly inside the feasible region.
Both phases solve linear programs in equality form, viz.
:minimize {{math|c ⋅ x}}
:subject to {{math|Ax {{=}} b}}, {{math|x ≥ 0}}.
These problems are solved using an iterative method, which conceptually proceeds by plotting a trajectory of points strictly inside the feasible region of a problem, computing projected gradient descent steps in a re-scaled version of the problem, then scaling the step back to the original problem. The scaling ensures that the algorithm can continue to do large steps even when the point under consideration is close to the feasible region's boundary.{{r|lpfe}}{{rp|337}}
Formally, the iterative method at the heart of affine scaling takes as inputs {{mvar|A}}, {{mvar|b}}, {{mvar|c}}, an initial guess {{math|x0 > 0}} that is strictly feasible (i.e., {{math|Ax0 {{=}} b}}), a tolerance {{mvar|ε}}, and a step size {{mvar|β}}. It then proceeds by iterating{{r|vanderbei90}}{{rp|111}}
- Let {{mvar|Dk}} be the diagonal matrix with {{mvar|xk}} on its diagonal.
- Compute a vector of dual variables:
- :<math>w_k = (A D_k^2 A^\mathsf{T})^{-1} A D_k^2 c.</math>
- Compute a vector of reduced costs, which measure the slackness of inequality constraints in the dual:
- :<math>r_k = c - A^\mathsf{T} w_k.</math>
- If <math>r_k \ge 0</math> and <math>e^\mathsf{T} D_k r_k < \varepsilon</math>, where {{mvar|e}} is the all-ones vector, the current solution {{math|xk}} is {{mvar|ε}}-optimal.
- If <math>-D_k^2 r_k \ge 0</math>, the problem is unbounded.
- Update <math>x_{k+1} = x_k - \beta \frac{D_k^2 r_k}{\lVert D_k r_k \rVert}.</math>
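The iteration above can be sketched in a few lines of NumPy. This is a minimal illustration, not any of the published implementations; the function name, defaults, and example problem are ours, and the step size {{math|β {{=}} 0.5}} is chosen inside the proven convergence range:

```python
import numpy as np

def affine_scaling(A, b, c, x0, beta=0.5, eps=1e-9, max_iter=1000):
    """Sketch of Dikin's affine scaling for: minimize c.x s.t. Ax = b, x >= 0.

    x0 must be strictly feasible (A @ x0 == b and x0 > 0); beta <= 2/3 is
    the range for which Vanderbei's variant is proven to converge.
    """
    x = np.asarray(x0, dtype=float)
    assert np.allclose(A @ x, b), "x0 must satisfy A x0 = b"
    for _ in range(max_iter):
        D2 = x * x                              # diagonal of D_k^2 as a vector
        # Dual estimate: w = (A D^2 A^T)^{-1} A D^2 c
        w = np.linalg.solve((A * D2) @ A.T, A @ (D2 * c))
        r = c - A.T @ w                         # reduced costs
        if np.all(r >= 0) and x @ r < eps:      # e^T D_k r_k = x . r
            return x                            # eps-optimal
        if np.all(-D2 * r >= 0):
            raise ValueError("problem is unbounded")
        x = x - beta * (D2 * r) / np.linalg.norm(x * r)
    return x
```

For example, minimizing {{math|x1 + 2x2}} subject to {{math|x1 + x2 {{=}} 1}}, {{math|x ≥ 0}} from the interior point {{math|(0.5, 0.5)}} converges to the vertex {{math|(1, 0)}}.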
Initialization
Phase I, the initialization, solves an auxiliary problem with an additional variable {{mvar|u}} and uses the result to derive an initial point for the original problem. Let {{math|x0}} be an arbitrary, strictly positive point; it need not be feasible for the original problem. The infeasibility of {{math|x0}} is measured by the vector
:<math>v = b - A x_0.</math>
If {{math|v {{=}} 0}}, {{math|x0}} is feasible. If it is not, phase I solves the auxiliary problem
:minimize {{math|u}}
:subject to {{math|Ax + uv {{=}} b}}, {{math|x ≥ 0}}, {{math|u ≥ 0}}.
This problem has the right form for solution by the above iterative algorithm,{{efn|The structure in the auxiliary problem permits some simplification of the formulas.{{r|lpfe}}{{rp|344}}}} and
:<math>(x, u) = (x_0, 1)</math>
is a feasible initial point for it. Solving the auxiliary problem gives
:<math>(x^*, u^*).</math>
If {{math|u* {{=}} 0}}, then {{math|x*}} is feasible in the original problem (though not necessarily strictly interior), while if {{math|u* > 0}}, the original problem is infeasible.{{r|lpfe}}{{rp|343}}
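The construction of the auxiliary problem can be sketched as follows (a minimal NumPy illustration; the matrix {{mvar|A}}, right-hand side {{mvar|b}}, and guess {{math|x0}} are arbitrary example data):

```python
import numpy as np

# Hypothetical example data: we seek x >= 0 with Ax = b.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 5.0])

x0 = np.ones(2)            # arbitrary strictly positive starting guess
v = b - A @ x0             # infeasibility of x0; v = 0 means x0 is feasible

# Auxiliary problem: minimize u subject to [A | v](x, u) = b, x >= 0, u >= 0.
A_aux = np.hstack([A, v[:, None]])
c_aux = np.append(np.zeros(2), 1.0)   # objective picks out u
z0 = np.append(x0, 1.0)               # the point (x0, 1)

# (x0, 1) is feasible for the auxiliary problem by construction:
# A x0 + 1 * (b - A x0) = b.
print(np.allclose(A_aux @ z0, b))     # True
```

The auxiliary problem is then handed to the phase-II iteration, and the last component of its solution is {{math|u*}}.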
Analysis
While easy to state, affine scaling has proven hard to analyze. Its convergence depends on the step size, {{mvar|β}}. For step sizes {{math|{{Var|β}} ≤ {{sfrac|2|3}}}}, Vanderbei's variant of affine scaling has been proven to converge, while for {{math|{{Var|β}} > 0.995}}, an example problem is known that converges to a suboptimal value.{{r|lpfe}}{{rp|342}} Other variants of the algorithm have been shown to exhibit chaotic behavior even on small problems when {{math|{{Var|β}} > {{sfrac|2|3}}}}.{{cite journal |first1=H. |last1=Bruin |first2=R.J. |last2=Fokkink |first3=G. |last3=Gu |first4=C. |last4=Roos |title=On the chaotic behavior of the primal–dual affine–scaling algorithm for linear optimization |journal=Chaos |year=2014 |doi=10.1063/1.4902900 |pmid=25554052 |url=https://www.mat.univie.ac.at/~bruin/papers/chaosopt.pdf |volume=24 |issue=4 |pages=043132|arxiv=1409.6108 |bibcode=2014Chaos..24d3132B |s2cid=21505124 }}{{cite journal |first1=Ileana |last1=Castillo |first2=Earl R. |last2=Barnes |title=Chaotic Behavior of the Affine Scaling Algorithm for Linear Programming |year=2006 |journal=SIAM J. Optim. |volume=11 |issue=3 |pages=781–795 |doi=10.1137/S1052623496314070}}
Notes
{{notelist}}
References
{{reflist|30em}}
Further reading
- {{cite journal |first1=Ilan |last1=Adler |first2=Renato D. C. |last2=Monteiro |title=Limiting behavior of the affine scaling continuous trajectories for linear programming problems |journal=Mathematical Programming |volume=50 |issue=1–3 |year=1991 |pages=29–51 |url=http://www2.isye.gatech.edu/~monteiro/publications/tech_reports/waffine.pdf |doi=10.1007/bf01594923|s2cid=15748327 }}
- {{cite journal |first=Romesh |last=Saigal |title=A simple proof of a primal affine scaling method |journal=Annals of Operations Research |volume=62 |year=1996 |pages=303–324 |url=http://deepblue.lib.umich.edu/bitstream/handle/2027.42/44263/10479_2005_Article_BF02206821.pdf |doi=10.1007/bf02206821|hdl=2027.42/44263 |s2cid=14046399 |hdl-access=free }}
- {{cite journal |first1=Paul |last1=Tseng |first2=Zhi-Quan |last2=Luo |title=On the convergence of the affine-scaling algorithm |journal=Mathematical Programming |volume=56 |issue=1–3 |year=1992 |pages=301–319 |url=http://dspace.mit.edu/bitstream/handle/1721.1/3161/P-1920-20783110.pdf |citeseerx=10.1.1.94.7852 |doi=10.1007/bf01580904|hdl=1721.1/3161 |s2cid=13714272 }}
External links
- {{cite web |title=15.093 Optimization Methods, Lecture 21: The Affine Scaling Algorithm |url=http://ocw.mit.edu/courses/sloan-school-of-management/15-093j-optimization-methods-fall-2009/lecture-notes/MIT15_093J_F09_lec21.pdf |website=MIT OpenCourseWare |year=2009}}
- {{cite web |url=http://homepages.rpi.edu/~mitchj/handouts/interior_html/interior.html |first=John |last=Mitchell |title=Interior Point Methods |date=November 2010 |publisher=Rensselaer Polytechnic Institute}}
- {{Cite web |url=http://ocw.nctu.edu.tw/course/lp992/Lecture6.pdf |title=Lecture 6: Interior point method |website=NCTU OpenCourseWare |access-date=2016-02-06 |archive-url=https://web.archive.org/web/20161011011058/http://ocw.nctu.edu.tw/course/lp992/Lecture6.pdf |archive-date=2016-10-11 |url-status=dead }}
{{Optimization algorithms|convex}}