Lexicographic max-min optimization

{{Short description|Optimization method}}

Lexicographic max-min optimization (also called lexmaxmin or leximin or leximax or lexicographic max-ordering optimization) is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Lexmaxmin optimization presumes that the decision-maker would like the smallest objective value to be as high as possible; subject to this, the second-smallest objective should be as high as possible; and so on. In other words, the decision-maker ranks the possible solutions according to a leximin order of their objective function values.

As an example, consider egalitarian social planners, who want to decide on a policy such that the utility of the poorest person will be as high as possible; subject to this, they want to maximize the utility of the second-poorest person; and so on. This planner solves a lexmaxmin problem, where the objective function number i is the utility of agent number i.

Algorithms for lexmaxmin optimization (not using this name) were developed for computing the nucleolus of a cooperative game.{{Cite report |last=Kopelowitz |first=A. |date=1967 |url=https://apps.dtic.mil/sti/citations/AD0661382 |title=Computation of the Kernels of Simple Games and the Nucleolus of N-Person Games |language=en}}{{Cite journal |last=Kohlberg |first=Elon |date=1972-07-01 |title=The Nucleolus as a Solution of a Minimization Problem |url=http://epubs.siam.org/doi/10.1137/0123004 |journal=SIAM Journal on Applied Mathematics |language=en |volume=23 |issue=1 |pages=34–39 |doi=10.1137/0123004 |issn=0036-1399}} An early application of lexmaxmin was presented by Melvin Dresher{{Cite book |last1=Dresher |first1=Melvin |title=Games of Strategy: Theory and Applications |publisher=Prentice-Hall |url=https://apps.dtic.mil/sti/citations/ADA473407 |via=DTIC |language=en |date=1961}} in his book on game theory, in the context of taking maximum advantage of the opponent's mistakes in a zero-sum game. Behringer{{Cite journal |last=Behringer |first=F. A. |date=1977-06-01 |title=Lexicographic quasiconcave multiobjective programming |url=https://doi.org/10.1007/BF01919766 |journal=Zeitschrift für Operations Research |language=en |volume=21 |issue=3 |pages=103–116 |doi=10.1007/BF01919766 |s2cid=27807594 |issn=1432-5217}} cites many other examples in game theory as well as decision theory.

== Notation ==

A lexmaxmin problem may be written as:

\begin{align}
\operatorname{lex} \max \min && f_1(x), f_2(x), \ldots, f_n(x)
\\
\text{subject to} && x\in X
\end{align}

where f_1,\ldots, f_n are the functions to maximize; x is the vector of decision variables; and X is the feasible set, that is, the set of possible values of x.

== Comparison with lexicographic optimization ==

Lexmaxmin optimization is closely related to lexicographic optimization. However, in lexicographic optimization, there is a fixed order on the functions, such that f_1 is the most important, f_2 is the next-most important, and so on. In contrast, in lexmaxmin, all the objectives are equally important. To present lexmaxmin as a special case of lexicographic optimization, denote by f_{[1]}(x) := \min(f_1(x),\ldots,f_n(x)) = the smallest objective value in x. Similarly, denote by f_{[2]}(x) := the second-smallest objective value in x, and so on, so that f_{[1]}(x) \leq f_{[2]}(x)\leq \cdots \leq f_{[n]}(x) . Then, the lexmaxmin optimization problem can be written as the following lexicographic maximization problem:

\begin{align}
\operatorname{lex} \max && f_{[1]}(x), \ldots, f_{[n]}(x)
\\
\text{subject to} && x\in X
\end{align}
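As a small illustration of this reduction, ranking solutions by leximin amounts to lexicographic comparison of their sorted value vectors (f_{[1]}(x), \ldots, f_{[n]}(x)). A minimal Python sketch (the example vectors are illustrative, not taken from any specific problem):

<syntaxhighlight lang="python">
def leximin_key(values):
    """Sort ascending: position t holds f_[t], the t-th smallest value."""
    return sorted(values)

a = (1, 3, 2)   # objective values of solution x^1
b = (1, 1, 9)   # objective values of solution x^2

# Python compares lists lexicographically, which on the sorted vectors
# is exactly the leximin order: a is preferred, since its second-smallest
# value (2) beats b's second-smallest value (1).
assert leximin_key(a) > leximin_key(b)
</syntaxhighlight>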

== Uniqueness ==

In general, a lexmaxmin optimization problem may have more than one optimal solution. If x^1 and x^2 are two optimal solutions, then their ordered value vector must be the same, that is, f_{[i]}(x^1) = f_{[i]}(x^2) for all i\in[n],{{Cite journal |last1=Ogryczak |first1=W. |last2=Pióro |first2=M. |last3=Tomaszewski |first3=A. |date=2005 |title=Telecommunications network design and max-min optimization problem |url=http://yadda.icm.edu.pl/baztech/element/bwmeta1.element.baztech-article-BAT3-0027-0006 |journal=Journal of Telecommunications and Information Technology |language=EN |volume=3 |issue=3 |pages=43–56 |doi=10.26636/jtit.2005.3.326 |issn=1509-4553|doi-access=free }}{{Rp|location=Thm.2}} in other words, the smallest value is the same, the second-smallest value is the same, and so on. However, the unsorted value vectors may be different. For example, (1,2,3) and (2,1,3) may both be optimal solutions to the same problem.

However, if the feasible domain is a convex set, and the objectives are concave functions, then the value vectors in all optimal solutions must be the same, since if there were two different optimal solutions, their mean would be another feasible solution in which the objective functions attain a higher value, contradicting the optimality of the original solutions.{{Rp|location=Thm.6}}

== Algorithms for continuous variables ==

=== Saturation Algorithm for convex problems ===

The Saturation Algorithm works when the feasible set is a convex set, and the objectives are concave functions. Variants of this algorithm appear in many papers. The earliest appearance is attributed to Alexander Kopelowitz by Elkind and Pasechnik.{{Cite book |last1=Elkind |first1=Edith |last2=Pasechnik |first2=Dmitrii |date=2009-01-04 |title=Computing the nucleolus of weighted voting games |url=https://epubs.siam.org/doi/10.1137/1.9781611973068.37 |language=en |publisher=Society for Industrial and Applied Mathematics |pages=327–335 |doi=10.1137/1.9781611973068.37 |hdl=10356/93815 |isbn=978-0-89871-680-1}} Other variants appear in several other works.{{Cite web |last=Willson |first=Stephen J. |date=1995 |title=Fair Division using Linear Programming |url=https://swillson.public.iastate.edu/FairDivisionUsingLPUnpublished6.pdf |website=Iowa State University (unpublished manuscript)}}{{Rp|20–27}}{{Cite journal |last1=Potters |first1=Jos A. M. |last2=Tijs |first2=Stef H. |date=1992-02-01 |title=The Nucleolus of a Matrix Game and Other Nucleoli |url=https://pubsonline.informs.org/doi/abs/10.1287/moor.17.1.164 |journal=Mathematics of Operations Research |volume=17 |issue=1 |pages=164–174 |doi=10.1287/moor.17.1.164 |issn=0364-765X |hdl-access=free |hdl=2066/223732 |s2cid=40275405}}{{Cite journal |last=Luss |first=Hanan |date=1999-06-01 |title=On Equitable Resource Allocation Problems: A Lexicographic Minimax Approach |journal=Operations Research |volume=47 |issue=3 |pages=361–378 |doi=10.1287/opre.47.3.361 |issn=0030-364X |doi-access=free}}{{Rp|location=Alg.2}}{{Cite journal |last1=Nace |first1=Dritan |last2=Pioro |first2=Michal |date=2008 |title=Max-min fairness and its applications to routing and load-balancing in communication networks: a tutorial |url=https://ieeexplore.ieee.org/document/4738463 |journal=IEEE Communications Surveys & Tutorials |language=en |volume=10 |issue=4 |pages=5–17 |doi=10.1109/SURV.2008.080403 |issn=1553-877X |s2cid=6595144}}{{Cite journal |last1=Airiau |first1=Stéphane |last2=Aziz |first2=Haris |last3=Caragiannis |first3=Ioannis |last4=Kruger |first4=Justin |last5=Lang |first5=Jérôme |last6=Peters |first6=Dominik |date=2019-08-10 |title=Portioning using ordinal preferences: fairness and efficiency |url=https://dl.acm.org/doi/abs/10.5555/3367032.3367035 |journal=Proceedings of the 28th International Joint Conference on Artificial Intelligence |series=IJCAI'19 |location=Macao, China |publisher=AAAI Press |pages=11–17 |isbn=978-0-9992411-4-1}}{{Cite journal |last1=Bei |first1=Xiaohui |last2=Lu |first2=Xinhang |last3=Suksompong |first3=Warut |date=2022-06-28 |title=Truthful Cake Sharing |url=https://ojs.aaai.org/index.php/AAAI/article/view/20408 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |language=en |volume=36 |issue=5 |pages=4809–4817 |doi=10.1609/aaai.v36i5.20408 |issn=2374-3468 |doi-access=free |s2cid=245117491}}{{Cite journal |last1=Ogryczak |first1=Włodzimierz |date=1997-08-01 |title=On the lexicographic minimax approach to location problems |url=https://www.sciencedirect.com/science/article/abs/pii/S0377221796001543 |journal=European Journal of Operational Research |language=en |volume=100 |issue=3 |pages=566–585 |doi=10.1016/S0377-2217(96)00154-3 |issn=0377-2217}}{{Cite journal |last1=Dubois |first1=Didier |last2=Fortemps |first2=Philippe |date=1999-10-01 |title=Computing improved optimal solutions to max–min flexible constraint satisfaction problems |url=https://www.sciencedirect.com/science/article/abs/pii/S0377221798003075 |journal=European Journal of Operational Research |language=en |volume=118 |issue=1 |pages=95–126 |doi=10.1016/S0377-2217(98)00307-5 |issn=0377-2217}}{{Cite book |last=Ehrgott |first=Matthias |url=https://books.google.com/books?id=8wGyB5Sa2CUC&dq=Multicriteria+Optimization%2C+Lecture+Notes+in+Economics+and+Mathematical+Systems%2C+v&pg=PA1 |title=Multicriteria Optimization |date=2005-05-18 |publisher=Springer Science & Business Media |isbn=978-3-540-21398-7 |language=en}}

The algorithm keeps a set of objectives that are considered saturated (also called blocking). This means that their value cannot be improved without harming lower-valued objectives. The other objectives are called free. Initially, all objectives are free. In general, the algorithm works as follows:

  • While some objective is free:
  • (P1) Solve the following single-objective problem, where z_k is the saturation value of objective f_k:

\begin{align}
\max ~~~ & z
\\
\text{subject to} ~~~ & x\in X,
\\ & f_k(x) = z_k \text{ for all saturated objectives } k,
\\ & f_k(x) \geq z \text{ for all free objectives } k
\end{align}

  • If the problem is infeasible or unbounded, stop and declare that there is no solution.
  • Otherwise, let z_{\max} be the maximum value of the first problem.
  • Look for free objectives whose value cannot increase above z_{\max} without decreasing some other objective below z_{\max}. In any lexmaxmin solution, the value of any such objective must be exactly z_{\max}. Add all such objectives to the set of saturated objectives, set their saturation value to z_{\max}, and go back to (P1).

It remains to explain how we can find new saturated objectives in each iteration.

Method 1: interior optimizers. An interior optimizer of a linear program is an optimal solution in which the smallest possible number of constraints are tight. In other words, it is a solution in the interior of the optimal face. An interior optimizer of (P1) can be found by solving (P1) using the ellipsoid method or interior point methods.

The set of tight constraints in an interior optimizer is unique. Proof: Suppose by contradiction that there are two interior optimizers, x^1 and x^2, with different sets of tight constraints. Since the feasible set is convex, the average solution x^3 = (x^1+x^2)/2 is also an optimizer. Every constraint that is not tight in at least one of x^1 and x^2 is not tight in x^3. Therefore, the number of tight constraints in x^3 is smaller than in x^1 and x^2, contradicting the definition of an interior optimizer.

Therefore, the set of tight constraints in the interior optimizer corresponds to the set of free objectives that become saturated. Using this method, the leximin solution can be computed using at most n iterations.

Method 2: iterating over all objectives. It is possible to find at least one saturated objective using the following algorithm.

  • For every free objective f_j:
  • (P2) Solve the following single-objective problem:

\begin{align}
\max ~~~ & f_j(x)
\\
\text{subject to} ~~~ & x\in X,
\\ & f_k(x) \geq z_k \text{ for all saturated objectives } k,
\\ & f_k(x) \geq z_{\max} \text{ for all free objectives } k
\end{align}

  • If the optimal value equals z_{\max}, then objective j becomes saturated from now on.
  • Otherwise, the optimal value must be larger than z_{\max}; objective j remains free for now.
  • End for

At each step, at least one free objective must become saturated. This is because, if no objective became saturated, then the mean of all optimal solutions to (P2) would be a feasible solution in which all objective values are larger than z_{\max}, contradicting the optimality of the solution to (P1). For example, suppose z_{\max}=1, objective 1 is not saturated because there is a solution with value-vector (3,1), and objective 2 is not saturated because there is a solution with value-vector (1,3). Then, the mean of these solutions has value-vector at least (2,2), so z_{\max} should have been at least 2.

Therefore, after at most n iterations, all objectives are saturated and a leximin-optimal solution is found. In each iteration t, the algorithm solves at most n-t+1 linear programs; therefore, the run-time of the algorithm is at most (n+2)(n+1)/2 \in O(n^2) times the run-time of the LP solver.
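The following is a minimal sketch of the Saturation Algorithm with Method 2, under the assumption that the objectives are linear, f_j(x) = C_j \cdot x, and that the feasible set is the polyhedron X = \{x \geq 0 : Ax \leq b\}; the function and variable names are illustrative, and a production version would need more careful handling of tolerances and solver statuses:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def saturation_leximin(C, A, b):
    """Leximin value profile of the linear objectives f_j(x) = C[j] @ x
    over X = {x >= 0 : A @ x <= b}, via problems (P1) and (P2)."""
    n, m = C.shape
    z_sat = {}  # saturated objective index -> its saturation value z_k
    while len(z_sat) < n:
        free = [j for j in range(n) if j not in z_sat]
        # (P1): variables [x, z]; maximize z subject to
        # A x <= b, f_k(x) = z_k (saturated), f_k(x) >= z (free).
        cost = np.zeros(m + 1)
        cost[-1] = -1.0                                  # linprog minimizes
        A_ub = [np.append(row, 0.0) for row in A]
        b_ub = list(b)
        for k in free:                                   # z - f_k(x) <= 0
            A_ub.append(np.append(-C[k], 1.0))
            b_ub.append(0.0)
        A_eq = [np.append(C[k], 0.0) for k in z_sat] or None
        b_eq = [z_sat[k] for k in z_sat] or None
        res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq,
                      b_eq=b_eq, bounds=[(0, None)] * m + [(None, None)])
        if res.status != 0:
            raise ValueError("infeasible or unbounded: no leximin solution")
        z_max = -res.fun
        # (P2) for each free objective j: can f_j(x) exceed z_max while every
        # free objective stays >= z_max and every saturated one stays >= z_k?
        for j in free:
            A_ub2 = [np.asarray(row) for row in A]
            b_ub2 = list(b)
            for k in free:                               # f_k(x) >= z_max
                A_ub2.append(-C[k])
                b_ub2.append(-z_max)
            for k, zk in z_sat.items():                  # f_k(x) >= z_k
                A_ub2.append(-C[k])
                b_ub2.append(-zk)
            res2 = linprog(-C[j], A_ub=np.array(A_ub2), b_ub=b_ub2,
                           bounds=[(0, None)] * m)
            if res2.status == 0 and -res2.fun <= z_max + 1e-9:
                z_sat[j] = z_max                         # objective j is saturated
    return z_sat
</syntaxhighlight>

For example, splitting one unit among three agents with f_j(x) = x_j (C the identity matrix and the single constraint x_1+x_2+x_3 \leq 1) saturates all three objectives at 1/3 in the first iteration. A leximin-optimal x can then be recovered with one more feasibility LP over the saturation values.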

In some cases, the run-time of the saturation algorithm can be improved. Instead of finding all saturated objectives, we can break out of the inner loop after finding one saturated objective; the algorithm still stops after at most n iterations, and may reduce the number of linear programs (P2) we need to solve.{{Rp|location=Alg.3}}

Furthermore, instead of looping over all objectives to find a saturated one, the algorithm can find a saturated objective using the dual problem of (P1). In some cases, the dual variables are given as a byproduct of solving (P1), for example, when the objectives and constraints are linear and the solver is the simplex algorithm. In this case, (P2) is not needed at all, and the run-time of the algorithm is at most n times the run-time of the solver of (P1).{{Rp|location=Alg.4}}

All these variants work only for convex problems. For non-convex problems, there might be no saturated objective, so the algorithm might not stop.

=== Ordered Outcomes Algorithm for general problems ===

The Ordered Outcomes Algorithm works in arbitrary domains (not necessarily convex). It was developed by Ogryczak and Śliwiński{{Cite book |last1=Ogryczak |first1=Włodzimierz |last2=Śliwiński |first2=Tomasz |title=Computational Science and Its Applications - ICCSA 2006 |chapter=On Direct Methods for Lexicographic Min-Max Optimization |date=2006 |editor-last=Gavrilova |editor-first=Marina|editor-link=Marina Gavrilova |editor2-last=Gervasi |editor2-first=Osvaldo |editor3-last=Kumar |editor3-first=Vipin |editor4-last=Tan |editor4-first=C. J. Kenneth |editor5-last=Taniar |editor5-first=David |editor6-last=Laganá |editor6-first=Antonio |editor7-last=Mun |editor7-first=Youngsong |editor8-last=Choo |editor8-first=Hyunseung |chapter-url=https://link.springer.com/chapter/10.1007/11751595_85 |series=Lecture Notes in Computer Science |volume=3982 |language=en |location=Berlin, Heidelberg |publisher=Springer |pages=802–811 |doi=10.1007/11751595_85 |isbn=978-3-540-34076-8}} and also presented in the context of telecommunication networks by Ogryczak, Pioro and Tomaszewski, and in the context of location problems by Ogryczak.{{Cite journal |last=Ogryczak |first=Włodzimierz |date=1997-08-01 |title=On the lexicographic minimax approach to location problems |url=https://www.sciencedirect.com/science/article/pii/S0377221796001543 |journal=European Journal of Operational Research |language=en |volume=100 |issue=3 |pages=566–585 |doi=10.1016/S0377-2217(96)00154-3 |issn=0377-2217}} The algorithm reduces lexmaxmin optimization to the easier problem of lexicographic optimization. Lexicographic optimization can be done with a simple sequential algorithm, which solves at most n linear programs. The reduction starts with the following presentation of lexmaxmin:

(L1)
\begin{align}
\operatorname{lex} \max && f_{[1]}(x), \ldots, f_{[n]}(x)
\\
\text{subject to} && x\in X
\end{align}

This problem cannot be solved as-is, because f_{[t]}(x) (the t-th smallest value in \mathbf{f}(x)) is not a simple function of x. The problem (L1) is equivalent to the following problem, where f_{[1..t]}(x) := \sum_{i=1}^t f_{[i]}(x) = the sum of the t smallest values in \mathbf{f}(x):

(L2)
\begin{align}
\operatorname{lex} \max && f_{[1..1]}(x), f_{[1..2]}(x), \ldots, f_{[1..n]}(x)
\\
\text{subject to} && x\in X
\end{align}

This problem can be solved iteratively using lexicographic optimization, but the number of constraints in each iteration t is C(n,t), the number of subsets of size t. This grows exponentially with n. It is possible to reduce the problem to a different problem, in which the number of constraints is polynomial in n.

For every t, the sum f_{[1..t]}(x) can be computed as the optimal value to the following problem, with n+1 auxiliary variables (an unbounded variable r_{t} , and non-negative variables d_{t,j} for all j in 1,...,n), and n additional constraints:{{Rp|location=Thm.8}}{{Cite journal |last1=Ogryczak |first1=Wlodzimierz |last2=Tamir |first2=Arie |date=2003-02-14 |title=Minimizing the sum of the k largest functions in linear time |url=https://www.sciencedirect.com/science/article/pii/S0020019002003708 |journal=Information Processing Letters |language=en |volume=85 |issue=3 |pages=117–122 |doi=10.1016/S0020-0190(02)00370-8 |issn=0020-0190}}

(L3)
\begin{align}
\max ~~~ & t\cdot r_t - \sum_{j=1}^n d_{t,j}
\\
\text{subject to} ~~~ & x\in X,
\\ & r_t - f_j(x) \leq d_{t,j} \text{ for all } j\in[n],
\\ & d_{t,j} \geq 0 \text{ for all } j\in[n].
\end{align}

Proof. Let us compute the values of the auxiliary variables in the optimal solution.

  • For all j, d_{t,j} should be at least as large as both 0 and r_t-f_j(x), and subject to this, it should be as small as possible, since it appears in the objective with a minus sign. Therefore, d_{t,j} = \max(0, r_t-f_j(x)), and the objective can be written as t\cdot r_t - \sum_{j=1}^n \max(0, r_t-f_j(x)).
  • For any k between 0 and n, if r_t is larger than the smallest k objective values (that is, r_t \geq f_{[k]}(x) ), then the sum on the right-hand side contains exactly k positive elements: \sum_{j=1}^k (r_t-f_{[j]}(x)) = k\cdot r_t - f_{[1..k]}(x). In that case, the objective can be written as (t-k)\cdot r_t + f_{[1..k]}(x). Note that (t-k)\cdot r_t is increasing in r_t when k<t, and decreasing in r_t when k>t. Therefore, the maximum is attained when k=t, that is, when r_t is at least the smallest t objective values; in that case, the objective exactly equals f_{[1..t]}(x), as claimed.

Therefore, the problem (L2) is equivalent to the following lexicographic maximization problem:{{Rp|location=(32)}}

(L4)
\begin{align}
\operatorname{lex} \max ~~~ & \left( t\cdot r_t - \sum_{j=1}^n d_{t,j} \right)_{t=1}^n
\\
\text{subject to} ~~~ & x\in X,
\\ & r_t - f_j(x) \leq d_{t,j} \text{ for all } j\in[n] \text{ and } t\in[n],
\\ & d_{t,j} \geq 0 \text{ for all } j\in[n] \text{ and } t\in[n].
\end{align}

This problem (L4) has n^2+n additional variables and n^2 additional constraints. It can be solved by every algorithm for solving lexicographic maximization, for example: the sequential algorithm using n linear programs, or the lexicographic simplex algorithm (if the objectives and constraints are linear).

==== Approximate leximin solutions ====

One advantage of the Ordered Outcomes Algorithm is that it can be used even when the single-problem solver is inaccurate, and returns only approximate solutions. Specifically, if the single-problem solver approximates the optimal single-problem solution with multiplicative factor α ∈ (0,1] and additive factor ϵ ≥ 0, then the algorithm returns a solution that approximates the leximin-optimal solution with multiplicative factor α²/(1 − α + α²) and additive factor ϵ/(1 − α + α²).{{Citation |last1=Hartman |first1=Eden |title=Leximin Approximation: From Single-Objective to Multi-Objective |date=2023 |work=ECAI 2023 |pages=996–1003 |publisher=IOS Press |last2=Hassidim |first2=Avinatan |last3=Aumann |first3=Yonatan |last4=Segal-Halevi |first4=Erel|series=Frontiers in Artificial Intelligence and Applications |doi=10.3233/FAIA230371 |isbn=9781643684369 |doi-access=free |arxiv=2303.12506 }}

=== Ordered Values Algorithm for general problems ===

The Ordered Values Algorithm works in any domain in which the set of possible values of the objective functions is finite. It was developed by Ogryczak and Śliwiński. Let V = \{v_1,\ldots, v_r\} be the set of all values that can be returned by the functions f_1,\ldots, f_n , such that v_1 < \cdots < v_r . Given a solution x, and an integer k in \{1,\ldots,r\}, define h_k(x) as the number of occurrences of the value v_k in the vector f_1(x),\ldots, f_n(x) . Then, the lexmaxmin problem can be stated as the following lexicographic minimization problem:

(H1)
\begin{align}
\operatorname{lex} \min && h_1(x), \ldots, h_{r-1}(x)
\\
\text{subject to} && x\in X
\end{align}

since we want as few functions as possible to attain the smallest value; subject to this, as few functions as possible to attain the next-smallest value; and so on. Ogryczak and Śliwiński show how to transform this non-linear program into a linear program with auxiliary variables. In their computational experiments, the Ordered Values algorithm runs much faster than the Saturation algorithm and the Ordered Outcomes algorithm.
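To see the equivalence, note that two solutions compare the same way under leximin as their count vectors (h_1(x),\ldots,h_{r-1}(x)) compare under lexicographic minimization. A small Python check (the value set and vectors are illustrative):

<syntaxhighlight lang="python">
V = [1, 2, 3]                      # the finite value set, v_1 < v_2 < v_3

def counts(values):
    """h_k = number of occurrences of v_k; h_r is redundant given n."""
    return [values.count(v) for v in V[:-1]]

a, b = [1, 3, 3], [2, 2, 3]
# leximin prefers b: compare sorted vectors, larger is better
assert sorted(a) < sorted(b)
# (H1) also prefers b: its count vector (0, 2) is lexicographically
# smaller than a's (1, 0); fewer functions attain the smallest value
assert counts(b) < counts(a)
</syntaxhighlight>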

=== Behringer's algorithm for quasiconcave functions ===

Behringer presented a sequential algorithm for lexmaxmin optimization{{Clarify|reason=The algorithm is very complicated. Can someone explain it?|date=February 2023}} when the objectives are quasiconcave functions, and the feasible set X is a convex set.

=== Weighted average ===

Yager{{Cite journal |last=Yager |first=Ronald R. |date=1997-10-01 |title=On the analytic representation of the Leximin ordering and its application to flexible constraint propagation |url=https://www.sciencedirect.com/science/article/pii/S0377221796002172 |journal=European Journal of Operational Research |language=en |volume=102 |issue=1 |pages=176–192 |doi=10.1016/S0377-2217(96)00217-2 |issn=0377-2217}} presented a way to represent the leximin ordering analytically using the Ordered weighted averaging aggregation operator. He assumes that all objective values are real numbers between 0 and 1, and the smallest difference between any two possible values is some constant d < 1 (so that values with difference smaller than d are considered equal). The weight w_t of f_{[t]}(x) is set to approximately d^t . This guarantees that maximizing the weighted sum \sum_t w_t f_{[t]}(x) is equivalent to lexmaxmin.
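A small Python check of this idea, with values on a grid of spacing d. The concrete weights w_t = q^t with q = d/(1+d) used below are an assumption that is sufficient for this finite grid, rather than Yager's exact construction:

<syntaxhighlight lang="python">
from itertools import product

d = 0.25             # smallest difference between two possible values
q = d / (1 + d)      # assumed weight base: w_t = q**t decays fast enough
                     # that position t dominates all later positions

def owa(values):
    """Ordered weighted average of the ascending-sorted values."""
    return sum(q ** t * v for t, v in enumerate(sorted(values), start=1))

grid = [0.0, 0.25, 0.5, 0.75, 1.0]   # possible objective values in [0, 1]
for a, b in product(product(grid, repeat=3), repeat=2):
    # the weighted sum ranks vectors exactly as the leximin order does
    assert (owa(a) < owa(b)) == (sorted(a) < sorted(b))
</syntaxhighlight>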

== Algorithms for discrete variables ==

If the set of vectors is discrete, and the domain is sufficiently small, then it is possible to use one of the functions representing the leximin order, and maximize it subject to the constraints, using a solver for constraint-satisfaction problems.

But if the domain is large, the above approach becomes impractical due to the large number of possible values that this function can have: {m+n-1 \choose n}, where m is the number of different values in the domain, and n is the number of variables.{{Cite journal |last1=Bouveret |first1=Sylvain |last2=Lemaître |first2=Michel |date=2009-02-01 |title=Computing leximin-optimal solutions in constraint networks |journal=Artificial Intelligence |language=en |volume=173 |issue=2 |pages=343–364 |doi=10.1016/j.artint.2008.10.010 |issn=0004-3702 |doi-access=free}}
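For instance, the count grows quickly even for small instances (an illustrative computation; it counts the multisets of n values drawn from m):

<syntaxhighlight lang="python">
from math import comb

m, n = 10, 8                 # 10 possible values, 8 variables (illustrative)
print(comb(m + n - 1, n))    # 24310 possible sorted value vectors already
</syntaxhighlight>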

Bouveret and Lemaître present five different algorithms for finding leximin-optimal solutions to discrete constraint-satisfaction problems:

  1. Branch and bound based on the LEXIMIN constraint - a constraint on two vectors x and y, saying that y is leximin-larger than x.
  2. Branching on saturated subsets - finding subsets of variables that must be fixed at the minimum value, and finding the maximum-minimum value for the other variables.
  3. Using the SORT constraint - a constraint on two vectors x and y, saying that y contains the same elements as x sorted in ascending order. This constraint can be computed efficiently by several algorithms.{{Cite book |last1=Guernalec |first1=Noëlle Bleuzen |last2=Colmerauer |first2=Alain |title=Principles and Practice of Constraint Programming-CP97 |chapter=Narrowing a 2n-block of sortings in O(n log n) |date=1997 |editor-last=Smolka |editor-first=Gert |chapter-url=https://link.springer.com/chapter/10.1007/BFb0017426 |series=Lecture Notes in Computer Science |language=en |location=Berlin, Heidelberg |publisher=Springer |volume=1330 |pages=2–16 |doi=10.1007/BFb0017426 |isbn=978-3-540-69642-1}}{{Cite book |last1=Mehlhorn |first1=Kurt |last2=Thiel |first2=Sven |title=Principles and Practice of Constraint Programming – CP 2000 |chapter=Faster Algorithms for Bound-Consistency of the Sortedness and the Alldifferent Constraint |date=2000 |editor-last=Dechter |editor-first=Rina |chapter-url=https://link.springer.com/chapter/10.1007/3-540-45349-0_23 |series=Lecture Notes in Computer Science |language=en |location=Berlin, Heidelberg |publisher=Springer |volume=1894 |pages=306–319 |doi=10.1007/3-540-45349-0_23 |isbn=978-3-540-45349-9}}
  4. Using the ATLEAST constraint.
  5. Using max-min transformations.

In their experiments, the best-performing approach was 4 (ATLEAST), followed by 3 (SORT), followed by 1 (LEXIMIN).

Dall'aglio{{Cite journal |last1=Dall'Aglio |first1=Marco |date=2001-05-01 |title=The Dubins–Spanier optimization problem in fair division theory |journal=Journal of Computational and Applied Mathematics |language=en |volume=130 |issue=1–2 |pages=17–40 |bibcode=2001JCoAM.130...17D |doi=10.1016/S0377-0427(99)00393-3 |issn=0377-0427 |doi-access=free}} presents an algorithm for computing a leximin-optimal resource allocation.

== References ==