{{Short description|In backtracking algorithms, technique that reduces search space}}{{Nf|date=November 2024}}
In [[constraint programming]] and [[Boolean satisfiability problem|SAT solving]], '''backjumping''' (also known as '''non-chronological backtracking'''<ref>Möhle, S.; Biere, A. (2019). "Backing Backtracking". ''Theory and Applications of Satisfiability Testing – SAT 2019: 22nd International Conference, SAT 2019, Lisbon, Portugal, July 9–12, 2019, Proceedings''. Springer International Publishing. pp. 250–266.</ref> or '''intelligent backtracking'''<ref>Dechter, Rina (2003). ''Constraint Processing''. Morgan Kaufmann.</ref>) is an enhancement of backtracking algorithms that reduces the search space. While backtracking always goes up one level in the search tree when all values for a variable have been tried, backjumping may go up more levels. In this article, a fixed order of evaluation of the variables <math>x_1, \ldots, x_n</math> is used, but the same considerations apply to a dynamic order of evaluation.
[[Image:Backtracking-no-backjumping.svg|thumb|A search tree visited by regular backtracking]]
[[Image:Backtracking-with-backjumping.svg|thumb|A backjump: the grey node is not visited]]
==Definition==
Whenever backtracking has tried all values for a variable without finding any solution, it reconsiders the last of the previously assigned variables, changing its value or backtracking further if no other values are to be tried. If <math>x_1=a_1, \ldots, x_k=a_k</math> is the current partial assignment and all values for <math>x_{k+1}</math> have been tried without finding a solution, backtracking concludes that no solution extending <math>x_1=a_1, \ldots, x_k=a_k</math> exists. The algorithm then "goes up" to <math>x_k</math>, changing <math>x_k</math>'s value if possible, backtracking again otherwise.
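The chronological rule just described can be sketched in a few lines. Here <code>domains</code> (per-variable value lists) and <code>consistent</code> (a test on partial assignments) are hypothetical stand-ins for a concrete problem encoding, not part of any standard library:

```python
# Plain chronological backtracking: on failure, always go up one level.
def backtrack(domains, consistent, assignment=None):
    if assignment is None:
        assignment = []
    k = len(assignment)
    if k == len(domains):              # all variables x_1..x_n assigned
        return list(assignment)
    for value in domains[k]:           # try every value for x_{k+1}
        assignment.append(value)
        if consistent(assignment):     # partial assignment still viable?
            result = backtrack(domains, consistent, assignment)
            if result is not None:
                return result
        assignment.pop()               # undo x_{k+1} and try the next value
    return None                        # all values failed: go up to x_k
```

For instance, with three variables over {1, 2, 3} and an all-different constraint, <code>backtrack([[1, 2, 3]] * 3, lambda a: len(set(a)) == len(a))</code> returns <code>[1, 2, 3]</code>.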
The full partial assignment is not always necessary to prove that no value of <math>x_{k+1}</math> leads to a solution. In particular, a prefix of the partial assignment may have the same property, that is, there exists an index <math>j<k</math> such that <math>x_1=a_1, \ldots, x_j=a_j</math> cannot be extended to a solution. If the algorithm can prove this fact, it can directly reconsider the value of <math>x_j</math> instead of that of <math>x_k</math>.
[[Image:Backjump-variables-1.svg|thumb|An example in which the current assignment to <math>x_1, \ldots, x_k</math> cannot be extended with any value of <math>x_{k+1}</math>]]
[[Image:Backjump-variables-2.svg|thumb|Instead of backtracking, the algorithm makes some further elaboration, proving that the evaluations of the variables between <math>x_j</math> and <math>x_k</math> are irrelevant to the inconsistency]]
[[Image:Backjump-variables-3.svg|thumb|As a result, the current evaluation of <math>x_j</math> can be reconsidered directly (a backjump)]]
The efficiency of a backjumping algorithm depends on how high it is able to backjump. Ideally, the algorithm could jump from <math>x_{k+1}</math> to whichever variable <math>x_j</math> is such that the current evaluation of <math>x_1, \ldots, x_j</math> cannot be extended to form a solution. If this is the case, <math>x_j</math> is called a ''safe jump''.
Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions, which is what the algorithm is trying to find. In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe. These methods have different costs, but a higher cost of finding a higher safe jump may be traded off against a reduced amount of search due to skipping parts of the search tree.
==Backjumping at leaf nodes==
The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. In constraint satisfaction, a partial evaluation is consistent if and only if it satisfies all constraints involving the assigned variables, and inconsistent otherwise. It might be the case that a consistent partial solution cannot be extended to a consistent complete solution because some of the unassigned variables may not be assigned without violating other constraints.
The condition in which all values of a given variable <math>x_{k+1}</math> are inconsistent with the current partial evaluation <math>x_1=a_1, \ldots, x_k=a_k</math> is called a ''leaf dead end''.
The backjumping algorithm by John Gaschnig does a backjump only at leaf dead ends.<ref>Gaschnig, J. (1977). "A general backtrack algorithm that eliminates most redundant tests". ''Proceedings of IJCAI-77''. Vol. 1. p. 457.</ref> In other words, it works differently from backtracking only when every possible value of <math>x_{k+1}</math> has been tested and found inconsistent without the need for further branching.
A safe jump can be found by simply evaluating, for every value <math>a^i_{k+1}</math> of <math>x_{k+1}</math>, the shortest prefix of the current partial evaluation that is inconsistent with <math>x_{k+1}=a^i_{k+1}</math>. In other words, for each value the algorithm checks the consistency of the following evaluations:
{| cellpadding=10
|-
| <math>x_1=a_1</math> || || || || <math>x_{k+1}=a^i_{k+1}</math>
|-
| <math>x_1=a_1</math> || <math>x_2=a_2</math> || || || <math>x_{k+1}=a^i_{k+1}</math>
|-
| colspan=5 | ...
|-
| <math>x_1=a_1</math> || <math>x_2=a_2</math> || ... || <math>x_k=a_k</math> || <math>x_{k+1}=a^i_{k+1}</math>
|}
For each value <math>a^i_{k+1}</math>, the index of the shortest prefix that is inconsistent with <math>x_{k+1}=a^i_{k+1}</math> would be a safe jump if <math>a^i_{k+1}</math> were the only value of <math>x_{k+1}</math>. Since every value has to be excluded, a safe jump is the maximum of these indexes over all values.
In practice, the algorithm can check the evaluations above at the same time it is checking the consistency of <math>x_{k+1}=a^i_{k+1}</math>: for each value it records the index at which inconsistency first arises, and when all values turn out inconsistent, the maximal recorded index is the destination of the backjump.
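Under these conventions, Gaschnig's rule for leaf dead ends can be sketched as follows; <code>consistent_with_prefix</code> is a hypothetical predicate telling whether a prefix of the current assignment is consistent with a candidate value for the next variable:

```python
# Gaschnig's leaf backjump: for each candidate value of the current
# variable, find the shortest prefix of the assignment inconsistent with
# it; the deepest such prefix over all values is the jump destination.
def gaschnig_jump(assignment, values, consistent_with_prefix):
    """Return the 0-based index of the variable to jump back to,
    assuming every value in `values` fails against the full assignment."""
    jump = 0
    for v in values:
        # find the shortest prefix inconsistent with x_{k+1} = v
        for j in range(len(assignment)):
            if not consistent_with_prefix(assignment[:j + 1], v):
                jump = max(jump, j)    # a safe jump must cover this prefix
                break
    return jump
```

For example, if a value conflicts exactly when it already appears in the prefix, then with assignment <code>[1, 2, 3]</code> and failed values <code>[1, 3]</code> the rule jumps back to index 2, the variable assigned 3.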
==Backjumping at internal nodes==
The previous algorithm only backjumps when the values of a variable can be shown inconsistent with the current partial solution without further branching. In other words, it allows for a backjump only at leaf nodes in the search tree.
An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones. If no solution extends this assignment, the previous algorithm always backtracks: no backjump is done in this case.
Backjumping at internal nodes cannot be done as for leaf nodes. Indeed, if some evaluations of <math>x_{k+1}</math> required further branching before being proved unextendible, they are consistent with the current partial evaluation; as a result, searching for a prefix that is inconsistent with the values of <math>x_{k+1}</math> fails.
In such cases, what proved an evaluation <math>x_1=a_1, \ldots, x_{k+1}=a_{k+1}</math> not to be part of a solution is the search of the whole subtree rooted at it, which ended with the algorithm returning without a solution.
This return is due to a number of dead ends, points where the algorithm has proved a partial solution inconsistent. In order to further backjump, the algorithm has to take into account that the impossibility of finding solutions is due to these dead ends. In particular, the safe jumps are the indexes of prefixes that still make these dead ends inconsistent partial solutions.
[[Image:Dead-ends-1.svg|thumb|In this example, the algorithm comes back to <math>x_{k+1}</math> after having encountered a number of dead ends]]
[[Image:Dead-ends-1a.svg|thumb|The second dead end remains inconsistent even if the values of the last assigned variables are removed from the evaluation]]
[[Image:Dead-ends-2.svg|thumb|The other inconsistent evaluations remain so even without the values of the last assigned variables]]
[[Image:Dead-ends-3.svg|thumb|The algorithm can backjump to the deepest variable whose value is still needed to make all dead ends inconsistent]]
In other words, when all values of <math>x_{k+1}</math> have been tried without finding a solution, a backjump to a previous variable <math>x_j</math> is safe provided that the evaluation of <math>x_1, \ldots, x_j</math> is still inconsistent with every dead end encountered in the subtree.
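If each collected dead end is represented by the set of indices of the earlier variables responsible for it (an assumed representation; computing these sets efficiently is what the algorithms of the following sections do), the rule above amounts to a maximum of maxima:

```python
# A prefix x_1..x_j keeps a dead end inconsistent iff it contains every
# variable responsible for that dead end, so j must reach the maximum
# index of each dead end; the deepest such requirement is the safe jump.
def safe_jump(dead_ends):
    """dead_ends: iterable of non-empty sets of variable indices."""
    return max(max(d) for d in dead_ends)
```

For example, with dead ends caused by variables {1, 3} and {2}, the safe jump is to variable 3: removing it would make the first dead end consistent again.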
===Simplifications===
Due to the potentially high number of nodes that are in the subtree of <math>x_{k+1}</math>, the information relevant to backjumping from <math>x_{k+1}</math> is collected during the visit of its subtree rather than recomputed once the subtree has been fully searched. A first simplification is that the algorithm remains correct even if the jump it uses is safe but not the highest safe jump possible.
The second simplification is that nodes in the subtree of <math>x_{k+1}</math> that have been skipped by a backjump can be ignored while searching for a backjump from <math>x_{k+1}</math>.
Indeed, if the algorithm went down from node <math>x_{k+1}</math> and came back to it via a backjump, the skipped nodes played no role in establishing that the subtree contains no solution, so the dead ends collected below them can be disregarded.
This fact can be exploited by collecting, in each node, a set of previously assigned variables whose evaluation suffices to prove that no solution exists in the subtree rooted at the node. This set is built during the execution of the algorithm. When retracting from a node, the variable of the node is removed from its set, and the set is merged into the set of the destination of the backtracking or backjumping. Since nodes skipped by a backjump are never retracted from, their sets are automatically ignored.
===Graph-based backjumping===
The rationale of graph-based backjumping is that a safe jump can be found by checking which of the variables <math>x_1, \ldots, x_k</math> share a constraint with <math>x_{k+1}</math>: if no value of <math>x_{k+1}</math> is consistent with the current evaluation, the inconsistency can only involve those variables, and the highest-index one among them is a safe jump.
The fact that nodes skipped by backjumping can be ignored when considering a further backjump can be exploited by the following algorithm. When retracting from a leaf node, the set of variables that are in constraint with it is created and "sent back" to its parent, or ancestor in case of backjumping. At every internal node, a set of variables is maintained. Every time a set of variables is received from one of its children or descendants, its variables are added to the maintained set. When further backtracking or backjumping from the node, the variable of the node is removed from this set, and the set is sent to the node that is the destination of backtracking or backjumping. This algorithm works because the set maintained in a node collects all variables that are relevant to prove unsatisfiability in the leaves that are descendants of this node. Since sets of variables are only sent when retracting from nodes, the sets collected at nodes skipped by backjumping are automatically ignored.
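The send-back bookkeeping above can be sketched as follows, assuming a binary CSP in which <code>neighbors[v]</code> lists the earlier variables sharing a constraint with <code>v</code> (both names are illustrative, not a standard API):

```python
# Graph-based backjumping bookkeeping at a dead end: merge the variables
# in constraint with the dead-end variable into its conflict set, jump
# to the deepest variable in that set, and hand the rest of the set over.
def graph_based_jump(dead_end_var, neighbors, conflict_sets):
    conflict_sets[dead_end_var].update(neighbors[dead_end_var])
    if not conflict_sets[dead_end_var]:
        return None                        # nothing to jump to: unsatisfiable
    target = max(conflict_sets[dead_end_var])
    # the destination absorbs the set, minus its own variable
    conflict_sets[target].update(conflict_sets[dead_end_var] - {target})
    conflict_sets[dead_end_var].clear()    # skipped nodes are never retracted
    return target
```

For example, if variable 3 dead-ends and is constrained with variables 0 and 2, the jump goes to 2, and 2's conflict set now records that 0 may still be responsible.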
===Conflict-based backjumping===
Conflict-based backjumping ({{A.k.a.}} conflict-directed backjumping) is a more refined algorithm that is sometimes able to achieve larger backjumps. It is based on checking not only the common presence of two variables in the same constraint, but also whether the constraint actually caused an inconsistency. In particular, this algorithm collects one of the violated constraints at every leaf. At every node, the highest index of a variable occurring in one of the constraints collected at the leaves is a safe jump.
While the violated constraint chosen at each leaf does not affect the safety of the resulting jump, choosing constraints over variables of the lowest possible indices increases the height of the jump. For this reason, conflict-based backjumping orders constraints in such a way that constraints over lower-index variables are preferred over constraints over higher-index variables.
Formally, a constraint is preferred over another if the highest index of a variable it contains is lower than that of the other constraint, with ties broken by comparing the next-highest indices, and so on.
At a leaf dead end, the algorithm chooses, for every value of <math>x_{k+1}</math>, a most preferred constraint violated by that value, and collects the indices of the variables occurring in the chosen constraints; the highest collected index is then a safe jump.
In practice, this algorithm is simplified by collecting all indices in a single set, instead of creating a separate set for every value of <math>x_{k+1}</math>.
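In sketch form, the single-set variant looks as follows; <code>violated_constraint</code> is a hypothetical helper returning the earlier-variable indices of one constraint falsified by extending the assignment with a given value:

```python
# Conflict-directed backjumping at a leaf dead end: every value of the
# current variable contributes one violated constraint; the jump target
# is the deepest earlier variable mentioned in any of them.
def cbj_target(assignment, values, violated_constraint):
    conflict_set = set()
    for v in values:                      # each value hits a dead end
        conflict_set |= violated_constraint(assignment, v)
    return max(conflict_set) if conflict_set else None
```

The single conflict set also lends itself to being merged upward at internal nodes, exactly as in the graph-based scheme, but carrying only variables that actually caused a violation.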
Conflict-directed backjumping was proposed for constraint satisfaction problems by Patrick Prosser in his seminal 1993 paper.<ref>Prosser, Patrick (1993). [http://cse.unl.edu/~choueiry/Documents/Hybrid-Prosser.pdf "Hybrid Algorithms for the Constraint Satisfaction Problem"] (PDF). ''Computational Intelligence''. '''9''' (3).</ref>
==See also==
* Backtracking
==References==
{{reflist}}
==Bibliography==
* {{cite book
| first=Rina
| last=Dechter
| title=Constraint Processing
| publisher=Morgan Kaufmann
| url=https://archive.org/details/constraintproces00rina
| year=2003
| isbn=1-55860-890-7
| url-access=registration
}}
* {{cite journal
| first=Patrick
| last=Prosser
| title=Hybrid Algorithms for the Constraint Satisfaction Problem
| journal=Computational Intelligence
| volume=9
| issue=3
| url=http://cse.unl.edu/~choueiry/Documents/Hybrid-Prosser.pdf
| year=1993
}}