Depth-first search

{{short description|Search algorithm}}

{{Refimprove|date=July 2010}}

{{Infobox algorithm

|class=Search algorithm

|image=Depth-first-tree.svg

|caption=A tree labeled by the order in which DFS expands its nodes

|data=Graph

|time=O(|V| + |E|) for explicit graphs traversed without repetition, O(b^d) for implicit graphs with branching factor b searched to depth d

|space=O(|V|) if entire graph is traversed without repetition, O(longest path length searched) = O(bd) for implicit graphs without elimination of duplicate nodes

|complete=yes (unless infinite paths are possible)

|optimal=no (does not generally find shortest paths)

}}

Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually a stack, is needed to keep track of the nodes discovered so far along a given branch, which is what enables backtracking.

A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux<ref>Charles Pierre Trémaux (1859–1882), École polytechnique of Paris (X:1876), French engineer of the telegraph; public conference, December 2, 2010, by professor Jean Pelletier-Thibert in Académie de Macon (Burgundy – France) (abstract published in the academic annals, March 2011 – {{ISSN|0980-6032}})</ref> as a strategy for solving mazes.{{citation|title=Graph Algorithms|first=Shimon|last=Even|author-link=Shimon Even|edition=2nd|publisher=Cambridge University Press|year=2011|isbn=978-0-521-73653-4|pages=46–48|url=https://books.google.com/books?id=m3QTSMYm5rkC&pg=PA46}}.{{citation|title=Algorithms in C++: Graph Algorithms|first=Robert|last=Sedgewick|edition=3rd|publisher=Pearson Education|year=2002|isbn=978-0-201-36118-6}}.

Properties

The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time {{nowrap|1=O(|V| + |E|)}},<ref>Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest, p. 606.</ref> where |V| is the number of vertices and |E| the number of edges. This is linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce.

For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level.
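To make the depth-limited and iterative-deepening strategies concrete, the following is a minimal Python sketch; the adjacency-list graph, the function names, and the default depth limit are illustrative assumptions rather than part of any standard formulation.

<syntaxhighlight lang="python">
# Minimal sketch of depth-limited DFS and iterative deepening (illustrative only).
# The graph is assumed to be a dict mapping each node to a list of neighbors.

def depth_limited_dfs(graph, node, goal, limit, path=None):
    """Search for `goal` at most `limit` edges away from `node`; return a path or None."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbor in graph.get(node, []):
        if neighbor not in path:  # only the current path blocks revisits
            result = depth_limited_dfs(graph, neighbor, goal, limit - 1, path + [neighbor])
            if result is not None:
                return result
    return None

def iterative_deepening_dfs(graph, start, goal, max_depth=20):
    """Repeat depth-limited DFS with increasing limits until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit)
        if result is not None:
            return result
    return None

# Example: on this cyclic graph, DFS without a visited set could loop forever,
# but the depth limit guarantees termination of each iteration.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['A']}
print(iterative_deepening_dfs(graph, 'A', 'D'))  # ['A', 'B', 'D']
</syntaxhighlight>

Each call to depth_limited_dfs terminates even on cyclic or infinite graphs, because the recursion stops once the limit reaches zero.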

DFS may also be used to collect a sample of graph nodes. However, incomplete DFS, similarly to incomplete BFS, is biased towards nodes of high degree.

Example

File:Depth-First-Search.gif

For the following graph:

File:graph.traversal.example.svg

a depth-first search starting at the node A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory.

Performing the same search without remembering previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G.

Iterative deepening is one technique to avoid this infinite loop and would reach all nodes.

Vertex orderings

It is also possible to use depth-first search to linearly order the vertices of a graph or tree. There are four possible ways of doing this:

  • A preordering is a list of the vertices in the order that they were first visited by the depth-first search algorithm. This is a compact and natural way of describing the progress of the search, as was done earlier in this article. A preordering of an expression tree is the expression in Polish notation.
  • A postordering is a list of the vertices in the order that they were last visited by the algorithm. A postordering of an expression tree is the expression in reverse Polish notation.
  • A reverse preordering is the reverse of a preordering, i.e. a list of the vertices in the opposite order of their first visit. Reverse preordering is not the same as postordering.
  • A reverse postordering is the reverse of a postordering, i.e. a list of the vertices in the opposite order of their last visit. Reverse postordering is not the same as preordering.

For binary trees there are additionally in-ordering and reverse in-ordering.
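The following minimal Python sketch illustrates these orderings on a small expression tree; the Node class and the example expression are assumptions made for the illustration.

<syntaxhighlight lang="python">
# Illustrative sketch: pre-, in-, and post-order traversals of a small expression tree.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node):   # Polish (prefix) notation
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):    # conventional infix order (binary trees only)
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):  # reverse Polish (postfix) notation
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]

# The expression (3 + 4) * 5 as a tree:
expr = Node('*', Node('+', Node('3'), Node('4')), Node('5'))
print(preorder(expr))   # ['*', '+', '3', '4', '5']  – Polish notation
print(inorder(expr))    # ['3', '+', '4', '*', '5']
print(postorder(expr))  # ['3', '4', '+', '5', '*']  – reverse Polish notation
</syntaxhighlight>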

For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). Note that repeat visits in the form of backtracking to a node, to check whether it still has unvisited neighbors, are included here (even if it is found to have none). Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D.

A directed graph with edges AB, BD, AC, CD

Reverse postordering produces a topological sorting of any directed acyclic graph. This ordering is also useful in control-flow analysis as it often represents a natural linearization of the control flows. The graph above might represent the flow of control in the code fragment below, and it is natural to consider this code in the order A B C D or A C B D but not natural to use the order A B D C or A C D B.

if (A) then {

B

} else {

C

}

D
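A short Python sketch of this observation, using the graph with edges A→B, A→C, B→D, C→D from above (the adjacency-list encoding is an assumption of the example): it records each vertex when it is first and last visited, and reverses the postordering to obtain a topological order.

<syntaxhighlight lang="python">
# Illustrative sketch: preordering, postordering and reverse postordering of the
# directed graph with edges A->B, A->C, B->D, C->D.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

preorder, postorder = [], []
visited = set()

def dfs(v):
    visited.add(v)
    preorder.append(v)         # first visit
    for w in graph[v]:
        if w not in visited:
            dfs(w)
    postorder.append(v)        # last visit, after all descendants

dfs('A')
print(preorder)                # ['A', 'B', 'D', 'C']
print(postorder)               # ['D', 'B', 'C', 'A']
print(postorder[::-1])         # ['A', 'C', 'B', 'D'] – a topological order of the DAG
</syntaxhighlight>

The reverse postordering produced here, A C B D (or A B C D if C's subtree were explored first), is exactly one of the natural linearizations of the control-flow fragment above.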

Pseudocode

{{Tree traversal demo|method=pre-order|noselectmethod=1|caption=Interactive depth-first search demonstration}}

A recursive implementation of DFS:<ref>Goodrich and Tamassia; Cormen, Leiserson, Rivest, and Stein.</ref>

procedure DFS(G, v) is
    label v as discovered
    for all directed edges from v to w that are in G.adjacentEdges(v) do
        if vertex w is not labeled as discovered then
            recursively call DFS(G, w)
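A direct, runnable Python translation of this pseudocode might look as follows; the adjacency-list graph and the discovered set are assumptions of the sketch rather than part of the pseudocode's interface.

<syntaxhighlight lang="python">
# Recursive DFS, a sketch mirroring the pseudocode above.
def dfs(graph, v, discovered=None):
    if discovered is None:
        discovered = set()
    discovered.add(v)                    # label v as discovered
    print(v)                             # or any other processing of v
    for w in graph[v]:                   # edges from v to w in G.adjacentEdges(v)
        if w not in discovered:
            dfs(graph, w, discovered)    # recursively call DFS(G, w)
    return discovered
</syntaxhighlight>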

A non-recursive implementation of DFS with worst-case space complexity O(|E|), with the possibility of duplicate vertices on the stack:<ref>Kleinberg and Tardos, Algorithm Design, p. 93.</ref>

procedure DFS_iterative(G, v) is
    let S be a stack
    S.push(v)
    while S is not empty do
        v = S.pop()
        if v is not labeled as discovered then
            label v as discovered
            for all edges from v to w in G.adjacentEdges(v) do
                S.push(w)
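For comparison, a runnable Python sketch of this stack-based variant (again assuming an adjacency-list graph):

<syntaxhighlight lang="python">
# Iterative DFS with an explicit stack; duplicates may appear on the stack.
def dfs_iterative(graph, v):
    discovered = set()
    stack = [v]
    order = []
    while stack:
        v = stack.pop()
        if v not in discovered:     # check is delayed until the vertex is popped
            discovered.add(v)
            order.append(v)
            for w in graph[v]:      # neighbors are pushed in list order,
                stack.append(w)     # so they are popped in reverse order
    return order
</syntaxhighlight>

Because neighbors are pushed in list order and popped in reverse, the first-listed neighbor of a vertex is visited last, which is the reversed neighbor order discussed below.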

File:Graph.traversal.example.svg

These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor of v visited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G.

The non-recursive implementation is similar to breadth-first search but differs from it in two ways:

  1. it uses a stack instead of a queue, and
  2. it delays checking whether a vertex has been discovered until the vertex is popped from the stack rather than making this check before adding the vertex.

If {{mvar|G}} is a tree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one.{{Cite web|title=Stack-based graph traversal ≠ depth first search|url=https://11011110.github.io/blog/2013/12/17/stack-based-graph-traversal.html|access-date=2020-06-10|website=11011110.github.io}}
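One way to see this relationship is to parameterize the traversal by its worklist; in the hedged sketch below (the function name and flag are illustrative), a LIFO stack reproduces the iterative DFS above, while a FIFO queue yields the nonstandard breadth-first variant just described.

<syntaxhighlight lang="python">
from collections import deque

# Generic worklist-based traversal: a LIFO worklist gives the iterative DFS above,
# a FIFO worklist gives a (somewhat nonstandard) breadth-first search.
def traverse(graph, start, lifo=True):
    discovered = set()
    bag = deque([start])
    order = []
    while bag:
        v = bag.pop() if lifo else bag.popleft()
        if v not in discovered:
            discovered.add(v)
            order.append(v)
            bag.extend(graph[v])
    return order
</syntaxhighlight>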

Another possible implementation of iterative depth-first search uses a stack of iterators of the list of neighbors of a node, instead of a stack of nodes. This yields the same traversal as recursive DFS.{{Cite book|last=Sedgewick, Robert|url=http://worldcat.org/oclc/837386973|title=Algorithms in Java.|date=2010|publisher=Addison-Wesley|isbn=978-0-201-36121-6|oclc=837386973}}

procedure DFS_iterative(G, v) is
    let S be a stack
    label v as discovered
    S.push(iterator of G.adjacentEdges(v))
    while S is not empty do
        if S.peek().hasNext() then
            w = S.peek().next()
            if w is not labeled as discovered then
                label w as discovered
                S.push(iterator of G.adjacentEdges(w))
        else
            S.pop()
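A runnable Python sketch of this iterator-based variant, assuming an adjacency-list graph; it reproduces the visiting order of the recursive implementation.

<syntaxhighlight lang="python">
# Iterative DFS using a stack of neighbor iterators; visits nodes in the same
# order as the recursive implementation.
def dfs_with_iterators(graph, v):
    discovered = {v}
    order = [v]
    stack = [iter(graph[v])]       # stack of iterators, not of vertices
    while stack:
        try:
            w = next(stack[-1])    # peek the top iterator and advance it
        except StopIteration:
            stack.pop()            # top iterator exhausted: backtrack
            continue
        if w not in discovered:
            discovered.add(w)
            order.append(w)
            stack.append(iter(graph[w]))
    return order
</syntaxhighlight>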

Applications

File:MAZE 30x20 DFS.ogv

Algorithms that use depth-first search as a building block include:

  • Planarity testing.{{citation
| last1 = Hopcroft | first1 = John | author1-link = John Hopcroft

| last2 = Tarjan | first2 = Robert E. | author2-link = Robert Tarjan

| doi = 10.1145/321850.321852

| issue = 4

| journal = Journal of the Association for Computing Machinery

| pages = 549–568

| title = Efficient planarity testing

| volume = 21

| year = 1974| url = https://ecommons.cornell.edu/bitstream/1813/6011/1/73-165.pdf

| hdl = 1813/6011

| s2cid = 6279825 | hdl-access = free

}}.{{citation

| last1 = de Fraysseix | first1 = H.

| last2 = Ossona de Mendez | first2 = P. | author2-link = Patrice Ossona de Mendez

| last3 = Rosenstiehl | first3 = P. | author3-link = Pierre Rosenstiehl

| journal = International Journal of Foundations of Computer Science

| pages = 1017–1030

| title = Trémaux Trees and Planarity

| volume = 17

| year = 2006

| doi = 10.1142/S0129054106004248

| issue = 5| arxiv=math/0610935| bibcode = 2006math.....10935D

| s2cid = 40107560

}}.

  • Solving puzzles with only one solution, such as mazes. (DFS can be adapted to find all solutions to a maze by only including nodes on the current path in the visited set; a short sketch of this idea follows the list.)
  • Maze generation may use a randomized DFS.
  • Finding biconnectivity in graphs.
  • Succession to the throne shared by the Commonwealth realms.{{citation|last1=Baccelli|first1=Francois|last2=Haji-Mirsadeghi|first2=Mir-Omid|last3=Khezeli|first3=Ali|editor-last=Sobieczky|editor-first=Florian|contribution=Eternal family trees and dynamics on unimodular random graphs|doi=10.1090/conm/719/14471|location=Providence, Rhode Island|mr=3880014|pages=85–127|publisher=American Mathematical Society|series=Contemporary Mathematics|title=Unimodularity in Randomly Generated Graphs: AMS Special Session, October 8–9, 2016, Denver, Colorado|volume=719|year=2018|arxiv=1608.05940 |isbn=978-1-4704-3914-9 |s2cid=119173820 }}; see [https://books.google.com/books?id=7dV7DwAAQBAJ&pg=PA93 Example 3.7, p. 93]
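The following hedged Python sketch illustrates the adaptation mentioned in the first item of the list: by letting only the current path block revisits, DFS enumerates every simple path (every maze solution) from a start cell to a goal cell. The graph encoding and names are illustrative assumptions.

<syntaxhighlight lang="python">
# Sketch: enumerate all simple paths (all maze solutions) from start to goal
# by keeping only the current path in the "visited" set.
def all_paths(graph, node, goal, path=None):
    path = (path or []) + [node]
    if node == goal:
        yield path
        return
    for neighbor in graph.get(node, []):
        if neighbor not in path:       # only the current path blocks revisits
            yield from all_paths(graph, neighbor, goal, path)

# Example: two distinct routes from A to D in the small graph used earlier.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(list(all_paths(graph, 'A', 'D')))   # [['A', 'B', 'D'], ['A', 'C', 'D']]
</syntaxhighlight>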

Complexity

The computational complexity of DFS was investigated by John Reif. More precisely, given a graph {{mvar|G}}, let <math>O=(v_1,\dots,v_n)</math> be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. A decision version of the problem (testing whether some vertex {{mvar|u}} occurs before some vertex {{mvar|v}} in this order) is P-complete,{{Cite journal| doi = 10.1016/0020-0190(85)90024-9| title = Depth-first search is inherently sequential| journal = Information Processing Letters| volume = 20| issue = 5| year = 1985| last1 = Reif | first1 = John H. | pages = 229–234}} meaning that it is "a nightmare for parallel processing".{{cite book |last1=Mehlhorn |first1=Kurt |author1-link=Kurt Mehlhorn|first2=Peter |last2=Sanders|author2-link=Peter Sanders (computer scientist) |title=Algorithms and Data Structures: The Basic Toolbox |publisher=Springer |year=2008 |url=http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/GraphTraversal.pdf |archive-url=https://web.archive.org/web/20150908084757/http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/GraphTraversal.pdf |archive-date=2015-09-08 |url-status=live}}{{rp|189}}

A depth-first search ordering (not necessarily the lexicographic one) can be computed by a randomized parallel algorithm in the complexity class RNC.{{citation

| last1 = Aggarwal | first1 = A.

| last2 = Anderson | first2 = R. J.

| doi = 10.1007/BF02122548

| issue = 1

| journal = Combinatorica

| mr = 951989

| pages = 1–12

| title = A random NC algorithm for depth first search

| volume = 8

| year = 1988| s2cid = 29440871

}}. As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC.{{citation

| last1 = Karger | first1 = David R. | author1-link = David Karger

| last2 = Motwani | first2 = Rajeev | author2-link = Rajeev Motwani

| doi = 10.1137/S0097539794273083

| issue = 1

| journal = SIAM Journal on Computing

| mr = 1431256

| pages = 255–272

| title = An NC algorithm for minimum cuts

| volume = 26

| year = 1997| citeseerx = 10.1.1.33.1701}}.

See also

Notes

{{reflist}}

References

{{refbegin}}

  • {{citation
| last1=Goodrich

| first1=Michael T.

| author1-link=Michael T. Goodrich

| last2=Tamassia

| first2=Roberto

| author2-link = Roberto Tamassia

| title=Algorithm Design: Foundations, Analysis, and Internet Examples

| publisher=Wiley

| year=2001

| isbn=0-471-38365-1

}}

  • {{citation|title=Algorithm Design|first1=Jon|last1=Kleinberg|author1-link=Jon Kleinberg|first2=Éva|last2=Tardos|author2-link=Éva Tardos|publisher=Addison Wesley|year=2006|pages=92–94}}
  • {{Citation

| last=Knuth

| first=Donald E.

| author-link=Donald Knuth

| title=The Art of Computer Programming, Vol. 1 (3rd ed.)

| publisher=Addison-Wesley

| place=Boston

| year=1997

| isbn=0-201-89683-4

| url=http://www-cs-faculty.stanford.edu/~knuth/taocp.html

| oclc=155842391

| access-date=2008-02-12

| archive-date=2008-09-04

| archive-url=https://web.archive.org/web/20080904163709/http://www-cs-faculty.stanford.edu/~knuth/taocp.html

| url-status=dead

}}

{{refend}}