Pointer jumping
{{Short description|Design technique}}
{{More citations needed|date=December 2019}}
'''Pointer jumping''' or '''path doubling''' is a design technique for parallel algorithms that operate on pointer structures, such as linked lists and directed graphs. Pointer jumping allows an algorithm to follow paths with a time complexity that is logarithmic with respect to the length of the longest path. It does this by "jumping" to the end of the path computed by neighbors.
The basic operation of pointer jumping is to replace each neighbor in a pointer structure with its neighbor's neighbor. In each step of the algorithm, this replacement is done for all nodes in the data structure, which can be done independently in parallel. In the next step when a neighbor's neighbor is followed, the neighbor's path already followed in the previous step is added to the node's followed path in a single step. Thus, each step effectively doubles the distance traversed by the explored paths.
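The doubling effect of this replacement can be sketched with a minimal Python example (illustrative only, not from the article), in which nodes are array indices and a successor pointer of {{mono|None}} marks the end of a path:

```python
def jump_step(nxt):
    """One synchronous pointer-jumping step: every node's successor is
    replaced by its successor's successor. Building a new list means all
    reads see the old pointers, mimicking a lock-step parallel update."""
    return [nxt[nxt[i]] if nxt[i] is not None else None
            for i in range(len(nxt))]

# A path 0 -> 1 -> 2 -> 3 -> 4; each step doubles the jump distance.
ptrs = [1, 2, 3, 4, None]
ptrs = jump_step(ptrs)  # now 0 -> 2, 1 -> 3, 2 -> 4
ptrs = jump_step(ptrs)  # now 0 -> 4: the end is reached in 2 steps, not 4
```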
Pointer jumping is best understood by looking at simple examples such as list ranking and root finding.
==List ranking==
One of the simpler tasks that can be solved by a pointer jumping algorithm is the list ranking problem. This problem is defined as follows: given a linked list of {{mvar|N}} nodes, find the distance (measured in the number of nodes) of each node to the end of the list. The distance {{mono|d(n)}} is defined as follows, for nodes {{mono|n}} that point to their successor by a pointer called {{mono|next}}:
- If {{mono|n.next}} is a null pointer, then {{mono|d(n) {{=}} 0}}.
- For any other node, {{mono|d(n) {{=}} d(n.next) + 1}}.
This problem can easily be solved in linear time on a sequential machine, but a parallel algorithm can do better: given {{mvar|N}} processors, the problem can be solved in logarithmic time, {{math|O(log N)}}, by the following pointer jumping algorithm:<ref name="clrs">{{Introduction to Algorithms|edition=2}}</ref>{{rp|693}}
{{framebox|blue}}
- Allocate an array of {{mvar|N}} integers.
- Initialize: for each processor/list node {{mvar|n}}, in parallel:
- If {{mono|n.next {{=}} nil}}, set {{mono|d[n] ← 0}}.
- Else, set {{mono|d[n] ← 1}}.
- While any node {{mono|n}} has {{mono|n.next ≠ nil}}:
- For each processor/list node {{mvar|n}}, in parallel:
- If {{mono|n.next ≠ nil}}:
- Set {{mono|d[n] ← d[n] + d[n.next]}}.
- Set {{mono|n.next ← n.next.next}}.
{{frame-footer}}
The pointer jumping occurs in the last line of the algorithm, where each node's {{mono|next}} pointer is reset to skip the node's direct successor. It is assumed, as is common in the PRAM model of computation, that memory accesses are performed in lock-step, so that each {{mono|n.next.next}} memory fetch is performed before each {{mono|n.next}} memory store; otherwise, processors may clobber each other's data, producing inconsistencies.{{r|clrs}}{{rp|694}}
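A sequential Python simulation of this algorithm (a sketch, not from the article) makes the lock-step semantics explicit by snapshotting all values before any "processor" writes:

```python
def list_rank(next_ptr):
    """next_ptr[i] is the index of node i's successor, or None at the
    end of the list. Returns each node's distance to the end."""
    n = len(next_ptr)
    # Initialize: rank 0 for the last node, 1 for all others.
    d = [0 if next_ptr[i] is None else 1 for i in range(n)]
    nxt = list(next_ptr)
    while any(p is not None for p in nxt):
        # Snapshot old values so every "processor" reads before any writes,
        # as the PRAM lock-step assumption requires.
        d_old, nxt_old = list(d), list(nxt)
        for i in range(n):
            if nxt_old[i] is not None:
                d[i] = d_old[i] + d_old[nxt_old[i]]
                nxt[i] = nxt_old[nxt_old[i]]
    return d

# A 5-node list 0 -> 1 -> 2 -> 3 -> 4:
print(list_rank([1, 2, 3, 4, None]))  # [4, 3, 2, 1, 0]
```

The outer `while` loop runs only {{math|O(log N)}} times; on a real PRAM the inner `for` loop would run as {{mvar|N}} parallel processors rather than sequentially.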
The following diagram shows how the parallel list ranking algorithm uses pointer jumping for a linked list with 11 elements. As the algorithm describes, ranks are initialized to 1 for every node except those whose {{mono|next}} pointer is nil, which get rank 0. The first iteration looks at immediate neighbors. Each subsequent iteration jumps twice as far as the previous.
An example of performing the parallel pointer jumping technique to compute list ranking.
Analyzing the algorithm yields a logarithmic running time. The initialization loop takes constant time, because each of the {{mvar|N}} processors performs a constant amount of work, all in parallel. The inner loop of the main loop also takes constant time, as does (by assumption) the termination check for the loop, so the running time is determined by how often this inner loop is executed. Since the pointer jumping in each iteration splits the list into two parts, one consisting of the "odd" elements and one of the "even" elements, the length of the list pointed to by each processor's {{mono|n}} is halved in each iteration, which can happen at most {{math|O(log N)}} times before each list has a length of at most one.{{r|clrs}}{{rp|694–695}}
==Root finding==
{{Unreferenced section|date=January 2020}}
Following a path in a graph is an inherently serial operation, but pointer jumping reduces the total amount of work by following all paths simultaneously and sharing results among dependent operations. Each iteration of pointer jumping finds, for every vertex, a successor that is closer to the tree root. Because each vertex follows successors already computed for other vertices, the distance covered along each path doubles every iteration, which means that the tree roots can be found in logarithmic time.
Pointer doubling operates on an array {{mono|successor}} with an entry for every vertex in the graph. Each {{mono|successor[i]}} is initialized with the parent index of vertex {{mono|i}} if that vertex is not a root, or to {{mono|i}} itself if that vertex is a root. At each iteration, each successor is updated to its successor's successor. The root is found when the successor's successor points to itself.
The following pseudocode demonstrates the algorithm.
{{framebox|blue}}
- '''Input:''' An array {{mono|parent}} representing a forest of trees, where {{mono|parent[i]}} is the parent of vertex {{mono|i}}, or {{mono|i}} itself for a root.
- '''Output:''' An array containing the root ancestor of every vertex.
- For {{mono|i ← 1}} to {{mono|length(parent)}}, in parallel:
- Set {{mono|successor[i] ← parent[i]}}.
- Repeat:
- For {{mono|i ← 1}} to {{mono|length(successor)}}, in parallel:
- Set {{mono|successor_next[i] ← successor[successor[i]]}}.
- If {{mono|successor_next {{=}} successor}}, stop.
- For {{mono|i ← 1}} to {{mono|length(successor)}}, in parallel:
- Set {{mono|successor[i] ← successor_next[i]}}.
- Return {{mono|successor}}.
{{frame-footer}}
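A sequential Python simulation of the root-finding pseudocode (a sketch, not from the article) might look as follows; building {{mono|successor_next}} as a fresh list plays the role of the synchronous parallel update:

```python
def find_roots(parent):
    """parent[i] is the parent of vertex i, with parent[r] == r for each
    root r. Returns the root ancestor of every vertex."""
    successor = list(parent)
    while True:
        # Every vertex jumps to its successor's successor, all based on
        # the same snapshot, as in the synchronous parallel loop.
        successor_next = [successor[successor[i]]
                          for i in range(len(successor))]
        if successor_next == successor:  # all vertices reached a root
            return successor
        successor = successor_next

# A forest of two trees: 0 <- 1 <- 2 <- 3 and 4 <- 5.
print(find_roots([0, 0, 1, 2, 4, 4]))  # [0, 0, 0, 0, 4, 4]
```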
The following image provides an example of using pointer jumping on a small forest. On each iteration, each vertex's successor is replaced by its successor's successor, doubling the distance covered along the path to the root. After two iterations, every vertex points to its root node.
==History and examples==
Although the name ''pointer jumping'' came later, JáJá<ref name="JaJa">{{cite book |first=Joseph |last=JáJá |title=An Introduction to Parallel Algorithms |publisher=Addison Wesley |year=1992 |isbn=0-201-54856-9}}</ref>{{rp|88}} attributes the first uses of the technique to early parallel graph algorithms<ref>{{cite book |last1=Hirschberg |first1=D. S. |title=Proceedings of the eighth annual ACM symposium on Theory of computing - STOC '76 |chapter=Parallel algorithms for the transitive closure and the connected component problems |date=1976 |pages=55–57 |doi=10.1145/800113.803631 |s2cid=306043}}</ref><ref>{{cite thesis |last=Savage |first=Carla Diane |date=1977 |title=Parallel Algorithms for Graph Theoretic Problems |publisher=University of Illinois at Urbana-Champaign |url=https://apps.dtic.mil/docs/citations/ADA056888 |archive-url=https://web.archive.org/web/20220601073719/https://apps.dtic.mil/docs/citations/ADA056888 |url-status=live |archive-date=June 1, 2022}}</ref>{{rp|43}} and list ranking.<ref>{{cite thesis |last=Wyllie |first=James C. |date=1979 |title=The Complexity of Parallel Computations |chapter=Chapter 4: Computational Structures |publisher=Cornell University |chapter-url=https://ecommons.cornell.edu/handle/1813/7502}}</ref> The technique has been described under other names, such as shortcutting,<ref name="Shiloach">{{cite journal |last1=Shiloach |first1=Yossi |last2=Vishkin |first2=Uzi |date=1982 |title=An O(log n) Parallel Connectivity Algorithm |journal=Journal of Algorithms |volume=3 |issue=1 |pages=57–67 |doi=10.1016/0196-6774(82)90008-6}}</ref><ref name="Tarjan">{{cite book |last1=Tarjan |first1=Robert E |last2=Vishkin |first2=Uzi |title=25th Annual Symposium on Foundations of Computer Science, 1984 |chapter=Finding biconnected components and computing tree functions in logarithmic parallel time |date=1984 |pages=12–20 |doi=10.1109/SFCS.1984.715896 |isbn=0-8186-0591-X}}</ref> but by the 1990s textbooks on parallel algorithms consistently used the term pointer jumping.{{r|JaJa}}{{rp|52-56}}{{r|clrs}}{{rp|692-701}}<ref>{{cite book |last=Quinn |first=Michael J. |date=1994 |title=Parallel Computing: Theory and Practice |edition=2 |publisher=McGraw-Hill |isbn=0-07-051294-9}}</ref>{{rp|34-35}} Today, pointer jumping is considered a software design pattern for operating on recursive data types in parallel.<ref>{{cite book |last1=Mattson |first1=Timothy G. |last2=Sanders |first2=Beverly A. |last3=Massingill |first3=Berna L. |date=2005 |title=Patterns for Parallel Programming |publisher=Addison-Wesley |isbn=0-321-22811-1}}</ref>{{rp|99}}
Because pointer jumping is a technique for following linked paths, graph algorithms are a natural fit, and several parallel graph algorithms using pointer jumping have been designed. These include algorithms for finding the roots of a forest of rooted trees,{{r|JaJa}}{{rp|52-53}}{{r|Shiloach}} connected components,{{r|JaJa}}{{rp|213-221}} minimum spanning trees,{{r|JaJa}}{{rp|222-227}}<ref>{{cite book |last1=Chung |first1=Sun |last2=Condon |first2=Anne |title=Proceedings of International Conference on Parallel Processing |chapter=Parallel implementation of Boruvka's minimum spanning tree algorithm |date=1996 |pages=302–308 |doi=10.1109/IPPS.1996.508073 |isbn=0-8186-7255-2 |s2cid=12710022}}</ref> and biconnected components.{{r|JaJa}}{{rp|227-239}}{{r|Tarjan}} However, pointer jumping has also been shown to be useful in a variety of other problems, including computer vision,<ref>{{cite journal |last1=Little |first1=James J. |last2=Blelloch |first2=Guy E. |last3=Cass |first3=Todd A. |date=1989 |title=Algorithmic Techniques for Computer Vision on a Fine-Grained Parallel Machine |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=11 |issue=3 |pages=244–257 |doi=10.1109/34.21793}}</ref> image compression,<ref>{{cite book |last1=Cook |first1=Gregory W. |last2=Delp |first2=Edward J. |title=Proceedings of ICASSP '94. IEEE International Conference on Acoustics, Speech and Signal Processing |chapter=An investigation of JPEG image and video compression using parallel processing |year=1994 |pages=437–440 |doi=10.1109/ICASSP.1994.389394 |isbn=0-7803-1775-0 |s2cid=8879246}}</ref> and Bayesian inference.<ref>{{cite conference |last1=Namasivayam |first1=Vasanth Krishna |last2=Prasanna |first2=Viktor K. |date=2006 |title=Scalable Parallel Implementation of Exact Inference in Bayesian Networks |conference=12th International Conference on Parallel and Distributed Systems (ICPADS'06) |pages=8 pp |doi=10.1109/ICPADS.2006.96 |isbn=0-7695-2612-8 |s2cid=15728730}}</ref>
==References==
{{reflist}}
{{Parallel computing}}