Shellsort
{{Short description|Sorting algorithm which uses multiple comparison intervals}}
{{Infobox Algorithm
|class=Sorting algorithm
|image=Step-by-step visualisation of Shellsort
|caption=Shellsort with gaps 23, 10, 4, 1 in action
|data=Array
|time=O(''n''<sup>2</sup>) (worst known worst-case gap sequence)
O(''n'' log<sup>2</sup>''n'') (best known worst-case gap sequence){{Cite book
|last=Pratt
|first=Vaughan Ronald |author-link=Vaughan Ronald Pratt
|year=1979
|publisher=Garland
|title=Shellsort and Sorting Networks (Outstanding Dissertations in the Computer Sciences)
|url=https://apps.dtic.mil/sti/pdfs/AD0740110.pdf
|archive-url=https://web.archive.org/web/20210907132436/https://apps.dtic.mil/sti/pdfs/AD0740110.pdf
|url-status=live
|archive-date=7 September 2021
|isbn=978-0-8240-4406-0}}
|best-time=O(''n'' log ''n'') (most gap sequences)
O(''n'' log<sup>2</sup>''n'') (best known worst-case gap sequence){{cite web |title=Shellsort & Comparisons |url=http://www.cs.wcupa.edu/rkline/ds/shell-comparison.html |access-date=14 November 2015 |archive-date=20 December 2019 |archive-url=https://web.archive.org/web/20191220040546/https://www.cs.wcupa.edu/rkline/ds/shell-comparison.html |url-status=dead }}
|average-time=depends on gap sequence
|space=O(''n'') total, O(1) auxiliary
|optimal=No
}}
[[File:Shell sorting algorithm color bars.svg|thumb]]
Shellsort, also known as Shell sort or Shell's method, is an in-place comparison sort. It can be understood as either a generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort). The method starts by sorting pairs of elements far apart from each other, then progressively reduces the gap between the elements to be compared. By starting with far-apart elements, it can move some out-of-place elements into position faster than a simple nearest-neighbor exchange.
The running time of Shellsort is heavily dependent on the gap sequence it uses. For many practical variants, determining their time complexity remains an open problem.
The algorithm was first published by Donald Shell in 1959, and has nothing to do with shells.{{Cite journal
|url=http://penguin.ewu.edu/cscd300/Topic/AdvSorting/p30-shell.pdf
|last=Shell
|first=D. L.
|title=A High-Speed Sorting Procedure
|journal=Communications of the ACM
|volume=2
|issue=7
|year=1959
|pages=30–32
|doi=10.1145/368370.368387
|s2cid=28572656
|access-date=18 October 2011
|archive-date=30 August 2017
|archive-url=https://web.archive.org/web/20170830020037/http://penguin.ewu.edu/cscd300/Topic/AdvSorting/p30-shell.pdf
|url-status=dead
}}Some older textbooks and references call this the "Shell–Metzner" sort after Marlene Metzner Norton, but according to Metzner, "I had nothing to do with the sort, and my name should never have been attached to it." See {{Cite web
|title=Shell sort
|publisher=National Institute of Standards and Technology
|url=https://xlinux.nist.gov/dads/HTML/shellsort.html
|access-date=2007-07-17 }}
== Description ==
Shellsort is an optimization of insertion sort that allows the exchange of items that are far apart. The idea is to arrange the list of elements so that, starting anywhere, taking every hth element produces a sorted list. Such a list is said to be h-sorted. It can also be thought of as h interleaved lists, each individually sorted.
{{Cite book
|last=Sedgewick
|first=Robert
|author-link=Robert Sedgewick (computer scientist)
|title=Algorithms in C
|edition=3rd
|volume=1
|publisher=Addison-Wesley
|year=1998
|pages=[https://archive.org/details/algorithmsinc00sedg/page/273 273–281]
|isbn=978-0-201-31452-6
|url-access=registration
|url=https://archive.org/details/algorithmsinc00sedg/page/273
}} Beginning with large values of h allows elements to move long distances in the original list, reducing large amounts of disorder quickly, and leaving less work for smaller h-sort steps to do.
{{Cite book
|last1=Kernighan
|first1=Brian W.
|author-link1=Brian Kernighan
|last2=Ritchie
|first2=Dennis M.
|author-link2=Dennis Ritchie
|title=The C Programming Language
|edition=2nd
|publisher=Prentice Hall
|year=1996
|pages=62
|isbn=978-7-302-02412-5
}} If the list is then k-sorted for some smaller integer k, then the list remains h-sorted. A final sort with h = 1 ensures the list is fully sorted at the end, but a judiciously chosen decreasing sequence of h values leaves very little work for this final pass to do.
In simpler terms: given an array of 1024 numbers, the first gap (h) could be 512. The first pass then compares each element in the first half with the corresponding element in the second half. The second gap (k) of 256 divides the array into four sections (starting at 0, 256, 512, 768), and the pass makes sure the first items in each section are sorted relative to each other, then the second items, and so on. In practice the gap sequence could be almost anything, but the last gap must be 1, so that the sort ends with an ordinary insertion sort.
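A single h-sorting pass is simply an insertion sort that compares elements h positions apart. The C function below is an illustrative sketch of such a pass (the name <code>hsort_pass</code> and its signature are chosen for this example and do not come from the cited sources):

<syntaxhighlight lang="c">
#include <stddef.h>

/* h-sort the n-element array a: afterwards, starting at any index and
   taking every h-th element yields a sorted subsequence.              */
static void hsort_pass(int a[], size_t n, size_t h)
{
    for (size_t i = h; i < n; i++) {
        int temp = a[i];              /* element to insert into its h-chain */
        size_t j = i;
        /* shift larger elements of the same chain h positions to the right */
        while (j >= h && a[j - h] > temp) {
            a[j] = a[j - h];
            j -= h;
        }
        a[j] = temp;                  /* drop the saved element into the hole */
    }
}
</syntaxhighlight>

Running this function with h = 512, 256, ..., 1 on a 1024-element array performs exactly the passes described above; Shellsort as a whole is nothing more than such passes applied with a decreasing sequence of gaps.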
An example run of Shellsort with gaps 5, 3 and 1 is shown below.
class="wikitable" style="text-align:center"
! ! {{mvar|a}}1 | {{mvar|a}}2 | {{mvar|a}}3 | {{mvar|a}}4
! {{mvar|a}}5 | {{mvar|a}}6 | {{mvar|a}}7 | {{mvar|a}}8
! {{mvar|a}}9 | {{mvar|a}}10 | {{mvar|a}}11 | {{mvar|a}}12 |
Input data
| 62 || 83 || 18 || 53 || 07 || 17 || 95 || 86 || 47 || 69 || 25 || 28 | |||||||||
---|---|---|---|---|---|---|---|---|---|
After 5-sorting
| 17 || 28 || 18 || 47 || 07 |bgcolor=lightcyan| 25 ||bgcolor=lightcyan| 83 ||bgcolor=lightcyan| 86 ||bgcolor=lightcyan| 53 ||bgcolor=lightcyan| 69 | 62 || 95 | |||||||||
After 3-sorting
| 17 || 07 || 18 |bgcolor=lightcyan| 47 ||bgcolor=lightcyan| 28 ||bgcolor=lightcyan| 25 | 69 || 62 || 53 |bgcolor=lightcyan| 83 ||bgcolor=lightcyan| 86 ||bgcolor=lightcyan| 95 | |||||||||
After 1-sorting
| 07 ||bgcolor=lightcyan| 17 || 18 ||bgcolor=lightcyan| 25 || 28 ||bgcolor=lightcyan| 47 || 53 ||bgcolor=lightcyan| 62 || 69 ||bgcolor=lightcyan| 83 || 86 ||bgcolor=lightcyan| 95 |
The first pass, 5-sorting, performs insertion sort on five separate subarrays (''a''<sub>1</sub>, ''a''<sub>6</sub>, ''a''<sub>11</sub>), (''a''<sub>2</sub>, ''a''<sub>7</sub>, ''a''<sub>12</sub>), (''a''<sub>3</sub>, ''a''<sub>8</sub>), (''a''<sub>4</sub>, ''a''<sub>9</sub>), (''a''<sub>5</sub>, ''a''<sub>10</sub>). For instance, it changes the subarray (''a''<sub>1</sub>, ''a''<sub>6</sub>, ''a''<sub>11</sub>) from (62, 17, 25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the three subarrays (''a''<sub>1</sub>, ''a''<sub>4</sub>, ''a''<sub>7</sub>, ''a''<sub>10</sub>), (''a''<sub>2</sub>, ''a''<sub>5</sub>, ''a''<sub>8</sub>, ''a''<sub>11</sub>), (''a''<sub>3</sub>, ''a''<sub>6</sub>, ''a''<sub>9</sub>, ''a''<sub>12</sub>). The last pass, 1-sorting, is an ordinary insertion sort of the entire array (''a''<sub>1</sub>, ..., ''a''<sub>12</sub>).
As the example illustrates, the subarrays that Shellsort operates on are initially short; later they are longer but almost ordered. In both cases insertion sort works efficiently.
Unlike insertion sort, Shellsort is not a stable sort since gapped insertions transport equal elements past one another and thus lose their original order. It is an adaptive sorting algorithm in that it executes faster when the input is partially sorted.
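A minimal illustration of the instability: in the array (3<sub>a</sub>, 3<sub>b</sub>, 1), where the subscripts merely distinguish two equal keys, the pass with gap 2 exchanges 3<sub>a</sub> with 1, giving (1, 3<sub>b</sub>, 3<sub>a</sub>); the final pass with gap 1 does not move equal keys past each other, so the two 3s finish in the opposite of their original order.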
== Pseudocode ==
Using Marcin Ciura's gap sequence, with an inner insertion sort.
 # Sort an array a[0...n-1].
 gaps = [701, 301, 132, 57, 23, 10, 4, 1]  # Ciura gap sequence
 # Start with the largest gap and work down to a gap of 1,
 # doing an insertion sort with stride gap instead of 1 at each step.
 foreach (gap in gaps)
 {
     # Do a gapped insertion sort for every element at positions gap, gap+1, ..., n-1.
     # Each pass leaves the whole array gap-sorted.
     for (i = gap; i < n; i += 1)
     {
         # save a[i] in temp and make a hole at position i
         temp = a[i]
         # shift earlier gap-sorted elements up until the correct location for a[i] is found
         for (j = i; (j >= gap) && (a[j - gap] > temp); j -= gap)
         {
             a[j] = a[j - gap]
         }
         # put temp (the original a[i]) in its correct location
         a[j] = temp
     }
 }
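The pseudocode translates almost line for line into C. The following sketch (an illustration for this article, not a reference implementation from the cited sources) sorts an array of <code>int</code> with the same Ciura gap sequence:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Shellsort an array of n ints using Ciura's gap sequence. */
void shellsort(int a[], size_t n)
{
    static const size_t gaps[] = {701, 301, 132, 57, 23, 10, 4, 1};

    for (size_t g = 0; g < sizeof gaps / sizeof gaps[0]; g++) {
        size_t gap = gaps[g];
        /* gapped insertion sort: elements gap apart are kept in order */
        for (size_t i = gap; i < n; i++) {
            int temp = a[i];
            size_t j = i;
            while (j >= gap && a[j - gap] > temp) {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = temp;
        }
    }
}
</syntaxhighlight>

With the gaps listed above, the largest pass always uses gap 701, which is still correct for arbitrarily large arrays but less effective; in practice the sequence would be extended with further gaps for very large inputs (see the next section).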
== Gap sequences ==
The question of which gap sequence to use is difficult. Every gap sequence that contains 1 yields a correct sort (as this makes the final pass an ordinary insertion sort), but the properties of the resulting versions of Shellsort may differ greatly: too few gaps slow down the passes, while too many gaps produce overhead.
The table below compares most gap sequences proposed so far. Some of them have decreasing elements that depend on the size of the sorted array (''N''). Others are increasing infinite sequences, whose elements less than ''N'' should be used in reverse order.
class="wikitable" |
style="background-color: #efefef;"
! OEIS ! General term (k ≥ 1) ! Concrete gaps ! Worst-case ! Author and year of publication |
---
| | | \left\lfloor\frac{N}{2}\right\rfloor | [e.g. when N = 2p] | Shell, 1959 |
---
| | | \; \; 2 \left\lfloor\frac{N}{4}\right\rfloor + 1 | | Frank & Lazarus, 1960{{Cite journal |last1=Frank |first1=R. M. |last2=Lazarus |first2=R. B. |title=A High-Speed Sorting Procedure |journal=Communications of the ACM |volume=3 |issue=1 |year=1960 |pages=20–22 |doi=10.1145/366947.366957|s2cid=34066017 |doi-access=free }} |
---
| {{OEIS link|A000225}} | | | | Hibbard, 1963{{Cite journal |last=Hibbard |first=Thomas N. |title=An Empirical Study of Minimal Storage Sorting |journal=Communications of the ACM |volume=6 |issue=5 |year=1963 |pages=206–213 |doi=10.1145/366552.366557|s2cid=12146844 |doi-access=free }} |
---
| {{OEIS link|A083318}} | , prefixed with 1 | | | Papernov & Stasevich, 1965{{Cite journal |url=http://www.mathnet.ru/links/83f0a81df1ec06f76d3683c6cab7d143/ppi751.pdf |last1=Papernov |first1=A. A. |last2=Stasevich |first2=G. V. |title=A Method of Information Sorting in Computer Memories |journal=Problems of Information Transmission |volume=1 |issue=3 |year=1965 |pages=63–75}} |
---
| {{OEIS link|A003586}} | Successive numbers of the form (3-smooth numbers) | | | Pratt, 1971 |
---
| {{OEIS link|A003462}} | , not greater than | | | Knuth, 1973,{{Cite book |last=Knuth |first=Donald E. |author-link=Donald Knuth |title=The Art of Computer Programming. Volume 3: Sorting and Searching |edition=2nd |publisher=Addison-Wesley |location=Reading, Massachusetts |year=1997 |pages=83–95 |chapter=Shell's method |isbn=978-0-201-89685-5}} based on Pratt, 1971 |
---
| {{OEIS link|A036569}} | &\prod\limits_I a_q, \hbox{where} \\ a_0 = {} &3 \\ a_q = {} &\min\left\{n \in \mathbb{N}\colon n \ge \left(\frac{5}{2}\right)^{q+1}, \forall p\colon 0 \le p < q \Rightarrow \gcd(a_p, n) = 1\right\} \\ I = {} &\left\{0 \le q < r \mid q \neq \frac{1}{2}\left(r^2 + r\right) - k \right\} \\ r = {} &\left\lfloor \sqrt{2k + \sqrt{2k}} \right\rfloor \end{align} | | | Incerpi & Sedgewick, 1985,{{Cite journal |last1=Incerpi |first1=Janet |last2=Sedgewick |first2=Robert |author2-link=Robert Sedgewick (computer scientist) |title=Improved Upper Bounds on Shellsort |journal=Journal of Computer and System Sciences |volume=31 |issue=2 |year=1985 |pages=210–224 |doi=10.1016/0022-0000(85)90042-x|url=https://hal.inria.fr/inria-00076291/file/RR-0267.pdf }} Knuth |
---
| {{OEIS link|A036562}} | , prefixed with 1 | | |
---
| {{OEIS link|A033622}} | 9\left(2^{k} - 2^\frac{k}{2}\right) + 1 & k\text{ even}, \\ 8 \cdot 2^{k} - 6 \cdot 2^{(k+1)/2} + 1 & k\text{ odd} \end{cases} | | {{Cite journal |last=Sedgewick |first=Robert |author-link=Robert Sedgewick (computer scientist) |title=A New Upper Bound for Shellsort |journal=Journal of Algorithms |volume=7 |issue=2 |year=1986 |pages=159–173 |doi=10.1016/0196-6774(86)90001-5 }} |
---
| | | \left\lfloor \frac{5N-1}{11} \right\rfloor | {{unk}} | Gonnet & Ricardo Baeza-Yates, 1991{{Cite book |last1=Gonnet |first1=Gaston H. |last2=Baeza-Yates |first2=Ricardo |title=Handbook of Algorithms and Data Structures: In Pascal and C |publisher=Addison-Wesley |location=Reading, Massachusetts |edition=2nd |year=1991 |pages=161–163 |chapter=Shellsort |isbn=978-0-201-41607-7 |quote=Extensive experiments indicate that the sequence defined by {{math|1=α = 0.45454 < 5/11}} performs significantly better than other sequences. The easiest way to compute {{math|{{floor|0.45454n}}}} is by }} |
---
| {{OEIS link|A108870}} | (or equivalently, ) | | {{unk}} | Tokuda, 1992{{Cite book |editor-last=van Leeuven |editor-first=Jan |chapter=An Improved Shellsort |last=Tokuda |first=Naoyuki |title=Proceedings of the IFIP 12th World Computer Congress on Algorithms, Software, Architecture |publisher=North-Holland Publishing Co. |location=Amsterdam |year=1992 |pages=449–457 |isbn=978-0-444-89747-3}} (misquote per OEIS) |
---
| {{OEIS link|A102549}} | Unknown (experimentally derived) | | {{unk}} | Ciura, 2001{{Cite book |chapter-url=http://sun.aei.polsl.pl/~mciura/publikacje/shellsort.pdf |archive-url=https://web.archive.org/web/20180923235211/http://sun.aei.polsl.pl/~mciura/publikacje/shellsort.pdf |archive-date=23 September 2018 |title=Proceedings of the 13th International Symposium on Fundamentals of Computation Theory |editor-last=Freiwalds |editor-first=Rusins |last=Ciura |first=Marcin |chapter=Best Increments for the Average Case of Shellsort |publisher=Springer-Verlag |location=London |year=2001 |pages=106–117 |isbn=978-3-540-42487-1}} |
---
| {{OEIS link|A366726}} | | | {{unk}} | Lee, 2021{{cite arXiv | last = Lee | first = Ying Wai | eprint = 2112.11112 | title = Empirically Improved Tokuda Gap Sequence in Shellsort | class = cs.DS | date = 21 December 2021 }} |
---
| | | | {{unk}} | Skean, Ehrenborg, Jaromczyk, 2023{{cite arXiv | first1 = Oscar | last1 = Skean | first2 = Richard | last2 = Ehrenborg | first3 = Jerzy W. | last3 = Jaromczyk | eprint = 2301.00316 | title = Optimization Perspectives on Shellsort | class = cs.DS | date = 1 Jan 2023 }} |
When the binary representation of ''N'' contains many consecutive zeroes, Shellsort using Shell's original gap sequence makes Θ(''N''<sup>2</sup>) comparisons in the worst case. For instance, this case occurs for ''N'' equal to a power of two when elements greater and smaller than the median occupy odd and even positions respectively, since they are compared only in the last pass.
Although it has higher complexity than the O(''N'' log ''N'') that is optimal for comparison sorts, Pratt's version lends itself to sorting networks and has the same asymptotic gate complexity as Batcher's bitonic sorter.
Gonnet and Baeza-Yates observed that Shellsort makes the fewest comparisons on average when the ratios of successive gaps are roughly equal to 2.2. This is why their sequence with ratio 2.2 and Tokuda's sequence with ratio 2.25 prove efficient. However, it is not known why this is so. Sedgewick recommends using gaps which have low greatest common divisors or are pairwise coprime.{{Cite book
|title=Algorithms in C++, Parts 1–4: Fundamentals, Data Structure, Sorting, Searching
|last=Sedgewick
|first=Robert |author-link=Robert Sedgewick (computer scientist)
|chapter=Shellsort
|publisher=Addison-Wesley
|location=Reading, Massachusetts
|year=1998
|pages=285–292
|isbn=978-0-201-35088-3}}{{Fv|date=February 2021|reason=There's a lot of discussion of divisibility, but I couldn't find this explicitly stated. E.g. p. 289 says "The increment sequences that we have discussed to this point are effective because successive elements are relatively prime. Another family of increment sequences is effective precisely because successive elements are not relatively prime."}} Gaps which are odd numbers seem to work well in practice: 25% reductions have been observed by avoiding even-numbered gaps. Gaps which avoid multiples of 3 and 5 seem to produce small benefits of < 10%.{{OR|date=May 2023}}
With respect to the average number of comparisons, Ciura's sequence has the best known performance; gaps greater than 701 were not determined, but the sequence can be extended further according to the recursive formula {{math|1=''h''<sub>''k''</sub> = ⌊2.25''h''<sub>''k''−1</sub>⌋}}.
Tokuda's sequence, defined by the simple formula {{math|1=''h''<sub>''k''</sub> = ⌈''h''′<sub>''k''</sub>⌉}}, where {{math|1=''h''′<sub>''k''</sub> = 2.25''h''′<sub>''k''−1</sub> + 1}}, {{math|1=''h''′<sub>1</sub> = 1}}, can be recommended for practical applications.
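To show how an increasing formula such as Tokuda's is turned into the concrete, decreasing gap list that Shellsort needs (the elements below ''N'' used in reverse order, as described above), here is a C sketch; the helper name <code>tokuda_gaps</code> and its interface are invented for this illustration:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Fill gaps[] with Tokuda's increments that are smaller than n,
   largest first, and return how many were produced.
   Recurrence (assumed from the text above): h'_1 = 1, h'_k = 2.25*h'_{k-1} + 1,
   and the k-th gap is ceil(h'_k).                                       */
static size_t tokuda_gaps(size_t n, size_t gaps[], size_t max_gaps)
{
    double h = 1.0;
    size_t count = 0;

    while (count < max_gaps) {
        size_t gap = (size_t)h;
        if ((double)gap < h)          /* take the ceiling of h           */
            gap++;
        if (gap >= n)
            break;
        gaps[count++] = gap;
        h = 2.25 * h + 1.0;
    }
    /* reverse, because Shellsort applies the gaps in decreasing order   */
    for (size_t i = 0; i < count / 2; i++) {
        size_t t = gaps[i];
        gaps[i] = gaps[count - 1 - i];
        gaps[count - 1 - i] = t;
    }
    return count;
}
</syntaxhighlight>

For ''n'' = 1000 this produces 525, 233, 103, 46, 20, 9, 4, 1, which could be fed to the gap loop of the implementation shown earlier.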
If the maximum input size is small, as may occur if Shellsort is used on small subarrays by another recursive sorting algorithm such as quicksort or merge sort, then it is possible to tabulate an optimal sequence for each input size.{{cite web
|title=How to choose the lengths of my sub sequences for a shell sort?
|first=Olof |last=Forshell
|date=22 May 2018
|url=https://stackoverflow.com/a/50470237
|website=Stack Overflow
}} Additional commentary at [https://stackoverflow.com/a/50490873#50490873 Fastest gap sequence for shell sort?] (23 May 2018).{{cite arXiv
|title=Optimal Gap Sequences in Shellsort for {{math|n ≤ 16}} Elements
|first=Ying Wai |last=Lee
|date=21 December 2021
|eprint=2112.11127
|class=math.CO
}}
== Computational complexity ==
The following property holds: after ''h''<sub>2</sub>-sorting of any ''h''<sub>1</sub>-sorted array, the array remains ''h''<sub>1</sub>-sorted.{{Cite journal
|last1=Gale
|first1=David
|author-link=David Gale
|last2=Karp
|first2=Richard M.
|author2-link=Richard M. Karp
|title=A Phenomenon in the Theory of Sorting
|journal=Journal of Computer and System Sciences
|volume=6
|issue=2
|date=April 1972
|pages=103–115
|doi=10.1016/S0022-0000(72)80016-3
|url=https://core.ac.uk/download/pdf/82277625.pdf
|doi-access=free
}} Every ''h''<sub>1</sub>-sorted and ''h''<sub>2</sub>-sorted array is also (''a''<sub>1</sub>''h''<sub>1</sub> + ''a''<sub>2</sub>''h''<sub>2</sub>)-sorted, for any nonnegative integers ''a''<sub>1</sub> and ''a''<sub>2</sub>. The worst-case complexity of Shellsort is therefore connected with the Frobenius problem: for given integers ''h''<sub>1</sub>, ..., ''h''<sub>''n''</sub> with gcd = 1, the Frobenius number ''g''(''h''<sub>1</sub>, ..., ''h''<sub>''n''</sub>) is the greatest integer that cannot be represented as ''a''<sub>1</sub>''h''<sub>1</sub> + ... + ''a''<sub>''n''</sub>''h''<sub>''n''</sub> with nonnegative integers ''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>. Using known formulae for Frobenius numbers, we can determine the worst-case complexity of Shellsort for several classes of gap sequences.{{Cite journal
|last=Selmer
|first=Ernst S.
|author-link=Ernst Sejersted Selmer
|title=On Shellsort and the Frobenius Problem
|journal=BIT Numerical Mathematics
|volume=29
|issue=1
|date=March 1989
|pages=37–40
|doi=10.1007/BF01932703
|hdl=1956/19572
|s2cid=32467267
|hdl-access=free
|url=https://bora.uib.no/bora-xmlui/bitstream/handle/1956/19572/On%20Shellsort%20and%20the%20Frobenius%20problem.pdf
}} Proven results are shown in the above table.
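As a small worked example of this connection, take the gaps ''h''<sub>1</sub> = 2 and ''h''<sub>2</sub> = 3. An array that is both 2-sorted and 3-sorted is (2''a''<sub>1</sub> + 3''a''<sub>2</sub>)-sorted for all nonnegative ''a''<sub>1</sub>, ''a''<sub>2</sub>, and since the Frobenius number is ''g''(2, 3) = 2·3 − 2 − 3 = 1, the array is ''k''-sorted for every ''k'' ≥ 2. Only adjacent elements can then be out of order, so a final 1-sorting pass costs only O(''N''); this is essentially how Pratt's gap sequence, in which every gap ''h'' is preceded by 2''h'' and 3''h'', achieves its O(''N'' log<sup>2</sup>''N'') bound.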
Mark Allen Weiss proved that Shellsort runs in O(N log N) time when the input array is in reverse order.{{Cite journal
|last=Weiss
|first=Mark Allen
|title=A good case for Shellsort
|journal=Congressus Numerantium
|volume=73
|date=1989
|pages=59–62
}}
With respect to the average number of operations, none of the proven results concerns a practical gap sequence. For gaps that are powers of two, Espelid computed this average as .{{Cite journal
|last=Espelid
|first=Terje O.
|title=Analysis of a Shellsort Algorithm
|journal=BIT Numerical Mathematics
|volume=13
|issue=4
|date=December 1973
|pages=394–400
|doi=10.1007/BF01933401
|s2cid=119443598
}} The quoted result is equation (8) on p. 399. Knuth determined the average complexity of sorting an ''N''-element array with two gaps (''h'', 1) to be . It follows that a two-pass Shellsort with ''h'' = Θ(''N''<sup>1/3</sup>) makes on average O(''N''<sup>5/3</sup>) comparisons/inversions/running time. Yao found the average complexity of a three-pass Shellsort.{{Cite journal
|last=Yao
|first=Andrew Chi-Chih
|author-link=Andrew Yao
|title=An Analysis of (h, k, 1)-Shellsort
|journal=Journal of Algorithms
|volume=1
|issue=1
|year=1980
|pages=14–50
|doi=10.1016/0196-6774(80)90003-6
|s2cid=3054966
|url=http://pdfs.semanticscholar.org/d569/b8a70a808c6b808ca2e25371c736ce98b14f.pdf
|archive-url=https://web.archive.org/web/20190304043832/http://pdfs.semanticscholar.org/d569/b8a70a808c6b808ca2e25371c736ce98b14f.pdf
|url-status=dead
|archive-date=2019-03-04
|id=STAN-CS-79-726
}} His result was refined by Janson and Knuth:{{Cite journal
|last1=Janson
|first1=Svante |author1-link=Svante Janson
|last2=Knuth
|first2=Donald E. |author2-link=Donald Knuth
|title=Shellsort with Three Increments
|journal=Random Structures and Algorithms
|volume=10
|issue=1–2
|year=1997
|pages=125–142
|doi=10.1002/(SICI)1098-2418(199701/03)10:1/2<125::AID-RSA6>3.0.CO;2-X
|arxiv=cs/9608105
|citeseerx=10.1.1.54.9911
|url=http://www2.math.uu.se/~svante/papers/sj113.pdf
}} the average number of comparisons/inversions/running time made during a Shellsort with three gaps (''ch'', ''cg'', 1), where ''h'' and ''g'' are coprime, is in the first pass, in the second pass and in the third pass. ''ψ''(''h'', ''g'') in the last formula is a complicated function asymptotically equal to . In particular, when ''h'' = Θ(''N''<sup>7/15</sup>) and ''g'' = Θ(''N''<sup>1/5</sup>), the average time of sorting is O(''N''<sup>23/15</sup>).
Based on experiments, it is conjectured that Shellsort with Hibbard's gap sequence runs in O(''N''<sup>5/4</sup>) average time, and that Gonnet and Baeza-Yates's sequence requires on average 0.41''N'' ln ''N'' (ln ln ''N'' + 1/6) element moves. Approximations of the average number of operations formerly put forward for other sequences fail when sorted arrays contain millions of elements.
The graph below shows the average number of element comparisons used by various gap sequences, divided by the theoretical lower bound, i.e. log<sub>2</sub>''N''!. Ciura's sequence 1, 4, 10, 23, 57, 132, 301, 701 (labelled Ci01) has been extended according to the formula {{math|1=''h''<sub>''k''</sub> = ⌊2.25''h''<sub>''k''−1</sub>⌋}}.
Applying the theory of Kolmogorov complexity, Jiang, Li, and Vitányi{{Cite journal
|last1=Jiang
|first1=Tao
|last2=Li
|first2=Ming
|author2-link=Ming Li
|last3=Vitányi
|first3=Paul
|author3-link=Paul Vitányi
|title=A Lower Bound on the Average-Case Complexity of Shellsort
|journal=Journal of the ACM
|volume=47
|issue=5
|date=September 2000
|pages=905–911
|doi=10.1145/355483.355488
|citeseerx=10.1.1.6.6508
|url=https://homepages.cwi.nl/~paulv/papers/shellsort.pdf
|arxiv=cs/9906008
|s2cid=3265123
}} proved the following lower bound for the order of the average number of operations/running time in a ''p''-pass Shellsort: Ω(''pN''<sup>1+1/''p''</sup>) when ''p'' ≤ log<sub>2</sub>''N'' and Ω(''pN'') when ''p'' > log<sub>2</sub>''N''.
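To see what the first bound gives for a logarithmic number of passes, substitute {{math|1=''p'' = log<sub>2</sub>''N''}}: then ''N''<sup>1/''p''</sup> = 2, so Ω(''pN''<sup>1+1/''p''</sup>) = Ω(2''N'' log<sub>2</sub>''N'') = Ω(''N'' log ''N'').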
Therefore, Shellsort has prospects of running in an average time that asymptotically grows like ''N'' log ''N'' only when using gap sequences whose number of gaps grows in proportion to the logarithm of the array size. It is, however, unknown whether Shellsort can reach this asymptotic order of average-case complexity, which is optimal for comparison sorts. The lower bound was improved by Vitányi{{cite journal
|doi=10.1002/rsa.20737
|last=Vitányi
|first=Paul
|author-link=Paul Vitányi
|date=March 2018
|title=On the average-case complexity of Shellsort
|journal=Random Structures and Algorithms
|volume=52
|issue=2
|pages=354–363
|arxiv=1501.06461
|s2cid=6833808
|url=https://homepages.cwi.nl/~paulv/papers/shell2015.pdf
}} for every number of passes ''p'' to
<math>\Omega\left(N \sum_{k=1}^p h_{k-1}/h_k\right)</math>
where {{math|1=''h''<sub>0</sub> = ''N''}}. This result implies, for example, the Jiang–Li–Vitányi lower bound for all ''p''-pass increment sequences and improves that lower bound for particular increment sequences. In fact all bounds (lower and upper) currently known for the average case are precisely matched by this lower bound. For example, this gives the new result that the Janson–Knuth upper bound is matched by the resulting lower bound for the used increment sequence, showing that three-pass Shellsort for this increment sequence uses Θ(''N''<sup>23/15</sup>) comparisons/inversions/running time.
The formula allows us to search for increment sequences that yield lower bounds which are unknown; for example an increment sequence for four passes which has a lower bound greater than
for the increment sequence
. The lower bound becomes
The worst-case complexity of any version of Shellsort is of higher order: Plaxton, Poonen, and Suel showed that it grows at least as rapidly as <math>\Omega\left(N \left(\tfrac{\log N}{\log \log N}\right)^2\right)</math>.{{Cite book
|last1=Plaxton
|first1=C. Greg
|last2=Poonen
|first2=Bjorn
|author2-link=Bjorn Poonen
|last3=Suel
|first3=Torsten
|title=Proceedings., 33rd Annual Symposium on Foundations of Computer Science
|chapter=Improved lower bounds for Shellsort
|author3-link=Torsten Suel
|volume=33
|date=24–27 October 1992
|location=Pittsburgh, United States
|pages=226–235
|doi=10.1109/SFCS.1992.267769
|isbn=978-0-8186-2900-6
|citeseerx=10.1.1.43.1393
|s2cid=15095863
|chapter-url=http://engineering.nyu.edu/~suel/papers/shell.pdf
}}{{Cite journal
|last1=Plaxton
|first1=C. Greg
|last2=Suel
|first2=Torsten
|author2-link=Torsten Suel
|title=Lower Bounds for Shellsort
|journal=Journal of Algorithms
|volume=23 |issue=2
|date=May 1997
|pages=221–240
|doi=10.1006/jagm.1996.0825
|citeseerx=10.1.1.460.2429
|url=http://engineering.nyu.edu/~suel/papers/shell2.pdf
}}
Robert Cypher proved a stronger lower bound: when for all .{{Cite journal
|last=Cypher
|first=Robert
|title=A Lower Bound on the Size of Shellsort Sorting Networks
|journal=SIAM Journal on Computing
|volume=22
|date=1993
|pages=62–71
|doi=10.1137/0222006
}}
== Applications ==
Shellsort performs more operations and has a higher cache miss ratio than quicksort. However, since it can be implemented using little code and does not use the call stack, some implementations of the qsort function in the C standard library targeted at embedded systems use it instead of quicksort. Shellsort is, for example, used in the uClibc library.{{Cite web
| url=http://git.uclibc.org/uClibc/tree/libc/stdlib/stdlib.c#n700
| title=libc/stdlib/stdlib.c
| first=Manuel III |last=Novoa
| access-date=2014-10-29}} For similar reasons, in the past, Shellsort was used in the Linux kernel.{{Cite web
| url=https://github.com/torvalds/linux/blob/72932611b4b05bbd89fafa369d564ac8e449809b/kernel/groups.c#L105
| title=kernel/groups.c
| website=GitHub
| access-date=2012-05-05}}
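The appeal for such libraries is that a comparator-based Shellsort needs only a few lines of iterative code and no auxiliary stack. The following C sketch is a hypothetical illustration with a <code>qsort</code>-style interface (it is not the uClibc or Linux code); it swaps elements byte by byte and uses Knuth's gaps 1, 4, 13, 40, ..., a sequence popular in compact implementations:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Comparator-based Shellsort with the same interface as qsort(). */
void shellsort_generic(void *base, size_t nmemb, size_t size,
                       int (*cmp)(const void *, const void *))
{
    char *a = base;
    size_t gap = 1;

    /* grow the gap through 1, 4, 13, 40, ... while it stays below nmemb / 3 */
    while (gap < nmemb / 3)
        gap = 3 * gap + 1;

    for (; gap > 0; gap /= 3) {            /* (3h + 1) / 3 == h in integers */
        for (size_t i = gap; i < nmemb; i++) {
            /* gapped insertion by repeated swaps of whole elements */
            for (size_t j = i;
                 j >= gap && cmp(a + (j - gap) * size, a + j * size) > 0;
                 j -= gap) {
                char *p = a + (j - gap) * size;
                char *q = a + j * size;
                for (size_t b = 0; b < size; b++) {
                    char t = p[b];
                    p[b] = q[b];
                    q[b] = t;
                }
            }
        }
    }
}
</syntaxhighlight>

Because the code is a handful of nested loops with no recursion, its stack usage is constant, which is the property the embedded implementations mentioned above rely on.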
Shellsort can also serve as a sub-algorithm of introspective sort, to sort short subarrays and to prevent a slowdown when the recursion depth exceeds a given limit. This principle is employed, for instance, in the bzip2 compressor.{{Cite web
|url=https://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/lxr/source/src/util/compress/bzip2/blocksort.c#L519
|title=bzip2/blocksort.c
|author=Julian Seward
|access-date=2011-03-30}}
== See also ==
== References ==
{{reflist|30em}}
== Bibliography ==
* {{Cite book
|last=Knuth
|first=Donald E. |author-link=Donald Knuth
|title=The Art of Computer Programming. Volume 3: Sorting and Searching
|edition=2nd
|publisher=Addison-Wesley
|location=Reading, Massachusetts
|year=1997
|pages=83–95
|chapter=Shell's method
|isbn=978-0-201-89685-5
|title-link=The Art of Computer Programming }}
* [http://www.cs.princeton.edu/~rs/shell/ Analysis of Shellsort and Related Algorithms], Robert Sedgewick, Fourth European Symposium on Algorithms, Barcelona, September 1996.
== External links ==
{{wikibooks|Algorithm implementation|Sorting/Shell_sort|Shell sort}}
* {{webarchive |url=https://web.archive.org/web/20150310043846/http://www.sorting-algorithms.com/shell-sort |date=10 March 2015 |title=Animated Sorting Algorithms: Shell Sort}} – graphical demonstration
* [https://www.youtube.com/watch?v=CmPA7zE8mx0 Shellsort with gaps 5, 3, 1 as a Hungarian folk dance]
{{sorting}}
{{Use dmy dates|date=April 2020}}
{{Authority control}}
{{DEFAULTSORT:Shellsort}}