Fine-grained reduction

In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems.

Intuitively, it provides a method for solving one problem efficiently by using an algorithm for the other problem as a subroutine.

If problem A can be solved in time a(n) and problem B can be solved in time b(n), then the existence of an (a,b)-reduction from problem A to problem B implies that any significant speedup for problem B would also lead to a speedup for problem A.

Definition

Let A and B be computational problems, specified as the desired output for each possible input.

Let a and b both be time-constructible functions that take an integer argument n and produce an integer result. Usually, a and b are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as n^2.{{r|w15}}

Then A is said to be (a,b)-reducible to B

if, for every real number \epsilon>0, there exists a real number \delta>0 and an algorithm that solves instances of problem A by transforming them into a sequence of instances of problem B, taking time O\bigl(a(n)^{1-\delta}\bigr) for the transformation on instances of size n, and producing a sequence of instances whose sizes n_i satisfy the bound \sum_i b(n_i)^{1-\epsilon} < a(n)^{1-\delta}.{{r|w15}}

An (a,b)-reduction is given by the mapping from \epsilon to the pair of an algorithm and \delta.{{r|w15}}
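For illustration, taking a(n) = b(n) = n^3 specializes this definition to the "subcubic reduction" of Williams and Williams. A minimal sketch of what the two conditions then require (the instantiation here is illustrative, not quoted from the source):

```latex
% Subcubic case: a(n) = b(n) = n^3.
% For every \epsilon > 0 there must exist \delta > 0 such that
\text{transformation time} \;=\; O\bigl(n^{3(1-\delta)}\bigr),
\qquad
\sum_i \bigl(n_i^{3}\bigr)^{1-\epsilon} \;<\; \bigl(n^{3}\bigr)^{1-\delta}.
```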

Speedup implication

Suppose A is (a,b)-reducible to B, and there exists \epsilon>0 such that B can be solved in time O\bigl(b(n)^{1-\epsilon}\bigr).

Then, with these assumptions, there also exists \delta>0 such that A can be solved in time O\bigl(a(n)^{1-\delta}\bigr). Namely, let \delta be the value that the (a,b)-reduction associates with this \epsilon, and solve A by applying the reduction's transformation and using the fast algorithm for B on each resulting subproblem.{{r|w15}}

Equivalently, if A cannot be solved in time significantly faster than a(n), then B cannot be solved in time significantly faster than b(n).{{r|w15}}
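The composition behind this implication can be written out as a one-line calculation (a standard step, spelled out here for clarity): the total time to solve A is the transformation time plus the time spent solving the produced instances of B with the fast algorithm, and the reduction's size bound absorbs the second term into the first:

```latex
T_A(n) \;=\; O\bigl(a(n)^{1-\delta}\bigr) \;+\; \sum_i O\bigl(b(n_i)^{1-\epsilon}\bigr)
\;=\; O\bigl(a(n)^{1-\delta}\bigr),
\quad\text{since}\quad
\sum_i b(n_i)^{1-\epsilon} \;<\; a(n)^{1-\delta}.
```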

History

Fine-grained reductions were defined, in the special case that a and b are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010.

They also showed the existence of (n^3,n^3)-reductions between several problems including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.{{r|ww10}}

The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.{{r|w15}}

Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.{{r|cgimps}}

References

{{reflist|refs=

<ref name=cgimps>{{citation
| last1 = Carmosino | first1 = Marco L.
| last2 = Gao | first2 = Jiawei
| last3 = Impagliazzo | first3 = Russell | author3-link = Russell Impagliazzo
| last4 = Mihajlin | first4 = Ivan
| last5 = Paturi | first5 = Ramamohan
| last6 = Schneider | first6 = Stefan
| contribution = Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility
| mr = 3629829
| pages = 261–270
| publisher = ACM, New York
| title = ITCS'16—Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science
| year = 2016}}</ref>

<ref name=ww10>{{citation
| last1 = Williams | first1 = Virginia Vassilevska | author1-link = Virginia Vassilevska Williams
| last2 = Williams | first2 = R. Ryan | author2-link = Ryan Williams (computer scientist)
| doi = 10.1145/3186893
| hdl = 1721.1/134750
| hdl-access = free
| issue = 5
| journal = Journal of the ACM
| mr = 3856539
| pages = A27:1–A27:38
| title = Subcubic equivalences between path, matrix, and triangle problems
| volume = 65
| year = 2018}}. A preliminary version of these results, including the definition of a "subcubic reduction", a special case of a fine-grained reduction, was presented at the 2010 Symposium on Foundations of Computer Science.</ref>

<ref name=w15>{{citation
| last = Williams | first = Virginia V. | author-link = Virginia Vassilevska Williams
| contribution = Hardness of easy problems: basing hardness on popular conjectures such as the Strong Exponential Time Hypothesis
| mr = 3452406
| pages = 17–29
| publisher = Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern
| series = LIPIcs. Leibniz Int. Proc. Inform.
| title = 10th International Symposium on Parameterized and Exact Computation
| volume = 43
| year = 2015}}</ref>

}}

Category:Reduction (complexity)