Q-function

{{Short description|Statistics function}}

{{For|the phase-space function representing a quantum state|Husimi Q representation}}

[[File:Q-function.png|thumb|right|A plot of the Q-function.]]

In statistics, the Q-function is the tail distribution function of the standard normal distribution.{{cite web|url=http://cnx.org/content/m11537/latest/|title=The Q-function|website=cnx.org|archive-url=https://web.archive.org/web/20120229030808/http://cnx.org/content/m11537/latest/|archive-date=2012-02-29}}{{cite web|url=http://www.eng.tau.ac.il/~jo/academic/Q.pdf|title=Basic properties of the Q-function|archive-url=https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf|archive-date=2009-03-25 |date=2009-03-05 }} In other words, Q(x) is the probability that a normal (Gaussian) random variable will obtain a value more than x standard deviations above its mean. Equivalently, Q(x) is the probability that a standard normal random variable takes a value larger than x.

If Y is a Gaussian random variable with mean \mu and variance \sigma^2, then X = \frac{Y-\mu}{\sigma} is standard normal and

:P(Y > y) = P(X > x) = Q(x)

where x = \frac{y-\mu}{\sigma}.

Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.[http://mathworld.wolfram.com/NormalDistributionFunction.html Normal Distribution Function – from Wolfram MathWorld]

Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.

Definition and basic properties

Formally, the Q-function is defined as

:Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty \exp\left(-\frac{u^2}{2}\right) \, du.

Thus,

:Q(x) = 1 - Q(-x) = 1 - \Phi(x)\,\!,

where \Phi(x) is the cumulative distribution function of the standard normal distribution.

The Q-function can be expressed in terms of the error function, or the complementary error function, as

:

\begin{align}

Q(x) &=\frac{1}{2}\left( \frac{2}{\sqrt{\pi}} \int_{x/\sqrt{2}}^\infty \exp\left(-t^2\right) \, dt \right)\\

&= \frac{1}{2} - \frac{1}{2} \operatorname{erf} \left( \frac{x}{\sqrt{2}} \right)\\

&= \frac{1}{2}\operatorname{erfc} \left(\frac{x}{\sqrt{2}} \right).

\end{align}
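This relation gives a convenient way to evaluate the Q-function numerically. The following is a minimal sketch in Python, using only the standard library's erfc:

<syntaxhighlight lang="python">
from math import erfc, sqrt

def q_function(x):
    """Q(x) via the complementary error function: Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(x / sqrt(2.0))

print(q_function(0.0))  # 0.5
print(q_function(1.0))  # ~0.158655
</syntaxhighlight>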

An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as:{{cite book |doi=10.1109/MILCOM.1991.258319 |chapter-url=http://wsl.stanford.edu/~ee359/craig.pdf|chapter=A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations|title=MILCOM 91 - Conference record|pages=571–575|year=1991|last1=Craig|first1=J.W.|isbn=0-87942-691-8|s2cid=16034807}}

:Q(x) = \frac{1}{\pi} \int_0^{\frac{\pi}{2}} \exp \left( - \frac{x^2}{2 \sin^2 \theta} \right) d\theta.

This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
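As a numerical check of Craig's formula (a sketch assuming Python with SciPy's quad routine), the fixed-range integral can be evaluated and compared with the erfc-based expression:

<syntaxhighlight lang="python">
from math import erfc, exp, pi, sin, sqrt
from scipy.integrate import quad

def q_craig(x):
    """Q(x) for x > 0 by numerical integration of Craig's formula."""
    def integrand(theta):
        s = sin(theta)
        return 0.0 if s == 0.0 else exp(-x * x / (2.0 * s * s))
    value, _ = quad(integrand, 0.0, pi / 2.0)
    return value / pi

x = 1.5
print(q_craig(x))                 # ~0.0668072
print(0.5 * erfc(x / sqrt(2.0)))  # reference value from the erfc form
</syntaxhighlight>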

Craig's formula was later extended by Behnad (2020){{cite journal |doi=10.1109/TCOMM.2020.2986209 |title=A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis|journal=IEEE Transactions on Communications |volume=68|issue=7|pages=4117–4125|year=2020|last1=Behnad|first1=Aydin|s2cid=216500014}} for the Q-function of the sum of two non-negative variables, as follows:

:Q(x+y) = \frac{1}{\pi} \int_0^{\frac{\pi}{2}} \exp \left( - \frac{x^2}{2 \sin^2 \theta} - \frac{y^2}{2 \cos^2 \theta} \right) d\theta, \quad x,y \geqslant 0 .
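The extension can be checked numerically in the same way; the sketch below (again assuming SciPy, with x = 0.8 and y = 0.6 as arbitrary example values) compares it with Q(x + y):

<syntaxhighlight lang="python">
from math import cos, erfc, exp, pi, sin, sqrt
from scipy.integrate import quad

def q_sum(x, y):
    """Q(x + y) for x, y > 0 via the extended Craig-type integral."""
    def integrand(theta):
        s, c = sin(theta), cos(theta)
        if s == 0.0 or c == 0.0:  # endpoint guard; the integrand tends to 0 for x, y > 0
            return 0.0
        return exp(-x * x / (2.0 * s * s) - y * y / (2.0 * c * c))
    value, _ = quad(integrand, 0.0, pi / 2.0)
    return value / pi

x, y = 0.8, 0.6
print(q_sum(x, y))                      # ~0.0807567
print(0.5 * erfc((x + y) / sqrt(2.0)))  # Q(1.4) for comparison
</syntaxhighlight>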

Bounds and approximations

  • The Q-function is not an elementary function. However, it can be upper and lower bounded as,{{Cite journal |doi = 10.1214/aoms/1177731721|title = Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument| journal = Ann. Math. Stat.|volume = 12|issue = 3|pages = 364–366|year = 1941|last = Gordon|first = R.D.}}{{Cite journal |doi = 10.1109/TCOM.1979.1094433|title = Simple Approximations of the Error Function Q(x) for Communications Applications|journal = IEEE Transactions on Communications|volume = 27|issue = 3|pages = 639–643|year = 1979|last1 = Borjesson|first1 = P.|last2 = Sundberg|first2 = C.-E.}}

::\left (\frac{x}{1+x^2} \right ) \phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x>0,

:where \phi(x) is the density function of the standard normal distribution, and the bounds become increasingly tight for large x.

:Using the substitution v = u^2/2, the upper bound is derived as follows:

::Q(x) =\int_x^\infty\phi(u)\,du <\int_x^\infty\frac ux\phi(u)\,du =\int_{\frac{x^2}{2}}^\infty\frac{e^{-v}}{x\sqrt{2\pi}}\,dv=-\biggl.\frac{e^{-v}}{x\sqrt{2\pi}}\biggr|_{\frac{x^2}{2}}^\infty=\frac{\phi(x)}{x}.

:Similarly, using \phi'(u) = - u \phi(u) and the quotient rule,

::\left(1+\frac1{x^2}\right)Q(x) =\int_x^\infty \left(1+\frac1{x^2}\right)\phi(u)\,du >\int_x^\infty \left(1+\frac1{u^2}\right)\phi(u)\,du =-\biggl.\frac{\phi(u)}u\biggr|_x^\infty =\frac{\phi(x)}x.

:Solving for Q(x) provides the lower bound.

:The geometric mean of the upper and lower bound gives a suitable approximation for Q(x):

::Q(x) \approx \frac{\phi(x)}{\sqrt{1 + x^2}}, \qquad x \geq 0.
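:A brief numerical illustration of the bounds and of the geometric-mean approximation (a Python sketch; the test point x = 2 is arbitrary):

<syntaxhighlight lang="python">
from math import erfc, exp, pi, sqrt

def phi(x):
    """Standard normal density."""
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def q(x):
    return 0.5 * erfc(x / sqrt(2.0))

x = 2.0
lower = x / (1.0 + x * x) * phi(x)
upper = phi(x) / x
approx = phi(x) / sqrt(1.0 + x * x)  # geometric mean of the two bounds
print(lower, q(x), upper)            # lower < Q(x) < upper
print(approx)                        # close to Q(x)
</syntaxhighlight>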

  • Tighter bounds and approximations of Q(x) can also be obtained by optimizing the following expression

:: \tilde{Q}(x) = \frac{\phi(x)}{(1-a)x + a\sqrt{x^2 + b}}.

:For x \geq 0, the best upper bound is given by a = 0.344 and b = 5.334 with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by a = 0.339 and b = 5.510 with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by a = 1/\pi and b = 2 \pi with maximum absolute relative error of 1.17%.
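:A sketch evaluating this family at the quoted parameter pairs (x = 1 is only an example point):

<syntaxhighlight lang="python">
from math import erfc, exp, pi, sqrt

def phi(x):
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def q_tilde(x, a, b):
    """phi(x) / ((1 - a) x + a sqrt(x^2 + b))."""
    return phi(x) / ((1.0 - a) * x + a * sqrt(x * x + b))

x = 1.0
print(q_tilde(x, 0.344, 5.334))        # upper bound
print(q_tilde(x, 0.339, 5.510))        # approximation
print(q_tilde(x, 1.0 / pi, 2.0 * pi))  # lower bound
print(0.5 * erfc(x / sqrt(2.0)))       # exact value ~0.158655
</syntaxhighlight>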

  • The Chernoff bound of the Q-function is

::Q(x)\leq e^{-\frac{x^2}{2}}, \qquad x>0

  • Improved exponential bounds and a pure exponential approximation are {{cite journal |url=http://campus.unibo.it/85943/1/mcddmsTranWIR2003.pdf |doi=10.1109/TWC.2003.814350|title=New exponential bounds and approximations for the computation of error probability in fading channels|journal=IEEE Transactions on Wireless Communications|volume=24|issue=5|pages=840–845|year=2003|last1=Chiani|first1=M.|last2=Dardari|first2=D.|last3=Simon|first3=M.K.}}

::Q(x)\leq \tfrac{1}{4}e^{-x^2}+\tfrac{1}{4}e^{-\frac{x^2}{2}} \leq \tfrac{1}{2}e^{-\frac{x^2}{2}}, \qquad x>0

:: Q(x)\approx \frac{1}{12}e^{-\frac{x^2}{2}}+\frac{1}{4}e^{-\frac{2}{3} x^2}, \qquad x>0
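:The exponential bounds and the pure exponential approximation are trivial to evaluate; a short sketch (x = 1.5 is an arbitrary test point):

<syntaxhighlight lang="python">
from math import erfc, exp, sqrt

def q(x):
    return 0.5 * erfc(x / sqrt(2.0))

x = 1.5
upper_improved = 0.25 * exp(-x * x) + 0.25 * exp(-x * x / 2.0)
upper_simple = 0.5 * exp(-x * x / 2.0)
approx = exp(-x * x / 2.0) / 12.0 + exp(-2.0 * x * x / 3.0) / 4.0
print(q(x), upper_improved, upper_simple)  # Q(x) <= improved bound <= simple bound
print(approx)                              # pure exponential approximation
</syntaxhighlight>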

  • The above were generalized by Tanash & Riihonen (2020),{{cite journal |doi=10.1109/TCOMM.2020.3006902|title=Global minimax approximations and bounds for the Gaussian Q-function by sums of exponentials|journal=IEEE Transactions on Communications|year=2020|last1=Tanash|first1=I.M.|last2=Riihonen|first2=T.|volume=68|issue=10|pages=6514–6524|arxiv=2007.06939|s2cid=220514754}} who showed that Q(x) can be accurately approximated or bounded by

::\tilde{Q}(x) = \sum_{n=1}^N a_n e^{-b_n x^2}.

:In particular, they presented a systematic methodology to solve the numerical coefficients \{(a_n,b_n)\}_{n=1}^N that yield a minimax approximation or bound: Q(x) \approx \tilde{Q}(x), Q(x) \leq \tilde{Q}(x), or Q(x) \geq \tilde{Q}(x) for x\geq0. With the example coefficients tabulated in the paper for N = 20, the relative and absolute approximation errors are less than 2.831 \cdot 10^{-6} and 1.416 \cdot 10^{-6}, respectively. The coefficients \{(a_n,b_n)\}_{n=1}^N for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset.{{cite journal |doi=10.5281/zenodo.4112978|title=Coefficients for Global Minimax Approximations and Bounds for the Gaussian Q-Function by Sums of Exponentials [Data set]|url=https://zenodo.org/record/4112978|website=Zenodo|year=2020|last1=Tanash|first1=I.M.|last2=Riihonen|first2=T.}}
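:A minimal evaluator for this sum-of-exponentials form is sketched below. The coefficients used are the N = 2 pair from the Chiani et al. upper bound in the previous item, purely to illustrate the calling convention; minimax-optimal coefficients must be taken from the cited paper or dataset.

<syntaxhighlight lang="python">
import numpy as np

def q_sum_of_exponentials(x, a, b):
    """Evaluate sum_n a_n * exp(-b_n * x^2) for coefficient arrays a and b."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.sum(a[:, None] * np.exp(-b[:, None] * x**2), axis=0)

# N = 2 illustration with the Chiani et al. upper-bound coefficients (1/4, 1/4) and (1, 1/2).
print(q_sum_of_exponentials([0.5, 1.0, 2.0], [0.25, 0.25], [1.0, 0.5]))
</syntaxhighlight>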

  • Another approximation of Q(x) for x \in [0,\infty) is given by Karagiannidis & Lioumpas (2007){{cite journal |doi=10.1109/LCOMM.2007.070470 |url=http://users.auth.gr/users/9/3/028239/public_html/pdf/Q_Approxim.pdf|title=An Improved Approximation for the Gaussian Q-Function|journal=IEEE Communications Letters|volume=11|issue=8|pages=644–646|year=2007|last1=Karagiannidis|first1=George|last2=Lioumpas|first2=Athanasios|s2cid=4043576}} who showed for the appropriate choice of parameters \{A, B\} that

:: f(x; A, B) = \frac{\left(1 - e^{-Ax}\right)e^{-x^2}}{B\sqrt{\pi} x} \approx \operatorname{erfc} \left(x\right).

: The absolute error between f(x; A, B) and \operatorname{erfc}(x) over the range [0, R] is minimized by evaluating

:: \{A, B\} = \underset{\{A,B\}}{\arg \min} \frac{1}{R} \int_0^R | f(x; A, B) - \operatorname{erfc}(x) |dx.

: Using R = 20 and numerically integrating, they found the minimum error occurred when \{A, B\} = \{1.98, 1.135\}, which gives a good approximation for all x \ge 0.

: Substituting these values and using the relationship between Q(x) and \operatorname{erfc}(x) from above gives

:: Q(x)\approx\frac{\left( 1-e^{\frac{-1.98x} {\sqrt{2}}}\right) e^{-\frac{x^{2}}{2}}}{1.135\sqrt{2\pi}x}, x \ge 0.

: Alternative coefficients are also available for the above 'Karagiannidis–Lioumpas approximation' for tailoring accuracy for a specific application or transforming it into a tight bound.{{cite journal |doi=10.1109/LCOMM.2021.3052257|title=Improved coefficients for the Karagiannidis–Lioumpas approximations and bounds to the Gaussian Q-function|journal=IEEE Communications Letters|year=2021|last1=Tanash|first1=I.M.|last2=Riihonen|first2=T.|volume=25|issue=5|pages=1468–1471|arxiv=2101.07631|s2cid=231639206}}
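:A sketch of the resulting approximation with the quoted parameters A = 1.98 and B = 1.135 (valid for x > 0):

<syntaxhighlight lang="python">
from math import erfc, exp, pi, sqrt

def q_karagiannidis_lioumpas(x, A=1.98, B=1.135):
    """Approximation of Q(x) for x > 0 based on the Karagiannidis-Lioumpas form."""
    return (1.0 - exp(-A * x / sqrt(2.0))) * exp(-x * x / 2.0) / (B * sqrt(2.0 * pi) * x)

x = 1.0
print(q_karagiannidis_lioumpas(x))  # ~0.1606
print(0.5 * erfc(x / sqrt(2.0)))    # exact ~0.158655
</syntaxhighlight>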

  • A tighter and more tractable approximation of Q(x) for positive arguments x \in [0,\infty) is given by López-Benítez & Casadevall (2011){{cite journal |doi=10.1109/TCOMM.2011.012711.100105 |url=http://www.lopezbenitez.es/journals/IEEE_TCOM_2011.pdf|title=Versatile, Accurate, and Analytically Tractable Approximation for the Gaussian Q-Function|journal=IEEE Transactions on Communications|volume=59|issue=4|pages=917–922|year=2011|last1=Lopez-Benitez|first1=Miguel|last2=Casadevall|first2=Fernando|s2cid=1145101}} based on a second-order exponential function:

:: Q(x) \approx e^{-ax^2-bx-c}, \qquad x \ge 0.

: The fitting coefficients (a,b,c) can be optimized over any desired range of arguments in order to minimize the sum of square errors (a = 0.3842, b = 0.7640, c = 0.6964 for x \in [0,20]) or minimize the maximum absolute error (a = 0.4920, b = 0.2887, c = 1.1893 for x \in [0,20]). This approximation offers some benefits such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of Q(x) is trivial and does not alter the algebraic form of the approximation).
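:A sketch using the sum-of-square-errors coefficients quoted above for x \in [0,20]:

<syntaxhighlight lang="python">
from math import erfc, exp, sqrt

def q_second_order_exp(x, a=0.3842, b=0.7640, c=0.6964):
    """exp(-a x^2 - b x - c), with coefficients fitted over x in [0, 20]."""
    return exp(-a * x * x - b * x - c)

x = 2.0
print(q_second_order_exp(x))      # approximation
print(0.5 * erfc(x / sqrt(2.0)))  # exact ~0.0227501
</syntaxhighlight>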

  • A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments x \in [0, \infty) was introduced by Abreu (2012){{cite journal |doi=10.1109/TCOMM.2012.080612.110075 |title=Very Simple Tight Bounds on the Q-Function |journal=IEEE Transactions on Communications |volume=60 |issue=9 |pages=2415–2420 |year=2012 |last=Abreu |first=Giuseppe}} based on a simple algebraic expression with only two exponential terms:

:: Q(x) \geq \frac{1}{12} e^{-x^2} + \frac{1}{\sqrt{2\pi} (x + 1)} e^{-x^2 / 2}, \qquad x \geq 0,

:: Q(x) \leq \frac{1}{50} e^{-x^2} + \frac{1}{2 (x + 1)} e^{-x^2 / 2}, \qquad x \geq 0.

These bounds are derived from a unified form Q_{\mathrm{B}}(x; a, b) = \frac{\exp(-x^2)}{a} + \frac{\exp(-x^2 / 2)}{b (x + 1)}, where the parameters a and b are chosen to satisfy specific conditions ensuring the lower (a_{\mathrm{L}} = 12, b_{\mathrm{L}} = \sqrt{2\pi}) and upper (a_{\mathrm{U}} = 50, b_{\mathrm{U}} = 2) bounding properties. The resulting expressions are notable for their simplicity and tightness, offering a favorable trade-off between accuracy and mathematical tractability. These bounds are particularly useful in theoretical analysis, such as in communication theory over fading channels. Additionally, they can be extended to bound Q^n(x) for positive integers n using the binomial theorem, maintaining their simplicity and effectiveness.
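Both bounds are straightforward to evaluate; a brief sketch (the test point x = 1 is arbitrary):

<syntaxhighlight lang="python">
from math import erfc, exp, pi, sqrt

def q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def q_lower(x):
    """Lower bound with a_L = 12, b_L = sqrt(2 pi)."""
    return exp(-x * x) / 12.0 + exp(-x * x / 2.0) / (sqrt(2.0 * pi) * (x + 1.0))

def q_upper(x):
    """Upper bound with a_U = 50, b_U = 2."""
    return exp(-x * x) / 50.0 + exp(-x * x / 2.0) / (2.0 * (x + 1.0))

x = 1.0
print(q_lower(x), q(x), q_upper(x))  # lower <= Q(x) <= upper
</syntaxhighlight>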

Inverse ''Q''

The inverse Q-function can be related to the inverse error functions:

:Q^{-1}(y) = \sqrt{2}\ \mathrm{erf}^{-1}(1-2y) = \sqrt{2}\ \mathrm{erfc}^{-1}(2y)

The function Q^{-1}(y) finds application in digital communications. It is usually expressed in dB and generally called the Q-factor:

:\mathrm{Q\text{-}factor} = 20 \log_{10}\!\left(Q^{-1}(y)\right)\!~\mathrm{dB}

where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal to noise ratio that yields a bit error rate equal to y.
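As an illustration (a sketch assuming Python with SciPy; the bit-error rate 10^{-9} is only an example), the inverse Q-function and the corresponding Q-factor can be computed from the inverse complementary error function:

<syntaxhighlight lang="python">
from math import log10, sqrt
from scipy.special import erfcinv

def q_inverse(y):
    """Inverse Q-function: Q^{-1}(y) = sqrt(2) * erfcinv(2 y)."""
    return sqrt(2.0) * erfcinv(2.0 * y)

def q_factor_db(ber):
    """Q-factor in dB for a given bit-error rate."""
    return 20.0 * log10(q_inverse(ber))

print(q_inverse(0.5))     # 0.0
print(q_factor_db(1e-9))  # ~15.6 dB
</syntaxhighlight>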

[[File:Q-factor vs BER.png|thumb|right|Q-factor as a function of bit-error rate.]]

Values

The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R, and in those available for Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.

{{col-begin}}

{{col-4}}

{| class="wikitable"
! scope="row" | Q(0.0)
| 0.500000000 || 1/2.0000
|-
! scope="row" | Q(0.1)
| 0.460172163 || 1/2.1731
|-
! scope="row" | Q(0.2)
| 0.420740291 || 1/2.3768
|-
! scope="row" | Q(0.3)
| 0.382088578 || 1/2.6172
|-
! scope="row" | Q(0.4)
| 0.344578258 || 1/2.9021
|-
! scope="row" | Q(0.5)
| 0.308537539 || 1/3.2411
|-
! scope="row" | Q(0.6)
| 0.274253118 || 1/3.6463
|-
! scope="row" | Q(0.7)
| 0.241963652 || 1/4.1329
|-
! scope="row" | Q(0.8)
| 0.211855399 || 1/4.7202
|-
! scope="row" | Q(0.9)
| 0.184060125 || 1/5.4330
|}

{{col-4}}

{| class="wikitable"
! scope="row" | Q(1.0)
| 0.158655254 || 1/6.3030
|-
! scope="row" | Q(1.1)
| 0.135666061 || 1/7.3710
|-
! scope="row" | Q(1.2)
| 0.115069670 || 1/8.6904
|-
! scope="row" | Q(1.3)
| 0.096800485 || 1/10.3305
|-
! scope="row" | Q(1.4)
| 0.080756659 || 1/12.3829
|-
! scope="row" | Q(1.5)
| 0.066807201 || 1/14.9684
|-
! scope="row" | Q(1.6)
| 0.054799292 || 1/18.2484
|-
! scope="row" | Q(1.7)
| 0.044565463 || 1/22.4389
|-
! scope="row" | Q(1.8)
| 0.035930319 || 1/27.8316
|-
! scope="row" | Q(1.9)
| 0.028716560 || 1/34.8231
|}

{{col-4}}

{| class="wikitable"
! scope="row" | Q(2.0)
| 0.022750132 || 1/43.9558
|-
! scope="row" | Q(2.1)
| 0.017864421 || 1/55.9772
|-
! scope="row" | Q(2.2)
| 0.013903448 || 1/71.9246
|-
! scope="row" | Q(2.3)
| 0.010724110 || 1/93.2478
|-
! scope="row" | Q(2.4)
| 0.008197536 || 1/121.9879
|-
! scope="row" | Q(2.5)
| 0.006209665 || 1/161.0393
|-
! scope="row" | Q(2.6)
| 0.004661188 || 1/214.5376
|-
! scope="row" | Q(2.7)
| 0.003466974 || 1/288.4360
|-
! scope="row" | Q(2.8)
| 0.002555130 || 1/391.3695
|-
! scope="row" | Q(2.9)
| 0.001865813 || 1/535.9593
|}

{{col-4}}

{| class="wikitable"
! scope="row" | Q(3.0)
| 0.001349898 || 1/740.7967
|-
! scope="row" | Q(3.1)
| 0.000967603 || 1/1033.4815
|-
! scope="row" | Q(3.2)
| 0.000687138 || 1/1455.3119
|-
! scope="row" | Q(3.3)
| 0.000483424 || 1/2068.5769
|-
! scope="row" | Q(3.4)
| 0.000336929 || 1/2967.9820
|-
! scope="row" | Q(3.5)
| 0.000232629 || 1/4298.6887
|-
! scope="row" | Q(3.6)
| 0.000159109 || 1/6285.0158
|-
! scope="row" | Q(3.7)
| 0.000107800 || 1/9276.4608
|-
! scope="row" | Q(3.8)
| 0.000072348 || 1/13822.0738
|-
! scope="row" | Q(3.9)
| 0.000048096 || 1/20791.6011
|-
! scope="row" | Q(4.0)
| 0.000031671 || 1/31574.3855
|}

{{col-end}}

Generalization to high dimensions

The Q-function can be generalized to higher dimensions:{{cite journal|last1=Savage|first1=I. R.|title=Mills ratio for multivariate normal distributions|journal=Journal of Research of the National Bureau of Standards Section B|date=1962|volume=66|issue=3|pages=93–96|doi=10.6028/jres.066B.011|zbl=0105.12601|doi-access=free}}

:Q(\mathbf{x})= \mathbb{P}(\mathbf{X}\geq \mathbf{x}),

where \mathbf{X}\sim \mathcal{N}(\mathbf{0},\, \Sigma) follows the multivariate normal distribution with covariance \Sigma and the threshold is of the form

\mathbf{x}=\gamma\Sigma\mathbf{l}^* for some positive vector \mathbf{l}^*>\mathbf{0} and positive constant \gamma>0. As in the one dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be [http://www.mathworks.com/matlabcentral/fileexchange/53796 approximated arbitrarily well] as \gamma becomes larger and larger.{{cite journal|last1=Botev|first1=Z. I.|title=The normal law under linear restrictions: simulation and estimation via minimax tilting|journal=Journal of the Royal Statistical Society, Series B|volume=79|pages=125–148|date=2016|doi=10.1111/rssb.12162|arxiv=1603.04166|bibcode=2016arXiv160304166B|s2cid=88515228}}{{cite book |chapter=Logarithmically efficient estimation of the tail of the multivariate normal distribution |last1=Botev |first1=Z. I. |last2=Mackinlay |first2=D. |last3=Chen |first3=Y.-L. |date=2017 |publisher=IEEE |isbn=978-1-5386-3428-8 |title= 2017 Winter Simulation Conference (WSC)|pages=1903–191 |doi= 10.1109/WSC.2017.8247926 |s2cid=4626481 }}
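For a quick numerical illustration (a sketch assuming SciPy; the covariance matrix and threshold are arbitrary example values), the probability P(\mathbf{X}\geq \mathbf{x}) can be obtained from the multivariate normal CDF, since -\mathbf{X} has the same distribution as \mathbf{X}:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

# Example covariance matrix and threshold (illustrative values only).
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
x = np.array([1.0, 2.0])

# P(X >= x) = P(-X <= -x), and -X ~ N(0, Sigma) as well.
q_value = multivariate_normal.cdf(-x, mean=np.zeros(2), cov=Sigma)
print(q_value)
</syntaxhighlight>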

References