Antithetic variates

{{Short description|Monte Carlo method}}

In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Because the error of a Monte Carlo estimate decreases only as one over the square root of the number of sample paths, a very large number of sample paths is required to obtain an accurate result. The antithetic variates method reduces the variance of the simulation results.{{cite journal|last1=Botev|first1=Z.|last2=Ridder|first2=A.|title=Variance Reduction|journal= Wiley StatsRef: Statistics Reference Online|date=2017|pages=1–6|doi=10.1002/9781118445112.stat07975|isbn=9781118445112|hdl=1959.4/unsworks_50616|hdl-access=free}}{{cite book|last1=Kroese|first1=D. P.|authorlink1=Dirk Kroese |last2=Taimre|first2=T.|last3=Botev|first3=Z. I.|title=Handbook of Monte Carlo methods|year=2011 |publisher=John Wiley & Sons}}(Chapter 9.3)

Underlying principle

The antithetic variates technique consists, for every sample path obtained, in also taking its antithetic path: given a path \{\varepsilon_1,\dots,\varepsilon_M\}, one also takes \{-\varepsilon_1,\dots,-\varepsilon_M\}. The advantage of this technique is twofold: it reduces the number of normal samples that must be drawn to generate N paths, and it reduces the variance of the sample paths, improving the precision.

Suppose that we would like to estimate

:\theta = \mathrm{E}( h(X) ) = \mathrm{E}( Y ) \,

To do so, suppose that we have generated two samples

:Y_1\text{ and }Y_2 \,

An unbiased estimator of \theta is given by

:\hat \theta = \frac{Y_1 + Y_2}{2}.

Its variance satisfies

:\text{Var}(\hat \theta) = \frac{\text{Var}(Y_1) + \text{Var}(Y_2) + 2\text{Cov}(Y_1,Y_2)}{4}

so the variance is reduced whenever \text{Cov}(Y_1,Y_2) is negative; in particular, it is then smaller than the variance obtained from two independent samples, for which the covariance term vanishes. The antithetic variates technique chooses the second sample so that Y_1 and Y_2 are negatively correlated.
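
As a minimal numerical sketch of this identity (assuming, purely for illustration, Y = h(X) with the hypothetical choice h(x) = e^x, X standard normal, the antithetic pair built from X and −X, and a fixed NumPy seed):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical integrand chosen only for illustration.
h = np.exp

n = 100_000
x = rng.standard_normal(n)

# Antithetic pair: Y1 = h(X), Y2 = h(-X).
y1 = h(x)
y2 = h(-x)

theta_hat = (y1 + y2) / 2.0

# Empirical check of Var(theta_hat) = (Var(Y1) + Var(Y2) + 2 Cov(Y1, Y2)) / 4.
lhs = theta_hat.var(ddof=1)
rhs = (y1.var(ddof=1) + y2.var(ddof=1) + 2.0 * np.cov(y1, y2)[0, 1]) / 4.0
print(lhs, rhs)                 # the two values agree
print(np.cov(y1, y2)[0, 1])     # negative covariance, hence variance reduction
</syntaxhighlight>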

Example 1

If the random variable X follows a uniform distribution on [0, 1], the first sample is u_1, \ldots, u_n, where each u_i is drawn from U(0, 1). The second sample is built as u'_1, \ldots, u'_n, where u'_i = 1-u_i for each i. If the u_i are uniformly distributed on [0, 1], so are the u'_i. Furthermore, the covariance between u_i and u'_i is negative, allowing for variance reduction.
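
A minimal sketch of this construction (assuming, for illustration, n = 10 000 draws from NumPy's default generator with a fixed seed) shows that the antithetic sample has the same distribution as the original one while being negatively correlated with it:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
u = rng.uniform(0.0, 1.0, size=n)   # first sample: u_1, ..., u_n
u_anti = 1.0 - u                    # antithetic sample: u'_i = 1 - u_i

# Both samples have (approximately) the moments of U(0, 1) ...
print(u.mean(), u_anti.mean())      # both close to 1/2
print(u.var(), u_anti.var())        # both close to 1/12

# ... and their covariance is negative (in theory exactly -Var(U) = -1/12).
print(np.cov(u, u_anti)[0, 1])      # close to -1/12 ≈ -0.0833
</syntaxhighlight>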

Example 2: integral calculation

We would like to estimate

:I = \int_0^1 \frac{1}{1+x} \, \mathrm{d}x.

The exact result is I=\ln 2 \approx 0.69314718. This integral can be seen as the expected value of f(U), where

:f(x) = \frac{1}{1+x}

and U follows a uniform distribution on [0, 1].

The following table compares the classical Monte Carlo estimate (sample size: 2n, where n = 1500) to the antithetic variates estimate (sample size: n, completed with the transformed sample 1 − u_i):

{| cellspacing="1" border="1"
|-
|
| align="right" | Estimate
| align="right" | Standard error
|-
| Classical Estimate
| align="right" | 0.69365
| align="right" | 0.00255
|-
| Antithetic Variates
| align="right" | 0.69399
| align="right" | 0.00063
|}

The use of the antithetic variates method shows a substantial reduction of the standard error of the estimate, for the same total number of function evaluations.
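
A comparison of this kind can be reproduced with a short sketch along the following lines (assuming n = 1500 and NumPy's default generator with an arbitrary seed; the exact numbers depend on the seed and so will differ from the table):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return 1.0 / (1.0 + x)

n = 1500

# Classical Monte Carlo estimate with 2n independent uniform samples.
u_classical = rng.uniform(0.0, 1.0, size=2 * n)
y_classical = f(u_classical)
est_classical = y_classical.mean()
se_classical = y_classical.std(ddof=1) / np.sqrt(2 * n)

# Antithetic variates estimate: n uniforms, completed with 1 - u.
u = rng.uniform(0.0, 1.0, size=n)
y_pairs = (f(u) + f(1.0 - u)) / 2.0     # one averaged value per antithetic pair
est_antithetic = y_pairs.mean()
se_antithetic = y_pairs.std(ddof=1) / np.sqrt(n)

print(f"exact      : {np.log(2):.8f}")
print(f"classical  : {est_classical:.5f} (s.e. {se_classical:.5f})")
print(f"antithetic : {est_antithetic:.5f} (s.e. {se_antithetic:.5f})")
</syntaxhighlight>

Both estimators use 2n evaluations of f; the antithetic standard error is smaller because f is monotone on [0, 1], so f(U) and f(1 − U) are negatively correlated.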


References