
{{Short description|Concept in probability and statistics}}

{{Correlation and covariance}}

In [[probability theory]] and [[statistics]], given a [[stochastic process]], the '''autocovariance''' is a function that gives the [[covariance]] of the process with itself at pairs of time points. Autocovariance is closely related to the [[autocorrelation]] of the process in question.

==Auto-covariance of stochastic processes==

===Definition===

With the usual notation \operatorname{E} for the expectation operator, if the stochastic process \left\{X_t\right\} has the mean function \mu_t = \operatorname{E}[X_t], then the autocovariance is given by<ref>{{cite book |first=Hwei |last=Hsu |year=1997 |title=Probability, random variables, and random processes |publisher=McGraw-Hill |isbn=978-0-07-030644-8 |url-access=registration |url=https://archive.org/details/schaumsoutlineof00hsuh }}</ref>{{rp|p. 162}}

{{Equation box 1

|indent = :

|title=

|equation = {{NumBlk||\operatorname{K}_{XX}(t_1,t_2) = \operatorname{cov}\left[X_{t_1}, X_{t_2}\right] = \operatorname{E}[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})] = \operatorname{E}[X_{t_1} X_{t_2}] - \mu_{t_1} \mu_{t_2}|{{EquationRef|Eq.1}}}}

|cellpadding= 6

|border

|border colour = #0073CF

|background colour=#F5FFFA}}

where t_1 and t_2 are two instances in time.
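As a numerical illustration of Eq.1, the autocovariance at a pair of time points can be estimated by averaging over an ensemble of realizations. The minimal sketch below uses a hypothetical random-slope process (not from the cited source) whose autocovariance is known in closed form, K_XX(t_1,t_2) = t_1 t_2 + \delta_{t_1 t_2}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each row is one realization of X_t = S*t + e_t,
# with S ~ N(0,1) a random slope and e_t ~ N(0,1) independent noise.
# For this process K_XX(t1, t2) = t1*t2 + (1 if t1 == t2 else 0).
n_paths, n_times = 10_000, 50
slopes = rng.normal(size=(n_paths, 1))
t = np.arange(n_times)
X = slopes * t + rng.normal(size=(n_paths, n_times))

def autocov(X, t1, t2):
    """Sample estimate of K_XX(t1,t2) = E[X_t1 X_t2] - mu_t1 * mu_t2."""
    x1, x2 = X[:, t1], X[:, t2]
    return np.mean(x1 * x2) - x1.mean() * x2.mean()

print(autocov(X, 3, 7))   # close to 3*7 = 21
```

The estimator is simply Eq.1 with expectations replaced by ensemble averages over the 10,000 sample paths.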

===Definition for weakly stationary process===

If \left\{X_t\right\} is a weakly stationary (WSS) process, then the following are true:{{rp|p. 163}}

:\mu_{t_1} = \mu_{t_2} \triangleq \mu for all t_1,t_2

and

:\operatorname{E}[|X_t|^2] < \infty for all t

and

:\operatorname{K}_{XX}(t_1,t_2) = \operatorname{K}_{XX}(t_2 - t_1,0) \triangleq \operatorname{K}_{XX}(t_2 - t_1) = \operatorname{K}_{XX}(\tau),

where \tau = t_2 - t_1 is the lag time, or the amount of time by which the signal has been shifted.

The autocovariance function of a WSS process is therefore given by:<ref>{{cite book |first=Amos |last=Lapidoth |year=2009 |title=A Foundation in Digital Communication |publisher=Cambridge University Press |isbn=978-0-521-19395-5}}</ref>{{rp|p. 517}}

{{Equation box 1

|indent = :

|title=

|equation = {{NumBlk||\operatorname{K}_{XX}(\tau) = \operatorname{E}[(X_t - \mu_t)(X_{t- \tau} - \mu_{t- \tau})] = \operatorname{E}[X_t X_{t-\tau}] - \mu_t \mu_{t-\tau}|{{EquationRef|Eq.2}}}}

|cellpadding= 6

|border

|border colour = #0073CF

|background colour=#F5FFFA}}

which is equivalent to

:\operatorname{K}_{XX}(\tau) = \operatorname{E}[(X_{t+ \tau} - \mu_{t +\tau})(X_{t} - \mu_{t})] = \operatorname{E}[X_{t+\tau} X_t] - \mu^2 .
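For a concrete WSS example (a minimal sketch with assumed parameters, not from the cited sources), a first-order autoregressive series driven by unit-variance noise has the known autocovariance K_XX(\tau) = \varphi^{|\tau|}/(1 - \varphi^2), which the lag-based sample estimator of Eq.2 recovers:

```python
import numpy as np

rng = np.random.default_rng(1)

# WSS AR(1) process X_t = phi * X_{t-1} + e_t with e_t ~ N(0,1);
# its stationary autocovariance is K_XX(tau) = phi**|tau| / (1 - phi**2).
phi, n = 0.6, 200_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + e[i]

def sample_autocov(x, tau):
    """Sample estimate of K_XX(tau) = E[X_t X_{t-tau}] - mu^2."""
    xc = x - x.mean()
    return np.mean(xc[: len(xc) - tau] * xc[tau:])

for tau in range(4):
    print(tau, sample_autocov(x, tau), phi**tau / (1 - phi**2))
```

Because the process is stationary, a single long realization suffices here, in contrast to the ensemble average needed in the general case.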

===Normalization===

It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.

The definition of the normalized auto-correlation of a stochastic process is

:\rho_{XX}(t_1,t_2) = \frac{\operatorname{K}_{XX}(t_1,t_2)}{\sigma_{t_1}\sigma_{t_2}} = \frac{\operatorname{E}[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})]}{\sigma_{t_1}\sigma_{t_2}}.

If the function \rho_{XX} is well-defined, its value must lie in the range [-1,1], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.

For a WSS process, the definition is

:\rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^2} = \frac{\operatorname{E}[(X_t - \mu)(X_{t+\tau} - \mu)]}{\sigma^2},

where

:\operatorname{K}_{XX}(0) = \sigma^2.
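As a sketch of the WSS normalization (illustrative parameters, not from the cited sources), dividing the sample autocovariance by its lag-zero value K_XX(0) = \sigma^2 yields \rho_{XX}(\tau); for an AR(1) series with coefficient \varphi the result should approach \varphi^{|\tau|}:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) series whose normalized autocovariance is rho(tau) = phi**|tau|.
phi, n = 0.8, 100_000
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

xc = x - x.mean()
k = np.array([np.mean(xc[: n - tau] * xc[tau:]) for tau in range(5)])
rho = k / k[0]          # rho_XX(tau) = K_XX(tau) / sigma^2, sigma^2 = K_XX(0)
print(rho)              # approximately [1, 0.8, 0.64, 0.512, 0.41]
```

By construction \rho_{XX}(0) = 1, and every other value lies in [-1,1] as the text requires.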

===Properties===

====Symmetry property====

:\operatorname{K}_{XX}(t_1,t_2) = \overline{\operatorname{K}_{XX}(t_2,t_1)}<ref>Kun Il Park, ''Fundamentals of Probability and Stochastic Processes with Applications to Communications'', Springer, 2018, ISBN 978-3-319-68074-3</ref>{{rp|p.169}}

and correspondingly, for a WSS process:

:\operatorname{K}_{XX}(\tau) = \overline{\operatorname{K}_{XX}(-\tau)}{{rp|p.173}}
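For a real-valued series the conjugate is a no-op, so the property reduces to K_XX(\tau) = K_XX(-\tau). A minimal numerical check (hypothetical series, not from the cited source):

```python
import numpy as np

rng = np.random.default_rng(3)

# A real-valued series with some serial dependence (the circular shift
# is just a quick way to correlate neighbouring samples).
x = rng.normal(size=10_000)
x = x + 0.5 * np.roll(x, 1)
xc = x - x.mean()

def k(tau):
    """Sample K_XX(tau); a negative tau shifts in the other direction."""
    if tau >= 0:
        return np.mean(xc[: len(xc) - tau] * xc[tau:])
    return np.mean(xc[-tau:] * xc[: len(xc) + tau])

print(k(3), k(-3))   # identical, as the symmetry property requires
```

The equality is exact here, not merely approximate, because both lags average the same set of pairwise products.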

====Linear filtering====

The autocovariance of a linearly filtered process \left\{Y_t\right\}

:Y_t = \sum_{k=-\infty}^\infty a_k X_{t+k}\,

is

:K_{YY}(\tau) = \sum_{k,l=-\infty}^\infty a_k a_l K_{XX}(\tau+k-l).\,
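The filtering identity can be checked numerically. In the sketch below (filter taps and sample size are illustrative assumptions), X is unit-variance white noise, so K_XX(m) is 1 at m = 0 and 0 elsewhere, and the double sum collapses to K_YY(\tau) = \sum_k a_k a_{k+\tau}:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical filter taps a_0, a_1, a_2 applied to white noise X.
a = np.array([0.5, 1.0, -0.25])
n = 500_000
x = rng.normal(size=n)

# Y_t = sum_k a_k X_{t+k}; each slice x[k : n-len(a)+1+k] supplies X_{t+k}.
y = sum(a[k] * x[k : n - len(a) + 1 + k] for k in range(len(a)))
yc = y - y.mean()

for tau in range(3):
    empirical = np.mean(yc[: len(yc) - tau] * yc[tau:])
    theory = sum(a[k] * a[k + tau] for k in range(len(a) - tau))
    print(tau, empirical, theory)
```

For a general (non-white) input the full double sum over k and l would be needed, but the collapse shown here is the standard special case.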

==Calculating turbulent diffusivity==

Autocovariance can be used to calculate turbulent diffusivity.<ref>{{Cite journal|last=Taylor|first=G. I.|date=1922-01-01|title=Diffusion by Continuous Movements|journal=Proceedings of the London Mathematical Society|language=en|volume=s2-20|issue=1|pages=196–212|doi=10.1112/plms/s2-20.1.196|bibcode=1922PLMS..220S.196T |issn=1460-244X|url=https://zenodo.org/record/1433523}}</ref> Turbulence in a flow can cause the velocity to fluctuate in space and time. Thus, we are able to identify turbulence through the statistics of those fluctuations.{{Citation needed|date=September 2020}}

Reynolds decomposition is used to define the velocity fluctuations u'(x,t) (assume we are working with a one-dimensional problem and that U(x,t) is the velocity along the x direction):

:U(x,t) = \langle U(x,t) \rangle + u'(x,t),

where U(x,t) is the true velocity and \langle U(x,t) \rangle is the expected value of the velocity. If we choose an appropriate \langle U(x,t) \rangle, all of the stochastic components of the turbulent velocity will be included in u'(x,t). To determine \langle U(x,t) \rangle, a set of velocity measurements assembled from points in space, moments in time, or repeated experiments is required.

If we assume the turbulent flux \langle u'c' \rangle (where c' = c - \langle c \rangle and c is the concentration term) can be modeled as a random walk, we can use Fick's laws of diffusion to express the turbulent flux term:

:J_{\text{turbulence}_x} = \langle u'c' \rangle \approx -D_{T_x} \frac{\partial \langle c \rangle}{\partial x}.

The velocity autocovariance is defined as

:K_{XX} \equiv \langle u'(t_0) u'(t_0 + \tau)\rangle or K_{XX} \equiv \langle u'(x_0) u'(x_0 + r)\rangle,

where \tau is the lag time, and r is the lag distance.

The turbulent diffusivity D_{T_x} can be calculated using the following three methods:

{{numbered list

|If we have velocity data along a Lagrangian trajectory:

:D_{T_x} = \int_0^\infty \langle u'(t_0) u'(t_0 + \tau)\rangle \,d\tau.

|If we have velocity data at one fixed (Eulerian) location{{Citation needed|date=September 2020}}:

:D_{T_x} \approx [0.3 \pm 0.1] \left[\frac{\langle u'u' \rangle + \langle u \rangle^2}{\langle u'u' \rangle}\right] \int_0^\infty \langle u'(t_0) u'(t_0 + \tau)\rangle \,d\tau.

|If we have velocity information at two fixed (Eulerian) locations{{Citation needed|date=September 2020}}:

:D_{T_x} \approx [0.4 \pm 0.1] \left[\frac{1}{\langle u'u' \rangle}\right] \int_0^\infty \langle u'(x_0) u'(x_0 + r)\rangle \,dr,

where r is the separation between the two fixed locations.

}}
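Method 1 can be sketched numerically with a synthetic Lagrangian velocity record (an AR(1) stand-in for an Ornstein–Uhlenbeck process; the time step, integral time scale, and variance are illustrative assumptions, not values from the cited sources). Its autocovariance is \sigma^2 e^{-\tau/T_L}, so the lag integral should approach D = \sigma^2 T_L:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic velocity record with variance sigma2 and integral time scale T_L;
# the theoretical diffusivity for this model is D = sigma2 * T_L.
dt, n = 0.01, 500_000
T_L, sigma2 = 0.5, 1.0
phi = np.exp(-dt / T_L)
noise = rng.normal(scale=np.sqrt(sigma2 * (1 - phi**2)), size=n)
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = phi * u[i - 1] + noise[i]

# Velocity autocovariance <u'(t0) u'(t0 + tau)> out to ~5 T_L,
# then trapezoidal integration over the lag tau.
up = u - u.mean()
max_lag = int(5 * T_L / dt)
K = np.array([np.mean(up[: n - m] * up[m:]) for m in range(max_lag)])
D = np.sum(K[:-1] + K[1:]) * dt / 2
print(D)   # close to sigma2 * T_L = 0.5
```

Truncating the integral at a few integral time scales is a practical necessity: at large lags the sample autocovariance is dominated by statistical noise.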

==Auto-covariance of random vectors==

{{main|Auto-covariance matrix}}


==References==

{{Reflist}}

==Further reading==

* {{cite book |first=P. G. |last=Hoel |title=Mathematical Statistics |publisher=Wiley |location=New York |year=1984 |edition=Fifth |isbn=978-0-471-89045-4 }}
* [https://web.archive.org/web/20060428122150/http://w3eos.whoi.edu/12.747/notes/lect06/l06s02.html Lecture notes on autocovariance from WHOI]

[[Category:Fourier analysis]]
[[Category:Autocorrelation]]