Horvitz–Thompson estimator

{{Short description|Statistical estimation method}}

In statistics, the '''Horvitz–Thompson estimator''', named after Daniel G. Horvitz and Donovan J. Thompson,<ref>Horvitz, D. G.; Thompson, D. J. (1952) "A generalization of sampling without replacement from a finite universe", ''Journal of the American Statistical Association'', 47, 663–685. {{JSTOR|2280784}}</ref> is a method for estimating the total<ref>William G. Cochran (1977), ''Sampling Techniques'', 3rd Edition, Wiley. {{ISBN|0-471-16240-X}}</ref> and mean of a pseudo-population in a stratified sample by applying inverse probability weighting to account for the difference in the sampling distribution between the collected data and the target population. The Horvitz–Thompson estimator is frequently applied in survey analyses and can be used to account for missing data, as well as many sources of unequal selection probabilities.

==The method==

Formally, let <math>Y_i, i = 1, 2, \ldots, n</math> be an independent sample from <math>n</math> of <math>N \ge n</math> distinct strata with an overall mean <math>\mu</math>. Suppose further that <math>\pi_i</math> is the inclusion probability that a randomly sampled individual in a superpopulation belongs to the <math>i</math>th stratum. The Horvitz–Thompson estimator of the total is given by:<ref>{{cite book |title=Model Assisted Survey Sampling |isbn=9780387975283 |year=1992 |last1=Särndal |first1=Carl-Erik |last2=Swensson |first2=Bengt |last3=Wretman |first3=Jan Hȧkan}}</ref>{{rp|51}}

:<math>\hat{Y}_\mathrm{HT} = \sum_{i=1}^n \frac{Y_i}{\pi_i},</math>

and the Horvitz–Thompson estimate of the mean is given by:

:<math>\hat{\mu}_\mathrm{HT} = \frac{1}{N}\hat{Y}_\mathrm{HT} = \frac{1}{N}\sum_{i=1}^n \frac{Y_i}{\pi_i}.</math>
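These two estimators amount to summing the inverse-probability-weighted observations. A minimal numerical sketch (the sample values and inclusion probabilities below are illustrative, not from any real survey):

```python
# Horvitz–Thompson estimates of a population total and mean.
# y and pi are made-up illustrative values; N is an assumed population size.
y = [4.0, 10.0, 7.0]     # observed values Y_i
pi = [0.2, 0.5, 0.35]    # inclusion probabilities pi_i
N = 10                   # population size

# Each observation is weighted by the inverse of its inclusion probability.
total_ht = sum(yi / pi_i for yi, pi_i in zip(y, pi))
mean_ht = total_ht / N

print(total_ht)  # ≈ 60.0
print(mean_ht)   # ≈ 6.0
```

Each of the three weighted terms here equals 20, so the estimated total is 60 and the estimated mean is 6.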

In a Bayesian probabilistic framework <math>\pi_i</math> is considered the proportion of individuals in a target population belonging to the <math>i</math>th stratum. Hence, <math>Y_i/\pi_i</math> could be thought of as an estimate of the complete sample of persons within the <math>i</math>th stratum. The Horvitz–Thompson estimator can also be expressed as the limit of a weighted bootstrap resampling estimate of the mean. It can also be viewed as a special case of multiple imputation approaches.<ref>Roderick J.A. Little, Donald B. Rubin (2002) ''Statistical Analysis With Missing Data'', 2nd ed., Wiley. {{ISBN|0-471-18386-5}}</ref>

For post-stratified study designs, estimation of <math>\pi</math> and <math>\mu</math> is done in distinct steps. In such cases, computing the variance of <math>\hat{\mu}_\mathrm{HT}</math> is not straightforward. Resampling techniques such as the bootstrap or the jackknife can be applied to gain consistent estimates of the variance of the Horvitz–Thompson estimator.<ref>{{cite journal |last=Quatember |first=A. |title=The Finite Population Bootstrap - from the Maximum Likelihood to the Horvitz-Thompson Approach |journal=Austrian Journal of Statistics |date=2014 |volume=43 |issue=2 |pages=93–102 |doi=10.17713/ajs.v43i2.10 |doi-access=free }}</ref> The "survey" package for R conducts analyses for post-stratified data using the Horvitz–Thompson estimator.<ref>{{cite web |url=https://cran.r-project.org/web/packages/survey/ |title=CRAN - Package survey |date=19 July 2021 }}</ref>

==Proof of Horvitz–Thompson unbiased estimation of the mean==

For this proof it will be useful to represent the sample as a random subset <math>S\subseteq\{1,\ldots,N\}</math> of size <math>n</math>. We can then define indicator random variables <math>I_j = \mathbf{1}[j \in S]</math> representing, for each <math>j</math> in <math>\{1,\ldots,N\}</math>, whether it is present in the sample. Note that for any observation in the sample, the expectation is the definition of the inclusion probability:

:<math>\pi_i = \operatorname{\mathbb E}\left(I_i\right) = \Pr(i\in S).</math>

{{efn|Technically, the indexing scheme in the proof is different from the indexing in the description of the estimator. In the proof, <math>Y_j</math> is the <math>j</math>th value in a global ordering out of <math>N</math> strata. In the description, <math>Y_i</math> is the <math>i</math>th value in the sample, out of <math>n</math>. To unify these two, we could explicitly define a function mapping sample-indices to global indices.}}

Taking the expectation of the estimator we can prove it is unbiased as follows:

:<math>
\begin{align}
\operatorname{\mathbb E}\left(\hat{\mu}_\mathrm{HT}\right) &= \operatorname{\mathbb E}\left(\frac{1}{N}\sum_{i\in S} \frac{Y_i}{\pi_i}\right)\\[6pt]
&= \operatorname{\mathbb E}\left(\frac{1}{N} \sum_{j=1}^N \frac{Y_j}{\pi_j}I_j\right) \\[6pt]
&= \frac{1}{N} \sum_{j=1}^N \frac{Y_j}{\pi_j}\operatorname{\mathbb E}\left(I_j\right)\\[6pt]
&= \frac{1}{N}\sum_{j=1}^N \frac{Y_j}{\pi_j}\pi_j \\[6pt]
&= \frac{1}{N}\sum_{j=1}^N Y_j = \mu.
\end{align}
</math>
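The unbiasedness argument can be checked numerically by simulation. The sketch below assumes Poisson sampling (each unit <math>j</math> enters the sample independently with probability <math>\pi_j</math>), one design for which the indicator expectations above hold exactly; the population values are arbitrary illustrations:

```python
# Monte Carlo check that the Horvitz–Thompson mean is unbiased,
# under Poisson sampling with hypothetical population values.
import random

random.seed(1)
Y = [2.0, 5.0, 9.0, 4.0, 6.0, 1.0]   # full population values Y_j
pi = [0.3, 0.6, 0.9, 0.4, 0.5, 0.2]  # inclusion probabilities pi_j
N = len(Y)
mu = sum(Y) / N                       # true population mean = 4.5

reps = 50_000
acc = 0.0
for _ in range(reps):
    # Include each unit independently with probability pi_j,
    # then compute the HT estimate of the mean on that sample.
    acc += sum(y / p for y, p in zip(Y, pi) if random.random() < p) / N

print(mu, acc / reps)  # the Monte Carlo average should be close to mu
```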

The Hansen–Hurwitz (1943) estimator is known to be inferior to the Horvitz–Thompson (1952) strategy, associated with a number of Inclusion Probabilities Proportional to Size (IPPS) sampling procedures.<ref>Prabhu-Ajgaonkar, S. G. "Comparison of the Horvitz–Thompson Strategy with the Hansen–Hurwitz Strategy." ''Survey Methodology'' (1987): 221. [https://www150.statcan.gc.ca/n1/en/pub/12-001-x/1987002/article/14609-eng.pdf?st=mgQEBG-Z (pdf)]</ref>

==Notes==

{{notelist}}

==References==

{{reflist}}