Empirical likelihood

{{short description|Method of estimating statistical parameters}}

In probability theory and statistics, empirical likelihood (EL) is a nonparametric method for estimating the parameters of statistical models. It requires fewer assumptions about the error distribution than parametric approaches, while retaining some of the merits of likelihood-based inference. The estimation method requires that the data be independent and identically distributed (iid). It performs well even when the distribution is asymmetric or censored.{{Cite book|last=Owen|first=Art B.|url=https://www.worldcat.org/oclc/71012491|title=Empirical likelihood|date=2001|isbn=978-1-4200-3615-2|location=Boca Raton, Fla.|oclc=71012491}} EL methods can also handle constraints and prior information on parameters. Art Owen pioneered work in this area with his 1988 paper.{{Cite journal|last=Owen|first=Art B.|date=1988|title=Empirical likelihood ratio confidence intervals for a single functional|url=http://dx.doi.org/10.1093/biomet/75.2.237|journal=Biometrika|volume=75|issue=2|pages=237–249|doi=10.1093/biomet/75.2.237|issn=0006-3444|url-access=subscription}}

== Definition ==

Given a set of n i.i.d. realizations y_i of random variables Y_i, the empirical distribution function is \hat{F}(y) := \sum_{i=1}^n \pi_i I(y_i \le y), with the indicator function I and the (normalized) weights \pi_i \ge 0, \sum_{i=1}^n \pi_i = 1.

Then, the empirical likelihood is

:L := \prod_{i=1}^n \frac{\hat{F}(y_i)-\hat{F}(y_i-\delta y)}{\delta y},

where \delta y is a small number (for example, the distance to the next smaller sample point). Each factor \frac{\hat{F}(y_i)-\hat{F}(y_i-\delta y)}{\delta y} is an estimate of the probability density at y_i (compare with a histogram).
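
If the distribution F is chosen to place probability mass \pi_i on the sample point y_i, then \hat{F}(y_i) - \hat{F}(y_i - \delta y) = \pi_i for sufficiently small \delta y, so that L is, up to the constant factor \delta y^{-n}, simply the product \prod_{i=1}^n \pi_i.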

Empirical likelihood estimation can be augmented with side information by imposing further constraints on the empirical distribution function (similar to the generalized estimating equations approach).

For example, a moment condition E[h(Y;\theta)]=\int_{-\infty}^\infty h(y;\theta)\,dF=0 can be incorporated using a Lagrange multiplier, which implies its empirical counterpart \hat{E}[h(Y;\theta)]=\sum_{i=1}^n \pi_i h(y_i;\theta)=0.

With similar constraints, we could also model correlation.
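
For instance, the choice h(y;\theta) = y - \theta identifies \theta as the mean of Y:

:\hat{E}[h(Y;\theta)] = \sum_{i=1}^n \pi_i (y_i - \theta) = 0 \quad\Longleftrightarrow\quad \theta = \sum_{i=1}^n \pi_i y_i.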

=== Discrete random variables ===

The empirical-likelihood method can also be employed for discrete distributions.{{Cite journal|last1=Wang|first1=Dong|last2=Chen|first2=Song Xi|date=2009-02-01|title=Empirical likelihood for estimating equations with missing values|url=http://dx.doi.org/10.1214/07-aos585|journal=The Annals of Statistics|volume=37|issue=1|doi=10.1214/07-aos585|s2cid=5427751 |issn=0090-5364|arxiv=0903.0726}}

Define \ p_{i}:=\hat{F}(y_i)-\hat{F}(y_i-\delta y),\ i = 1,\dots,n, such that

p_i \geq 0 \text{ and } \sum_{i=1}^n\ p_{i} =1.

Then the empirical likelihood is again L(p_{1},\dots,p_{n})= \prod_{i=1}^n \ p_{i}.

Using the Lagrange multiplier method to maximize the logarithm of the empirical likelihood subject to the trivial normalization constraint, the first-order conditions

:\frac{\partial}{\partial p_i}\left(\sum_{j=1}^n \ln p_j + \lambda\Big(1 - \sum_{j=1}^n p_j\Big)\right) = \frac{1}{p_i} - \lambda = 0

give p_i = 1/\lambda for every i, and the constraint \sum_{i=1}^n p_i = 1 forces \lambda = n, so the maximum is attained at p_i = 1/n. Therefore, \hat{F} is the empirical distribution function.
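
This can be verified numerically. The following minimal sketch (an illustration, not library code; n = 5 is arbitrary) maximizes the log empirical likelihood under the normalization constraint and recovers the uniform weights:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Maximize sum(log p_i) subject to sum(p_i) = 1 and p_i >= 0,
# i.e. minimize the negative log empirical likelihood.
n = 5
result = minimize(
    lambda p: -np.sum(np.log(p)),
    x0=np.array([0.10, 0.15, 0.20, 0.25, 0.30]),  # feasible, non-uniform start
    method="SLSQP",
    bounds=[(1e-9, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
print(result.x)  # all weights are approximately 1/n = 0.2
</syntaxhighlight>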

== Estimation procedure ==

EL estimates are calculated by maximizing the empirical likelihood function (see above) subject to constraints based on the estimating function and the trivial assumption that the probability weights of the likelihood function sum to 1 (Mittelhammer, Judge, and Miller 2000, p. 292). This procedure is represented as:

:\max_{\pi_{i}, \theta} \ln(L) = \max_{\pi_{i}, \theta} \sum_{i=1}^n \ln \pi_i

subject to the constraints

:\sum_{i=1}^n\pi_i = 1, \qquad \sum_{i=1}^n\pi_i h(y_i;\theta) = 0, \qquad \pi_i \ge 0,\quad i = 1,\dots,n.

{{cite journal |last1=Bera |first1=Anil K. |last2=Bilias |first2=Yannis |title=The MM, ME, ML, EL, EF and GMM approaches to estimation: a synthesis |journal=Journal of Econometrics |date=2002 |volume=107 |issue=1–2 |pages=51–86 |doi=10.1016/S0304-4076(01)00113-0}}{{rp|at=Equation (73)}}

The value of the parameter \theta can be found by solving the first-order conditions of the Lagrangian function

:

\mathcal{L} = \sum_{i=1}^n \ln \pi_{i} + \mu (1- \sum_{i=1}^n \pi_{i})-n\tau' \sum_{i=1}^n \pi_{i} h(y_{i};\theta).

{{rp|at=Equation (74)}}

There is a clear analogy between this maximization problem and the one solved for maximum entropy.

The parameters \pi_i are nuisance parameters.
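
In practice, the maximization over the weights is usually carried out through the convex dual of this problem: for fixed \theta, the optimal weights take the form \pi_i = 1/(n(1 + \tau' h(y_i;\theta))), so only the multiplier \tau must be found numerically. The following minimal Python sketch illustrates this for the scalar mean case h(y;\theta) = y - \theta; the function name el_log_ratio is illustrative and not part of any library.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(y, theta):
    """Log empirical likelihood ratio log R(theta) for the mean.

    For fixed theta, the optimal weights are pi_i = 1/(n*(1 + t*h_i)),
    h_i = y_i - theta, where the multiplier t solves the dual first-order
    condition sum_i h_i / (1 + t*h_i) = 0. Then
    log R(theta) = sum_i log(n*pi_i) = -sum_i log(1 + t*h_i).
    """
    h = np.asarray(y, dtype=float) - theta
    if h.min() >= 0.0 or h.max() <= 0.0:
        return -np.inf  # theta outside the convex hull of the data: R(theta) = 0
    # Positivity of all weights restricts t to an open interval.
    lo, hi = -1.0 / h.max(), -1.0 / h.min()
    eps = 1e-10 * (hi - lo)
    t = brentq(lambda t: np.sum(h / (1.0 + t * h)), lo + eps, hi - eps)
    return -np.sum(np.log1p(t * h))

# log R is maximal (zero) at the sample mean and decreases away from it:
y = np.random.default_rng(0).normal(loc=1.0, size=50)
print(el_log_ratio(y, y.mean()))  # ~0.0
print(el_log_ratio(y, 0.5))       # negative
</syntaxhighlight>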

== Empirical likelihood ratio (ELR) ==

An empirical likelihood ratio function is defined and used to obtain confidence intervals for the parameter of interest θ, similar to parametric likelihood ratio confidence intervals.{{Cite journal|last=Owen|first=Art|date=1990-03-01|title=Empirical Likelihood Ratio Confidence Regions|journal=The Annals of Statistics|volume=18|issue=1|doi=10.1214/aos/1176347494|issn=0090-5364|doi-access=free}}{{Cite journal|last1=Dong|first1=Lauren Bin|last2=Giles|first2=David E. A.|date=2007-01-30|title=An Empirical Likelihood Ratio Test for Normality|url=http://dx.doi.org/10.1080/03610910601096544|journal=Communications in Statistics - Simulation and Computation|volume=36|issue=1|pages=197–215|doi=10.1080/03610910601096544|s2cid=16866055 |issn=0361-0918|url-access=subscription}} Let L(F) be the empirical likelihood of the distribution F; then the ELR is:

R(F)=L(F)/L(F_{n}).

Consider sets of the form

C = \{ T(F)| R(F) \geq r \}.

A test of T(F)=t then rejects when t does not belong to C, that is, when no distribution F with T(F)=t has likelihood L(F) \geq rL(F_{n}).

The central result is for the mean of X. Clearly, some restrictions on F are needed, or else C = \reals^p whenever r < 1. To see this, let:

F = \epsilon \delta_{x} + (1- \epsilon) F_{n}

If \epsilon is small enough and \epsilon >0, then R(F) \geq r.

But then, as x ranges through \reals^p, so does the mean of F, tracing out C = \reals^p. The problem can be solved by restricting to distributions F that are supported in a bounded set. It turns out to be possible to restrict attention to distributions with support in the sample, in other words, to distributions F \ll F_{n}. This method is convenient since the statistician might not be willing to specify a bounded support for F, and since it converts the construction of C into a finite-dimensional problem.
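
For a scalar mean, Owen's result is that -2 \log R, evaluated at the true mean, converges in distribution to a \chi^2_{(1)} random variable, which calibrates the threshold r. The following sketch inverts the ELR test into a confidence interval, reusing the dual computation from the sketch above (illustrative code; the helper names are not from any particular library):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(y, theta):
    """-2 log R(theta) for the mean, via the same dual computation as above."""
    h = np.asarray(y, dtype=float) - theta
    if h.min() >= 0.0 or h.max() <= 0.0:
        return np.inf  # theta outside the convex hull of the data
    lo, hi = -1.0 / h.max(), -1.0 / h.min()
    eps = 1e-10 * (hi - lo)
    t = brentq(lambda t: np.sum(h / (1.0 + t * h)), lo + eps, hi - eps)
    return 2.0 * np.sum(np.log1p(t * h))

def el_confidence_interval(y, level=0.95):
    """Invert the ELR test: { theta : -2 log R(theta) <= chi2 quantile }.

    -2 log R vanishes at the sample mean and grows monotonically towards
    the sample extremes, so each endpoint is a one-dimensional root.
    """
    y = np.asarray(y, dtype=float)
    crit = chi2.ppf(level, df=1)
    g = lambda theta: neg2_log_elr(y, theta) - crit
    margin = 1e-6 * (y.max() - y.min())
    lower = brentq(g, y.min() + margin, y.mean())
    upper = brentq(g, y.mean(), y.max() - margin)
    return lower, upper

# Example: 95% EL confidence interval for the mean of skewed data.
y = np.random.default_rng(1).exponential(size=40)
print(el_confidence_interval(y))
</syntaxhighlight>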

== Other applications ==

The use of empirical likelihood is not limited to confidence intervals. In efficient quantile regression, an EL-based categorization{{Cite journal|last1=Chen|first1=Jien|last2=Lazar|first2=Nicole A.|date=2010-01-27|title=Quantile estimation for discrete data via empirical likelihood|url=http://dx.doi.org/10.1080/10485250903301525|journal=Journal of Nonparametric Statistics|volume=22|issue=2|pages=237–255|doi=10.1080/10485250903301525|s2cid=119684596 |issn=1048-5252|url-access=subscription}} procedure helps determine the shape of the true discrete distribution at level p, and also provides a way of formulating a consistent estimator. In addition, EL can be used in place of parametric likelihood to form model selection criteria.{{Cite journal|last1=Chen|first1=Chixiang|last2=Wang|first2=Ming|last3=Wu|first3=Rongling|last4=Li|first4=Runze|date=2022|title=A Robust Consistent Information Criterion for Model Selection Based on Empirical Likelihood|url=http://dx.doi.org/10.5705/ss.202020.0254|journal=Statistica Sinica|doi=10.5705/ss.202020.0254|doi-broken-date=2024-11-11 |arxiv=2006.13281 |s2cid=220042083 |issn=1017-0405}}

Empirical likelihood can naturally be applied in survival analysis{{Cite book|last=Zhou|first=M.|date=2015|title=Empirical Likelihood Method in Survival Analysis|edition=1st|publisher=Chapman and Hall/CRC|doi=10.1201/b18598}} or regression problems.{{Cite journal|last1=Chen|first1=Song Xi|last2=Van Keilegom|first2=Ingrid|date=2009|title=A review on empirical likelihood methods for regression|journal=TEST|volume=18|pages=415–447|doi=10.1007/s11749-009-0159-5}}


== Literature ==

* Nordman, Daniel J., and Soumendra N. Lahiri. "A review of empirical likelihood methods for time series." Journal of Statistical Planning and Inference 155 (2014): 1–18. https://doi.org/10.1016/j.jspi.2013.10.001

== References ==