Draft:Constrained minimum criterion

{{AFC submission|d|nn|u=IslanderMark|ns=118|decliner=Chumpih|declinets=20230424103142|ts=20230123012737}}

{{AFC comment|1=This appears to be a relatively new mechanism, with the key papers appearing in the past couple of years. There doesn't yet appear to be coverage of this Criterion which is independent and a secondary source, per WP:GNG. Chumpih t 10:31, 24 April 2023 (UTC)}}

----

{{Short description|Criterion for statistical model selection}}

{{Draft topics|mathematics}}

{{AfC topic|stem}}

In statistics, the '''constrained minimum criterion''' ('''CMC''') is a criterion for selecting regression models that is founded on the classical theory of likelihood-based inference. It is a frequentist alternative to the [[Akaike information criterion]] (AIC) and the [[Bayesian information criterion]] (BIC), with a tuning parameter that directly controls the balance between the fit and the sparsity of the selected model.

== Geometric motivation ==

For a full regression model with p predictor variables and an intercept, the unknown vector of regression parameters \boldsymbol{\beta}^t is a (p+1)-vector. Elements of \boldsymbol{\beta}^t corresponding to active variables are non-zero, and elements corresponding to inactive variables are zero. The likelihood ratio confidence region for \boldsymbol{\beta}^t is centred on its maximum likelihood estimator \hat{\boldsymbol{\beta}}. As the sample size n goes to infinity, the confidence region shrinks and degenerates into the point \hat{\boldsymbol{\beta}}, and \hat{\boldsymbol{\beta}} converges to \boldsymbol{\beta}^t, so the whole confidence region converges to \boldsymbol{\beta}^t. Hence, for sufficiently large n, every vector in the confidence region has non-zero elements at the positions of the active variables. This implies that when \boldsymbol{\beta}^t is captured by the confidence region, it is a vector in the region with the most zero elements. For this reason, the CMC chooses from the confidence region a vector with the most zero elements as an estimate of \boldsymbol{\beta}^t, and selects the model defined by the variables corresponding to the non-zero elements of the chosen vector.
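
This behaviour can be illustrated with a small simulation. The following sketch is an illustration rather than material from the cited papers: it assumes a Gaussian linear model with one weak active coefficient, and it checks membership in the likelihood ratio confidence region through the profile log-likelihood ratio n\log(RSS(\boldsymbol{\beta})/RSS(\hat{\boldsymbol{\beta}})) with an asymptotic \chi^2_{p+1} cutoff; all data-generating choices are illustrative assumptions.

<syntaxhighlight lang="python">
# Illustrative sketch only: as n grows, a vector with an active coefficient set to zero
# leaves the likelihood ratio confidence region, while beta^t stays inside with
# probability roughly 1 - alpha.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p = 5
beta_true = np.array([1.0, 0.3, 0.0, -1.5, 0.0, 0.0])  # intercept, then p coefficients

def lr_stat(X1, y, beta):
    # profile log-likelihood ratio of a fixed coefficient vector against the full-model MLE
    beta_hat = np.linalg.lstsq(X1, y, rcond=None)[0]
    rss_full = np.sum((y - X1 @ beta_hat) ** 2)
    return len(y) * np.log(np.sum((y - X1 @ beta) ** 2) / rss_full)

alpha = 0.05
cutoff = chi2.ppf(1 - alpha, df=p + 1)  # boundary of the 100(1 - alpha)% region

for n in (50, 200, 1000, 5000):
    X1 = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
    y = X1 @ beta_true + rng.normal(size=n)
    zeroed = beta_true.copy()
    zeroed[1] = 0.0                     # zero out the weak active coefficient
    print(n,
          lr_stat(X1, y, beta_true) <= cutoff,  # beta^t typically stays in the region
          lr_stat(X1, y, zeroed) <= cutoff)     # the sparser vector eventually leaves it
</syntaxhighlight>

For small n both vectors typically lie inside the region, but as n grows only \boldsymbol{\beta}^t remains inside.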

== Definition ==

Let {\cal M}=\{{M}_j\}^{2^p}_{j=1} be the collection of the 2^p subsets of the p predictor variables, where each M_j represents a candidate model. Denote by \hat{\boldsymbol{\beta}}_{j} the maximum likelihood estimator of the vector of regression parameters of the reduced model defined by M_j, augmented to dimension (p+1) by putting zeros at the positions of the variables not in M_j. For a fixed \alpha \in (0,1), denote by {\cal C}_{1-\alpha} the 100(1-\alpha)\% likelihood ratio confidence region for \boldsymbol{\beta}^t, which is a region in (p+1)-dimensional space centred on \hat{\boldsymbol{\beta}}. The CMC chooses the model represented by the solution vector of the following constrained minimization problem:

: \min_{\cal M} \|\hat{\boldsymbol{\beta}}_j\|_0 \quad \text{subject to} \quad \hat{\boldsymbol{\beta}}_j \in {\cal C}_{1-\alpha},

where \|\cdot \|_0 denotes the L_0 norm. The solution vector is called the CMC solution; it is a sparse estimator of \boldsymbol{\beta}^t, and its corresponding model is called the CMC selection. When the minimization problem has two or more solution vectors, the one with the highest likelihood is chosen as the CMC solution.<ref>{{cite journal |first1=Min |last1=Tsao |year=2023 |title=Regression model selection via log-likelihood ratio and constrained minimum criterion |journal=Canadian Journal of Statistics |volume=52 |pages=195–211 |doi=10.1002/cjs.11756 |arxiv=2107.08529 |s2cid=236087375 }}</ref>
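
For a Gaussian linear model, the constrained minimization can be carried out by an exhaustive search over subsets, because 2\{\ell(\hat{\boldsymbol{\beta}})-\ell(\hat{\boldsymbol{\beta}}_j)\} = n\log(RSS_j/RSS_{\text{full}}), so the constraint can be checked from residual sums of squares. The following is a minimal brute-force sketch under that assumption, using the asymptotic \chi^2_{p+1} quantile as the boundary of the region; cmc_select is a hypothetical helper name, not a published implementation.

<syntaxhighlight lang="python">
# Minimal brute-force sketch (illustrative, not a published implementation) for a
# Gaussian linear model with an intercept.  Membership of the augmented estimator in the
# likelihood ratio region is checked via n*log(RSS_j / RSS_full) <= chi-square quantile
# with p + 1 degrees of freedom, an assumed asymptotic form of the region.
from itertools import combinations
import numpy as np
from scipy.stats import chi2

def cmc_select(X, y, alpha=0.5):
    """Return the index set of predictors chosen by the CMC (hypothetical helper name)."""
    n, p = X.shape
    X1 = np.column_stack([np.ones(n), X])                # intercept is always included
    rss_full = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
    cutoff = chi2.ppf(1 - alpha, df=p + 1)               # boundary of C_{1-alpha}
    best, best_rss = None, np.inf
    for size in range(p + 1):                            # examine the sparsest sizes first
        for subset in combinations(range(p), size):
            cols = [0] + [j + 1 for j in subset]
            Xj = X1[:, cols]
            rss_j = np.sum((y - Xj @ np.linalg.lstsq(Xj, y, rcond=None)[0]) ** 2)
            in_region = n * np.log(rss_j / rss_full) <= cutoff
            if in_region and rss_j < best_rss:           # break ties by highest likelihood
                best, best_rss = subset, rss_j
        if best is not None:                             # sparsest feasible size found
            return best
    return tuple(range(p))                               # full model (always feasible)

# Illustrative usage on simulated data (variables 0 and 2 are active):
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=200)
print(cmc_select(X, y, alpha=0.5))                       # typically prints (0, 2)
</syntaxhighlight>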

== Asymptotic properties ==

Let \hat{\boldsymbol{\beta}}_{\alpha} be the CMC solution and \hat{M}_\alpha be the corresponding CMC selection. Under regularity conditions for the asymptotic normality of the maximum likelihood estimator \hat{\boldsymbol{\beta}}, (i) the CMC solution is consistent in that

: \hat{\boldsymbol{\beta}}_{\alpha} \stackrel{p}{\longrightarrow} \boldsymbol{\beta}^t

as n\rightarrow \infty, and (ii) the probability that \hat{M}_\alpha is the true model has an asymptotic lower bound

: \lim_{n\rightarrow +\infty}P(\hat{M}_{\alpha} = M_j^t) \geq 1-\alpha,

where M_j^t denotes the unknown true model, which contains all of the active variables and none of the inactive variables.
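
The lower bound in (ii) can be checked empirically. The following Monte Carlo sketch reuses the cmc_select helper sketched in the Definition section, assumed here to be saved as a module named cmc_sketch; the data-generating model and all numerical choices are illustrative assumptions.

<syntaxhighlight lang="python">
# Monte Carlo sketch of the lower bound (illustrative assumptions throughout).
# Assumes the cmc_select sketch above has been saved as a module named cmc_sketch.
import numpy as np
from cmc_sketch import cmc_select   # hypothetical module holding the sketch above

def recovery_rate(alpha, n=400, p=6, reps=200, seed=2):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)
        hits += cmc_select(X, y, alpha) == (0, 2)    # exact recovery of the true model
    return hits / reps

for alpha in (0.1, 0.5):
    # the bound suggests the long-run recovery rate should be at least about 1 - alpha
    print(alpha, recovery_rate(alpha))
</syntaxhighlight>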

== Tuning parameter ==

The tuning parameter \alpha controls the balance between the false active rate and the false inactive rate of the selected model, which is also the balance between the fit and the sparsity of the selected model. When the sample size n is large, the asymptotic lower bound in (ii) shows that setting \alpha to a small value leads to a high probability that the CMC selection is the true model. When n is not large, a small \alpha leads to a high false inactive rate, so a larger value should be used. The recommended default value is \alpha=0.5, at which the CMC is often more accurate than the AIC and BIC in terms of the false active rate and the false inactive rate.

The tuning parameter \alpha also makes it easy to adapt the CMC to special situations such as when n is small. The AIC and BIC both require special adjustments to their penalty terms for small-sample situations, whereas the CMC handles such situations with a simple change of the \alpha level.
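
The effect of \alpha in a small sample can be seen with the same machinery. The sketch below again assumes the cmc_select helper from the Definition section is available as a module named cmc_sketch; the weak coefficient and the other data-generating choices are illustrative assumptions.

<syntaxhighlight lang="python">
# Small-sample sketch of the effect of alpha (illustrative choices throughout).
# Assumes the cmc_select sketch from the Definition section is saved as cmc_sketch.
import numpy as np
from cmc_sketch import cmc_select   # hypothetical module holding the sketch above

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 6))                          # small sample: n = 30, p = 6
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 4] + rng.normal(size=30)

for alpha in (0.05, 0.5):
    print(alpha, cmc_select(X, y, alpha))
# With a small alpha the confidence region is wide, so the very sparse model without the
# weak variable 4 typically satisfies the constraint and that variable is missed (a false
# inactive); the default alpha = 0.5 tightens the region and typically recovers it.
</syntaxhighlight>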

In asymptotic properties (i) and (ii) above, the \alpha level is fixed. Stronger results may be obtained by allowing \alpha to vary with n. For selecting Gaussian linear models, one may let \alpha go to zero at a suitable rate as n goes to infinity so that the CMC selection is consistent;<ref>{{cite journal |first1=Min |last1=Tsao |year=2021 |title=A constrained minimum method for model selection |journal=Stat |volume=10 |doi=10.1002/sta4.387 |s2cid=236549659 }}</ref> that is, one may find a sequence of tuning parameter values \alpha_n\rightarrow 0 such that

: \lim_{n\rightarrow +\infty}P(\hat{M}_{\alpha_n} = M_j^t) =1.

== Computation ==

For best subset selection, the AIC and BIC require computing the maximum likelihood of all 2^p models; the CMC may require far fewer model fits. Denote by M_{-i} the model containing all variables except the ith variable \mathbf{x}_i, by \hat{\boldsymbol{\beta}}_{-i} its maximum likelihood estimator, and by \lambda(\hat{\boldsymbol{\beta}}_{-i}) its maximum log-likelihood ratio. In some cases, the value of \lambda(\hat{\boldsymbol{\beta}}_{-i}) alone is sufficient to determine whether \mathbf{x}_i will be selected by the CMC. One can thus first compute \lambda(\hat{\boldsymbol{\beta}}_{-i}) for i=1,2,\dots, p and use these values to determine which variables will be selected. Suppose this identifies p' variables that will be selected; one then only needs to select from the remaining p-p' variables. The total number of models that need to be computed by the CMC is thus p+2^{p-p'}, which can be substantially smaller than the 2^p required by the AIC and BIC.
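
One reading of this screening strategy is as follows: if removing \mathbf{x}_i alone already places \hat{\boldsymbol{\beta}}_{-i} outside {\cal C}_{1-\alpha}, then every model without \mathbf{x}_i is also infeasible, because its residual sum of squares, and hence its log-likelihood ratio, can only be larger, so \mathbf{x}_i must appear in the CMC selection. The sketch below implements this reading for a Gaussian linear model; cmc_select_screened is a hypothetical helper name, and the \chi^2_{p+1} cutoff is an assumed form of the region.

<syntaxhighlight lang="python">
# Sketch of the screening strategy (one reading of the description above, not the
# authors' code), for a Gaussian linear model with the assumed chi-square cutoff.
from itertools import combinations
import numpy as np
from scipy.stats import chi2

def cmc_select_screened(X, y, alpha=0.5):
    """Hypothetical helper: CMC selection with a leave-one-out screening step."""
    n, p = X.shape
    X1 = np.column_stack([np.ones(n), X])             # intercept is always included

    def rss(cols):
        Xj = X1[:, cols]
        return np.sum((y - Xj @ np.linalg.lstsq(Xj, y, rcond=None)[0]) ** 2)

    rss_full = rss(list(range(p + 1)))
    cutoff = chi2.ppf(1 - alpha, df=p + 1)

    def lam(cols):                                    # log-likelihood ratio statistic
        return n * np.log(rss(cols) / rss_full)

    # Step 1: p leave-one-out fits.  If dropping variable i alone already violates the
    # constraint, every model without variable i violates it too (its RSS is larger),
    # so variable i must appear in the CMC selection.
    must = [i for i in range(p)
            if lam([0] + [j + 1 for j in range(p) if j != i]) > cutoff]
    rest = [i for i in range(p) if i not in must]

    # Step 2: search only the 2^(p - p') subsets of the remaining variables.
    best, best_rss = None, np.inf
    for size in range(len(rest) + 1):
        for extra in combinations(rest, size):
            chosen = sorted(must + list(extra))
            r = rss([0] + [j + 1 for j in chosen])
            if n * np.log(r / rss_full) <= cutoff and r < best_rss:
                best, best_rss = tuple(chosen), r
        if best is not None:
            return best                               # sparsest feasible size reached
    return tuple(range(p))                            # full model (always feasible)
</syntaxhighlight>

In this sketch the screening step uses p fits and the search step at most 2^{p-p'} further fits, matching the p+2^{p-p'} count above.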

== Remarks ==

Comprehensive discussions of model selection philosophies and criteria can be found in the literature.<ref>{{Cite journal|last1=Ding|first1=Jie|last2=Tarokh|first2=Vahid|last3=Yang|first3=Yuhong|date=2018|title=Model Selection Techniques: An Overview|journal=IEEE Signal Processing Magazine|volume=35|issue=6|pages=16–34|doi=10.1109/MSP.2018.2867638|arxiv=1810.09583|bibcode=2018ISPM...35f..16D|s2cid=53035396}}</ref><ref>{{Cite journal|last1=Kadane|first1=J.B.|last2=Lazar|first2=N.A.|date=2004|title=Methods and criteria for model selection|journal=Journal of the American Statistical Association|volume=99|issue=465|pages=279–290|doi=10.1198/016214504000000269|s2cid=3138924}}</ref><ref>{{Cite book|title=Subset selection in regression|edition=2nd|last=Miller|first=Alan|publisher=Chapman & Hall|year=2019|isbn=9780367396220}}</ref>

In other model selection strategies such as the AIC and BIC, the sparsity of the selected model comes as a by-product of the model selection process. By directly minimizing the size of the model subject to a lower bound constraint on the likelihood ratio, the CMC is the first model selection method to explicitly pursue the sparsity of the selected model.

== See also ==
* [[Akaike information criterion]]
* [[Bayesian information criterion]]
* [[Model selection]]

== References ==

{{reflist}}

== Further reading ==

* {{Citation |last1=Burnham |first1=K. P. |last2=Anderson |first2=D. R. |year=2002 |title=Model Selection and Multimodel Inference: A practical information-theoretic approach |edition=2nd |publisher=Springer-Verlag }}.
* {{Citation |last1=Claeskens |first1=G. |author1-link=Gerda Claeskens |last2=Hjort |first2=N. L. |author2-link=Nils Lid Hjort |year=2008 |title=Model Selection and Model Averaging |publisher=Cambridge University Press }}.
* {{Citation |last1=Hurvich |first1=C. M. |last2=Tsai |first2=C.-L. |year=1989 |title=Regression and time series model selection in small samples |journal=Biometrika |volume=76 |issue=2 |pages=297–307 |doi=10.1093/biomet/76.2.297 }}.
* {{Citation |last1=Konishi |first1=S. |last2=Kitagawa |first2=G. |year=2008 |title=Information Criteria and Statistical Modeling |publisher=Springer }}.
* {{Citation |last1=McQuarrie |first1=A. D. R. |last2=Tsai |first2=C.-L. |year=1998 |title=Regression and Time Series Model Selection |publisher=World Scientific }}.