Econometrics of risk
{{Short description|Econometric analysis of financial risk}}
The econometrics of risk is a specialized field within econometrics that focuses on the quantitative modeling and statistical analysis of risk in various economic and financial contexts. It integrates mathematical modeling, probability theory, and statistical inference to assess uncertainty, measure risk exposure, and predict potential financial losses. The discipline is widely applied in financial markets, insurance, macroeconomic policy, and corporate risk management.
Historical Development
The econometrics of risk emerged from centuries of interdisciplinary advancements in mathematics, economics, and decision theory. Drawing on Sakai’s framework, its evolution is categorized into six distinct stages, each shaped by pivotal thinkers and historical events:{{Citation |last=Sakai |first=Yasuhiro |title=On the Economics of Risk and Uncertainty: A Historical Perspective |date=2019 |work=J.M. Keynes Versus F.H. Knight: Risk, Probability, and Uncertainty |series=Evolutionary Economics and Social Complexity Science |volume=18 |pages=17–37 |editor-last=Sakai |editor-first=Yasuhiro |url=https://link.springer.com/chapter/10.1007/978-981-13-8000-6_2 |access-date=2025-05-16 |place=Singapore |publisher=Springer |language=en |doi=10.1007/978-981-13-8000-6_2 |isbn=978-981-13-8000-6}}
1. Initial (Pre-1700)
- Foundations in probability: Blaise Pascal and Pierre de Fermat formalized probability theory in 1654 through their correspondence on gambling problems (“the problem of points”). Pascal’s work extended to philosophical debates, such as Pascal's wager, framed through early utility concepts.
- Risk-Sharing Mechanisms: Early institutions such as Lloyd's Coffee House (1688), marine insurance, and early stock exchanges (with precursors dating to 1571, though the London Stock Exchange itself was not formally founded until 1801) addressed practical risks in trade and exploration.
- Limitations: Mercantilism dominated but lacked formal economic frameworks.
2. 1700–1880: Bernoulli and Adam Smith
- Daniel Bernoulli: Introduced expected utility theory (1738) to resolve the St. Petersburg paradox, replacing expected monetary value with logarithmic utility functions.
- Adam Smith: In The Wealth of Nations (1776), analyzed risk-bearing in markets, noting behavioral biases.
- Key Events: The rise of insurance (e.g., Tokio Marine Nichido, founded 1879) and political upheavals (e.g., American Independence (1776); French Revolution (1789)) highlighted the need for systematic risk thinking.
3. 1880–1940: Keynes and Knight
- Keynes: In A Treatise on Probability (1921), distinguished between quantifiable risk and non-quantifiable uncertainty; he later introduced the notion of animal spirits in The General Theory (1936).
- Frank H. Knight: In Risk, Uncertainty and Profit (1921), emphasized that profit arises from non-insurable uncertainty.
- Context: World War I, Great Depression, and 1923 Great Kantō earthquake shaped early economic risk studies.
4. 1940–1970: Von Neumann and Morgenstern
- Game Theory: John von Neumann and Oskar Morgenstern axiomatized the expected utility hypothesis in their 1944 work Theory of Games and Economic Behavior.
- Portfolio Theory: Harry Markowitz (1952) developed mean-variance analysis for optimizing risk-return trade-offs.
- Post-War Shifts: Advances in computation during the Cold War era (e.g., Monte Carlo simulation) supported risk modeling.
5. 1970–2000: Arrow, Akerlof, Spence, and Stiglitz
- Information Asymmetry: George Akerlof’s “The Market for Lemons” (1970), Michael Spence’s signaling theory, and Joseph Stiglitz’s screening models reshaped credit risk modeling.
- Volatility Models: Robert F. Engle’s ARCH (1982) and Tim Bollerslev’s GARCH (1986) enabled time-varying volatility estimation.
- Crises: Events like the 1973 oil crisis, Chernobyl disaster (1986), and the dissolution of the Soviet Union (1991) exposed model weaknesses.
6. Uncertain Age (2000–Present)
- Systemic Risks: The 2008 financial crisis and the Fukushima nuclear disaster (2011) revealed the limitations of value at risk (VaR) models.
- New Tools: Machine learning, extreme value theory, and Bayesian networks are increasingly applied to model tail risk.
- Regulation: Basel III and Basel IV standards emphasize stress testing and liquidity risk.
Key Econometric Models in Risk Analysis
= Traditional Latent Variable Models =
Econometric models frequently embed deterministic utility differences into a cumulative distribution function (CDF), allowing analysts to estimate decision-making under uncertainty. A common example is the binary logit model:
\Pr(A \succ B) = \Lambda\big(U(A) - U(B)\big) = \frac{1}{1 + e^{-(U(A) - U(B))}}
This setup assumes a homoscedastic logistic error term, which can produce systematic distortions in estimated risk preferences if scale is ignored.{{Citation |last=Train |first=Kenneth E. |title=Properties of Discrete Choice Models |work=Discrete Choice Methods with Simulation |date=2003 |pages=15–37 |url=https://doi.org/10.1017/cbo9780511753930.003 |access-date=2025-05-23 |place=Cambridge |publisher=Cambridge University Press|doi=10.1017/cbo9780511753930.003 |isbn=978-0-511-75393-0 }}
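As an illustration, a latent-variable logit of this kind can be estimated with standard software. The following Python sketch (an illustrative example, not drawn from the cited sources; the simulated data and variable names are assumptions) generates choices from a latent utility difference and recovers the logit coefficient with statsmodels.
<syntaxhighlight lang="python">
# Minimal sketch: simulate binary risky choices driven by a latent utility
# difference, then fit a binary logit. All data and names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
utility_diff = rng.normal(size=n)              # hypothetical U(risky) - U(safe)
p_choose_risky = 1.0 / (1.0 + np.exp(-utility_diff))
y = rng.binomial(1, p_choose_risky)            # observed binary choices

X = sm.add_constant(utility_diff)              # intercept + utility difference
logit_fit = sm.Logit(y, X).fit(disp=False)
print(logit_fit.params)                        # slope near 1 up to sampling noise
</syntaxhighlight>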
= Contextual Utility Model =
To address scale confounds in standard models, Wilcox (2011) proposed the Contextual Utility (CU) model. It divides the utility difference by the contextual range of utilities across all option pairs in the choice set:
\Pr(A \succ B) = F\!\left(\frac{U(A) - U(B)}{\nu}\right)
where F is a CDF (e.g., logistic) and \nu is the range of utilities attainable across the options in the choice context. This model satisfies several desirable properties, including monotonicity, stochastic dominance, and contextual scale invariance.{{Cite journal |last=Wilcox |first=Nathaniel T. |date=May 2011 |title='Stochastically more risk averse:' A contextual theory of stochastic discrete choice under risk |url=https://doi.org/10.1016/j.jeconom.2009.10.012 |journal=Journal of Econometrics |volume=162 |issue=1 |pages=89–104 |doi=10.1016/j.jeconom.2009.10.012 |issn=0304-4076}}
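The rescaling idea can be sketched directly. The helper below (a hypothetical function written for illustration, not Wilcox's own specification) divides the utility gap by the range of utilities in the choice context before applying a logistic CDF.
<syntaxhighlight lang="python">
# Illustrative contextual-utility style choice probability: the utility
# difference is normalised by the contextual range of utilities.
import numpy as np

def cu_choice_prob(u_a, u_b, context_utils):
    """P(choose A over B), with the utility gap scaled by the contextual range."""
    nu = max(context_utils) - min(context_utils)   # contextual range (assumed > 0)
    return 1.0 / (1.0 + np.exp(-(u_a - u_b) / nu))

# Same utility gap, wider context -> choice probability closer to 1/2.
print(cu_choice_prob(1.0, 0.5, [0.0, 1.0]))
print(cu_choice_prob(1.0, 0.5, [0.0, 5.0]))
</syntaxhighlight>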
= Random Preference Models =
Random preference models assume agents draw their preferences from a population distribution, generating heterogeneity in observed choices:
\Pr(A \succ B) = \int \mathbf{1}\{U_\theta(A) > U_\theta(B)\}\, dG(\theta)
where \theta is a preference parameter drawn from the population distribution G. This framework accounts for preference variation across individuals and enables richer modeling in panel data and experimental contexts.{{Cite journal |last1=McFadden |first1=Daniel |last2=Train |first2=Kenneth |date=September 2000 |title=Mixed MNL models for discrete response |url=https://doi.org/10.1002/1099-1255(200009/10)15:5<447::aid-jae570>3.3.co;2-t |journal=Journal of Applied Econometrics |volume=15 |issue=5 |pages=447–470 |doi=10.1002/1099-1255(200009/10)15:5<447::aid-jae570>3.3.co;2-t |issn=0883-7252}}
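A Monte Carlo sketch conveys the mechanism: each simulated agent draws a preference parameter from a population distribution, and the choice probability is the share of draws for which the risky option is preferred. The CRRA utility function and the normal distribution of the risk-aversion coefficient below are purely illustrative assumptions.
<syntaxhighlight lang="python">
# Random-preference sketch: draw risk-aversion coefficients from a population
# distribution and compute the implied probability of choosing a lottery.
import numpy as np

rng = np.random.default_rng(1)

def crra(x, r):
    # CRRA utility x^(1-r)/(1-r); draws of exactly r = 1 have probability zero here
    return x ** (1.0 - r) / (1.0 - r)

r_draws = rng.normal(loc=0.5, scale=0.3, size=10_000)          # population of preferences
u_safe = crra(3.0, r_draws)                                     # sure payoff of 3
u_risky = 0.5 * crra(6.0, r_draws) + 0.5 * crra(1.0, r_draws)   # 50/50 lottery
print((u_risky > u_safe).mean())    # implied choice probability for the lottery
</syntaxhighlight>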
= Credit Risk Models =
Binary classification models are extensively used in credit scoring. For instance, the probit model for default risk is:
\Pr(D_i = 1 \mid x_i) = \Phi(x_i'\beta)
Alternatively, in duration-based settings, proportional hazards models are common:
h(t \mid x_i) = h_0(t)\, e^{x_i'\beta}
Here, \Phi is the standard normal CDF, h_0(t) is the baseline hazard, and x_i are borrower characteristics.{{Cite journal |last1=Altman |first1=Edward I |last2=Saunders |first2=Anthony |date=1997-12-01 |title=Credit risk measurement: Developments over the last 20 years |url=https://www.sciencedirect.com/science/article/pii/S0378426697000368 |journal=Journal of Banking & Finance |volume=21 |issue=11 |pages=1721–1742 |doi=10.1016/S0378-4266(97)00036-8 |issn=0378-4266|doi-access=free }}
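A minimal probit estimation on simulated borrower data (the covariates and coefficient values are assumptions made for illustration) can be run with statsmodels as follows.
<syntaxhighlight lang="python">
# Illustrative probit default model on simulated borrower characteristics.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2_000
income = rng.normal(size=n)
debt_ratio = rng.normal(size=n)
latent = -1.0 - 0.8 * income + 1.2 * debt_ratio + rng.normal(size=n)
default = (latent > 0).astype(int)             # observed default indicator

X = sm.add_constant(np.column_stack([income, debt_ratio]))
probit_fit = sm.Probit(default, X).fit(disp=False)
print(probit_fit.params)                       # estimated coefficients
print(norm.cdf(X @ probit_fit.params)[:5])     # fitted default probabilities
</syntaxhighlight>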
= Insurance Risk Models =
Insurance econometrics often uses frequency-severity models. The expected aggregate claims are the product of the expected number of claims and the expected claim size:
\operatorname{E}[S] = \operatorname{E}[N] \, \operatorname{E}[X]
Typically, the claim count N follows a Poisson distribution and the claim size X may follow a Gamma or Pareto distribution.{{Cite book |last=Frees |first=Edward W. |url=https://www.cambridge.org/core/books/regression-modeling-with-actuarial-and-financial-applications/25C768AB6FFE4FAD5F2AD725D8643C18 |title=Regression Modeling with Actuarial and Financial Applications |date=2009 |publisher=Cambridge University Press |isbn=978-0-521-76011-9 |series=International Series on Actuarial Science |location=Cambridge |doi=10.1017/cbo9780511814372}}
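The identity can be checked by simulation; the Poisson and Gamma parameters below are illustrative assumptions rather than estimates.
<syntaxhighlight lang="python">
# Frequency-severity sketch: Poisson claim counts and Gamma claim sizes,
# so that E[S] = E[N] * E[X] up to simulation error.
import numpy as np

rng = np.random.default_rng(3)
n_policies = 50_000
lam, shape, scale = 0.2, 2.0, 500.0            # E[N] = 0.2, E[X] = 1000

counts = rng.poisson(lam, size=n_policies)
aggregate = np.array([rng.gamma(shape, scale, size=c).sum() for c in counts])

print(aggregate.mean())       # simulated E[S] per policy
print(lam * shape * scale)    # analytic E[N] * E[X] = 200
</syntaxhighlight>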
= Marketing Risk Models =
In marketing analytics, rare event models are used to study infrequent purchases or churn behavior. The zero-inflated Poisson (ZIP) model is common:
\Pr(Y = 0) = \pi + (1 - \pi) e^{-\lambda}, \quad
\Pr(Y = y) = (1 - \pi) \frac{\lambda^y e^{-\lambda}}{y!}, \quad y > 0
Mixed logit models allow for random taste variation:
\Pr(y_{it} = j) = \int \frac{e^{x_{ijt}'\beta}}{\sum_k e^{x_{ikt}'\beta}} f(\beta) \, d\beta
These are useful when modeling risk-averse consumer behavior and product choice under uncertainty.{{Citation |last=Train |first=Kenneth E. |title=Properties of Discrete Choice Models |work=Discrete Choice Methods with Simulation |date=2003 |pages=15–37 |url=https://doi.org/10.1017/cbo9780511753930.003 |access-date=2025-05-23 |place=Cambridge |publisher=Cambridge University Press|doi=10.1017/cbo9780511753930.003 |isbn=978-0-511-75393-0 }}
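The ZIP probabilities above can be evaluated directly; the values of π and λ in the sketch below are illustrative assumptions (e.g., a large share of structural zeros such as never-buyers), not estimates from any dataset.
<syntaxhighlight lang="python">
# Zero-inflated Poisson probabilities as written above.
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, pi, lam):
    """P(Y = y) under a zero-inflated Poisson model."""
    if y == 0:
        return pi + (1.0 - pi) * np.exp(-lam)
    return (1.0 - pi) * poisson.pmf(y, lam)

pi, lam = 0.6, 1.5
probs = [zip_pmf(y, pi, lam) for y in range(20)]
print(probs[0], sum(probs))   # large mass at zero; total probability close to 1
</syntaxhighlight>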
= Volatility models (ARCH/GARCH/SV) =
Autoregressive conditional heteroskedasticity (ARCH) models allow the conditional variance to depend on past shocks, capturing volatility clustering. Bollerslev's GARCH model generalizes ARCH by including lagged conditional variances. Exponential GARCH (EGARCH) and other variants capture asymmetries (e.g. leverage effects). A distinct class is stochastic volatility (SV) models, which assume volatility follows its own latent stochastic process (e.g. Taylor 1986). These models are central to financial risk analysis, being used to forecast time-varying risk and to price derivatives.{{Cite journal |last=Engle |first=Robert |date=2004-05-01 |title=Risk and Volatility: Econometric Models and Financial Practice |url=https://doi.org/10.1257/0002828041464597 |journal=American Economic Review |volume=94 |issue=3 |pages=405–420 |doi=10.1257/0002828041464597 |issn=0002-8282}}
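The GARCH(1,1) recursion can be written out directly. The sketch below filters a simulated return series with illustrative parameter values; in practice the parameters ω, α, and β are estimated by maximum likelihood.
<syntaxhighlight lang="python">
# GARCH(1,1) conditional-variance filter:
# sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
import numpy as np

def garch_variance(returns, omega, alpha, beta):
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                  # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(4)
r = rng.normal(scale=0.01, size=1_000)         # stand-in daily return series
print(np.sqrt(garch_variance(r, omega=1e-6, alpha=0.05, beta=0.90))[-5:])
</syntaxhighlight>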
= Risk measures (VaR, Expected Shortfall) and quantile methods =
Econometricians estimate risk measures such as value at risk (VaR) and expected shortfall (ES) using both parametric and nonparametric methods. For example, extreme value theory (EVT) can be used to model tail risk in financial returns, yielding estimates of high-quantile losses. Danielsson, de Vries, and Jorgensen (1998) note that traditional models (often assuming normality) tend to underestimate tail risk, which has motivated applications of EVT to VaR estimation. Quantile regression is another tool for VaR forecasting: by directly modeling a conditional quantile of returns, one can estimate the loss threshold that will not be exceeded at a given confidence level.{{Cite journal |last1=Danielsson |first1=Jon |last2=de Vries |first2=Casper G. |last3=Jorgensen |first3=Bjorn N. |date=1998 |title=The Value of Value at Risk: Statistical, Financial, and Regulatory Considerations |url=https://doi.org/10.2139/ssrn.1029663 |journal=SSRN Electronic Journal |doi=10.2139/ssrn.1029663 |issn=1556-5068}}
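A nonparametric (historical-simulation) calculation of VaR and ES at the 95% level is sketched below; the heavy-tailed simulated returns are an illustrative assumption.
<syntaxhighlight lang="python">
# Historical-simulation VaR and expected shortfall at the 95% level.
import numpy as np

rng = np.random.default_rng(5)
returns = rng.standard_t(df=4, size=10_000) * 0.01   # heavy-tailed stand-in returns

alpha = 0.05
q = np.quantile(returns, alpha)
var_95 = -q                              # loss not exceeded with 95% confidence
es_95 = -returns[returns <= q].mean()    # average loss in the worst 5% of cases
print(f"VaR(95%) = {var_95:.4f}, ES(95%) = {es_95:.4f}")   # ES is at least VaR
</syntaxhighlight>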
= Advanced Techniques =
- Copula Models: Used for multivariate risk modeling where the marginal distributions are known and the dependence structure is modeled separately:
F(x_1, \ldots, x_d) = C\big(F_1(x_1), \ldots, F_d(x_d)\big)
where C is the copula function (e.g., Clayton, Gumbel, Gaussian).{{Cite journal |last1=Trivedi |first1=Pravin K |last2=Zimmer |first2=David M |date=2006 |title=Copula Modeling: An Introduction for Practitioners |url=https://doi.org/10.1561/0800000005 |journal=Foundations and Trends® in Econometrics |volume=1 |issue=1 |pages=1–111 |doi=10.1561/0800000005 |issn=1551-3076|doi-access=free }}
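A Gaussian-copula sampling sketch illustrates the separation of marginal distributions from the dependence structure; the correlation value and the choice of exponential and lognormal marginals are assumptions made for illustration.
<syntaxhighlight lang="python">
# Sample from a Gaussian copula, then map the uniforms through two marginals.
import numpy as np
from scipy.stats import norm, expon, lognorm

rng = np.random.default_rng(6)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = norm.cdf(z)                                  # uniform marginals, Gaussian dependence
x1 = expon.ppf(u[:, 0], scale=2.0)               # exponential marginal (mean 2)
x2 = lognorm.ppf(u[:, 1], s=0.5)                 # lognormal marginal
print(np.corrcoef(x1, x2)[0, 1])                 # dependence induced by the copula
</syntaxhighlight>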
- Regularization Techniques: In high-dimensional settings, LASSO is used to prevent overfitting and improve model selection:
\min_\beta \left( \frac{1}{2n} \sum_{i=1}^{n} (y_i - x_i'\beta)^2 + \lambda \sum_{j=1}^{p} |\beta_j| \right)
LASSO is increasingly adopted in predictive risk modeling for credit scoring, insurance, and marketing applications.{{Cite journal |last=Tibshirani |first=Robert |date=1996 |title=Regression Shrinkage and Selection via the Lasso |url=https://www.jstor.org/stable/2346178 |journal=Journal of the Royal Statistical Society. Series B (Methodological) |volume=58 |issue=1 |pages=267–288 |jstor=2346178 |issn=0035-9246}}
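An illustrative LASSO fit on simulated high-dimensional data (the design matrix and sparsity pattern are assumptions) can be produced with scikit-learn; the penalty shrinks most coefficients exactly to zero, performing variable selection.
<syntaxhighlight lang="python">
# LASSO on a simulated sparse risk-prediction problem.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, p = 500, 200
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:10] = 1.0                             # only 10 truly relevant predictors
y = X @ beta_true + rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
print((lasso.coef_ != 0).sum())                  # number of selected predictors
</syntaxhighlight>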
Bibliography
- Wilcox, Nathaniel T. (May 2011). "'Stochastically more risk averse:' A contextual theory of stochastic discrete choice under risk". Journal of Econometrics. 162 (1): 89–104. {{doi|10.1016/j.jeconom.2009.10.012}}; {{issn|0304-4076}}
- Gouriéroux, Christian; Jasiak, Joann (December 2011). "11. Management of Credit Risk". The Econometrics of Individual Risk. Princeton University Press. pp. 208–238. {{doi|10.1515/9781400829415.209}}; {{isbn|978-1-4008-2941-5}}