# Akaike information criterion

The **Akaike information criterion** (**AIC**) is a measure of the relative quality of a statistical model for a given set of data. That is, given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Hence, AIC provides a means for model selection.

AIC is founded on information theory: it offers a relative estimate of the information lost when a given model is used to represent the process that generates the data. In doing so, it deals with the trade-off between the goodness of fit of the model and the complexity of the model.

AIC does not provide a test of a model in the sense of testing a null hypothesis; i.e. AIC can tell nothing about the quality of the model in an absolute sense. If all the candidate models fit poorly, AIC will not give any warning of that.

## Definition

Suppose that we have a statistical model of some data. Let *L* be the maximized value of the likelihood function for the model; let *k* be the number of estimated parameters in the model. Then the AIC value of the model is the following.^{[1]}^{[2]}

- <math>\mathrm{AIC} = 2k - 2\ln(L)</math>

Given a set of candidate models for the data, *the preferred model is the one with the minimum AIC value.* Hence AIC rewards goodness of fit (as assessed by the likelihood function), but it also includes a penalty that is an increasing function of the number of estimated parameters. The penalty discourages overfitting (increasing the number of parameters in the model almost always improves the goodness of the fit).
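As a minimal sketch (in Python, with hypothetical log-likelihood values), the definition can be computed directly:

```python
def aic(log_likelihood, k):
    """AIC = 2k - 2 ln(L), where log_likelihood is the maximized
    log-likelihood ln(L) and k is the number of estimated parameters."""
    return 2 * k - 2 * log_likelihood

# Two hypothetical candidate models fit to the same data; the model
# with the lower AIC value is preferred.
print(aic(log_likelihood=-120.5, k=3))  # 247.0
print(aic(log_likelihood=-119.8, k=5))  # 249.6
```

Here the second model fits slightly better (higher likelihood) but pays a penalty for its two extra parameters, so the first model is preferred.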

AIC is founded in information theory. Suppose that the data is generated by some unknown process *f*. We consider two candidate models to represent *f*: *g*_{1} and *g*_{2}. If we knew *f*, then we could find the information lost from using *g*_{1} to represent *f* by calculating the Kullback–Leibler divergence, *D*_{KL}(*f* ‖ *g*_{1}); similarly, the information lost from using *g*_{2} to represent *f* could be found by calculating *D*_{KL}(*f* ‖ *g*_{2}). We would then choose the candidate model that minimized the information loss.

We cannot choose with certainty, because we do not know *f*. Akaike (1974) showed, however, that we can estimate, via AIC, how much more (or less) information is lost by *g*_{1} than by *g*_{2}. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc, below).

## How to apply AIC in practice

To apply AIC in practice, we start with a set of candidate models, and then find the models' corresponding AIC values. There will almost always be information lost due to using a candidate model to represent the "true" model (i.e. the process that generates the data). We wish to select, from among the candidate models, the model that minimizes the information loss. We cannot choose with certainty, but we can minimize the estimated information loss.

Suppose that there are *R* candidate models. Denote the AIC values of those models by AIC_{1}, AIC_{2}, AIC_{3}, …, AIC_{R}. Let AIC_{min} be the minimum of those values. Then exp((AIC_{min} − AIC_{i})/2) can be interpreted as the relative probability that the *i*th model minimizes the (estimated) information loss.^{[3]}

As an example, suppose that there are three candidate models, whose AIC values are 100, 102, and 110. Then the second model is exp((100 − 102)/2) = 0.368 times as probable as the first model to minimize the information loss. Similarly, the third model is exp((100 − 110)/2) = 0.007 times as probable as the first model to minimize the information loss.

In this example, we would omit the third model from further consideration. We then have three options: (1) gather more data, in the hope that this will allow clearly distinguishing between the first two models; (2) simply conclude that the data is insufficient to support selecting one model from among the first two; (3) take a weighted average of the first two models, with weights 1 and 0.368, respectively, and then do statistical inference based on the weighted multimodel.^{[4]}

The quantity exp((AIC_{min} − AIC_{i})/2) is the *relative likelihood* of model *i*.
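The relative-likelihood computation can be sketched as follows, using the AIC values from the worked example above (100, 102, 110):

```python
import math

def relative_likelihoods(aic_values):
    """exp((AIC_min - AIC_i)/2) for each candidate model."""
    aic_min = min(aic_values)
    return [math.exp((aic_min - a) / 2) for a in aic_values]

# The worked example above: AIC values 100, 102, 110.
print([round(r, 3) for r in relative_likelihoods([100, 102, 110])])
# [1.0, 0.368, 0.007]
```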

If all the models in the candidate set have the same number of parameters, then using AIC might at first appear to be very similar to using the likelihood-ratio test. There are, however, important distinctions. In particular, the likelihood-ratio test is valid only for nested models, whereas AIC (and AICc) has no such restriction.^{[5]}

## AICc

AICc is AIC with a correction for finite sample sizes. The formula for AICc depends upon the statistical model. Assuming that the model is univariate, linear, and has normally-distributed residuals (conditional upon regressors), the formula for AICc is as follows:^{[6]}^{[7]}

- <math>\mathrm{AICc} = \mathrm{AIC} + \frac{2k(k + 1)}{n - k - 1}</math>

where *n* denotes the sample size and *k* denotes the number of parameters.
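A minimal sketch of the correction, with hypothetical values for AIC, *n*, and *k*:

```python
def aicc(aic, n, k):
    """AICc = AIC + 2k(k+1)/(n - k - 1), for a univariate linear
    model with normally distributed residuals."""
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# With a hypothetical AIC of 100.0, n = 20 observations, k = 3 parameters:
print(aicc(aic=100.0, n=20, k=3))  # 101.5
```

Note how the correction term grows as *n* approaches *k* + 1, and shrinks toward zero as *n* becomes much larger than *k*².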

If the assumption of a univariate linear model with normal residuals does not hold, then the formula for AICc will generally change. Even so, Burnham & Anderson (2002, §7.4) recommend using the above formula, unless a more precise correction is known. Further discussion of the formula, with examples of other assumptions, is given by Burnham & Anderson (2002, ch. 7) and Konishi & Kitagawa (2008, ch. 7–8). In particular, with other assumptions, bootstrap estimation of the formula is often feasible.

AICc is essentially AIC with a greater penalty for extra parameters. Using AIC, instead of AICc, when *n* is not many times larger than *k*^{2}, increases the probability of selecting models that have too many parameters, i.e. of overfitting. The probability of AIC overfitting can be substantial, in some cases.^{[8]}

Burnham & Anderson (2002) strongly recommend using AICc, rather than AIC, if *n* is small or *k* is large. Since AICc converges to AIC as *n* gets large, AICc generally should be employed regardless.^{[9]}

Brockwell & Davis (1991, p. 273) advise using AICc as the primary criterion in selecting the orders of an ARMA model for time series. McQuarrie & Tsai (1998) ground their high opinion of AICc on extensive simulation work with regression and time series.

Note that if all the candidate models have the same *k*, then AICc and AIC will give identical (relative) valuations; hence, there will be no disadvantage in using AIC instead of AICc. Furthermore, if *n* is many times larger than *k*^{2}, then the correction will be negligible; hence, there will be negligible disadvantage in using AIC instead of AICc.

## History

The Akaike information criterion was developed by Hirotugu Akaike, originally under the name "an information criterion". It was first announced by Akaike at a 1971 symposium, the proceedings of which were published in 1973.^{[10]} The 1973 publication, though, was only an informal presentation of the concepts.^{[11]} The first formal publication was in a 1974 paper by Akaike.^{[2]} As of October 2014, the 1974 paper had received more than 14000 citations in the Web of Science: making it the 73rd most-cited research paper of all time.^{[12]}

The initial derivation of AIC relied upon some strong assumptions. Takeuchi (1976) showed that the assumptions could be made much weaker. Takeuchi's work, however, was in Japanese and was not widely known outside Japan for many years.

AICc was originally proposed for linear regression (only) by Sugiura (1978). That instigated the work of Hurvich & Tsai (1989), and several further papers by the same authors, which extended the situations in which AICc could be applied. The work of Hurvich & Tsai contributed to the decision to publish a second edition of the volume by Brockwell & Davis (1991), which is the standard reference for linear time series; the second edition states, "our prime criterion for model selection [among ARMA models] will be the AICc".^{[13]}

The first general exposition of the information-theoretic approach was the volume by Burnham & Anderson (2002). It includes an English presentation of the work of Takeuchi. The volume led to far greater use of the information-theoretic approach, and it now has more than 25000 citations on Google Scholar.

Akaike originally called his approach an "entropy maximization principle", because the approach is founded on the concept of entropy in information theory. Indeed, minimizing AIC in a statistical model is effectively equivalent to maximizing entropy in a thermodynamic system; in other words, the information-theoretic approach in statistics is essentially applying the Second Law of Thermodynamics. As such, AIC has roots in the work of Ludwig Boltzmann on entropy. For more on these issues, see Akaike (1985) and Burnham & Anderson (2002, ch. 2).

## Usage tips

### Counting parameters

A statistical model must fit all the data points. Thus, a straight line, on its own, is
not a model of the data, unless all the data points lie exactly on the line.
We can, however, choose a model that is "a straight line plus noise"; such a model might be formally described thus:
*y*_{i} = *b*_{0} + *b*_{1}*x*_{i} + ε_{i}. Here, the ε_{i} are the residuals from the straight line fit. If the ε_{i} are assumed to be i.i.d. Gaussian (with zero mean), then the model has three parameters:
*b*_{0}, *b*_{1}, and the variance of the Gaussian distributions.
Thus, when calculating the AIC value of this model, we should use *k*=3. More generally, for any least squares model with i.i.d. Gaussian residuals, the variance of the residuals’ distributions should be counted as one of the parameters.^{[14]}

As another example, consider a first-order autoregressive model, defined by
*x*_{i} = *c* + *φx*_{i−1} + ε_{i}, with the ε_{i} being i.i.d. Gaussian (with zero mean).
For this model, there are three parameters: *c*, *φ*, and the variance of the ε_{i}. More generally, a *p*th-order autoregressive model has *p* + 2 parameters.
(If, however, *c* is not estimated, but given in advance, then there are only *p* + 1 parameters.)

### Transforming data

The AIC values of the candidate models must all be computed with the same data set. Sometimes, though, we might want to compare a model of the data with a model of the logarithm of the data; more generally, we might want to compare a model of the data with a model of transformed data. Here is an illustration of how to deal with data transforms (adapted from Burnham & Anderson (2002, §2.11.3)).

Suppose that we want to compare two models: a normal distribution of the data and a normal distribution of the logarithm of the data. We should *not* directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of the data. To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/*x*. Hence, the transformed distribution has the following probability density function:

- <math>x \mapsto \, \frac{1}{x} \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp \left(-\frac{\left(\ln x-\mu\right)^2}{2\sigma^2}\right)</math>

—which is the probability density function for the log-normal distribution. We then compare the AIC value of the normal model against the AIC value of the log-normal model.
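A sketch of this comparison on hypothetical positive-valued data; `normal_loglik` and `lognormal_loglik` are illustrative helpers, with the sum of −ln *x* terms supplying the 1/*x* Jacobian factor described above:

```python
import math

def normal_loglik(data):
    """Maximized log-likelihood of a normal model (MLE mean and variance)."""
    n = len(data)
    mu = sum(data) / n
    s2 = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

def lognormal_loglik(data):
    """Maximized log-likelihood of a log-normal model of the same data.
    The -ln(x) Jacobian terms put both models on the original data scale."""
    logs = [math.log(x) for x in data]
    return normal_loglik(logs) - sum(logs)

data = [1.2, 3.4, 2.2, 5.1, 0.9, 2.7]  # hypothetical positive-valued data
k = 2  # each model estimates a mean and a variance
aic_normal = 2 * k - 2 * normal_loglik(data)
aic_lognormal = 2 * k - 2 * lognormal_loglik(data)
print(aic_normal, aic_lognormal)
```

Without the Jacobian correction, the two AIC values would refer to densities over different variables and would not be comparable.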

### Software unreliability

Some statistical software will report the value of AIC or the maximum value of the log-likelihood function, but the reported values are not always correct.
Typically, any incorrectness is due to a constant in the log-likelihood function being omitted. For example,
the log-likelihood function for *n* independent identical normal distributions is

- <math>\ln\mathcal{L}(\mu,\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2</math>

—this is the function that is maximized when obtaining the value of AIC. Some software, however, omits the term
(*n*/2)ln(2*π*), and so reports erroneous values for the log-likelihood maximum—and thus for AIC. Such errors do not matter for AIC-based comparisons *if* all the models assume normally-distributed residuals, because then the errors cancel out. In general, however, the constant term needs to be included in the log-likelihood function.^{[15]} Hence, before using software to calculate AIC, it is generally good practice to run some simple tests on the software, to ensure that the function values are correct.

## Comparisons with other model selection methods

### Comparison with BIC

The AIC penalizes the number of parameters less strongly than does the Bayesian information criterion (BIC). A comparison of AIC/AICc and BIC is given by Burnham & Anderson (2002, §6.4). The authors show that AIC and AICc can be derived in the same Bayesian framework as BIC, just by using a different prior. The authors also argue that AIC/AICc has theoretical advantages over BIC. First, because AIC/AICc is derived from principles of information; BIC is not, despite its name. Second, because the (Bayesian-framework) derivation of BIC has a prior of 1/*R* (where *R* is the number of candidate models), which is "not sensible", since the prior should be a decreasing function of *k*. Additionally, they present a few simulation studies that suggest AICc tends to have practical/performance advantages over BIC. See too Burnham & Anderson (2004).

Further comparison of AIC and BIC, in the context of regression, is given by Yang (2005). In particular, AIC is asymptotically optimal in selecting the model with the least mean squared error, under the assumption that the exact "true" model is not in the candidate set (as is virtually always the case in practice); BIC is not asymptotically optimal under the assumption. Yang additionally shows that the rate at which AIC converges to the optimum is, in a certain sense, the best possible.

For a more detailed comparison of AIC and BIC, see Aho et al. (2014).

### Comparison with Chi-squared testing

#### General case

Often, we want to select amongst candidate models where all the likelihood functions assume that the residuals are normally distributed (with mean zero) and independent. That assumption leads to chi-squared tests, based upon the *χ*² distribution (and related to *R*^{2}). Using chi-squared tests turns out to be related to using AIC.

By our assumption, the maximum likelihood is given by

- <math>L=\prod_{i=1}^n \left(\frac{1}{2 \pi \hat{\sigma_i}^2}\right)^{1/2} \exp \left( -\sum_{i=1}^{n}\frac{(y_i-f(x_i;\hat{\theta}))^2}{2\hat{\sigma_i}^2}\right)</math>
- <math>\therefore \, \ln(L) = \ln\left(\prod_{i=1}^n\left(\frac{1}{2\pi\hat{\sigma_i}^2}\right)^{1/2}\right) - \frac{1}{2}\sum_{i=1}^n \frac{(y_i-f(x_i;\hat{\theta}))^2}{\hat{\sigma_i}^2}</math>
- <math>\therefore \, \ln(L) = C - \chi^2/2 \,</math>,

where *C* is a constant independent of the model used, and dependent only on the use of particular data points, i.e. it does not change if the data do not change.

Thus AIC = 2*k* − 2ln(*L*) = 2*k* − 2(*C* − *χ*²/2) = 2*k* − 2*C* + *χ*². Because only differences in AIC are meaningful, the constant *C* can be ignored, allowing us to take AIC = 2*k* + *χ*² for model comparisons.

#### Equal-variances case

An especially convenient expression for AIC can be obtained in the case where the *σ*_{i} are assumed to be identical (i.e. *σ*_{i} = *σ*), and *σ* is unknown. In this case, the maximum-likelihood estimate for *σ*^{2} is RSS/*n*, where RSS is the residual sum of squares: <math>\textstyle \mathrm{RSS} = \sum_{i=1}^n (y_i-f(x_i;\hat{\theta}))^2</math>. That gives AIC = 2*k* + *n* ln(RSS/*n*) + *C*_{1} = 2*k* + *n* ln(RSS) + *C*_{2}.^{[16]} As above, the constant can be ignored in model comparisons.
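A sketch of this equal-variances shortcut, with hypothetical RSS values; the additive constant is dropped, so only AIC differences are meaningful:

```python
import math

def aic_from_rss(rss, n, k):
    """AIC up to an additive constant, for least-squares fits with
    i.i.d. Gaussian residuals of common (estimated) variance."""
    return 2 * k + n * math.log(rss / n)

# Comparing two hypothetical fits to the same n = 50 data points:
# a simpler model with larger RSS vs. a richer model with smaller RSS.
print(aic_from_rss(rss=12.0, n=50, k=3))
print(aic_from_rss(rss=11.5, n=50, k=6))
```

In this hypothetical comparison the richer model's small reduction in RSS does not offset its parameter penalty, so the simpler model is preferred.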

### Comparison with cross-validation

Leave-one-out cross-validation is asymptotically equivalent to the AIC, for ordinary linear regression models.^{[17]} Such asymptotic equivalence also holds for mixed-effects models.^{[18]}

### Comparison with Mallows's *C*_{p}

Mallows's *C*_{p} is equivalent to AIC in the case of (Gaussian) linear regression.^{[19]}

## See also

- Deviance information criterion
- Focused information criterion
- Hannan–Quinn information criterion
- Occam's razor
- Principle of maximum entropy

## Notes

1. Burnham & Anderson 2002, §2.2
2. Akaike 1974
3. Burnham & Anderson 2002, §6.4.5
4. Burnham & Anderson 2002
5. Burnham & Anderson 2002, §2.12.4
6. Burnham & Anderson 2002
7. Cavanaugh 1997
8. Claeskens & Hjort 2008, §8.3
9. Burnham & Anderson 2004
10. Akaike 1973
11. deLeeuw 1992
12. Van Noordon R., Maher B., Nuzzo R. (2014), "The top 100 papers", *Nature*, 514.
13. Brockwell & Davis 1991, p. 273
14. Burnham & Anderson 2002, p. 63
15. Burnham & Anderson 2002, p. 82
16. Burnham & Anderson 2002, p. 63
17. Stone 1977
18. Fang 2011
19. Boisbunon et al. 2014

## References

- Aho, K.; Derryberry, D.; Peterson, T. (2014), "Model selection for ecologists: the worldviews of AIC and BIC", *Ecology* **95**: 631–636, doi:10.1890/13-1452.1.
- Akaike, H. (1973), "Information theory and an extension of the maximum likelihood principle", in Petrov, B. N.; Csáki, F. (eds.), *2nd International Symposium on Information Theory, Tsahkadsor, Armenia, USSR, September 2–8, 1971*, Budapest: Akadémiai Kiadó, pp. 267–281.
- Akaike, H. (1974), "A new look at the statistical model identification", *IEEE Transactions on Automatic Control* **19** (6): 716–723, MR 0423716, doi:10.1109/TAC.1974.1100705.
- Akaike, H. (1985), "Prediction and entropy", in Atkinson, A. C.; Fienberg, S. E. (eds.), *A Celebration of Statistics*, Springer, pp. 1–24.
- Boisbunon, A.; Canu, S.; Fourdrinier, D.; Strawderman, W.; Wells, M. T. (2014), "Akaike's Information Criterion, *C*_{p} and estimators of loss for elliptically symmetric distributions", *International Statistical Review* **82**: 422–439, doi:10.1111/insr.12052.
- Brockwell, P. J.; Davis, R. A. (1987), *Time Series: Theory and Methods*, Springer, ISBN 0387964061.
- Brockwell, P. J.; Davis, R. A. (1991), *Time Series: Theory and Methods* (2nd ed.), Springer, ISBN 0387974296. Republished in 2009: ISBN 1441903194.
- Burnham, K. P.; Anderson, D. R. (2002), *Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach* (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7.
- Burnham, K. P.; Anderson, D. R. (2004), "Multimodel inference: understanding AIC and BIC in model selection", *Sociological Methods & Research* **33**: 261–304, doi:10.1177/0049124104268644.
- Cavanaugh, J. E. (1997), "Unifying the derivations of the Akaike and corrected Akaike information criteria", *Statistics & Probability Letters* **31**: 201–208, doi:10.1016/s0167-7152(96)00128-9.
- Claeskens, G.; Hjort, N. L. (2008), *Model Selection and Model Averaging*, Cambridge University Press.
- deLeeuw, J. (1992), "Introduction to Akaike (1973) information theory and an extension of the maximum likelihood principle", in Kotz, S.; Johnson, N. L. (eds.), *Breakthroughs in Statistics I*, Springer, pp. 599–609.
- Fang, Yixin (2011), "Asymptotic equivalence between cross-validations and Akaike Information Criteria in mixed-effects models", *Journal of Data Science* **9**: 15–21.
- Hurvich, C. M.; Tsai, C.-L. (1989), "Regression and time series model selection in small samples", *Biometrika* **76**: 297–307, doi:10.1093/biomet/76.2.297.
- Konishi, S.; Kitagawa, G. (2008), *Information Criteria and Statistical Modeling*, Springer.
- McQuarrie, A. D. R.; Tsai, C.-L. (1998), *Regression and Time Series Model Selection*, World Scientific, ISBN 981-02-3242-X.
- Stone, M. (1977), "An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion", *Journal of the Royal Statistical Society: Series B (Methodological)* **39** (1): 44–47.
- Sugiura, N. (1978), "Further analysis of the data by Akaike's information criterion and the finite corrections", *Communications in Statistics – Theory and Methods* **A7**: 13–26.
- Takeuchi, K. (1976), " " [Distribution of informational statistics and a criterion of model fitting], *Suri-Kagaku [Mathematical Sciences]* (in Japanese) **153**: 12–18.
- Yang, Y. (2005), "Can the strengths of AIC and BIC be shared?", *Biometrika* **92**: 937–950, doi:10.1093/biomet/92.4.937.

## Further reading

- Anderson, D. R. (2008), *Model Based Inference in the Life Sciences*, Springer.
- Liu, W.; Yang, Y. (2011), "Parametric or nonparametric?", *Annals of Statistics* **39**: 2074–2102, doi:10.1214/11-AOS899.
- Pan, W. (2001), "Akaike's information criterion in generalized estimating equations", *Biometrics* **57**: 120–125, doi:10.1111/j.0006-341X.2001.00120.x.
- Parzen, E.; Tanabe, K.; Kitagawa, G., eds. (1998), *Selected Papers of Hirotugu Akaike*, Springer, doi:10.1007/978-1-4612-1694-0.
- Saefken, B.; Kneib, T.; van Waveren, C.-S.; Greven, S. (2014), "A unifying approach to the estimation of the conditional Akaike information in generalized linear mixed models", *Electronic Journal of Statistics* **8**: 201–225, doi:10.1214/14-EJS881.

## External links

- Hirotugu Akaike comments on how he arrived at the AIC, in *This Week's Citation Classic* (21 December 1981)
- AIC (Aalto University)
- Akaike Information Criterion (North Carolina State University)
- Example AIC use (Honda USA, Noesis Solutions, Belgium)
- Model Selection (University of Iowa)