Mixed model
A mixed model is a statistical model containing both fixed effects and random effects, that is, mixed effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (longitudinal studies), or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures ANOVA.
History and current status
Ronald Fisher introduced random effects models to study the correlations of trait values between relatives.^{[1]} In the 1950s, Charles Roy Henderson provided best linear unbiased estimates (BLUE) of fixed effects and best linear unbiased predictions (BLUP) of random effects.^{[2]}^{[3]}^{[4]}^{[5]} Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, nonlinear mixed-effects models, missing data in mixed effects models, and Bayesian estimation of mixed effects models. Mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest. They are prominently used in research involving human and animal subjects in fields ranging from genetics to marketing, and have also been used in industrial statistics.^{[citation needed]}
Definition
In matrix notation a mixed model can be represented as
 <math>\boldsymbol{y} = X \boldsymbol{\beta} + Z \boldsymbol{u} + \boldsymbol{\epsilon}</math>
where
 <math>\boldsymbol{y}</math> is a known vector of observations, with mean <math>E(\boldsymbol{y}) = X \boldsymbol{\beta}</math>;
 <math>\boldsymbol{\beta}</math> is an unknown vector of fixed effects;
 <math>\boldsymbol{u}</math> is an unknown vector of random effects, with mean <math>E(\boldsymbol{u})=\boldsymbol{0}</math> and variance–covariance matrix <math>\operatorname{var}(\boldsymbol{u})=G</math>;
 <math>\boldsymbol{\epsilon}</math> is an unknown vector of random errors, with mean <math>E(\boldsymbol{\epsilon})=\boldsymbol{0}</math> and variance <math>\operatorname{var}(\boldsymbol{\epsilon})=R</math>;
 <math>X</math> and <math>Z</math> are known design matrices relating the observations <math>\boldsymbol{y}</math> to <math>\boldsymbol{\beta}</math> and <math>\boldsymbol{u}</math>, respectively.
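To make the notation concrete, the following is a minimal sketch in Python (NumPy) that simulates data from a random-intercept model, the simplest special case of the model above. All dimensions and parameter values (the number of groups, <math>\boldsymbol{\beta}</math>, <math>G</math>, <math>R</math>) are illustrative assumptions, not part of the definition.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed): 6 groups with 10 observations each.
n_groups, n_per_group = 6, 10
n = n_groups * n_per_group

# Fixed-effects design matrix X: intercept plus one covariate.
covariate = rng.normal(size=n)
X = np.column_stack([np.ones(n), covariate])

# Random-effects design matrix Z: one indicator column per group,
# giving each group its own random intercept.
group = np.repeat(np.arange(n_groups), n_per_group)
Z = np.zeros((n, n_groups))
Z[np.arange(n), group] = 1.0

# Assumed true parameters: fixed effects beta, var(u) = G, var(eps) = R.
beta = np.array([1.0, 2.0])
G = 0.5 * np.eye(n_groups)
R = 1.0 * np.eye(n)

u = rng.multivariate_normal(np.zeros(n_groups), G)    # random effects
eps = rng.multivariate_normal(np.zeros(n), R)         # residual errors

# The mixed model: y = X beta + Z u + eps.
y = X @ beta + Z @ u + eps
</syntaxhighlight>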
Estimation
The joint density of <math>\boldsymbol{y}</math> and <math>\boldsymbol{u}</math> can be written as: <math>f(\boldsymbol{y},\boldsymbol{u}) = f(\boldsymbol{y} \mid \boldsymbol{u}) \, f(\boldsymbol{u})</math>. Assuming normality, <math>\boldsymbol{u} \sim \mathcal{N}(\boldsymbol{0},G)</math>, <math>\boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0},R)</math> and <math>\operatorname{Cov}(\boldsymbol{u},\boldsymbol{\epsilon})=\boldsymbol{0}</math>, and maximizing the joint density over <math>\boldsymbol{\beta}</math> and <math>\boldsymbol{u}</math> gives Henderson's "mixed model equations" (MME):^{[6]}^{[2]}^{[4]}
 <math>
\begin{pmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{pmatrix} \begin{pmatrix} \hat{\boldsymbol{\beta}} \\ \hat{\boldsymbol{u}} \end{pmatrix} = \begin{pmatrix} X'R^{-1}\boldsymbol{y} \\ Z'R^{-1}\boldsymbol{y} \end{pmatrix} </math>
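To see where these equations come from, note that under the normality assumptions the joint log-density is, up to an additive constant,
 <math>\log f(\boldsymbol{y},\boldsymbol{u}) = -\tfrac{1}{2}\left[(\boldsymbol{y} - X\boldsymbol{\beta} - Z\boldsymbol{u})' R^{-1} (\boldsymbol{y} - X\boldsymbol{\beta} - Z\boldsymbol{u}) + \boldsymbol{u}' G^{-1} \boldsymbol{u}\right],</math>
and setting its derivatives with respect to <math>\boldsymbol{\beta}</math> and <math>\boldsymbol{u}</math> to zero yields exactly the system above.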
The solutions to the MME, <math>\textstyle\hat{\boldsymbol{\beta}}</math> and <math>\textstyle\hat{\boldsymbol{u}}</math>, are the best linear unbiased estimate (BLUE) of <math>\boldsymbol{\beta}</math> and the best linear unbiased predictor (BLUP) of <math>\boldsymbol{u}</math>, respectively. This is a consequence of the Gauss–Markov theorem when the conditional variance of the outcome is not a scalar multiple of the identity matrix. When the conditional variance is known, the inverse-variance weighted least squares estimate is BLUE. However, the conditional variance is rarely, if ever, known, so it is desirable to estimate the variance components jointly with the weighted parameter estimates when solving the MME.
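When <math>G</math> and <math>R</math> are taken as known, the MME are a single linear system and can be solved directly. The sketch below continues the simulation example from the Definition section; in practice the variance components would have to be estimated as well.
<syntaxhighlight lang="python">
# Solve Henderson's mixed model equations, assuming G and R are known
# (continuing the simulation sketch above).
Ri = np.linalg.inv(R)
Gi = np.linalg.inv(G)

# Block coefficient matrix and right-hand side of the MME.
lhs = np.block([
    [X.T @ Ri @ X, X.T @ Ri @ Z],
    [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi],
])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])

sol = np.linalg.solve(lhs, rhs)
beta_hat = sol[:X.shape[1]]   # BLUE of the fixed effects
u_hat = sol[X.shape[1]:]      # BLUPs of the random effects
</syntaxhighlight>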
One method used to fit such mixed models is that of the expectation–maximization (EM) algorithm,^{[7]} in which the variance components are treated as unobserved nuisance parameters in the joint likelihood. Currently, this is the method implemented by major statistical software packages such as R (lme in the nlme package) and SAS (proc mixed). The solution to the mixed model equations is a maximum likelihood estimate when the distribution of the errors is normal.^{[8]}^{[9]}
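As an illustration of likelihood-based fitting in Python, the same simulated random-intercept model can be fit with the statsmodels package (not one of the packages named above); this is a sketch, not a reference implementation.
<syntaxhighlight lang="python">
import statsmodels.api as sm

# Fit the simulated random-intercept model by restricted maximum
# likelihood; pass reml=False to fit() for ordinary maximum likelihood.
model = sm.MixedLM(endog=y, exog=X, groups=group)
result = model.fit()
print(result.summary())
</syntaxhighlight>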
See also
 Fixed effects model
 Generalized linear mixed model
 Linear regression
 Mixed-design analysis of variance
 Multilevel model
 Random effects model
 Repeated measures design
References
 ^ Fisher, RA (1918). "The correlation between relatives on the supposition of Mendelian inheritance". Transactions of the Royal Society of Edinburgh 52 (2): 399–433. doi:10.1017/S0080456800012163.
 ^ ^{a} ^{b} Robinson, G.K. (1991). "That BLUP is a Good Thing: The Estimation of Random Effects". Statistical Science 6 (1): 15–32. JSTOR 2245695. doi:10.1214/ss/1177011926.
 ^ C. R. Henderson, Oscar Kempthorne, S. R. Searle and C. M. von Krosigk (1959). "The Estimation of Environmental and Genetic Trends from Records Subject to Culling". Biometrics (International Biometric Society) 15 (2): 192–218. JSTOR 2527669. doi:10.2307/2527669.
 ^ ^{a} ^{b} L. Dale Van Vleck. "Charles Roy Henderson, April 1, 1911 – March 14, 1989" (PDF). United States National Academy of Sciences.
 ^ McLean, Robert A.; Sanders, William L.; Stroup, Walter W. (1991). "A Unified Approach to Mixed Linear Models". The American Statistician (American Statistical Association) 45 (1): 54–64. JSTOR 2685241. doi:10.2307/2685241.
 ^ Henderson, C R (1973). "Sire evaluation and genetic trends" (PDF). Journal of Animal Science (American Society of Animal Science) 1973: 10–41. Retrieved 17 August 2014.
 ^ Lindstrom, ML; Bates, DM (1988). "Newton–Raphson and EM algorithms for linear mixed-effects models for repeated-measures data". JASA 83 (404): 1014–1021. doi:10.1080/01621459.1988.10478693.
 ^ Laird, Nan M.; Ware, James H. (1982). "Random-Effects Models for Longitudinal Data". Biometrics (International Biometric Society) 38 (4): 963–974. JSTOR 2529876. PMID 7168798. doi:10.2307/2529876.
 ^ Fitzmaurice, Garrett M.; Laird, Nan M.; Ware, James H. (2004). Applied Longitudinal Analysis. John Wiley & Sons, Inc. pp. 326–328.
Further reading
 Milliken, G. A., & Johnson, D. E. (1992). Analysis of messy data: Vol. I. Designed experiments. New York: Chapman & Hall.
 Parker-Iida, R., & Al-Murrani, A. (2014). Experimental Mixed Effects. NBER Working Paper 14542
 West, B. T., Welch, K. B., & Galecki, A. T. (2007). Linear mixed models: A practical guide using statistical software. New York: Chapman & Hall/CRC.
Commercial
 NCSS (statistical software) includes longitudinal mixed models analysis.