Quantile
Quantiles are values taken at regular intervals from the inverse of the cumulative distribution function (CDF) of a random variable. Dividing ordered data into <math>q</math> essentially equal-sized data subsets is the motivation for <math>q</math>-quantiles; the quantiles are the data values marking the boundaries between consecutive subsets. Put another way, a <math>k^\mathrm{th}</math> <math>q</math>-quantile for a random variable is a value <math>x</math> such that the probability that the random variable will be less than <math>x</math> is at most <math>k/q</math> and the probability that the random variable will be greater than <math>x</math> is at most <math>(q-k)/q=1-(k/q)</math>. There are <math>q-1</math> of the <math>q</math>-quantiles, one for each integer <math>k</math> satisfying <math>0 < k < q</math>. In some cases the value of a quantile may not be uniquely determined, as can be the case for the median of a uniform probability distribution on a set of even size.
Specialized quantiles
Some q-quantiles have special names:
- The only 2-quantile is called the median
- The 3-quantiles are called tertiles or terciles → T
- The 4-quantiles are called quartiles → Q
- The 5-quantiles are called quintiles → QU
- The 6-quantiles are called sextiles → S
- The 10-quantiles are called deciles → D
- The 12-quantiles are called duo-deciles → Dd
- The 20-quantiles are called vigintiles → V
- The 100-quantiles are called percentiles → P
- The 1000-quantiles are called permilles → Pr
More generally, one can consider the quantile function for any distribution. This is defined for real variables between zero and one and is mathematically the inverse of the cumulative distribution function.
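As a concrete case, an exponential distribution with mean μ has CDF 1 − e^{−x/μ}, so its quantile function is obtained by inverting that expression. A minimal sketch (the name `exp_quantile` is chosen here for illustration):

```python
import math

def exp_quantile(p, mu=1.0):
    """Quantile function (inverse CDF) of an exponential distribution
    with mean mu: solves 1 - exp(-x/mu) = p for x."""
    return -mu * math.log(1 - p)

# The median is the 0.5 quantile: mu * ln 2, about 0.6931 for mu = 1
print(exp_quantile(0.5))
```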
Quantiles of a population
For a population of discrete values, or for a continuous population density, the <math>k</math>th <math>q</math>-quantile is the data value where the cumulative distribution function crosses <math>k/q.</math> That is, <math>x</math> is a <math>k</math>th <math>q</math>-quantile for a variable <math>X</math> if
- <math>\Pr[X < x] \le k/q</math> (or equivalently, <math>\Pr[X \ge x] \ge 1-k/q</math>)
and
- <math>\Pr[X \le x] \ge k/q</math> (or equivalently, <math>\Pr[X > x] \le 1-k/q</math>).
For a finite population of <math>N</math> values indexed 1,...,<math>N</math> from lowest to highest, the <math>k</math>th <math>q</math>-quantile of this population can be computed via the value of <math>I_p = N \frac{k}{q}</math>. If <math>I_p</math> is not an integer, then round up to the next integer to get the appropriate index; the corresponding data value is the <math>k</math>th <math>q</math>-quantile. On the other hand, if <math>I_p</math> is an integer then any number from the data value at that index to the data value of the next can be taken as the quantile, and it is conventional (though arbitrary) to take the average of those two values (see Estimating the quantiles).
If, instead of using integers <math>k</math> and <math>q</math>, the “<math>p</math>-quantile” is based on a real number <math>p</math> with <math>0<p<1</math>, then <math>p</math> replaces <math>k/q</math> in the above formulae. Some software programs (including Microsoft Excel) regard the minimum and maximum as the 0th and 100th percentile, respectively; however, such terminology is an extension beyond traditional statistics definitions.
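The finite-population rule above can be sketched in Python; the helper name `population_quantile` is chosen here for illustration, and the averaging at integer ranks follows the conventional (though arbitrary) choice described above:

```python
import math

def population_quantile(sorted_values, k, q):
    """k-th q-quantile of a finite population: compute I_p = N*k/q,
    round up when fractional; when I_p is an integer, average the
    value at that rank and the next (the conventional choice)."""
    n = len(sorted_values)
    ip = n * k / q
    if ip != int(ip):
        # Not an integer: round up to the next rank (1-based)
        return sorted_values[math.ceil(ip) - 1]
    i = int(ip)
    # Integer rank: average that value and its successor
    return (sorted_values[i - 1] + sorted_values[i]) / 2

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
print([population_quantile(data, k, 4) for k in (1, 2, 3)])  # [7, 9.0, 15]
```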
Examples
The following two examples use the Nearest Rank definition of quantile with rounding. For an explanation of this definition, see percentiles.
Even-sized population
Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 4-quantiles (the "quartiles") of this dataset?
Quartile | Calculation | Result |
---|---|---|
Zeroth quartile | Although not universally accepted, one can also speak of the zeroth quartile. This is the minimum value of the set, so the zeroth quartile in this example would be 3. | 3 |
First quartile | The rank of the first quartile is 10×(1/4) = 2.5, which rounds up to 3, meaning that 3 is the rank in the population (from least to greatest values) at which approximately 1/4 of the values are less than the value of the first quartile. The third value in the population is 7. | 7 |
Second quartile | The rank of the second quartile (same as the median) is 10×(2/4) = 5, which is an integer, while the number of values (10) is an even number, so the average of both the fifth and sixth values is taken—that is (8+10)/2 = 9, though any value from 8 through to 10 could be taken to be the median. | 9 |
Third quartile | The rank of the third quartile is 10×(3/4) = 7.5, which rounds up to 8. The eighth value in the population is 15. | 15 |
Fourth quartile | Although not universally accepted, one can also speak of the fourth quartile. This is the maximum value of the set, so the fourth quartile in this example would be 20. Under the Nearest Rank definition of quantile, the rank of the fourth quartile is the rank of the biggest number, so the rank of the fourth quartile would be 10. | 20 |
So the first, second and third 4-quantiles (the "quartiles") of the dataset {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} are {7, 9, 15}. If also required, the zeroth quartile is 3 and the fourth quartile is 20.
Odd-sized population
Consider an ordered population of 11 data values {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20}. What are the 4-quantiles (the "quartiles") of this dataset?
Quartile | Calculation | Result |
---|---|---|
Zeroth quartile | Although not universally accepted, one can also speak of the zeroth quartile. This is the minimum value of the set, so the zeroth quartile in this example would be 3. | 3 |
First quartile | The first quartile is determined by 11×(1/4) = 2.75, which rounds up to 3, meaning that 3 is the rank in the population (from least to greatest values) at which approximately 1/4 of the values are less than the value of the first quartile. The third value in the population is 7. | 7 |
Second quartile | The second quartile value (same as the median) is determined by 11×(2/4) = 5.5, which rounds up to 6. Therefore 6 is the rank in the population (from least to greatest values) at which approximately 2/4 of the values are less than the value of the second quartile (or median). The sixth value in the population is 9. | 9 |
Third quartile | The third quartile value for the original example above is determined by 11×(3/4) = 8.25, which rounds up to 9. The ninth value in the population is 15. | 15 |
Fourth quartile | Although not universally accepted, one can also speak of the fourth quartile. This is the maximum value of the set, so the fourth quartile in this example would be 20. Under the Nearest Rank definition of quantile, the rank of the fourth quartile is the rank of the biggest number, so the rank of the fourth quartile would be 11. | 20 |
So the first, second and third 4-quantiles (the "quartiles") of the dataset {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20} are {7, 9, 15}. If also required, the zeroth quartile is 3 and the fourth quartile is 20.
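The odd-sized table can be checked directly: under the Nearest Rank definition each rank is N×(k/4) rounded up, and the zeroth and fourth quartiles (where accepted) are simply the extremes. A quick sketch:

```python
import math

data = [3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20]  # already sorted, N = 11
n = len(data)

# Nearest-rank with rounding up: rank = ceil(N * k/4), 1-based
ranks = [math.ceil(n * k / 4) for k in (1, 2, 3)]
quartiles = [data[r - 1] for r in ranks]
print(ranks, quartiles)  # ranks [3, 6, 9] -> values [7, 9, 15]

# The (not universally accepted) zeroth and fourth quartiles
print(data[0], data[-1])  # 3 20
```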
Discussion
Standardized test results are commonly reported as a student scoring "in the 80th percentile", as if the 80th percentile were an interval to score "in", which it is not; one can score "at" some percentile, or between two percentiles, but not "in" one. What is presumably meant is that the student scored between the 80th and 81st percentiles, or "in" the group of students whose scores placed them at the 80th percentile.
If a distribution is symmetric, then the median is the mean (so long as the latter exists). But, in general, the median and the mean differ. For instance, with a random variable that has an exponential distribution, any particular sample of this random variable will have roughly a 63% chance of being less than the mean. This is because the exponential distribution has a long tail for positive values but is zero for negative numbers.
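The roughly 63% figure follows from evaluating the exponential CDF 1 − e^{−x/μ} at the mean x = μ, which gives 1 − e^{−1} regardless of μ. A quick check:

```python
import math

# For an exponential distribution with mean mu, the CDF is 1 - exp(-x/mu).
# At x = mu this is 1 - exp(-1), independent of mu:
p_below_mean = 1 - math.exp(-1)
print(round(p_below_mean, 4))  # 0.6321, i.e. roughly 63%
```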
Quantiles are useful measures because they are less susceptible than means to long-tailed distributions and outliers. Empirically, if the data being analyzed are not actually distributed according to an assumed distribution, or if there are other potential sources for outliers that are far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics.
Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute value of the observed errors is used in place of the squared error. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available.
The quantiles of a random variable are preserved under increasing transformations, in the sense that, for example, if <math>m</math> is the median of a random variable <math>X</math>, then <math>2^m</math> is the median of <math>2^X</math>, unless an arbitrary choice has been made from a range of values to specify a particular quantile. (See quantile estimation, below, for examples of such interpolation.) Quantiles can also be used in cases where only ordinal data are available.
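The preservation property can be illustrated on an odd-sized sample, where the median is a unique data value and no arbitrary choice or interpolation is involved:

```python
import statistics

# Odd-sized sample, so the median is a single data value
x = [1, 2, 3, 4, 5]
m = statistics.median(x)

# 2**x is an increasing transformation, so its median is 2**m
print(m, statistics.median([2**v for v in x]))  # 3 8
```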
Estimating the quantiles of a population
There are several methods for estimating the quantiles.^{[1]} The widest selection of methods is available in the R and GNU Octave programming languages, which include nine sample quantile methods.^{[2]}^{[3]} SAS includes five sample quantile methods, SciPy and Maple both include eight,^{[4]}^{[5]} Stata includes two, and Microsoft Excel includes one.
In effect, the methods compute Q_{p}, the estimate of the kth q-quantile, where p = k / q, from a sample of size N by computing a real-valued index h. When h is an integer, the hth smallest of the N values, x_{h}, is the quantile estimate. Otherwise a rounding or interpolation scheme is used to compute the quantile estimate from h, x_{⌊h⌋}, and x_{⌈h⌉} (for notation, see the floor and ceiling functions).
Estimate types include:
Type | h | Q_{p} | Notes |
---|---|---|---|
R-1, SAS-3, Maple-1 | <math>Np + 1/2\,</math> | <math>x_{\lceil h\,-\,1/2 \rceil}</math> | Inverse of empirical distribution function. When p = 0, use x_{1}. |
R-2, SAS-5, Maple-2 | <math>Np + 1/2\,</math> | <math>\frac{x_{\lceil h\,-\,1/2 \rceil} + x_{\lfloor h\,+\,1/2 \rfloor}}{2}</math> | The same as R-1, but with averaging at discontinuities. When p = 0, use x_{1}. When p = 1, use x_{N}. |
R-3, SAS-2 | <math>Np\,</math> | <math>x_{\lfloor h \rceil}\,</math> | The observation numbered closest to Np. Here, ⌊ h ⌉ indicates rounding to the nearest integer, choosing the even integer in the case of a tie. When p ≤ (1/2) / N, use x_{1}. |
R-4, SAS-1, SciPy-(0,1), Maple-3 | <math>Np\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | Linear interpolation of the empirical distribution function. When p < 1 / N, use x_{1}. When p = 1, use x_{N}. |
R-5, SciPy-(.5,.5), Maple-4 | <math>Np + 1/2\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | Piecewise linear function where the knots are the values midway through the steps of the empirical distribution function. When p < (1/2) / N, use x_{1}. When p ≥ (N - 1/2) / N, use x_{N}. |
R-6, SAS-4, SciPy-(0,0), Maple-5 | <math>(N+1)p\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | Linear interpolation of the expectations for the order statistics for the uniform distribution on [0,1]. When p < 1 / (N+1), use x_{1}. When p ≥ N / (N + 1), use x_{N}. |
R-7, Excel, SciPy-(1,1), Maple-6 | <math>(N-1)p + 1\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | Linear interpolation of the modes for the order statistics for the uniform distribution on [0,1]. When p = 1, use x_{N}. |
R-8, SciPy-(1/3,1/3), Maple-7 | <math>(N + 1/3)p + 1/3\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | Linear interpolation of the approximate medians for order statistics. When p < (2/3) / (N + 1/3), use x_{1}. When p ≥ (N - 1/3) / (N + 1/3), use x_{N}. |
R-9, SciPy-(3/8,3/8), Maple-8 | <math>(N + 1/4)p + 3/8\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | The resulting quantile estimates are approximately unbiased for the expected order statistics if x is normally distributed. When p < (5/8) / (N + 1/4), use x_{1}. When p ≥ (N - 3/8) / (N + 1/4), use x_{N}. |
| <math>(N + 2)p - 1/2\,</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor) (x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor})</math> | If h were rounded, this would give the order statistic with the least expected square deviation relative to p. When p < (3/2) / (N + 2), use x_{1}. When p ≥ (N + 1/2) / (N + 2), use x_{N}. |
Notes:
- R-1 through R-3 are piecewise constant, with discontinuities.
- R-4 and following are piecewise linear, without discontinuities, but differ in how h is computed.
- R-3 and R-4 are not symmetric in that they do not give h = (N + 1) / 2 when p = 1/2.
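As a concrete instance, R-7 (the Excel default in the table above) can be sketched from its formula and compared against Python's standard library, whose `statistics.quantiles` with `method="inclusive"` is, to the best of my reading, the same estimator:

```python
import math
import statistics

def quantile_r7(sorted_values, p):
    """R-7 from the table: h = (N - 1)p + 1, then linear interpolation
    between the floor(h)-th and next order statistics (1-based ranks)."""
    n = len(sorted_values)
    if p == 1:
        return sorted_values[-1]  # boundary rule: use x_N
    h = (n - 1) * p + 1
    fl = math.floor(h)
    lo, hi = sorted_values[fl - 1], sorted_values[fl]
    return lo + (h - fl) * (hi - lo)

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
mine = [quantile_r7(data, k / 4) for k in (1, 2, 3)]
lib = statistics.quantiles(data, n=4, method="inclusive")
print(mine)  # [7.25, 9.0, 14.5]
print(lib)
```

Note that these interpolated quartiles differ from the Nearest Rank quartiles {7, 9, 15} computed in the examples above, which illustrates why the choice of method matters.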
The standard error of a quantile estimate can in general be estimated via the bootstrap. The Maritz-Jarrett method can also be used.^{[6]}
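A bootstrap estimate of the standard error resamples the data with replacement many times, recomputes the quantile on each resample, and takes the standard deviation of the replicates. A minimal sketch for the sample median (the helper name `bootstrap_se` is chosen for illustration):

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.median, n_boot=2000, seed=0):
    """Bootstrap standard error of a statistic: resample with
    replacement, recompute, take the stdev of the replicates."""
    rng = random.Random(seed)
    reps = [stat(rng.choices(data, k=len(data))) for _ in range(n_boot)]
    return statistics.stdev(reps)

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
print(bootstrap_se(data))  # standard error of the sample median
```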
See also
- Flashsort – sort by first bucketing by quantile
- Descriptive statistics
- Quartile
- Q-Q plot
- Quantile function
- Quantile normalization
- Quantile regression
- Summary statistics
References
- ^ Hyndman, R.J.; Fan, Y. (November 1996). "Sample Quantiles in Statistical Packages". American Statistician (American Statistical Association) 50 (4): 361–365. JSTOR 2684934. doi:10.2307/2684934.
- ^ Frohne, I.; Hyndman, R.J. (2009). Sample Quantiles. R Project. ISBN 3-900051-07-0.
- ^ "Function Reference: quantile - Octave-Forge - SourceForge". Retrieved 6 September 2013.
- ^ [1]
- ^ http://www.maplesoft.com/support/help/maple/view.aspx?path=Statistics%2FQuantile
- ^ Rand R. Wilcox. Introduction to robust estimation and hypothesis testing. ISBN 0-12-751542-9
Further reading
- R.J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 1980.