Spectral density estimation
In statistical signal processing, the goal of spectral density estimation (SDE) is to estimate the spectral density (also known as the power spectral density) of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
SDE should be distinguished from the field of frequency estimation, which assumes that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seeks to find the location and intensity of the generated frequencies. SDE makes no assumption on the number of components and seeks to estimate the whole generating spectrum.
Techniques
Techniques for spectrum estimation can generally be divided into parametric and nonparametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an autoregressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, nonparametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
Following is a partial list of spectral density estimation techniques:
 Periodogram, the basic modulus-squared of the Fourier transform
 Bartlett's method is a periodogram spectral estimate formed by averaging the discrete Fourier transform of multiple segments of the signal to reduce variance of the spectral density estimate
 Welch's method, a windowed version of Bartlett's method that uses overlapping segments
 Blackman–Tukey is a periodogram smoothing technique to reduce variance of the spectral density estimate
 Autoregressive moving average estimation, based on fitting an ARMA model to the time series of signal samples; in addition to ARMA, there are separate moving average and autoregressive methods
 Multitaper is a periodogram-based method that uses multiple tapers, or windows, to form independent estimates of the spectral density to reduce variance of the spectral density estimate
 Maximum entropy spectral estimation is an all-poles method useful for SDE when singular spectral features, such as sharp peaks, are expected
 Least-squares spectral analysis, based on least-squares fitting to known frequencies
 Non-uniform discrete Fourier transform is used when the signal samples are unevenly spaced in time
 Singular spectrum analysis is a nonparametric method that uses a singular value decomposition of the covariance matrix to estimate the spectral density
 Short-time Fourier transform
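As a concrete sketch of the nonparametric estimators at the top of the list, SciPy exposes both the basic periodogram and Welch's averaged, overlapping-segment variant (the test signal and all parameters below are arbitrary illustrative choices):

```python
import numpy as np
from scipy import signal

fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)       # 5 s of samples
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(size=t.size)  # 50 Hz tone in white noise

# Basic periodogram: squared modulus of the DFT, scaled to a density
f_p, P_p = signal.periodogram(x, fs=fs)

# Welch's method: average periodograms of overlapping windowed segments,
# trading frequency resolution for reduced variance
f_w, P_w = signal.welch(x, fs=fs, nperseg=1024)
```

Both estimates peak near 50 Hz; the Welch estimate is smoother because averaging over segments lowers the variance at the cost of frequency resolution.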
Parametric estimation
In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) <math>S(f;a_1,\ldots,a_p)</math> that is a function of the frequency <math>f</math> and <math>p</math> parameters <math>a_1,\ldots,a_p</math>.^{[1]} The estimation problem then becomes one of estimating these parameters.
The most common form of parametric SDF estimate uses as a model an autoregressive model <math>AR(p)</math> of order <math>p</math>.^{[1]}^{:392} A signal sequence <math>\{Y_t\}</math> obeying a zero mean <math>AR(p)</math> process satisfies the equation
 <math>Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \epsilon_t,</math>
where the <math>\phi_1,\ldots,\phi_p</math> are fixed coefficients and <math>\epsilon_t</math> is a white noise process with zero mean and innovation variance <math>\sigma^2_p</math>. The SDF for this process is
 <math>S(f;\phi_1,\ldots,\phi_p,\sigma^2_p)
= \frac{\sigma^2_p\Delta t}{\left| 1 - \sum_{k=1}^p \phi_k e^{-2i\pi f k \Delta t}\right|^2} \qquad |f| < f_N,</math>
with <math>\Delta t</math> the sampling time interval and <math>f_N</math> the Nyquist frequency.
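This SDF is straightforward to evaluate numerically. A minimal sketch, using arbitrarily chosen AR(2) coefficients and a unit sampling interval:

```python
import numpy as np

def ar_sdf(f, phi, sigma2, dt=1.0):
    """Evaluate the AR(p) spectral density S(f; phi_1, ..., phi_p, sigma_p^2)
    for |f| < f_N = 1 / (2 dt)."""
    f = np.atleast_1d(np.asarray(f, dtype=float))
    k = np.arange(1, len(phi) + 1)
    # |1 - sum_k phi_k exp(-2 pi i f k dt)|^2, evaluated for every f at once
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(f, k) * dt) @ np.asarray(phi)) ** 2
    return sigma2 * dt / denom

# Arbitrary AR(2) example: a complex pole pair gives a single spectral peak
phi = [0.75, -0.5]
freqs = np.linspace(0, 0.5, 501)   # up to the Nyquist frequency for dt = 1
S = ar_sdf(freqs, phi, sigma2=1.0)
```

For these particular coefficients the spectrum has a single peak near f ≈ 0.15, reflecting the angle of the complex pole pair.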
There are a number of approaches to estimating the parameters <math>\phi_1,\ldots,\phi_p,\sigma^2_p</math> of the <math>AR(p)</math> process and thus the spectral density:^{[1]}^{:452–453}
 The Yule–Walker estimators are found by recursively solving the Yule–Walker equations for an <math>AR(p)</math> process
 The Burg estimators are found by treating the Yule–Walker equations as a form of ordinary least squares problem. The Burg estimators are generally considered superior to the Yule–Walker estimators.^{[1]}^{:452} Burg associated these with maximum entropy spectral estimation.^{[2]}
 The forward-backward least-squares estimators treat the <math>AR(p)</math> process as a regression problem and solve that problem using the forward-backward method. They are competitive with the Burg estimators.
 The maximum likelihood estimators assume the white noise is a Gaussian process and estimate the parameters using a maximum likelihood approach. This involves a nonlinear optimization and is more complex than the first three.
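As a sketch of the first approach, the Yule–Walker equations can be solved directly with a Toeplitz solver. The helper below is illustrative (biased sample autocovariances, zero-mean adjustment), not a canonical implementation; comparable routines exist in statistics libraries such as statsmodels:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(y, p):
    """Yule-Walker estimates of AR(p) coefficients and innovation variance,
    from the Toeplitz system R phi = r built from sample autocovariances."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    s = np.array([y[: n - k] @ y[k:] / n for k in range(p + 1)])  # s_0 .. s_p
    phi = solve_toeplitz(s[:p], s[1 : p + 1])
    sigma2 = s[0] - phi @ s[1 : p + 1]   # innovation variance estimate
    return phi, sigma2

# Recover the coefficients of a simulated zero-mean AR(2) process
rng = np.random.default_rng(1)
true_phi = [0.75, -0.5]
y = np.zeros(20000)
eps = rng.normal(size=y.size)
for t in range(2, y.size):
    y[t] = true_phi[0] * y[t - 1] + true_phi[1] * y[t - 2] + eps[t]
phi_hat, sigma2_hat = yule_walker(y, p=2)
```

With a long realization, `phi_hat` is close to the true coefficients and `sigma2_hat` close to the white-noise variance; plugging them into the SDF formula above yields the parametric spectral estimate.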
Alternative parametric methods include fitting to a moving average model (MA) and to a full autoregressive moving average model (ARMA).
Frequency estimation
Frequency estimation is the process of estimating the complex frequency components of a signal in the presence of noise, given assumptions about the number of components.^{[3]} This contrasts with the general methods above, which do not make prior assumptions about the components.
Finite number of tones
A typical model for a signal <math>x(n)</math> consists of a sum of <math>p</math> complex exponentials in the presence of white noise, <math>w(n)</math>
 <math>x(n) = \sum_{i=1}^p A_i e^{j n \omega_i} + w(n)</math>.
The power spectral density of <math>x(n)</math> is composed of <math>p</math> impulse functions in addition to the spectral density function due to noise.
The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.
Pisarenko's method
 <math>\hat P_{PHD}(e^{j \omega}) = \frac{1}{|\mathbf{e}^{H}\mathbf{v}_{min}|^2}</math>
MUSIC
 <math>\hat P_{MU}(e^{j \omega}) = \frac{1}{\sum_{i=p+1}^{M} |\mathbf{e}^{H} \mathbf{v}_i|^2}</math>
Eigenvector method
 <math>\hat P_{EV}(e^{j \omega}) = \frac{1}{\sum_{i=p+1}^{M}\frac{1}{\lambda_i} |\mathbf{e}^H \mathbf{v}_i|^2}</math>
Minimum norm method
 <math>\hat P_{MN}(e^{j \omega}) = \frac{1}{|\mathbf{e}^H \mathbf{a}|^2}; \quad \mathbf{a} = \lambda \mathbf{P}_n \mathbf{u}_1</math>
Single tone
If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm. If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.^{[4]}
If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a discrete Fourier transform or some other Fourier-related transform.
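For the single-tone case, a minimal sketch is to locate the peak of the windowed DFT magnitude and refine it by parabolic interpolation, a common trick for sub-bin accuracy (the 440.3 Hz test tone and parameters are arbitrary):

```python
import numpy as np

def dominant_frequency(x, fs):
    """Estimate the strongest frequency: find the peak DFT bin of the
    Hann-windowed signal, then refine it with parabolic interpolation
    on the log magnitude for sub-bin accuracy."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = int(np.argmax(X[1:-1])) + 1          # peak bin, excluding edges
    a, b, c = np.log(X[k - 1 : k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # fractional-bin offset
    return (k + delta) * fs / len(x)

fs = 8000.0
t = np.arange(2048) / fs
f_hat = dominant_frequency(np.sin(2 * np.pi * 440.3 * t), fs)
```

Even though 440.3 Hz falls between DFT bins (the bin spacing here is about 3.9 Hz), the interpolated estimate recovers it to well under a bin of error.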
References
 ^ ^{a} ^{b} ^{c} ^{d} Donald B. Percival and Andrew T. Walden (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 9780521435413.
 ^ Burg, J.P. (1967) "Maximum Entropy Spectral Analysis", Proceedings of the 37th Meeting of the Society of Exploration Geophysicists, Oklahoma City, Oklahoma.
 ^ Hayes, Monson H., Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN 0471594318.
 ^ Lerga, Jonatan. "Overview of Signal Instantaneous Frequency Estimation Methods" (PDF). University of Rijeka. Retrieved 22 March 2014.
Further reading
 Porat, B. (1994). Digital Processing of Random Signals: Theory & Methods. Prentice Hall. ISBN 0130637513.
 Priestley, M.B. (1991). Spectral Analysis and Time Series. Academic Press. ISBN 0125649223.
 P Stoica and R Moses, Spectral Analysis of Signals. Prentice Hall, NJ, 2005 (Chinese Edition, 2007).
 Thomson, D. J. (1982). "Spectrum estimation and harmonic analysis". Proceedings of the IEEE 70 (9): 1055. doi:10.1109/PROC.1982.12433.
