Multivariate kernel density estimation
is a nonparametric technique for density estimation, i.e., estimation of probability density functions, which is one of the fundamental questions in statistics. It can be viewed as a generalisation of histogram density estimation with improved statistical properties. Apart from histograms, other types of density estimators include parametric, spline, wavelet and Fourier series. Kernel density estimators were first introduced in the scientific literature for univariate data in the 1950s and 1960s and subsequently have been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to multivariate statistics. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to its univariate counterpart.
Motivation
We take an illustrative synthetic bivariate data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point. For the histogram on the left, we choose an anchor point; for the one on the right, we shift the anchor point by 0.125 in both directions. Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour-coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point.

One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel is centred at each of the 50 data points above. The result of summing these kernels is given in the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret, since they do not contain artefacts induced by a binning grid.
The coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%, thus indicating that a single central region contains the highest density.
The goal of density estimation is to take a finite sample of data and to make inferences about the underlying probability density function everywhere, including where no data are observed. In kernel density estimation, the contribution of each data point is smoothed out from a single point into a region of space surrounding it. Aggregating the individually smoothed contributions gives an overall picture of the structure of the data and its density function. In the details to follow, we show that this approach leads to a reasonable estimate of the underlying density function.
Definition
The previous figure is a graphical representation of a kernel density estimate, which we now define in an exact manner. Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be a sample of d-variate random vectors drawn from a common distribution described by the density function ƒ. The kernel density estimate is defined to be

$$\hat{f}_{\mathbf{H}}(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} K_{\mathbf{H}}(\mathbf{x} - \mathbf{x}_i)$$

where
- $\mathbf{x} = (x_1, x_2, \ldots, x_d)^T$ and $\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id})^T$, $i = 1, 2, \ldots, n$, are d-vectors;
- H is the bandwidth d×d matrix which is symmetric and positive definite;
- K is the kernel function which is a symmetric multivariate density;
- $K_{\mathbf{H}}(\mathbf{x}) = |\mathbf{H}|^{-1/2}\, K(\mathbf{H}^{-1/2}\mathbf{x})$.
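To make the definition concrete, here is a minimal sketch in R that evaluates the estimator at a single point, assuming a Gaussian kernel K and a user-supplied bandwidth matrix H (the function name kde_at and the illustrative data are our own, not part of any package):

# evaluate the kernel density estimate at the point x, with a Gaussian kernel K
kde_at <- function(x, X, H) {
  # x: evaluation point (length-d vector); X: n-by-d data matrix; H: d-by-d bandwidth matrix
  d <- ncol(X)
  Hinv <- solve(H)
  const <- 1 / sqrt((2 * pi)^d * det(H))    # Gaussian normalising constant times |H|^(-1/2)
  diffs <- sweep(X, 2, x)                   # rows are x_i - x (the sign is immaterial for symmetric K)
  quad <- rowSums((diffs %*% Hinv) * diffs) # quadratic forms (x - x_i)' H^(-1) (x - x_i)
  mean(const * exp(-0.5 * quad))            # average of K_H(x - x_i) over the sample
}

# illustrative use: 50 simulated bivariate points and a diagonal bandwidth matrix
set.seed(1)
X <- matrix(rnorm(100), ncol = 2)
kde_at(c(0, 0), X, diag(c(0.25, 0.25)))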
Optimal bandwidth matrix selection
The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE or mean integrated squared error

$$\operatorname{MISE}(\mathbf{H}) = \operatorname{E} \int [\hat{f}_{\mathbf{H}}(\mathbf{x}) - f(\mathbf{x})]^2 \, d\mathbf{x}.$$

This in general does not possess a closed-form expression, so it is usual to use its asymptotic approximation (AMISE) as a proxy

$$\operatorname{AMISE}(\mathbf{H}) = n^{-1}|\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 (\operatorname{vec}^T \mathbf{H})\, \boldsymbol{\Psi}_4\, (\operatorname{vec} \mathbf{H})$$
where
- $R(K) = \int K(\mathbf{x})^2 \, d\mathbf{x}$, with $R(K) = (4\pi)^{-d/2}$ when K is a normal kernel
- $\int \mathbf{x}\mathbf{x}^T K(\mathbf{x}) \, d\mathbf{x} = m_2(K)\, \mathbf{I}_d$, with $\mathbf{I}_d$ the d × d identity matrix and $m_2(K) = 1$ for the normal kernel
- D²ƒ is the d × d Hessian matrix of second order partial derivatives of ƒ
- $\boldsymbol{\Psi}_4 = \int (\operatorname{vec}\, \operatorname{D}^2 f(\mathbf{x}))(\operatorname{vec}^T \operatorname{D}^2 f(\mathbf{x})) \, d\mathbf{x}$ is a d² × d² matrix of integrated fourth order partial derivatives of ƒ
- vec is the vector operator which stacks the columns of a matrix into a single vector, e.g. $\operatorname{vec}\begin{bmatrix} a & c \\ b & d \end{bmatrix} = (a, b, c, d)^T$.
The AMISE approximates the MISE in the sense that

$$\operatorname{MISE}(\mathbf{H}) = \operatorname{AMISE}(\mathbf{H}) + o(n^{-1}|\mathbf{H}|^{-1/2} + \operatorname{tr} \mathbf{H}^2)$$

where o indicates the usual small o notation. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size n → ∞.
It can be shown that any reasonable bandwidth selector H has H = $O(n^{-2/(d+4)})$, where the big O notation is applied elementwise. Substituting this into the MISE formula yields that the optimal MISE is $O(n^{-4/(d+4)})$. Thus as n → ∞, the MISE → 0, i.e. the kernel density estimate converges in mean square and thus also in probability to the true density f. These modes of convergence are confirmation of the statement in the motivation section that kernel methods lead to reasonable density estimators. An ideal optimal bandwidth selector is

$$\mathbf{H}_{\operatorname{AMISE}} = \operatorname{arg\,min}_{\mathbf{H}} \operatorname{AMISE}(\mathbf{H}).$$
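As a quick check of these rates, note that when each entry of the ideal selector $\mathbf{H}_{\operatorname{AMISE}}$ scales as $n^{-2/(d+4)}$ (so that $|\mathbf{H}| = O(n^{-2d/(d+4)})$), the variance and squared-bias terms of the AMISE balance at the stated rate:

$$n^{-1}|\mathbf{H}|^{-1/2} = O\!\left(n^{-1}\, n^{d/(d+4)}\right) = O\!\left(n^{-4/(d+4)}\right), \qquad (\operatorname{vec}^T \mathbf{H})\,\boldsymbol{\Psi}_4\,(\operatorname{vec} \mathbf{H}) = O\!\left(n^{-4/(d+4)}\right).$$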
Since this ideal selector contains the unknown density function ƒ, it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.
Plug-in
The plug-in estimate of the AMISE is formed by replacing Ψ4 by its estimator $\hat{\boldsymbol{\Psi}}_4(\mathbf{G})$

$$\operatorname{PI}(\mathbf{H}) = n^{-1}|\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 (\operatorname{vec}^T \mathbf{H})\, \hat{\boldsymbol{\Psi}}_4(\mathbf{G})\, (\operatorname{vec} \mathbf{H})$$

where $\hat{\boldsymbol{\Psi}}_4(\mathbf{G}) = n^{-2} \sum_{i=1}^{n} \sum_{j=1}^{n} [(\operatorname{vec}\,\operatorname{D}^2)(\operatorname{vec}^T \operatorname{D}^2)] K_{\mathbf{G}}(\mathbf{x}_i - \mathbf{x}_j)$ and G is a pilot bandwidth matrix. Thus $\hat{\mathbf{H}}_{\operatorname{PI}} = \operatorname{arg\,min}_{\mathbf{H}} \operatorname{PI}(\mathbf{H})$ is the plug-in selector. These references also contain algorithms on optimal estimation of the pilot bandwidth matrix G and establish that $\hat{\mathbf{H}}_{\operatorname{PI}}$ converges in probability to $\mathbf{H}_{\operatorname{AMISE}}$.
Smoothed cross validation
Smoothed cross validation (SCV) is a subset of a larger class of cross validation techniques. The SCV estimator differs from the plug-in estimator in the second term:

$$\operatorname{SCV}(\mathbf{H}) = n^{-1}|\mathbf{H}|^{-1/2} R(K) + n^{-2} \sum_{i=1}^{n} \sum_{j=1}^{n} (K_{2\mathbf{H}+2\mathbf{G}} - 2K_{\mathbf{H}+2\mathbf{G}} + K_{2\mathbf{G}})(\mathbf{x}_i - \mathbf{x}_j).$$

Thus $\hat{\mathbf{H}}_{\operatorname{SCV}} = \operatorname{arg\,min}_{\mathbf{H}} \operatorname{SCV}(\mathbf{H})$ is the SCV selector.
These references also contain algorithms on optimal estimation of the pilot bandwidth matrix G and establish that $\hat{\mathbf{H}}_{\operatorname{SCV}}$ converges in probability to $\mathbf{H}_{\operatorname{AMISE}}$.
Rule of thumb
Silverman's rule of thumb suggests using $\sqrt{\mathbf{H}_{ii}} = \left(\tfrac{4}{d+2}\right)^{\frac{1}{d+4}} n^{\frac{-1}{d+4}} \sigma_i$, where $\sigma_i$ is the standard deviation of the ith variable and $\mathbf{H}_{ij} = 0$ for $i \neq j$. Scott's rule is $\sqrt{\mathbf{H}_{ii}} = n^{\frac{-1}{d+4}} \sigma_i$.
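Both rules are easy to compute directly; the following is a minimal sketch in R (the function name rot_bandwidth and the illustrative data are our own, not part of any package):

# diagonal rule-of-thumb bandwidth matrices from the per-variable standard deviations
rot_bandwidth <- function(X, rule = c("silverman", "scott")) {
  rule <- match.arg(rule)
  n <- nrow(X); d <- ncol(X)
  s <- apply(X, 2, sd)                      # standard deviation of each variable
  h <- if (rule == "silverman") {
    (4 / (d + 2))^(1 / (d + 4)) * n^(-1 / (d + 4)) * s
  } else {
    n^(-1 / (d + 4)) * s                    # Scott's rule
  }
  diag(h^2, nrow = d)                       # sqrt(H_ii) = h_i; off-diagonal entries are 0
}

# illustrative use on simulated bivariate data
set.seed(1)
X <- matrix(rnorm(200), ncol = 2)
rot_bandwidth(X, "silverman")
rot_bandwidth(X, "scott")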
Asymptotic analysis
In the optimal bandwidth selection section, we introduced the MISE. Its construction relies on the expected value and the variance of the density estimator

$$\operatorname{E} \hat{f}(\mathbf{x};\mathbf{H}) = K_{\mathbf{H}} * f(\mathbf{x}) = f(\mathbf{x}) + \tfrac{1}{2} m_2(K) \operatorname{tr}(\mathbf{H}\, \operatorname{D}^2 f(\mathbf{x})) + o(\operatorname{tr} \mathbf{H}),$$

where * is the convolution operator between two functions, and

$$\operatorname{Var} \hat{f}(\mathbf{x};\mathbf{H}) = n^{-1}|\mathbf{H}|^{-1/2} R(K) f(\mathbf{x}) + o(n^{-1}|\mathbf{H}|^{-1/2}).$$
For these two expressions to be well-defined, we require that all elements of H tend to 0 and that $n^{-1}|\mathbf{H}|^{-1/2}$ tends to 0 as n tends to infinity. Assuming these two conditions, we see that the expected value tends to the true density f, i.e. the kernel density estimator is asymptotically unbiased, and that the variance tends to zero. Using the standard mean squared error decomposition

$$\operatorname{MSE}(\mathbf{x};\mathbf{H}) = \operatorname{Var} \hat{f}(\mathbf{x};\mathbf{H}) + [\operatorname{E} \hat{f}(\mathbf{x};\mathbf{H}) - f(\mathbf{x})]^2,$$
we have that the MSE tends to 0, implying that the kernel density estimator is (mean square) consistent and hence converges in probability to the true density f. The rate of convergence of the MSE to 0 is necessarily the same as the MISE rate noted previously, $O(n^{-4/(d+4)})$; hence the convergence rate of the density estimator to f is $O_p(n^{-2/(d+4)})$, where $O_p$ denotes order in probability. This establishes pointwise convergence. The functional convergence is established similarly by considering the behaviour of the MISE, and noting that under sufficient regularity, integration does not affect the convergence rates.
For the data-based bandwidth selectors considered, the target is the AMISE bandwidth matrix. We say that a data-based selector converges to the AMISE selector at relative rate $O_p(n^{-\alpha})$, α > 0, if

$$\operatorname{vec}(\hat{\mathbf{H}} - \mathbf{H}_{\operatorname{AMISE}}) = O_p(n^{-\alpha})\, \operatorname{vec} \mathbf{H}_{\operatorname{AMISE}}.$$
It has been established that the plug-in and smoothed cross validation selectors both converge at a relative rate of $O_p(n^{-2/(d+6)})$, i.e., both these data-based selectors are consistent estimators.
Density estimation with a full bandwidth matrix
The ks package in R implements the plug-in and smoothed cross validation selectors. We illustrate them on the Old Faithful dataset, which contains 272 records with two measurements each: the duration time of an eruption (minutes) and the waiting time until the next eruption (minutes) of the Old Faithful Geyser in Yellowstone National Park, USA.
The code fragment below computes the kernel density estimate with the plug-in bandwidth matrix $\hat{\mathbf{H}}_{\operatorname{PI}}$. Again, the coloured contours correspond to the smallest region which contains the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%. To compute the SCV selector, Hpi is replaced with Hscv; this is not displayed here since it is mostly similar to the plug-in estimate for this example.

# load the ks package and the Old Faithful data
library(ks)
data(faithful)
# plug-in bandwidth matrix (use Hscv for the SCV selector)
H <- Hpi(x = faithful)
# kernel density estimate with the selected bandwidth matrix
fhat <- kde(x = faithful, H = H)
# filled contour plot of the estimate, with the data points overlaid
plot(fhat, display = "filled.contour")
points(faithful, cex = 0.5, pch = 16)
Density estimation with a diagonal bandwidth matrix
We consider estimating the density of a bivariate Gaussian mixture from 500 randomly generated points. We employ the Matlab routine kde2d.m for 2-dimensional data. The routine is an automatic bandwidth selection method specifically designed for a second order Gaussian kernel.
The figure shows the joint density estimate that results from using the automatically selected bandwidth.
Matlab script for the example
Type the following commands in Matlab after downloading the function kde2d.m and saving it in the current directory.
clear all
% generate synthetic data
data = [randn(250,2); randn(250,1)+3.5, randn(250,1)];  % 500 points from an illustrative two-component mixture (the exact mixture of the original example is assumed)
% call the routine, which has been saved in the current directory
[bandwidth, density, X, Y] = kde2d(data);
% plot the data and the density estimate
contour3(X, Y, density, 50), hold on
plot(data(:,1), data(:,2), 'r.', 'MarkerSize', 5)
Alternative optimality criteria
The MISE is the expected integrated L2 distance between the density estimate and the true density function f. It is the most widely used criterion, mostly due to its tractability, and most software implements MISE-based bandwidth selectors.

There are alternative optimality criteria, which attempt to cover cases where MISE is not an appropriate measure. The equivalent L1 measure, Mean Integrated Absolute Error, is

$$\operatorname{MIAE}(\mathbf{H}) = \operatorname{E} \int |\hat{f}_{\mathbf{H}}(\mathbf{x}) - f(\mathbf{x})| \, d\mathbf{x}.$$
Its mathematical analysis is considerably more difficult than that of the MISE. In practice, the gain appears not to be significant. The L∞ norm is the Mean Uniform Absolute Error

$$\operatorname{MUAE}(\mathbf{H}) = \operatorname{E} \sup_{\mathbf{x}} |\hat{f}_{\mathbf{H}}(\mathbf{x}) - f(\mathbf{x})|$$
which has been investigated only briefly. Likelihood error criteria include those based on the Mean Kullback–Leibler divergence

$$\operatorname{MKL}(\mathbf{H}) = \int f(\mathbf{x}) \log f(\mathbf{x}) \, d\mathbf{x} - \operatorname{E} \int f(\mathbf{x}) \log \hat{f}_{\mathbf{H}}(\mathbf{x}) \, d\mathbf{x}$$
and the Mean Hellinger distance

$$\operatorname{MH}(\mathbf{H}) = \operatorname{E} \int \left( \hat{f}_{\mathbf{H}}(\mathbf{x})^{1/2} - f(\mathbf{x})^{1/2} \right)^2 d\mathbf{x}.$$
The KL can be estimated using a cross-validation method, although KL cross-validation selectors can be sub-optimal even if they remain consistent for bounded density functions. MH selectors have been briefly examined in the literature.
All these optimality criteria are distance based measures, and do not always correspond to more intuitive notions of closeness, so more visual criteria have been developed in response to this concern.
Objective and data-driven kernel selection
Recent research has shown that the kernel and its bandwidth can both be optimally and objectively chosen from the input data itself without making any assumptions about the form of the distribution. The resulting kernel density estimate converges rapidly to the true probability distribution as samples are added: at a rate close to the $n^{-1}$ expected for parametric estimators. This kernel estimator works for univariate and multivariate samples alike. The optimal kernel is defined in Fourier space, as the optimal damping function $\hat{\psi}_h(\vec{t})$, in terms of the Fourier transform of the data, the empirical characteristic function $\hat{\Delta}(\vec{t})$:

$$\hat{\psi}_h(\vec{t}) = \frac{N}{2(N-1)}\left[1 + \sqrt{1 - \frac{4(N-1)}{N^2 |\hat{\Delta}(\vec{t})|^2}}\,\right] I_{\mathbf{A}}(\vec{t})$$

where

$$\hat{\Delta}(\vec{t}) \equiv \frac{1}{N} \sum_{j=1}^{N} e^{i \vec{t} \cdot \vec{x}_j},$$

N is the number of data points, d is the number of dimensions, and $I_{\mathbf{A}}(\vec{t})$ is a filter that is equal to 1 for 'accepted frequencies' and 0 otherwise. There are various ways to define this filter function, and a simple one that works for univariate or multivariate samples is called the 'lowest contiguous hypervolume filter'; $I_{\mathbf{A}}(\vec{t})$ is chosen such that the only accepted frequencies are a contiguous subset of frequencies surrounding the origin for which $|\hat{\Delta}(\vec{t})|^2 \geq 4(N-1)N^{-2}$.
Note that direct calculation of the empirical characteristic function (ECF) is slow, since it essentially involves a direct Fourier transform of the data samples. However, it has been found that the ECF can be approximated accurately using a non-uniform fast Fourier transform (nuFFT) method, which increases the calculation speed by several orders of magnitude. The combination of this objective KDE method and the nuFFT-based ECF approximation has been referred to as fastKDE in the literature.
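To fix ideas, here is a schematic one-dimensional sketch of this procedure in R (direct sums over a frequency grid rather than the nuFFT-accelerated version; the damping function is taken as written above, and the function name, grid sizes and test data are our own illustrative choices):

# schematic 1-d sketch of the objective, data-driven estimator described above
self_consistent_kde <- function(x, eval_points, t_max = 50, n_t = 2001) {
  N <- length(x)
  t <- seq(-t_max, t_max, length.out = n_t)
  dt <- t[2] - t[1]
  # empirical characteristic function: Delta(t) = (1/N) sum_j exp(i t x_j)
  ecf <- sapply(t, function(tt) mean(exp(1i * tt * x)))
  # lowest contiguous filter: accept the contiguous band around the origin
  # on which |Delta(t)|^2 >= 4(N-1)/N^2 (so the square root below is real)
  ok <- Mod(ecf)^2 >= 4 * (N - 1) / N^2
  origin <- which.min(abs(t))
  accept <- logical(n_t)
  for (k in origin:n_t) { if (!ok[k]) break; accept[k] <- TRUE }
  for (k in origin:1)   { if (!ok[k]) break; accept[k] <- TRUE }
  # damping function (the optimal kernel in Fourier space), zero outside the filter
  psi <- numeric(n_t)
  psi[accept] <- N / (2 * (N - 1)) *
    (1 + sqrt(pmax(0, 1 - 4 * (N - 1) / (N^2 * Mod(ecf[accept])^2))))
  # numerical inverse transform: f(x) = (1/(2 pi)) integral of psi(t) Delta(t) exp(-i t x) dt
  sapply(eval_points, function(xx)
    Re(sum(psi * ecf * exp(-1i * t * xx)) * dt) / (2 * pi))
}

# illustrative use on a two-component Gaussian mixture
set.seed(1)
x <- c(rnorm(250), rnorm(250, mean = 3.5))
fhat <- self_consistent_kde(x, seq(-4, 8, length.out = 200))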