The topic of heteroscedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as Eicker–Huber–White standard errors, to recognize the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White. In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbances $u_i$ have the same variance across all observation points. When this is not the case, the errors are said to be heteroscedastic, or to have heteroscedasticity, and this behaviour will be reflected in the residuals estimated from a fitted model. Heteroscedasticity-consistent standard errors are used to allow the fitting of a model that does contain heteroscedastic residuals. The first such approach was proposed by Huber, and further improved procedures have been produced since for cross-sectional data, time-series data and GARCH estimation. Heteroscedasticity-consistent standard errors that differ substantially from classical standard errors are an indicator of model misspecification. Such misspecification is not fixed by merely replacing the classical with heteroscedasticity-consistent standard errors; for all but a few quantities of interest, the misspecification may lead to bias, so in most situations the problem should be found and fixed. Other types of standard error adjustments, such as clustered standard errors, may be considered as extensions to HC standard errors.
History
Heteroscedasticity-consistent standard errors were introduced by Friedhelm Eicker, and popularized in econometrics by Halbert White.
Problem
Assume that we are studying the linear regression model

$$ y_i = x_i^\top \beta + u_i, $$

where $x_i$ is the $k \times 1$ vector of explanatory variables for observation $i$ and $\beta$ is a $k \times 1$ column vector of parameters to be estimated. The ordinary least squares (OLS) estimator is

$$ \hat\beta_{\mathrm{OLS}} = (X^\top X)^{-1} X^\top y, $$

where $y$ is the vector of observations of the dependent variable and $X$ denotes the $n \times k$ matrix of stacked $x_i^\top$ values observed in the data. If the sample errors have equal variance $\sigma^2$ and are uncorrelated, then the least-squares estimate of $\beta$ is BLUE (the best linear unbiased estimator), and its variance is easily estimated with

$$ \hat{v}\!\left(\hat\beta_{\mathrm{OLS}}\right) = s^2 (X^\top X)^{-1}, \qquad s^2 = \frac{\sum_i \hat{u}_i^2}{n-k}, $$

where $\hat{u}_i = y_i - x_i^\top \hat\beta_{\mathrm{OLS}}$ are the regression residuals. When the assumption $\operatorname{E}[u u^\top] = \sigma^2 I_n$ is violated, the OLS estimator loses its desirable properties. Indeed,

$$ \operatorname{V}\!\left[\hat\beta_{\mathrm{OLS}}\right] = (X^\top X)^{-1} X^\top \Sigma X (X^\top X)^{-1}, $$

where $\Sigma = \operatorname{V}[u]$. While the OLS point estimator remains unbiased, it is no longer "best" in the sense of having minimum mean square error, and the OLS variance estimator $\hat{v}(\hat\beta_{\mathrm{OLS}})$ does not provide a consistent estimate of the variance of the OLS estimates. For any non-linear model, however, heteroscedasticity has more severe consequences: the maximum likelihood estimates of the parameters will be biased, as well as inconsistent. As pointed out by Greene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption.”
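As a brief illustration of the classical variance estimator $s^2 (X^\top X)^{-1}$, the following minimal NumPy sketch computes it on simulated data whose error variance grows with the regressor; the data-generating process and variable names (rng, n, k, and so on) are illustrative assumptions, not part of any particular reference implementation.

```python
import numpy as np

# Simulated data with heteroscedastic errors: the error variance grows with x.
rng = np.random.default_rng(0)
n, k = 500, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=n)])  # intercept + one regressor
beta = np.array([1.0, 2.0])
u = rng.normal(scale=0.5 * X[:, 1], size=n)                    # Var(u_i) depends on x_i
y = X @ beta + u

# OLS point estimate: beta_hat = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# Classical (homoscedasticity-based) covariance estimate: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - k)
cov_classical = s2 * XtX_inv
print("classical standard errors:", np.sqrt(np.diag(cov_classical)))
```

Because the simulated errors are heteroscedastic, these classical standard errors are not consistent for the true sampling variability of the OLS estimates, which motivates the estimator in the next section.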
Solution
If the regression errors $u_i$ are independent but have distinct variances $\sigma_i^2$, then $\Sigma = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$, which can be estimated with $\hat\Sigma = \operatorname{diag}(\hat{u}_1^2, \ldots, \hat{u}_n^2)$. This provides White's estimator, often referred to as HCE (heteroscedasticity-consistent estimator):

$$ \hat{v}_{\mathrm{HCE}}\!\left[\hat\beta_{\mathrm{OLS}}\right] = (X^\top X)^{-1} \, X^\top \operatorname{diag}(\hat{u}_1^2, \ldots, \hat{u}_n^2)\, X \, (X^\top X)^{-1}, $$

where, as above, $X$ denotes the matrix of stacked $x_i^\top$ values from the data. The estimator can be derived in terms of the generalized method of moments (GMM). Note that also often discussed in the literature is the covariance matrix $\hat\Omega_n$ of the $\sqrt{n}$-consistent limiting distribution:

$$ \sqrt{n}\left(\hat\beta_n - \beta\right) \;\xrightarrow{d}\; \mathcal{N}(0, \Omega), $$

where

$$ \Omega = \operatorname{E}\!\left[x_i x_i^\top\right]^{-1} \operatorname{Var}\!\left[x_i u_i\right] \operatorname{E}\!\left[x_i x_i^\top\right]^{-1} $$

and

$$ \hat\Omega_n = \Big(\tfrac{1}{n}\sum_i x_i x_i^\top\Big)^{-1} \Big(\tfrac{1}{n}\sum_i x_i x_i^\top \hat{u}_i^2\Big) \Big(\tfrac{1}{n}\sum_i x_i x_i^\top\Big)^{-1}. $$

Thus,

$$ \hat\Omega_n = n \cdot \hat{v}_{\mathrm{HCE}}\!\left[\hat\beta_{\mathrm{OLS}}\right] $$

and

$$ \hat{\operatorname{V}}\!\left[\hat\beta_n\right] = \tfrac{1}{n}\,\hat\Omega_n. $$

Precisely which covariance matrix is of concern is a matter of context. Alternative estimators have been proposed in MacKinnon & White that correct for unequal variances of regression residuals due to different leverage. Unlike the asymptotic White's estimator, their estimators are unbiased when the data are homoscedastic.
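The sandwich form $(X^\top X)^{-1} X^\top \operatorname{diag}(\hat{u}_i^2) X (X^\top X)^{-1}$ (the basic HC0 variant) can be written in a few lines of NumPy. The sketch below is illustrative only, using the same simulated heteroscedastic data as the earlier example rather than any particular library's implementation.

```python
import numpy as np

# Simulated heteroscedastic data (same illustrative setup as the earlier sketch).
rng = np.random.default_rng(0)
n, k = 500, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5 * X[:, 1], size=n)

# OLS fit and residuals.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# White / HC0 sandwich estimator: (X'X)^{-1} X' diag(u_hat^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)   # X' diag(resid^2) X, without forming an n x n matrix
cov_hc0 = XtX_inv @ meat @ XtX_inv

# Classical estimator for comparison: s^2 (X'X)^{-1}
cov_classical = (resid @ resid / (n - k)) * XtX_inv

print("HC0 standard errors:      ", np.sqrt(np.diag(cov_hc0)))
print("classical standard errors:", np.sqrt(np.diag(cov_classical)))
```

Finite-sample refinements such as those of MacKinnon & White rescale the squared residuals (for example, the HC1 variant multiplies the HC0 matrix by $n/(n-k)$) rather than changing the sandwich structure itself.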
Software
EViews: EViews version 8 offers three different methods for robust least squares: M-estimation, S-estimation, and MM-estimation.
MATLAB: See the hac function in the Econometrics toolbox.
Python: The statsmodels package offers various robust standard error estimates; see its documentation for further descriptions, and the sketch after this list for an example.
R: the vcovHC command from the sandwich package.
RATS: robusterrors option is available in many of the regression and optimization commands.
Stata: the robust option is applicable in many pseudo-likelihood based procedures.
Gretl: the option --robust to several estimation commands in the context of a cross-sectional dataset produces robust standard errors.
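As a minimal sketch of the Python route mentioned above (assuming statsmodels is installed; the simulated data are purely illustrative), heteroscedasticity-consistent covariances can be requested through the cov_type argument of the OLS fit method:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data whose error variance grows with the regressor (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=500)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)

X = sm.add_constant(x)                          # add an intercept column
classical_fit = sm.OLS(y, X).fit()              # classical (non-robust) covariance
robust_fit = sm.OLS(y, X).fit(cov_type="HC1")   # heteroscedasticity-consistent covariance

print("classical SEs:", classical_fit.bse)
print("HC1 SEs:      ", robust_fit.bse)
```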