Error Maximization

Like any symmetric matrix, a covariance matrix can be decomposed into a set of eigenvalues and a set of eigenvectors. The estimation of a covariance matrix can therefore be decomposed into the estimation of its eigenvalues and the estimation of its eigenvectors.
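In symbols (standard spectral-decomposition notation, not introduced in the text itself): a covariance matrix Sigma with eigenvalues lambda_1, ..., lambda_p and orthonormal eigenvectors v_1, ..., v_p can be written as

```latex
\Sigma \;=\; V \Lambda V^{\top}
       \;=\; \sum_{i=1}^{p} \lambda_i \, v_i v_i^{\top},
\qquad
V = [\, v_1 \;\cdots\; v_p \,], \quad
\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_p),
```

so that estimating Sigma amounts to estimating the lambda_i together with the v_i.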

The standard estimator of the covariance matrix is the sample covariance matrix. It can be decomposed into the sample eigenvalues and the sample eigenvectors.
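A minimal sketch of this decomposition in NumPy; the simulated data, dimensions, and seed below are illustrative assumptions, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10                      # n observations of p variables (illustrative sizes)
X = rng.standard_normal((n, p))     # simulated data

# Sample covariance matrix (rows are observations, columns are variables).
S = np.cov(X, rowvar=False)

# Like any symmetric matrix, S decomposes into eigenvalues and eigenvectors.
sample_eigenvalues, sample_eigenvectors = np.linalg.eigh(S)

# Reconstruction check: S equals V diag(lambda) V'.
V, lam = sample_eigenvectors, sample_eigenvalues
assert np.allclose(S, V @ np.diag(lam) @ V.T)
```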

The difficulty with estimating large-dimensional covariance matrices is that the smallest eigenvalues of the sample covariance matrix are biased downwards, and the largest ones upwards.
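A small simulation can make this bias visible. When the true covariance matrix is the identity, every true eigenvalue equals 1, yet the sample eigenvalues spread out around it; the dimensions and seed below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 100                      # large-dimensional setting: p/n = 0.5
X = rng.standard_normal((n, p))      # true covariance = identity, all true eigenvalues equal 1

S = np.cov(X, rowvar=False)
sample_eigs = np.linalg.eigvalsh(S)

print(f"smallest sample eigenvalue: {sample_eigs.min():.2f}")  # far below 1
print(f"largest sample eigenvalue:  {sample_eigs.max():.2f}")  # far above 1
```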

The eigenvalues of the covariance matrix correspond to the variances (or risks) of certain linear combinations of the individual variables, namely the combinations given by the corresponding eigenvectors. Therefore, according to the sample covariance matrix, some linear combinations appear to have less risk than they actually do, while other linear combinations appear to have more.
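Concretely (a standard identity, in notation not used in the text): if v_i is the eigenvector of the true covariance matrix Sigma associated with the eigenvalue lambda_i, then the linear combination v_i'x of the variables x has variance

```latex
\operatorname{Var}\!\left(v_i^{\top} x\right) \;=\; v_i^{\top} \Sigma \, v_i \;=\; \lambda_i .
```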

The term ‘error maximization’ refers to erroneously latching on to the linear combinations that appear to have low risk according to the sample covariance matrix, while shying away from the linear combinations that appear to have high risk according to the sample covariance matrix.

The linear combination of variables that appears to have the lowest variance according to the sample covariance matrix does not, in general, have the lowest variance in reality: it is singled out precisely because estimation error makes its sample variance look smaller than it truly is. The process of trying to minimize variance thus ends up maximizing the impact of estimation error in the covariance matrix.
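The following sketch illustrates the point. It builds the combination with the lowest variance according to the sample covariance matrix (with weights summing to one) and compares the variance it appears to have with the variance it actually has under the known true covariance matrix; the identity covariance, dimensions, and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 100

# True covariance: identity, so the genuinely lowest-variance combination with
# weights summing to one is the equal-weighted one, with true variance 1/p.
Sigma = np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)

# Weights that minimize variance according to the sample covariance matrix,
# subject to the weights summing to one.
ones = np.ones(p)
w = np.linalg.solve(S, ones)
w /= ones @ w

apparent_risk = w @ S @ w        # the low risk the sample covariance matrix promises
actual_risk = w @ Sigma @ w      # the risk actually borne under the true covariance

print(f"apparent variance: {apparent_risk:.4f}")
print(f"actual variance:   {actual_risk:.4f}")
```

On a typical run the apparent variance is well below the actual one: the procedure latches on to a combination tailored to estimation noise rather than to the truly optimal equal-weighted combination.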

The higher the concentration ratio, that is, the number of variables divided by the number of observations, the more severe the problem of error maximization.
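Continuing the same illustrative identity-covariance setup, the gap between apparent and actual variance widens as the concentration ratio p/n grows (again, dimensions and seed are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

for p in (20, 100, 200, 380):                 # concentration ratios 0.05, 0.25, 0.50, 0.95
    X = rng.standard_normal((n, p))           # true covariance = identity
    S = np.cov(X, rowvar=False)
    ones = np.ones(p)
    w = np.linalg.solve(S, ones)              # minimum-variance weights per the sample covariance
    w /= ones @ w
    apparent = w @ S @ w                      # risk according to the sample covariance matrix
    actual = w @ w                            # true risk under the identity covariance
    print(f"p/n = {p / n:.2f}: apparent {apparent:.4f}  vs  actual {actual:.4f}")
```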
