Right now I have the following function to evaluate the multivariate Gaussian density:
#include <Eigen/Dense>
#include <cmath>

double densities::evalMultivNorm(const Eigen::VectorXd &x, const Eigen::VectorXd &meanVec, const Eigen::MatrixXd &covMat)
{
    // 1/sqrt(2*pi)
    double inv_sqrt_2pi = 0.3989422804014327;
    // quadratic form (x-mu)^T * covMat^{-1} * (x-mu), via an explicit inverse
    double quadform = (x - meanVec).transpose() * covMat.inverse() * (x - meanVec);
    // normalizing constant (2*pi)^(-k/2) * det(covMat)^(-1/2)
    double normConst = std::pow(inv_sqrt_2pi, covMat.rows()) * std::pow(covMat.determinant(), -0.5);
    return normConst * std::exp(-0.5 * quadform);
}
This is just a transcription of the formula. However, I get a lot of 0s, NaNs and Infs, and I suspect they come from covMat.determinant() being very close to zero.
I have heard that it is more "stable" to premultiply x - meanVec by the inverse of a "square root" of its covariance matrix. Statistically this gives you a random vector with mean zero and the identity matrix as its covariance (if covMat = L*L^T, then L^{-1}*(x - meanVec) has covariance L^{-1}*covMat*L^{-T} = I). My questions are:
- Is this truly the best way to go?
- Which is the "best" square root technique, and
- How do I do it? (Preferably using Eigen)
Ad 1: "Depends". E.g., if your covariance matrix has a special structure that makes it easy to calculate its inverse, or if the dimension is very small, it can be faster and more stable to actually calculate the inverse.
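As one illustration of such a special case (the helper below is my own example, not something from the question): when covMat is diagonal, its inverse is elementwise 1/variance and its determinant is just the product of the variances, so the direct formula is perfectly serviceable.

#include <Eigen/Dense>
#include <cmath>

// Hypothetical helper: covMat is assumed diagonal and is passed as the
// vector of its diagonal entries (the variances).
double evalDiagNorm(const Eigen::VectorXd &x, const Eigen::VectorXd &meanVec,
                    const Eigen::VectorXd &variances)
{
    const double pi = 3.141592653589793;
    // Elementwise inverse: the quadratic form is a sum of scaled squares.
    double quadform = ((x - meanVec).array().square() / variances.array()).sum();
    // Determinant of a diagonal matrix is the product of its diagonal.
    double normConst = std::pow(2.0 * pi, -0.5 * x.rows()) / std::sqrt(variances.prod());
    return normConst * std::exp(-0.5 * quadform);
}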
Ad 2: Usually, Cholesky decomposition does the job. If your covariance is truly positive definite (i.e., not too close to a semidefinite matrix), decompose covMat = L*L^T and compute squaredNorm(L\(x-mu)) (where x = A\b means "solve A*x = b for x"). Of course, if your covariance is fixed, you should compute L only once (and perhaps invert it as well). You should use L to compute sqrt(covMat.determinant()) as well, since computing the determinant would otherwise require decomposing covMat again.
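As a sketch of that reuse (the FixedCovNorm name and its interface are my own invention, assuming the mean is fixed along with the covariance): factor once in the constructor, then evaluate any number of points against the stored decomposition.

#include <Eigen/Dense>
#include <cmath>
#include <stdexcept>

// Hypothetical wrapper: the covariance is factored exactly once, in the
// constructor, and L is reused for every subsequent evaluation.
class FixedCovNorm
{
public:
    FixedCovNorm(const Eigen::VectorXd &meanVec, const Eigen::MatrixXd &covMat)
        : mean_(meanVec), chol_(covMat)
    {
        if (chol_.info() != Eigen::Success)
            throw std::runtime_error("covMat is not positive definite");
        // The normalizing constant depends only on covMat, so compute it
        // once; det(L) = sqrt(det(covMat)) is the product of L's diagonal.
        normConst_ = std::pow(2.0 * 3.141592653589793, -0.5 * covMat.rows())
                     / chol_.matrixL().determinant();
    }

    double operator()(const Eigen::VectorXd &x) const
    {
        // squaredNorm(L \ (x - mu)) is the quadratic form from the question.
        double quadform = chol_.matrixL().solve(x - mean_).squaredNorm();
        return normConst_ * std::exp(-0.5 * quadform);
    }

private:
    Eigen::VectorXd mean_;
    Eigen::LLT<Eigen::MatrixXd> chol_;
    double normConst_;
};

Usage would then be along the lines of FixedCovNorm pdf(meanVec, covMat); followed by pdf(x) for each point.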
Minor improvement: instead of pow(inv_sqrt_2pi, covMat.rows()), compute logSqrt2Pi = log(sqrt(2*pi)) and then return exp(-0.5*quadform - covMat.rows()*logSqrt2Pi) / L.determinant(); staying in log space keeps the normalizing constant from underflowing to zero when the dimension is large.

Ad 3: This should run in Eigen 3.2 or later.
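A minimal sketch assembling the pieces above (the std::runtime_error on a failed decomposition is one possible way to handle a covariance that is not numerically positive definite; treat the details as illustrative):

#include <Eigen/Dense>
#include <cmath>
#include <stdexcept>

double densities::evalMultivNorm(const Eigen::VectorXd &x, const Eigen::VectorXd &meanVec, const Eigen::MatrixXd &covMat)
{
    static const double logSqrt2Pi = 0.5 * std::log(2.0 * 3.141592653589793);
    typedef Eigen::LLT<Eigen::MatrixXd> Chol;
    Chol chol(covMat);
    // LLT reports failure (info() != Success) if covMat is not positive definite.
    if (chol.info() != Eigen::Success)
        throw std::runtime_error("Cholesky decomposition of covMat failed");
    const Chol::Traits::MatrixL &L = chol.matrixL();
    // (x-mu)^T * covMat^{-1} * (x-mu) via a triangular solve, never forming the inverse.
    double quadform = (L.solve(x - meanVec)).squaredNorm();
    // L.determinant() is the product of L's diagonal = sqrt(det(covMat)).
    return std::exp(-x.rows() * logSqrt2Pi - 0.5 * quadform) / L.determinant();
}

Note that nothing here calls covMat.inverse() or covMat.determinant(); the triangular solve and the product of L's diagonal replace both.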