I am using Eigen 3.4 and MKL to solve a sparse linear system.
Previously I was able to get a correct solution using Eigen::SimplicialLDLT to solve II\EE, but it took a long time. II is a sparse matrix computed as J^T * J and EE is a dense vector. J is a Jacobian matrix of dimension 175215 x 175215, EE is a vector of dimension 175215 x 1, and J is invertible.
For my case, Eigen::SimplicialLDLT takes 14 s while MATLAB takes only 2.5 s. Therefore I tried Eigen's Pardiso solvers (a sketch of my Pardiso call appears after my code below). This method is really fast, but yields an incorrect solution which is very close to zero. These solvers seem to have many parameters that can be set; can they help me get a correct solution? Or is there another, more efficient way to solve this problem?
I have tried almost all the solvers Eigen provides, both its built-in ones and external wrappers such as SuiteSparse, but the results are not good: either very slow or incorrect. I want a correct solution with a runtime similar to MATLAB's.
Below is my code:
#include <Eigen/Sparse>
#include <algorithm>
#include <igl/cat.h>

// Gauss-Newton blocks: II = [U W; W^T V], built from the Jacobian pieces.
Eigen::SparseMatrix<double> U = JPSlice.transpose() * JPSlice;
Eigen::SparseMatrix<double> V = JDRow.transpose() * JDRow + WeightHH;
Eigen::SparseMatrix<double> W = JPSlice.transpose() * JDRow;
// Right-hand-side pieces: EE = [EP; ED + EH].
Eigen::VectorXd VecErrorS = Eigen::VectorXd::Map(ErrorS.data(), ErrorS.size());
Eigen::VectorXd EP = -JPSlice.transpose() * VecErrorS;
Eigen::VectorXd ED = -JDRow.transpose() * VecErrorS;
Eigen::ArrayXi RowIdSelectVar = IdSelectVar.col(0);
Eigen::ArrayXi ColIdSelectVar = IdSelectVar.col(1);
// Convert (row, col) index pairs to linear indices and sort them into variable order.
Eigen::ArrayXi ArraySortSelectId = RowIdSelectVar * ValParam.Sizej + ColIdSelectVar;
std::sort(ArraySortSelectId.data(), ArraySortSelectId.data() + ArraySortSelectId.size());
Eigen::ArrayXd SortSelectMap = SelectMap.transpose().reshaped(ValParam.Sizei * ValParam.Sizej, 1);
Eigen::VectorXd XH0 = SortSelectMap(ArraySortSelectId);
Eigen::VectorXd EH = -WeightHH * XH0;
Eigen::VectorXd EDEH = ED + EH;
// Assemble the 2x2 block system with igl::cat (libigl).
Eigen::SparseMatrix<double> UW = igl::cat(2, U, W);
Eigen::SparseMatrix<double> WT = W.transpose();
Eigen::SparseMatrix<double> WV = igl::cat(2, WT, V);
Eigen::SparseMatrix<double> II = igl::cat(1, UW, WV);
Eigen::VectorXd EE = igl::cat(1, EP, EDEH);
II.makeCompressed();
Eigen::initParallel();  // ideally called once, at program start
Eigen::ConjugateGradient<Eigen::SparseMatrix<double>> solver;  // or another solver
solver.compute(II);
Eigen::VectorXd Delta = solver.solve(EE);
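For reference, my Pardiso attempt looks roughly like the following sketch (it assumes Eigen's PardisoSupport module and linking against MKL; PardisoLU would be the variant that makes no symmetry assumption):

#include <Eigen/PardisoSupport>  // requires linking against MKL
#include <iostream>

// PardisoLDLT assumes II is symmetric; check info() at every stage,
// since a silent failure can produce a near-zero "solution".
Eigen::PardisoLDLT<Eigen::SparseMatrix<double>> pardiso;
pardiso.compute(II);
if (pardiso.info() != Eigen::Success)
    std::cerr << "Pardiso factorization failed" << std::endl;
Eigen::VectorXd Delta = pardiso.solve(EE);
if (pardiso.info() != Eigen::Success)
    std::cerr << "Pardiso solve failed" << std::endl;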
How large is your matrix? If it is large, you may be better off using an iterative solver. Direct solvers have high complexity and are best suited to cases where you expect to solve many linear systems with the same matrix and different right-hand-side vectors (time-dependent linear PDEs, for instance).
If J is square and invertible, then J^T J is positive definite (and conversely, if J is not invertible, neither is J^T J). The Conjugate Gradient method is tailored to that case and should be very fast. If J is rectangular MxN with M < N (underdetermined system), the columns are linearly dependent, so the kernel of J is not reduced to {0} and J^T J is not positive definite. You can still try CG, though, but verify the results. If J is rectangular with M > N, you may be trying to minimize a norm ||Jx - b|| with EE = J^T b, and Eigen offers Eigen::LeastSquaresConjugateGradient to solve that without constructing J^T J; see the sketch below.
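A minimal sketch for that last case (J and b here are placeholders for the rectangular Jacobian and the residual vector, which your code would already have assembled):

#include <Eigen/IterativeLinearSolvers>
#include <iostream>

// Minimizes ||Jx - b|| without ever forming J^T * J explicitly.
Eigen::LeastSquaresConjugateGradient<Eigen::SparseMatrix<double>> lscg;
lscg.compute(J);                      // J: M x N sparse, M > N
Eigen::VectorXd x = lscg.solve(b);    // b: M x 1 dense residual
std::cout << "#iterations: " << lscg.iterations()
          << ", estimated error: " << lscg.error() << std::endl;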
You can try preconditioning to speed up iterative schemes (and improve stability). Eigen offers that out of the box: the preconditioner is a template parameter of each iterative solver, and CG defaults to an extremely simple scheme, preconditioning by the diagonal of the matrix (Jacobi).
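A sketch of that route, swapping the default DiagonalPreconditioner for an incomplete Cholesky factorization, which is usually stronger (the tolerance value is illustrative; tune it for your accuracy needs):

#include <Eigen/IterativeLinearSolvers>

// CG reading both triangles of the fully stored symmetric matrix,
// preconditioned with an incomplete Cholesky factorization.
Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
                         Eigen::Lower | Eigen::Upper,
                         Eigen::IncompleteCholesky<double>> cg;
cg.setTolerance(1e-10);  // illustrative tolerance, not a recommendation
cg.compute(II);
Eigen::VectorXd Delta = cg.solve(EE);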
You can try GMRES too, which is generally quite robust, in particular if you have doubts about positive definiteness.
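In Eigen, GMRES ships in the unsupported IterativeSolvers module; a minimal sketch (the restart length is illustrative):

#include <unsupported/Eigen/IterativeSolvers>

// GMRES requires neither symmetry nor positive definiteness.
Eigen::GMRES<Eigen::SparseMatrix<double>> gmres;
gmres.set_restart(50);  // illustrative restart length; tune for your problem
gmres.compute(II);
Eigen::VectorXd Delta = gmres.solve(EE);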
As a final note, you may have better luck asking over at Computational Science Stack Exchange (scicomp).