I'm using the following two code blocks to compute matrix multiplication serially and in parallel.
Serial -
double** ary1 = new double*[in];
double** ary2 = new double*[in];
double** result = new double*[in];
for (int i = 0; i < in; i++) {   // allocate each row as well
    ary1[i] = new double[in];
    ary2[i] = new double[in];
    result[i] = new double[in];
}
for (int i = 0; i < in; i++) {
    for (int j = 0; j < in; j++) {
        result[i][j] = 0;
        for (int k = 0; k < in; k++) {
            result[i][j] += ary1[i][k] * ary2[k][j];
        }
    }
}
Parallel -
double** ary1 = new double*[in];
double** ary2 = new double*[in];
double** resultsP = new double*[in];
for (int i = 0; i < in; i++) {   // allocate each row as well
    ary1[i] = new double[in];
    ary2[i] = new double[in];
    resultsP[i] = new double[in];
}
int size = in * in;              // one iteration per output element
#pragma omp parallel for
for (int i = 0; i < size; i++) {
    int row = i / in;
    int column = i % in;
    double sum = 0;
    for (int k = 0; k < in; k++) {
        sum += ary1[row][k] * ary2[k][column];   // accumulate into sum, then store once
    }
    resultsP[row][column] = sum;
}
I ran both on a quad-core computer, but I get the same running times. Why don't I get a performance increase from running this in parallel? Does accessing the shared arrays ary1, ary2, and resultsP inside the parallel loop cause the iterations to run serially?
This happened because the '-fopenmp' flag was not passed when compiling the code. The problem was solved by adding it.