I'm working with OpenMP to parallelize a scalar nested for loop:
double P[N][N];
double x = 0.0, y = 0.0;

for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        P[i][j] = someLongFunction(x, y);
        y += 1;
    }
    x += 1;
}
The important constraint is that the matrix P must come out identical in the scalar and parallel versions.
None of my attempts so far have produced a correct result...
The problem here is that you have introduced iteration-to-iteration dependencies with the running updates:
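y += 1;   // inside the inner loop
x += 1;   // inside the outer loop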
The value of x and y in each iteration depends on how many earlier iterations have already run. Therefore, as the code stands right now, it is not parallelizable: running the iterations concurrently will produce incorrect results (as you are probably seeing).
Fortunately, in your case, you can compute x and y directly from the loop indices, without introducing this dependency:
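Since y is never reset between outer iterations, its value when computing P[i][j] is simply i*N + j, and x is simply i. Here is a sketch of the dependency-free version, using the same N, P and someLongFunction as in your code:

double P[N][N];

for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        // x advances once per outer iteration, so it equals i;
        // y advances once per inner iteration and is never reset,
        // so it equals i*N + j.
        double x = (double)i;
        double y = (double)(i * N + j);
        P[i][j] = someLongFunction(x, y);
    }
}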
Now you can try throwing an OpenMP pragma over this and see if it works:
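For example (a sketch: the pragma distributes the outer loop across threads, and i and j are private to each thread):

#pragma omp parallel for
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        // Each iteration computes its own inputs, so there are no
        // loop-carried dependencies left and any execution order
        // yields the same P as the scalar loop.
        P[i][j] = someLongFunction((double)i, (double)(i * N + j));
    }
}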