step doubling Runge Kutta implementation stuck shrinking stepsize to machine precision


I need to integrate a system of ODEs using an adaptive RK4 method with stepsize control via step doubling.

The problem is that the program keeps shrinking the stepsize towards machine precision forever without ever advancing time.

The idea is to advance the solution once by a single full step and also by two successive half steps, take the difference between the two results and store it in eps, so that eps is a measure of the local error (the division by 15 in the code comes from Richardson extrapolation for a fourth-order method, 2^4 - 1 = 15). I then want to choose the next stepsize according to whether eps is greater or smaller than a specified accuracy eps0, as described in the book "Numerical Recipes".

RK4Step(double t, double *Y, double *Yout, void (*RHSFunc)(double, double *, double *), double h) advances the solution vector Y by h and puts the result into Yout, using RHSFunc to evaluate the right-hand side.
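For reference, RK4Step is meant to be a standard classical RK4 step. A minimal illustrative version with this signature could look like the following (not necessarily identical to my actual routine; NEQ is the problem dimension defined in the code below):

void RK4Step(double t, double *Y, double *Yout,
             void (*RHSFunc)(double, double *, double *), double h)
{
  double k1[NEQ], k2[NEQ], k3[NEQ], k4[NEQ], Ytmp[NEQ];

  RHSFunc(t, Y, k1);                                      //slope at the start
  for(int i=0; i<NEQ; i++) Ytmp[i] = Y[i] + 0.5*h*k1[i];
  RHSFunc(t + 0.5*h, Ytmp, k2);                           //slope at the midpoint, using k1
  for(int i=0; i<NEQ; i++) Ytmp[i] = Y[i] + 0.5*h*k2[i];
  RHSFunc(t + 0.5*h, Ytmp, k3);                           //slope at the midpoint, using k2
  for(int i=0; i<NEQ; i++) Ytmp[i] = Y[i] + h*k3[i];
  RHSFunc(t + h, Ytmp, k4);                               //slope at the end

  //weighted average of the four slopes; safe even when Yout aliases Y
  for(int i=0; i<NEQ; i++)
    Yout[i] = Y[i] + h/6.0*(k1[i] + 2.0*k2[i] + 2.0*k3[i] + k4[i]);
}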

#include <fstream>
#include <cmath>     //fabs, pow
#include <algorithm> //max
using namespace std;

#define NEQ 4 //problem dimension

int main(int argc, char* argv[])
{
  ofstream frames("./frames.dat");
  ofstream graphs("./graphs.dat");
                                       
  double Y[4]  = {2.0, 2.0, 1.0, 0.0}; //initial conditions for solution vector
  double finaltime = 100;  //end of integration
  double eps0   =  10e-5;  //error tolerance that eps is compared against

  double t = 0.0;
  double step = 0.01;
  while(t < finaltime)
  {
    double eps = 0.0;
    double Y1[4], Y2[4]; //Y1 will store   half step solution
                         //Y2 will store double step solution
    double dt = step;    //cache current stepsize
    for(;;)
    {
      //make a step starting from state stored in Y and
      //put solution into Y1. Then from Y1 make another half step
      //and store into Y1.
      RK4Step(t,     Y,  Y1, RHS,   step);  //two half steps 
      RK4Step(t+step,Y1, Y1, RHS,   step);
      RK4Step(t,     Y,  Y2, RHS, 2*step);  //one long step

      //compute eps as maximum of differences between Y1 and Y2
      //(an alternative would be quadrature sums)
      for(int i=0; i<NEQ; i++)
        eps=max(eps, fabs( (Y1[i]-Y2[i])/15.0 ) );

      //if error is within tolerance we grow stepsize
      //and advance time 
      if(eps < eps0) 
      { 
        //stepsize is accepted, grow stepsize
        //save solution from Y1 into Y,
        //advance time by the previous (cached) stepsize
        Y[0] = Y1[0]; Y[1] = Y1[1];
        Y[2] = Y1[2]; Y[3] = Y1[3];
        step = 0.9*step*pow(eps0/eps, 0.20); //(0.9 is the safety factor)
        t+=dt;
        break;
      }
      //if the error is too big we shrink stepsize
      step = 0.9*step*pow(eps0/eps, 0.25);
    }


  }

  frames.close();
  graphs.close();
  return 0;


}

There is 1 best solution below


You never reset eps in the inner loop: the declaration double eps = 0.0; sits outside the for(;;) loop, so eps only ever keeps its running maximum. This could well be the direct cause of your problem. While the actual error shrinks with ever decreasing step sizes, the maximum stored in eps stays constant and above eps0. This results in a constant reduction factor in the step size update, with no chance of ever breaking out of the loop.
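One minimal way to apply that fix to the loop in the question (only the relevant lines are shown, everything else stays as it is):

    double eps = 0.0;
    double Y1[4], Y2[4];
    double dt = step;
    for(;;)
    {
      eps = 0.0;  //reset the running maximum for every attempted stepsize

      RK4Step(t,      Y,  Y1, RHS,   step);  //two half steps
      RK4Step(t+step, Y1, Y1, RHS,   step);
      RK4Step(t,      Y,  Y2, RHS, 2*step);  //one long step

      for(int i=0; i<NEQ; i++)
        eps = max(eps, fabs((Y1[i]-Y2[i])/15.0));

      //... accept/reject logic unchanged ...
    }

(Equivalently, move the declaration double eps = 0.0; inside the for(;;) body.)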


Another "wrong thing" is that the error estimate and tolerance are incompatible. The error tolerance eps0 is an error density or unit-step error. To bring your error estimate eps into that format you need to divide eps by step. Or put another way, currently you are forcing the actual step error to be close to 0.5*eps0, so that the global error is 0.5*eps0 times the number of steps taken, with the number of steps loosely proportional to eps0^0.2. In the version using the unit-step error, the local error is forced to be "dynamically" close to 0.5*eps0*step, so that the global error is about 5*eps0 times the length of the integration interval. I'd say that the second variant is more in line with intuition about the expected behavior.

This is not a critical error, but it may lead to sub-optimal step sizes and an actual global error that deviates non-trivially from the desired error tolerance.
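A minimal sketch of how that change could look in the error computation from the question (whether you divide by step or by the full interval 2*step only shifts the comparison by a constant factor, so take the exact scaling as an illustrative choice):

      //compute eps from step doubling as before
      for(int i=0; i<NEQ; i++)
        eps = max(eps, fabs((Y1[i]-Y2[i])/15.0));

      //convert the per-step estimate into an error per unit step
      eps /= step;      //or eps /= (2*step), the interval covered by the comparison

      if(eps < eps0)    //now both sides are errors per unit step
      {
        //accept and grow the stepsize as before
      }

With this convention the estimate scales roughly as h^4 instead of h^5, so 1/4 becomes the natural exponent in the step-size update.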


You also have a coding inconsistency: in the declaration and propagation of the state vectors you hard-code 4 components, while the error computation loops over a variable number NEQ of equations and components. Since you are using C++, you could use a state vector class that handles all dimension-dependent loops internally. (If taken too far, frequent allocation of short-lived instances could become an efficiency issue.)
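For example, a minimal sketch of such a state vector type built on std::array (illustrative only; a fixed-size array also avoids the heap allocations mentioned in the parenthesis):

#include <array>
#include <algorithm>
#include <cmath>

template <std::size_t N>
struct StateVec
{
  std::array<double, N> v{};

  double& operator[](std::size_t i)       { return v[i]; }
  double  operator[](std::size_t i) const { return v[i]; }
};

//maximum componentwise difference, e.g. for the step-doubling error estimate
template <std::size_t N>
double maxDiff(const StateVec<N>& a, const StateVec<N>& b)
{
  double m = 0.0;
  for(std::size_t i = 0; i < N; i++)
    m = std::max(m, std::fabs(a[i] - b[i]));
  return m;
}

With such a type the copy from Y1 into Y becomes a plain assignment Y = Y1;, the error loop collapses to eps = maxDiff(Y1, Y2)/15.0;, and changing the problem dimension touches only one place.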