Clearly, there is a trade-off between accuracy and complexity of
calculation which depends heavily on the chosen value for *h*. In general, as *h*
is decreased the calculation takes longer but is more accurate. However, if *h* is
decreased too much, the slight rounding that occurs in the computer (because it cannot
represent real numbers exactly) begins to accumulate enough to cause significant errors.
For many higher order systems, it is very difficult to make the Euler approximation
effective. For this reason, more accurate (and more elaborate) techniques were developed. We
will discuss the methods developed by two mathematicians, Runge and Kutta, near the
beginning of the 20th century. The techniques, aptly, are called the Runge-Kutta methods.

The 2^{nd} order Runge-Kutta (RK2) method
is most easily understood as a refinement of the Euler method. It is shown graphically
below.

This technique begins like the Euler method: we evaluate the derivative of
y(t) at t_{o} (which we call k1). We use k1 to get an initial estimate for
y(t_{o}+h), labelled y*(t_{o}+h). From y*(t_{o}+h) we can get an
estimate for the derivative of y(t) at t_{o}+h, which we will call k2. We then use
the average of these two derivatives, k3, to arrive at our final estimate of y(t_{o}+h),
labelled y*'(t_{o}+h). This is called a second order method because the result
turns out to match the Taylor series to two terms. There is also a 4th order method,
which is the one that Matlab uses.

The RK2 method increases the accuracy by getting a better
estimate for the slope throughout the interval. The Euler method uses the derivative at
y(t_{o}) to generate an approximate value of y(t_{o}+*h*).
Looking at the figure you can see that using k_{1} tends to
overestimate the value of y(t_{o}+*h*), while using
k_{2}, the derivative calculated at t_{o}+*h*, tends to
underestimate it. A function that is concave upward
would do the opposite. By averaging the two slopes we get an estimate for y(t_{o}+*h*)
that is better than either estimate alone; this averaging is the heart of all of the Runge-Kutta
methods.

Algorithmically, we can describe each step of RK2 this way:

1) Start at time t_{o} with the known initial value y(t_{o}).

2) At time t_{o}, find the derivative of y(t) and call this k_{1} (eq. 1).

3) Find an initial value for y*(t_{o}+h) using the Euler formula (eq. 2).

4) From y*(t_{o}+h) estimate the derivative of y(t) at t_{o}+h and call this k_{2} (eq. 1).

5) Get a new value for y*'(t_{o}+h) based on the average of the values of k_{1} and k_{2}.

6) Let y(t_{o})=y*'(t_{o}+h) and t_{o}=t_{o}+h.

7) Repeat steps 2 through 6 until finished.

Now let's go through the same example that we used with the Euler method:

## First Time Step

Let's use h=0.1. To find y*(t_{o}+h)=y*(h) (since t_{o}=0) we first evaluate the derivative (step 2, from above), which gives k_{1}=1. We then find the approximate solution after the time step (step 3, from above): y*(h)=y(0)+h·k_{1}=1+(0.1)(1)=1.1.

We then use this to find the value of k_{2} (step 4, from above), which gives k_{2}≈-0.19.

We use k_{1} and k_{2} to find y*'(t_{o}+h) (step 5, from above): y*'(h)=y(0)+h·(k_{1}+k_{2})/2≈1.0405. Our approximate solution is 1.0405 and the exact solution is 1.0413, so both round to 1.04. Note that this is much better than the approximation of 1.1 obtained with the Euler method.

The procedure is repeated for subsequent time steps.

The table below shows the results of the second order Runge-Kutta approximation, as well as the exact solution, up to 4 seconds. You can see that the Runge-Kutta algorithm at h=0.1 is even more accurate than the Euler method at h=0.01. Here is a link to Matlab code, and C code, that performs the approximation.

| t     | y(t) (exact) | y*'(t) (RK2) |
|-------|--------------|--------------|
| 0.000 | 1.0000       | 1.0000       |
| 0.100 | 1.0413       | 1.0405       |
| 0.200 | 1.0018       | 1.0011       |
| 0.300 | 0.9202       | 0.9200       |
| 0.400 | 0.8205       | 0.8208       |
| 0.500 | 0.7167       | 0.7176       |
| ...   | ...          | ...          |
| 3.900 | 0.0010       | 0.0011       |
| 4.000 | 0.0008       | 0.0009       |

The Runge-Kutta method is easily adapted to higher order differential equations in exactly the same way as the Euler methods.

Higher order Runge-Kutta methods exist, with the most commonly used being the 4th order Runge-Kutta. These higher order methods match more terms in the Taylor series approximation by taking a weighted average of several derivative approximations (the second order Runge-Kutta uses 2 approximations, k1 and k2). Matlab uses a fourth order Runge-Kutta, which also varies the step size h. The value of h is increased where possible, to speed the calculation, and decreased where needed to maintain accuracy.
