Runge-Kutta Method

The Runge-Kutta Method is a more general and improved method as compared to Euler's method. It uses, as we shall see, Taylor's expansion of a ``smooth function" (by which we mean that the derivatives exist and are continuous up to a certain desired order). Before we proceed further, the following questions may arise in our mind, which have not found a place in our discussion so far.

  1. How does one choose the starting values, sometimes called starters that are required for implementing an algorithm?
  2. Is it desirable to change the step size (or the length of the interval) $ h$ during the computation if the error estimates demand a change as a function of $ h$ ?
For the present, the discussion about Question [*] is not taken up. We try to look more closely at Question [*] in the ensuing discussion. There are many self-starter methods, like the Euler method, which uses the initial condition. But these methods are normally not very efficient, since the error bounds may not be ``good enough". We have seen in Theorem [*] that the local error (neglecting the round-off error) is $ O(h^2)$ in the Euler's algorithm. This shows that as the value of $ h$ becomes smaller, the approximations improve. Moreover, the error of order $ O(h)$ may not be sufficiently accurate for many problems. So, we look into a few methods where the error is of higher order. They are the Runge-Kutta (in short, R-K) methods. Let us analyze how the algorithm is arrived at, before we actually state it. To do so, we consider the IVP
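The claim that the error in Euler's method is of order $ O(h)$ can be checked numerically. The following is a minimal sketch (the test problem $ y^\prime = y, \; y(0) = 1$ and the function names are our own illustration, not from the text): halving $ h$ roughly halves the error at $ x = 1$.

```python
import math

def euler(f, a, b, y0, n):
    """Euler's method for y' = f(x, y), y(a) = y0, over [a, b] in n steps."""
    h = (b - a) / n
    x, y = a, y0
    for _ in range(n):
        y = y + h * f(x, y)   # y_{k+1} = y_k + h f(x_k, y_k)
        x = x + h
    return y

# Test problem (an assumption for illustration): y' = y, y(0) = 1, so y(1) = e.
err_h  = abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, 100) - math.e)
err_h2 = abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, 200) - math.e)
ratio = err_h / err_h2   # approaches 2 for a first-order method
```

For a method of order $ O(h)$ , the error at a fixed point scales linearly with $ h$ , so the ratio above is close to $ 2$ .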

$\displaystyle y^\prime = f(x, y), \; y(a) =y_0, \; x \in [a, \; b].$

Define $ x_k = x_0 + k h, \; k=0,1,2, \ldots, n$ with $ x_0 = a, \; x_n = b$ and $ h = \displaystyle \frac{b-a}{n}$ . We now assume that $ y$ and $ f$ are smooth. Using Taylor's series, we now have

$\displaystyle y(x_{k+1}) = y(x_k) + h y^\prime(x_k) + \frac{h^2}{2!} y^{\prime\prime}(x_k) +  \frac{h^3}{3!} y^{\prime\prime\prime}(x_k) + \cdots.$ (14.3.10)

For $ k = 0,1,2,\ldots, n-1$ consider the expression

$\displaystyle y_{k+1} = y_k + p k_1 + q k_2, {\mbox{ where }} \; k_1 = h f(x_k, y_k), \; \;  k_2 = h f(x_k +\alpha h, y_k + \beta k_1)$ (14.3.11)

and $ p, q, \alpha, \beta$ are constants. When $ q = 0$ , ([*]) reduces to the Euler's algorithm. We choose $ p, q, \alpha$ and $ \beta$ so that the local truncation error is $ O(h^3)$ . From the definition of $ k_2$ , we have

$\displaystyle \frac{k_2}{h} = f(x_k, y_k) + \alpha h f_x + \beta k_1 f_y + \frac{\alpha^2 h^2}{2} f_{xx} + \alpha \beta h k_1 f_{xy} + \frac{\beta^2 k_1^2}{2} f_{yy} + O(h^3)$

where $ f_x, f_y, \ldots$ denote the partial derivatives of $ f$ with respect to $ x, y, \ldots, $ respectively. Substituting these values in ([*]), we have

$\displaystyle y_{k+1} = y_k + h(p+q) f + q h^2 (\alpha f_x + \beta f f_y) + q h^3 \left( \frac{\alpha^2}{2} f_{xx} + \alpha \beta f f_{xy} + \frac{\beta^2}{2} f^2 f_{yy} \right) + O(h^4).$ (14.3.12)

A comparison of ([*]) and ([*]) leads to the choice of

$\displaystyle p+q = 1, \;\; q(\alpha f_x + \beta f f_y) = \frac{y^{\prime\prime}}{2}$ (14.3.13)

in order that the powers of $ h$ up to $ h^2$ match (in some sense) in the approximate values of $ y_{k+1}$ . Here we note that $ y^{\prime\prime} = f_x + f f_y$ . So, we choose $ \alpha, \beta, p$ and $ q$ so that ([*]) is satisfied. One of the simplest solutions is

$\displaystyle p = q = \frac{1}{2} \;\; {\mbox{ and }} \alpha = \beta = 1.$

Thus we are led to define

$\displaystyle y_{k+1} = y_k + \frac{h}{2} \bigl( f(x_k, y_k) + f(x_k + h, y_k + h f(x_k, y_k)) \bigr).$ (14.3.14)

Evaluation of $ y_{k+1}$ by ([*]) is called the Runge-Kutta method of order $ 2$ (R-K method of order $ 2$ ).
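The scheme ([*]) can be sketched in code. The following is a minimal illustration, not part of the text; the test problem $ y^\prime = y, \; y(0) = 1$ and the function names are our own assumptions.

```python
import math

def rk2(f, a, b, y0, n):
    """R-K method of order 2:
    y_{k+1} = y_k + (h/2) [ f(x_k, y_k) + f(x_k + h, y_k + h f(x_k, y_k)) ]."""
    h = (b - a) / n
    x, y = a, y0
    for _ in range(n):
        k1 = h * f(x, y)            # k1 = h f(x_k, y_k)
        k2 = h * f(x + h, y + k1)   # k2 with alpha = beta = 1
        y = y + 0.5 * (k1 + k2)     # p = q = 1/2
        x = x + h
    return y

# Test problem (our assumption): y' = y, y(0) = 1, whose exact value is y(1) = e.
approx = rk2(lambda x, y: y, 0.0, 1.0, 1.0, 100)
error = abs(approx - math.e)
```

With $ h = 0.01$ the error at $ x = 1$ is already far smaller than what Euler's method gives at the same step size.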

A few things in the above discussion are worth noting. Firstly, we need the existence of the partial derivatives of $ f$ up to order $ 3$ for the R-K method of order $ 2$ . For higher order methods, we need $ f$ to be smoother. Secondly, we note that the local truncation error (in the R-K method of order $ 2$ ) is of order $ O(h^3)$ . Again, we remind the reader that the round-off error arising in implementation has not been considered. Also, in ([*]), the partial derivatives of $ f$ do not appear. In short, we are likely to get better accuracy with the Runge-Kutta method of order $ 2$ in comparison with the Euler's method. Formally, we state the Runge-Kutta method of order $ 2$ .
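Since the local truncation error is $ O(h^3)$ , the global error of the order-$ 2$ method behaves like $ O(h^2)$ , so halving $ h$ should cut the error by roughly a factor of $ 4$ . A small self-contained check (the test problem $ y^\prime = y, \; y(0) = 1$ is our own illustration, not from the text):

```python
import math

def rk2(f, a, b, y0, n):
    # One pass of the order-2 R-K scheme with n steps of size h = (b-a)/n.
    h = (b - a) / n
    x, y = a, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)
        y, x = y + 0.5 * (k1 + k2), x + h
    return y

# Global errors at step sizes h and h/2 for y' = y, y(0) = 1 (exact y(1) = e).
e_h  = abs(rk2(lambda x, y: y, 0.0, 1.0, 1.0, 50)  - math.e)
e_h2 = abs(rk2(lambda x, y: y, 0.0, 1.0, 1.0, 100) - math.e)
ratio = e_h / e_h2   # near 2**2 = 4 for a second-order method
```

The observed ratio close to $ 4$ is consistent with a global error of order $ O(h^2)$ , in contrast with the ratio of about $ 2$ one would see for Euler's method.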



A K Lal 2007-09-12