Error Estimates and Convergence

When we develop a method for computing approximate values of a quantity, it is desirable to know by how much the computed value deviates from the exact one. This difference between the exact and the computed value is usually called the error committed. An estimate of the error also indicates how good the computed approximation is. In general, such an estimate may not be available. Euler's method is one method for which an analysis of the error is possible, and this analysis is the main aim of this section. Note that we do not compute the truncation error in an actual calculation; rather, we derive a theoretical bound for it. Recall that

  1. $ a = x_0 < x_1 < x_2 < \cdots < x_n = b, \; x_i - x_{i-1} = h, \; i=1,2,\ldots, n $ and
  2. $ y_i$ is the approximate solution of the IVP $ y^\prime = f(x,y), \; y(a) = y_0$ obtained by Euler's method, namely

    $\displaystyle y_i = y_{i-1} + h f(x_{i-1}, y_{i-1}), \; 1 \le i \le n.$ (14.2.5)

Note here that $ y_i$ is the approximate value of $ y(x_i)$ for $ 1 \le i \le n$ . The quantity $ e_i = \vert y(x_i) - y_i\vert $ is the absolute deviation of $ y_i$ from $ y(x_i)$ , i.e., the absolute error committed at the $ i^{\mbox{th}}$ step; it is also called the discretization error. In this section, we examine the nature of $ e_i$ . It is desirable that $ e_i$ be ``small'' when the step size $ h$ is small. In this connection, we have the following result.
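For concreteness, the following short Python sketch computes the Euler iterates $ y_i$ and the errors $ e_i$ for a sample problem. The particular IVP ($ y^\prime = y, \; y(0) = 1$ on $ [0,1]$ , with exact solution $ e^x$ ) and the step size are assumptions made only for this illustration.

    import math

    # Illustrative IVP (an assumption for this sketch, not from the text):
    # y' = y, y(0) = 1 on [0, 1], whose exact solution is y(x) = e^x.
    f = lambda x, y: y

    a, b, y0, n = 0.0, 1.0, 1.0, 10
    h = (b - a) / n                  # uniform step size
    x, y = a, y0
    for i in range(1, n + 1):
        y = y + h * f(x, y)          # Euler step: y_i = y_{i-1} + h f(x_{i-1}, y_{i-1})
        x = a + i * h
        e = abs(math.exp(x) - y)     # absolute (discretization) error e_i
        print(f"i={i:2d}  x_i={x:.1f}  y_i={y:.6f}  e_i={e:.6f}")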

THEOREM 14.2.1   Consider the IVP

$\displaystyle y^\prime = f(x,y), \; y(a) = y_0.$ (14.2.6)

Let the exact solution $ y$ of (14.2.6) be a twice continuously differentiable function on $ [a, \; b]$ and, for $ 1 \le i \le n$ , let $ y_i$ be the approximate solution of (14.2.6) at $ x_i$ obtained by Euler's method (14.2.5). Further, suppose that there exist positive constants $ L$ and $ M$ such that $ \vert f_y(x,y) \vert \le L \;\; {\mbox{ and }} \;\; \vert y^{\prime\prime}(x) \vert \le M,$ for all $ x \in [a, \; b].$ Then

$\displaystyle \vert e_i\vert \le \frac{hM}{2 L} (e^{i h L} - 1).$ (14.2.7)

Proof. By Taylor's theorem, for each $ i, \; 0 \le i \le n-1,$ $ \; y(x_{i+1}) = y(x_i) + h y^\prime(x_i) + \displaystyle \frac{h^2}{2!} y^{\prime\prime}(c)$ for some $ c$ satisfying $ x_i < c < x_{i+1}$ . Also, by Euler's method, $ y_{i+1} = y_i + h f(x_i, y_i)$ . So,

$\displaystyle e_{i+1} = \vert y(x_{i+1}) - y_{i+1}\vert \le e_i + h \bigl\vert f(x_i, y(x_i)) - f(x_i, y_i)\bigr\vert + \frac{h^2}{2!} \vert y^{\prime\prime}(c)\vert.$ (14.2.8)

Again, by the mean value theorem, $ \vert f(x_i, y(x_i)) - f(x_i, y_i)\vert = \vert f_y(x_i, d) \bigl( y(x_i) - y_i \bigr)\vert = e_i \; \vert f_y(x_i, d)\vert,$ for some $ d$ lying between $ y_i$ and $ y(x_i)$ . Therefore, using the bounds given in the statement of the theorem, the bound (14.2.8) on $ e_{i+1}$ reduces to

$\displaystyle e_{i+1} \le \;\; e_i \; (1 + h L) + \frac{h^2}{2!} M.$ (14.2.9)

Using the principle of mathematical induction, it can be easily verified that if $ g_{i}$ is the solution of the difference equation $ g_{i} = (1 + h L) g_{i-1} + \displaystyle \frac{h^2}{2!} M \; {\mbox{ with }} \; g_0 = 0$ then
  1. $ g_i \ge e_i$ for $ i = 1, 2, \ldots, n$ and
  2. $ g_i = A \bigl((1+ h L)^i - 1\bigr) \le A(e^{ihL} - 1),$ where $ A = \displaystyle \frac{hM}{2L}$ .
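For completeness, here is a quick check of these two claims. The closed form in 2 satisfies the difference equation, since with $ A = \displaystyle \frac{hM}{2L}$ one has $ AhL = \displaystyle \frac{h^2}{2!}M$ and hence

$\displaystyle (1+hL)\, A \bigl((1+hL)^{i-1} - 1\bigr) + \frac{h^2}{2!}M = A(1+hL)^i - A(1+hL) + AhL = A\bigl((1+hL)^i - 1\bigr);$

the stated inequality then follows from $ 1 + hL \le e^{hL}$ , whence $ (1+hL)^i \le e^{ihL}$ . Claim 1 follows by induction from (14.2.9): $ e_0 = 0 = g_0$ , and if $ e_{i-1} \le g_{i-1}$ then $ e_i \le (1+hL) e_{i-1} + \frac{h^2}{2!}M \le (1+hL) g_{i-1} + \frac{h^2}{2!}M = g_i$ .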
Hence, the proof of the theorem is complete. $\blacksquare$
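As an illustration of the bound (14.2.7), consider the sample IVP used in the sketch above (an assumption for illustration only): $ y^\prime = y$ , $ y(0) = 1$ on $ [0, 1]$ . Here $ f_y(x,y) = 1$ , so we may take $ L = 1$ , and the exact solution $ y(x) = e^x$ gives $ \vert y^{\prime\prime}(x)\vert = e^x \le e = M$ on $ [0, 1]$ . The bound (14.2.7) then reads

$\displaystyle \vert e_i\vert \le \frac{he}{2} \bigl(e^{ih} - 1\bigr) \le \frac{he}{2}(e - 1) \approx 2.34\, h,$

so that, for instance, with $ h = 0.1$ every $ e_i$ is at most about $ 0.234$ .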

Remark 14.2.2   Inequality (14.2.9) shows that the error added in a single step is of order $ O(h^2)$ ; this per-step contribution is called the ``local error''. Since the error committed at one step propagates to the subsequent steps, the computed value $ y_n$ carries a cumulative error, called the ``global error''. Theorem 14.2.1 gives an upper bound for this accumulated error at each $ y_i$ ; in particular, taking $ i = n$ in (14.2.7) and noting that $ nh = b - a$ , the global error is at most $ \displaystyle \frac{hM}{2L} \bigl(e^{(b-a)L} - 1\bigr)$ , which is of order $ O(h)$ .
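The $ O(h)$ behaviour of the global error can be observed numerically. The sketch below (Python, using the same assumed sample problem $ y^\prime = y$ , $ y(0) = 1$ on $ [0,1]$ as above) halves the step size repeatedly; the error at $ x = 1$ is roughly halved each time.

    import math

    # Same illustrative IVP as before (an assumption for this sketch):
    # y' = y, y(0) = 1 on [0, 1], with exact solution y(x) = e^x.
    f = lambda x, y: y

    def global_error(n):
        # Absolute error at x = 1 after n Euler steps of size h = 1/n.
        h, y = 1.0 / n, 1.0
        for i in range(n):
            y = y + h * f(i * h, y)
        return abs(math.e - y)

    for n in (10, 20, 40, 80):
        print(f"n={n:3d}  h={1.0/n:.4f}  error={global_error(n):.6f}")
    # Halving h roughly halves the error, consistent with O(h).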

A K Lal 2007-09-12