
Numerical Errors:

Numerical errors arise during computations due to round-off errors and truncation errors.

Round-off Errors:

Round-off error occurs because computers use a fixed number of bits, and hence a fixed number of binary digits, to represent numbers. In a numerical computation round-off errors are introduced at every stage. Hence, though the individual round-off error at a given step may be small, the cumulative effect can be significant.

When a number requires more digits than are available, it is rounded to fit the available number of digits. This is done either by chopping or by symmetric rounding.

Chopping: Rounding a number by chopping amounts to dropping the extra digits; the given number is simply truncated. Suppose that we are using a computer with a fixed word length of four digits. Then the truncated representation of the number $ 72.32451$ will be $ 72.32$; the digits $ 451$ will be dropped. Now, to evaluate the error due to chopping, let us consider the normalized representation of the given number, i.e.

$ x=72.32451=0.7232451\times10^{2 }$

$ \qquad\qquad\qquad=(0.7232+0.0000451)\times10^{2}$

$ \qquad\qquad\qquad=(0.7232 + 0.451\times10^{-4})\times10^{2}$

chopping error in representing $ x$ is $ 0.451\times10^{2-4}=0.451\times10^{-2}$.

So in general, if $ x$ is the true value of a given number, $ f_{x}\times10^{E}$ is the normalized form of the rounded (chopped) number, and $ g_{x}\times10^{E-d}$ is the normalized form of the chopping error, then

$ x=f_{x}\times10^{E}+g_{x}\times 10^{E-d}\qquad(*)$

Since $ 0\leq \vert g_{x}\vert< 1$, the chopping error $ <10^{E-d}$.
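The chopping rule can be sketched in Python; `chop` is a hypothetical helper (not part of the text) that normalizes a positive number to $ f_{x}\times10^{E}$ and drops all mantissa digits beyond the $ d$-th:

```python
# Minimal sketch of chopping to a d-digit mantissa (positive x assumed).
import math

def chop(x, d):
    """Normalize x to f * 10**E with 0.1 <= f < 1, then drop digits beyond d."""
    E = math.floor(math.log10(x)) + 1   # decimal exponent of the normalized form
    f = x / 10**E                       # normalized mantissa
    return math.floor(f * 10**d) / 10**d * 10**E

x = 72.32451
xc = chop(x, 4)
print(xc, x - xc)   # about 72.32 and a chopping error near 0.451e-2
```

The error $ x - \mathrm{chop}(x, d)$ stays below $ 10^{E-d}$, matching the bound above.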

Symmetric Round-off Error :

In the symmetric round-off method the last retained significant digit is rounded up by 1 if the first discarded digit is greater than or equal to 5. In other words, if $ g_{x}$ in $ (*)$ is such that $ \vert g_{x}\vert\geq 0.5$ then the last digit in $ f_{x}$ is raised by 1 before chopping $ g_{x}\times10^{E-d}$. For example, let $ x=72.918671,\ \ y=18.63421$ be two given numbers to be rounded to five-digit numbers. The normalized forms of $ x$ and $ y$ are $ 0.7291867\times10^{2}$ and $ 0.1863421\times10^{2}$. On rounding these numbers to five digits we get $ 0.72919\times10^{2}$ and $ 0.18634\times10^{2}$ respectively. Now, with respect to $ (*)$,

$ \vert\text{rounding error}\vert=\vert g_{x}\vert\times10^{E-d} \quad \text{if} \quad g_{x}<0.5$

$ \qquad\qquad\qquad=\vert g_{x}-1\vert\times10^{E-d} \quad \text{if} \quad g_{x}\geq 0.5$

In either case error $ \leq 0.5\times10^{E-d}$.
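The rounding of the two example numbers, and the error bound $ 0.5\times10^{E-d}$, can be checked with a small sketch (`sym_round` is a hypothetical helper, not part of the text):

```python
# Sketch of symmetric rounding to a d-digit mantissa (positive x assumed).
# Note: Python's round() breaks exact .5 ties to even, which differs slightly
# from the "round half up" rule in the text; it does not matter for these values.
import math

def sym_round(x, d):
    E = math.floor(math.log10(x)) + 1   # decimal exponent of the normalized form
    f = x / 10**E                       # normalized mantissa
    return round(f * 10**d) / 10**d * 10**E

for x in (72.918671, 18.63421):
    r = sym_round(x, 5)
    E = math.floor(math.log10(x)) + 1
    print(x, "->", r)                       # 72.919 and 18.634, as in the text
    assert abs(x - r) <= 0.5 * 10**(E - 5)  # the bound 0.5 * 10^(E-d)
```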

Truncation Errors:

Often an approximation is used in place of an exact mathematical procedure. For instance, consider the Taylor series expansion of, say, $ \sin x$, i.e.

$ \sin x=x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\frac{x^{7}}{7!}+\cdots$

Practically we cannot use all of the infinitely many terms of the series for computing the sine of the angle $ x$. We usually terminate the process after a certain number of terms. The error that results from such a termination, or truncation, is called the truncation error.

Usually, in evaluating logarithms, exponentials, trigonometric functions, hyperbolic functions, etc., an infinite series of the form $ S=\sum\limits^{\infty}_{i=0}a_{i}x^{i}$ is replaced by the finite series $ \sum\limits^{n}_{i=0}a_{i}x^{i}$. Thus a truncation error of $ \sum\limits^{\infty}_{i=n+1} a_{i}x^{i}$ is introduced in the computation.
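As a sketch of this truncation, the following computes partial sums of the $ \sin x$ series (a hypothetical helper, assuming only the standard library) and the resulting truncation error:

```python
# Truncation error of the Taylor series for sin x when only the first
# n+1 nonzero terms are kept.
import math

def sin_partial(x, n):
    """Sum of the first n+1 nonzero terms of the sin x series."""
    return sum((-1)**i * x**(2 * i + 1) / math.factorial(2 * i + 1)
               for i in range(n + 1))

x = 0.5
for n in range(4):
    err = abs(math.sin(x) - sin_partial(x, n))  # truncation error
    print(n, err)  # the error shrinks rapidly as more terms are kept
```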

For example, let us consider the evaluation of the exponential function using the first three terms at $ x=0.2$:

$ e^{x}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\frac{x^{4}}{4!}+\frac{x^{5}}{5!}+\frac{x^{6}}{6!}+...$

$ e^{x}\simeq 1+x+\frac{x^{2}}{2!}$

$ e^{0.2}\simeq 1+0.2+\frac{0.04}{2}=1.22$

Truncation Error $ =\sum\limits^{\infty}_{i=3}\frac{x^{i}}{i!}=\frac{x^{3}}{3!}+\frac{x^{4}}{4!}+\frac{x^{5}}{5!}+\frac{x^{6}}{6!}+...$

$ =\frac{0.008}{6}+\frac{0.0016}{24}+...$

$ =0.0013\overline{3}+0.00006\overline{6}+....$

$ =0.13\overline{3}\times10^{-2}+0.006\overline{6}\times10^{-2}...$

$ \therefore \text{Truncation Error}\leq 10^{-2}$
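The worked example can be verified directly, with the exact value $ e^{0.2}$ standing in for the full series:

```python
# e^0.2 from the first three terms of the series, and the resulting
# truncation error (bounded by 10^-2 as derived above).
import math

x = 0.2
approx = 1 + x + x**2 / 2            # first three terms
trunc_err = math.exp(x) - approx     # tail of the series
print(approx, trunc_err)             # about 1.22 and 0.0014
```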

Some Fundamental definitions of Error Analysis:

Absolute and Relative Errors:

Absolute Error: Suppose that $ x_{t}$ and $ x_{a}$ denote the true and approximate values of a datum then the error incurred on approximating $ x_{t}$ by $ x_{a}$ is given by

$ e=x_{t}-x_{a}$

and the absolute error $ e_{a}$ i.e. magnitude of the error is given by

$ e_{a}=\vert x_{t}-x_{a}\vert$

Relative Error: Relative Error or normalized error $ e_{r}$ in representing a true datum $ x_{t}$ by an approximate value $ x_{a}$ is defined by

$ \displaystyle{e_{r}=\frac{\text{absolute error}}{\vert\text{true value}\vert}=\frac{\vert x_{t}-x_{a}\vert}{\vert x_{t}\vert}=\Big\vert 1-\frac{x_{a}}{x_{t}}\Big\vert}$

and $ \displaystyle{e_{r}\% = e_{r}\times100}.$

Sometimes $ e_{r}$ is defined by

$ e_{r}=\vert\frac{x_{t}-x_{a}}{x_{a}}\vert=\vert 1-\frac{x_{t}}{x_{a}}\vert$

Eg: If $ x_{t}=1434$ and $ x_{a}=1464$ then

$ e=x_{t}-x_{a}=1434-1464=-30$

$ e_{a}=\vert x_{t}-x_{a}\vert=\vert-30\vert=30$

$ e_{r}=\frac{e_{a}}{x_{t}}=\frac{30}{1434}\simeq 0.020921$

$ e_{r}\%=2.0921$
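The same worked example, as a tiny check in code:

```python
# Absolute and relative error for the example x_t = 1434, x_a = 1464.
x_t, x_a = 1434, 1464
e = x_t - x_a               # error
e_a = abs(e)                # absolute error
e_r = e_a / abs(x_t)        # relative error
print(e, e_a, e_r * 100)    # -30, 30, and about 2.0921 percent
```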

Machine Epsilon: Let us assume that we have a decimal computer system.

We know that we would encounter round-off error when a number is represented in floating-point form. The relative round-off error due to chopping is defined by

$ e_{r}=\big\vert\frac{g_{x}\times10^{E-d}}{f_{x}\times10^{E}}\big\vert$

Here we know that $ \vert g_{x}\vert<1.0$ and $ \vert f_{x}\vert\geq 0.1$.

$ \therefore e_{r}<\big\vert\frac{1.0\times10^{E-d}}{0.1\times10^{E}}\big\vert=10^{-d+1}$

i.e. the maximum relative round-off error due to chopping is given by $ 10 ^{-d+1}$. We know that the value of $ d$, i.e. the length of the mantissa, is machine dependent. Hence the maximum relative round-off error due to chopping is also known as the machine epsilon $ (\varepsilon _{chopping})$. Similarly, the maximum relative round-off error due to symmetric rounding is given by

$ e_{r}<\big\vert\frac{0.5\times10^{E-d}}{0.1\times10^{E}}\big\vert=0.5\times10^{-d+1}$

Machine-Epsilon $ (\varepsilon)$ for symmetric rounding is given by,

$ \varepsilon_{symmetric-rounding}=\frac{1}{2}\times10^{-d+1}$

It is important to note that the machine epsilon represents an upper bound for the round-off error due to floating-point representation.

For a computer system with binary representation the machine epsilon due to chopping and symmetric rounding are given by

$ \varepsilon_{Chopping}=2^{-d+1} \quad \text{and} \quad \varepsilon_{symmetric-rounding}=2^{-d}$

respectively.
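On a real binary machine this epsilon can be estimated empirically as the smallest power of 2 whose addition to 1.0 still changes the result. A sketch for IEEE 754 double precision, which has $ d=53$ mantissa bits, so the formula above gives $ \varepsilon_{Chopping}=2^{-52}$:

```python
# Empirical machine epsilon: halve eps until 1.0 + eps no longer
# differs from 1.0. For IEEE 754 doubles (d = 53 bits) this yields
# 2^(-d+1) = 2^-52, the chopping epsilon in the notation above.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)   # 2.220446049250313e-16, i.e. 2^-52
```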

Eg: Assume that our binary machine has a 24-bit mantissa. Then $ \varepsilon_{symmetric-rounding}=2^{-24}$. Say that our system can represent a $ q$-decimal-digit mantissa.

Then, $ 2^{-24}=\frac{1}{2}\times10^{-q+1}$

i.e

$ -23\log_{10}2 =-q+1$

$ q=23 \log_{10}2+1\approx7.9$

so that our machine can store numbers with about seven significant decimal digits.
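The arithmetic in this example is easy to check:

```python
# Checking q = 23 * log10(2) + 1 from the example above.
import math

q = 23 * math.log10(2) + 1
print(q)   # about 7.92, i.e. roughly seven significant decimal digits
```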
