
Approximations and Round-off Errors

Approximations and errors are an integral part of numerical methods. Before using a numerical method it is essential to know how errors arise, how they grow during the computation and how they affect the accuracy of a solution. Errors come in a variety of forms and sizes: broadly, they stem from the mathematical model, from uncertainty in the data, from the computing machine (round-off errors) and from the numerical method itself (truncation errors).

Further discussion will be focussed on the errors due to the computing machine and those due to the numerical method. First, the notion of significant digits is introduced.

Significant Digits

Usually, the numerical solution to a given problem is sought to a desired level of accuracy and precision, wherein the error is below a set tolerance level. The idea of significant digits is essential for understanding accuracy and precision in a solution, and for designating the reliability of a numerical value.
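To make the idea of a tolerance concrete, here is a minimal Python sketch (not part of the original text; the starting guess and tolerance are illustrative choices) that approximates $\sqrt{7}$ by Newton's method and stops once successive iterates agree to within the set tolerance:

    tol = 1e-3                        # set tolerance: stop when the change is below this
    x = 7.0                           # illustrative starting guess
    while True:
        x_new = 0.5 * (x + 7.0 / x)   # Newton step for f(x) = x^2 - 7
        if abs(x_new - x) < tol:      # successive iterates agree within tolerance
            break
        x = x_new
    print(x_new)                      # 2.645751..., correct to about three decimals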

The significant digits of a number are those that can be used with confidence. Suppose we seek a numerical solution to an accuracy of $10^{-3}$ and obtain the solution $y = 23.40657231$. The solution is then reliable only up to the first three decimal places, i.e. $y = 23.406$; in other words, it has five significant digits: $\underline{2\quad 3\quad 4\quad 0\quad 6}$. Some numbers, like $\pi$, $e$ and $\sqrt{7}$, have an infinite number of significant digits. For example,

$\pi = 3.14159265358979\ldots$
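In double-precision floating point, for instance, only about sixteen of those digits survive. A small Python check (a sketch, using only the standard library) makes this visible:

    import math

    print(math.pi)    # 3.141592653589793 : only ~16 significant digits survive
    # pi to more places is 3.14159265358979323846...; every digit beyond
    # what a 64-bit float can hold has already been rounded away.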

Such numbers can never be represented exactly on a computer, which operates with a fixed number of significant digits owing to hardware limitations. The omission of digits from such numbers results in what is called round-off error. Some rules of thumb for counting significant digits, within the desired level of accuracy, are:

(a) All non-zero digits are significant,

(b) All zeros occurring between non-zero digits are significant,

(c) Trailing zeros following a decimal point are significant (e.g. $4.00$, $0.250$ and $23.0$ each have three significant digits),

(d) Zeros between the decimal point and the first non-zero digit are not significant. For example $0.0002341\,(= 2341\times 10^{-7})$, $0.002341\,(= 2341\times 10^{-6})$ and $0.02341\,(= 2341\times 10^{-5})$ all have four significant digits.

(e) Trailing zeros in large numbers written without a decimal point are not significant. For instance $54000$ may be written in scientific notation as $54 \times 10^{3}$ and contains only two significant digits.
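The rules (a)-(e) above can be applied mechanically. The following Python sketch (the function name and the string-based input convention are our own assumptions, not from the text) counts the significant digits of a number written as a plain decimal string:

    def significant_digits(s: str) -> int:
        s = s.lstrip('+-')
        if '.' in s:
            digits = s.replace('.', '').lstrip('0')   # rule (d): leading zeros dropped
            return len(digits)                        # rule (c): trailing zeros kept
        stripped = s.strip('0')                       # rule (e): trailing zeros dropped
        return len(stripped)                          # rules (a), (b): the rest all count

    print(significant_digits("0.002341"))   # 4  (rule d)
    print(significant_digits("4.00"))       # 3  (rule c)
    print(significant_digits("54000"))      # 2  (rule e)
    print(significant_digits("1002"))       # 4  (rules a, b)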

The concepts of accuracy and precision are closely related to significant digits. Accuracy refers to the number of significant digits in a value; for example, the number $23.406$ above is accurate to five significant digits. Precision refers to the number of decimal positions, i.e. the order of magnitude of the last digit in a value; the number $23.406$ has a precision of $10^{-3}$, or $0.001$.
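The distinction shows up directly in how a value is formatted. A short Python sketch (the variable names are illustrative; note that formatting rounds, hence $23.407$ rather than the truncated $23.406$):

    y = 23.40657231
    print(f"{y:.5g}")    # 23.407   : five significant digits (accuracy)
    print(f"{y:.3f}")    # 23.407   : three decimal places, precision 10^-3
    # For this y the two coincide; for a larger number they differ:
    z = 1234.56789
    print(f"{z:.5g}")    # 1234.6   : five significant digits, but precision only 10^-1
    print(f"{z:.3f}")    # 1234.568 : precision 10^-3, but seven significant digits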
