
Computer Representation of Numbers

Computers are designed to use binary digits (bits) to represent numbers and other information. The computer memory is organized into strings of bits of equal length, called words. Decimal numbers are first converted into their binary equivalents and then represented in either integer or floating point form.

Integer Representation

The largest decimal number that can be represented in binary form in a computer depends on its word length. An n-bit word computer can handle a number as large as $ 2^{n}-1$. For instance, a 16-bit word machine can represent numbers as large as $ 2^{16}-1 = 65535$. How do we represent negative numbers? Negative numbers are stored using the $ 2$'s complement. This is obtained by taking the $ 1$'s complement of the binary representation of the positive number and then adding $ 1$ to it.

For example, let us represent $ -17$ in binary form.

$\displaystyle (17)_{10} = (10001)_{2}$

$\displaystyle +17 = 010001 \qquad \text{(an extra sign bit 0 appended)}$

$\displaystyle \text{1's complement:}\quad 101110$

$\displaystyle \text{add } 1:\quad 101110 + 000001 = 101111$

$\displaystyle \therefore \quad -17 = 101111$

Here an extra zero appended to the left of the binary number indicates that it is positive. If this extra leftmost binary digit is set to $ 1$, it indicates that the binary number is negative. So the general convention for storing signed numbers is to append a binary digit $ 0$ or $ 1$ to the left of the binary number depending on the positive or negative sign of the number. Since one bit is reserved for the sign, an n-bit word computer can use at most $ (n-1)$ bits to store the magnitude of a signed number. So the largest signed number a 16-bit word can represent is $ 2^{15}-1= 32767 $. On this machine, since zero is defined as $ 0000000000000000$, it is redundant to use the pattern $ 1000000000000000$ to define a "minus zero". It is usually employed to represent an additional negative number, i.e. $ -32768$, and hence the range of signed numbers that can be represented on a 16-bit word machine is from $ -32768$ to $ +32767$.
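The two's complement rule and the signed range are easy to verify with a short program. Below is a minimal Python sketch (the helper name twos_complement is my own, not from these notes) that reproduces the bit pattern of $ -17$ in a 6-bit word and the range of a 16-bit word.

def twos_complement(value, bits):
    """Return the `bits`-bit two's complement bit pattern of `value`."""
    if value < 0:
        value += 1 << bits        # equivalent to 1's complement + 1
    return format(value, "0%db" % bits)

print(twos_complement(-17, 6))             # 101111, as derived above
print(twos_complement(+17, 6))             # 010001

n = 16
print(-(2 ** (n - 1)), 2 ** (n - 1) - 1)   # -32768 32767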

Floating Point Representation

Fractional numbers such as $ 0.00695$ and large numbers like $ 98765432.1$, which fall outside the range of an n-bit word machine (say, for instance, a 16-bit word machine), are stored and processed in exponential form. In exponential form these numbers have an embedded decimal point and are called floating point numbers or real numbers. The floating point representation of a real number is $ M\times10^{E}$, where $ M$ is called the mantissa and $ E$ is the exponent. So the floating point representation of the fractional number $ 0.00695$ is $ 0.695\times10^{-2}$ and that of the large number $ 98765432.1$ is $ 0.987654321\times10^{8}$.

Typically, computers use a 32-bit representation for a floating point number. The leftmost bit is reserved for the sign. The next seven bits are reserved for the exponent and the last twenty-four bits are used for the mantissa.
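As a check, one can inspect the actual bit pattern of a 32-bit float. The Python sketch below uses the standard struct module and assumes the machine follows IEEE 754 single precision, which splits the 32 bits slightly differently from the layout above (1 sign bit, 8 exponent bits, 23 mantissa bits).

import struct

# Pack -684.6 as a 32-bit float, then reinterpret the 4 bytes as an
# unsigned integer to read off the raw bit pattern.
bits = struct.unpack(">I", struct.pack(">f", -684.6))[0]
pattern = format(bits, "032b")
print(pattern[0], pattern[1:9], pattern[9:])  # sign | exponent | mantissa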

The shifting of the decimal point to the left of the most significant digit is called normalization, and numbers represented in normalized form are known as normalized floating point numbers.

For example, the normalized floating point forms of the numbers $ 0.00695$, $ 56.2547$, $ -684.6$ are:

0.00695 = $ 0.695\times10^{-2}$ = .695E-2
56.2547 = $ 0.562547\times10^{2}$ = .562547E2
-684.6 = $ -0.6846\times10^{3}$ = -.6846E3
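A short sketch of this normalization in Python, assuming base 10 and the convention $ 0.1 \le \vert M\vert < 1$ used above (the function name normalize is illustrative, not a standard library routine):

def normalize(x):
    """Return (M, E) with x = M * 10**E and 0.1 <= |M| < 1."""
    if x == 0:
        return 0.0, 0
    m, e = abs(x), 0
    while m >= 1.0:      # shift the decimal point left
        m /= 10.0
        e += 1
    while m < 0.1:       # shift the decimal point right
        m *= 10.0
        e -= 1
    return (m if x > 0 else -m), e

print(normalize(0.00695))   # (0.695, -2)   -> .695E-2
print(normalize(56.2547))   # (0.562547, 2) -> .562547E2
print(normalize(-684.6))    # (-0.6846, 3)  -> -.6846E3

The printed mantissas may show tiny deviations (e.g. 0.6950000000000001); this is precisely the binary round-off discussed under Conversion Errors below.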

Inherent Errors

Inherent errors arise due to data errors or conversion errors.

Data Errors

If the data supplied for a problem is obtained from some experiment or measurement, then it is prone to errors due to limitations in instrumentation or reading. Such errors are also referred to as empirical errors. So when the data supplied is correct, say, only to two decimals, there is no use performing arithmetic accurate to four decimals!

Conversion Errors

Conversion errors arise due to the limitation on the number of bits used for representing numbers under both integer and floating point representation. Hence it is also called representation error. The digits that are not retained constitute the round-off error.

For example, consider the case of representing the decimal number $ (0.1)_{10}$ in a computer. The binary equivalent of $ (0.1)_{10}$ has the non-terminating form $ 0.000110011\overline{0011}\ldots$, but the computer has only a limited number of bits. If we add ten such numbers in a computer, the result will not be exactly $ 1$, due to the round-off error during the conversion of $ (0.1)_{10}$ to binary form.
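This is easy to demonstrate with a minimal Python check:

total = 0.0
for _ in range(10):
    total += 0.1          # each 0.1 is already slightly off in binary

print(total)              # 0.9999999999999999, not 1.0
print(total == 1.0)       # False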
