In all definitions of computational complexity we assume the input string x is represented using some reasonable encoding scheme.
Input size will usually refer to the number of components of an instance. For example, when we consider the problem of sorting, the input size usually refers to the number of data items to be sorted, ignoring the fact that each item takes more than one bit to represent on a computer.
But when we talk about primality testing, i.e., testing whether a given integer n is prime or composite, the simple algorithm that tries all factors 2, 3, ..., ⌊√n⌋ is considered exponential, since the input size β(n) is ⌈log₂ n⌉ bits while the time complexity is O(√n), i.e., O(2^(β/2)).
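The trial-division test described above can be sketched as follows; the loop runs up to ⌊√n⌋ times, which is O(2^(β/2)) for a β-bit input:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: try all factors 2, 3, ..., floor(sqrt(n)).

    Runs O(sqrt(n)) divisions -- exponential in the bit length of n.
    """
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```

For a 1024-bit n this loop would take on the order of 2^512 steps, which is why trial division is useless for cryptographic-size numbers.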
On the other hand, if n is represented in unary, the same algorithm would be considered polynomial. For the number-theoretic algorithms used in cryptography we usually deal with large-precision numbers, so when analyzing the time complexity of an algorithm we take the size of the operands under binary encoding as the input size. We will analyze most of our programs by estimating the number of arithmetic operations as a function of the input size β. To convert this complexity into a number of bit operations we must account for the bit complexities of addition, subtraction, multiplication and division.
Addition & subtraction:
Clearly addition and subtraction of two β bit numbers can be carried out using O(β) bit operations.
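The O(β) bound follows from the schoolbook method: one pass over the bits with a single carry. A minimal sketch, operating on little-endian bit lists (the representation here is illustrative, not prescribed by the text):

```python
def add_bits(x: list[int], y: list[int]) -> list[int]:
    """Add two beta-bit numbers given as little-endian bit lists.

    One pass with a carry bit -> O(beta) bit operations.
    """
    carry = 0
    out = []
    for xi, yi in zip(x, y):
        s = xi + yi + carry   # s is 0, 1, 2, or 3
        out.append(s & 1)     # low bit is the result bit
        carry = s >> 1        # high bit propagates as carry
    out.append(carry)         # possible final carry-out
    return out
```

Subtraction is analogous, with a borrow bit in place of the carry, and is likewise O(β).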
Multiplication:
Let X and Y be two β-bit numbers, and split each into a high and a low half:

    X = 2^(β/2) X1 + X2
    Y = 2^(β/2) Y1 + Y2

Then

    X × Y = 2^β X1Y1 + 2^(β/2) (X1Y2 + X2Y1) + X2Y2.

Thus the time complexity of this multiplication satisfies the recurrence

    T(β) = 4T(β/2) + cβ,

where the four recursive multiplications compute X1Y1, X1Y2, X2Y1 and X2Y2, and the cβ term accounts for the additions and shifts. This recurrence solves to T(β) = O(β²), no better than the schoolbook method.
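The four-multiplication divide-and-conquer scheme above can be sketched directly; here β is assumed to be a power of two, and the halves are extracted with shifts and a mask:

```python
def dc_multiply(x: int, y: int, beta: int) -> int:
    """Multiply two beta-bit numbers by the 4-multiplication recurrence.

    Implements X*Y = 2^beta X1Y1 + 2^(beta/2)(X1Y2 + X2Y1) + X2Y2,
    giving T(beta) = 4T(beta/2) + c*beta = O(beta^2).
    """
    if beta <= 1:
        return x * y               # single-bit base case
    half = beta // 2
    mask = (1 << half) - 1
    x1, x2 = x >> half, x & mask   # X = 2^(beta/2) X1 + X2
    y1, y2 = y >> half, y & mask   # Y = 2^(beta/2) Y1 + Y2
    a = dc_multiply(x1, y1, half)  # X1*Y1
    b = dc_multiply(x1, y2, half)  # X1*Y2
    c = dc_multiply(x2, y1, half)  # X2*Y1
    d = dc_multiply(x2, y2, half)  # X2*Y2
    return (a << beta) + ((b + c) << half) + d
```

Karatsuba's trick replaces the four recursive multiplications with three, computing X1Y2 + X2Y1 as (X1 + X2)(Y1 + Y2) − X1Y1 − X2Y2, which improves the recurrence to T(β) = 3T(β/2) + cβ = O(β^1.585).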
