Let us suppose that the given data points ![$ i=0,1,2...n$](img7.png) come from a function ![$ f(x)$](img8.png). Let us assume that this function ![$ y=f(x)$](img9.png) takes the values ![$ y_{0},y_{1}.......y_{n}$](img10.png) at ![$ x_{0},x_{1},.......x_{n}.$](img11.png) Since there are ![$ (n+1)$](img12.png) data points ![$ (x_{i},y_{i}),$](img6.png) we can represent the function ![$ f(x)$](img8.png) by a polynomial of degree at most n.
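As a concrete illustration, the unique polynomial of degree at most n through (n+1) points can be evaluated directly in the Lagrange form. The following is a minimal Python sketch; the function name and sample data are illustrative assumptions, not from the text:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the degree-<=n polynomial interpolating (xs[i], ys[i]) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Lagrange basis L_i(x): equals 1 at xs[i], 0 at every other node
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# Example: f(x) = x^2 sampled at three nodes is reproduced exactly
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
print(lagrange_eval(xs, ys, 1.5))  # 2.25
```

Note that every basis term depends on all the nodes, which is the source of the drawback discussed below.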
Note: Given a set of data points ![$ (x_{i},y_{i})\qquad i=1,...n$](img105.png)
. Suppose we are interested in evaluating
![$ f(x)$](img8.png)
at some
intermediate point
![$ x$](img2.png)
to a desired level of accuracy. Directly using the entire data set of size n may not only be computationally uneconomical but may also turn out to be redundant. Naturally, one would like to use an interpolating polynomial of appropriate degree. Since this degree is not known a priori, one may start with
![$ p_{0}(x)$](img106.png) and, if it is not accurate enough, move on to ![$ p_{1}(x)$](img107.png) and so on, i.e., gradually increase the number of interpolating points (or data points) ![$ x_{0},x_{1}..x_{k}$](img108.png) so that
![$ p_{k-1}(x)$](img109.png)
will be
close to
![$ f(x)$](img8.png)
. In this context, the biggest disadvantage of Lagrange interpolation is that we cannot reuse the work that has already been done, i.e., we cannot make use of ![$ p_{k-1}(x)$](img109.png) while evaluating ![$ p_{k}(x)$](img110.png). With the addition of each new data point, all the calculations have to be repeated. The Newton interpolation polynomial overcomes this drawback.
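The reuse that Newton's form permits can be sketched in Python. The divided-difference routine and names below are illustrative assumptions, not from the text; the key point is that when a node is added, the previously computed coefficients are unchanged and only one new coefficient must be computed:

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients c_0..c_n of the Newton form."""
    coeffs = list(ys)
    n = len(xs)
    # Build divided differences in place, one order per pass
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - level])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate p(x) = c_0 + c_1(x - x_0) + ... by nested (Horner-like) multiplication."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]   # samples of f(x) = x^2
c = newton_coeffs(xs, ys)
print(newton_eval(xs, c, 1.5))  # 2.25

# Adding a new data point leaves the earlier coefficients untouched:
c_new = newton_coeffs(xs + [3.0], ys + [9.0])
print(c_new[:3] == c)  # True: only the new highest coefficient is computed
```

This is exactly the incremental behavior described above: ![$ p_{k-1}(x)$](img109.png) is reused wholesale when forming ![$ p_{k}(x)$](img110.png).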