where the operator $L$ is linear and involves derivatives with respect to $x$ only.
The difference formulae involving two adjacent time levels are obtained from the Taylor expansion
$$u(x, t+k) = \left(1 + k\frac{\partial}{\partial t} + \frac{k^2}{2!}\frac{\partial^2}{\partial t^2} + \cdots\right)u(x,t) = \exp\left(k\frac{\partial}{\partial t}\right)u(x,t).$$
If we now put $x = mh$ and $t = nk$, and use equation (1) to replace $\partial/\partial t$ by $L$, then
$$u_{m,n+1} = \exp(kL)\,u_{m,n}. \qquad (2)$$
Now an exact formula connecting $D \equiv \partial/\partial x$ and $\delta_x$, the central difference operator in the x-direction, is
$$hD = 2\sinh^{-1}\frac{\delta_x}{2}. \qquad (3)$$
If equation (3) is used to eliminate $D$ in terms of $\delta_x$ in equation (2), the exact difference replacement is given by
$$u_{m,n+1} = \exp\left[kL\!\left(\frac{2}{h}\sinh^{-1}\frac{\delta_x}{2}\right)\right]u_{m,n}, \qquad (4)$$
where $L(D)$ denotes the operator $L$ regarded as a function of $D$.
All difference formulae in common use
for solving equation (1) are approximations of equation (4).
Explicit Formulae
An explicit formula involves only one grid point at the advanced time level $t = (n+1)k$. Consider the heat equation given by
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}. \qquad (5)$$
Here $L = D^2$ and equation (2) becomes
$$u_{m,n+1} = \exp\left(kD^2\right)u_{m,n}, \qquad (6)$$
and from equation (3), we have
$$kD^2 = \frac{4k}{h^2}\left(\sinh^{-1}\frac{\delta_x}{2}\right)^2. \qquad (7)$$
Substituting this value of $kD^2$ in equation (6) followed by expansion leads to
$$u_{m,n+1} = \left[1 + r\delta_x^2 + \left(\frac{r^2}{2} - \frac{r}{12}\right)\delta_x^4 + \cdots\right]u_{m,n}, \qquad (8)$$
where $r = k/h^2$ is the mesh ratio. From equation (8), if we retain only second order central differences, the forward difference formula
$$u_{m,n+1} = \left(1 + r\delta_x^2\right)u_{m,n} \qquad (9)$$
is obtained which, on substitution for $\delta_x^2 u_{m,n} = u_{m+1,n} - 2u_{m,n} + u_{m-1,n}$, leads to
$$U_{m,n+1} = rU_{m-1,n} + (1-2r)U_{m,n} + rU_{m+1,n}, \qquad (10)$$
where $U_{m,n}$ is an approximation to $u(mh, nk)$.
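As an illustration (a sketch, not part of the original notes), formula (10) can be marched directly in code. The model problem $u(x,0) = \sin\pi x$ with $u = 0$ at both ends is an assumed test case, chosen because the exact solution $e^{-\pi^2 t}\sin\pi x$ is known:

```python
import numpy as np

def explicit_heat_step(U, r):
    """One step of formula (10): U_{m,n+1} = r U_{m-1,n} + (1-2r) U_{m,n} + r U_{m+1,n}.
    The boundary values U[0] and U[-1] are held fixed (here, zero)."""
    V = U.copy()
    V[1:-1] = r * U[:-2] + (1 - 2 * r) * U[1:-1] + r * U[2:]
    return V

# Assumed model problem: heat equation on 0 <= x <= 1, u(x,0) = sin(pi x).
M, r = 20, 0.4                  # mesh ratio r = k/h^2
h = 1.0 / M
k = r * h**2
x = np.linspace(0.0, 1.0, M + 1)
U = np.sin(np.pi * x)
for n in range(50):
    U = explicit_heat_step(U, r)
exact = np.exp(-np.pi**2 * 50 * k) * np.sin(np.pi * x)
err = np.max(np.abs(U - exact))
```

Taking $r = 0.4 \le \frac{1}{2}$ keeps every coefficient in (10) non-negative; larger mesh ratios make the computed values oscillate and grow.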
Truncation Error:
Let us investigate the local accuracy of the finite difference formula (10). Introduce the difference between the exact solutions of the differential and difference equations at the grid point $(mh, nk)$ as
$$z_{m,n} = u_{m,n} - U_{m,n}. \qquad (11)$$
Using Taylor's theorem,
$$u_{m\pm 1,n} = u_{m,n} \pm h\frac{\partial u}{\partial x} + \frac{h^2}{2}\frac{\partial^2 u}{\partial x^2} \pm \frac{h^3}{6}\frac{\partial^3 u}{\partial x^3} + \cdots,$$
and so
$$u_{m,n+1} - ru_{m-1,n} - (1-2r)u_{m,n} - ru_{m+1,n} = k\left(\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2}\right) + k\left(\frac{k}{2}\frac{\partial^2 u}{\partial t^2} - \frac{h^2}{12}\frac{\partial^4 u}{\partial x^4}\right) + \cdots. \qquad (12)$$
From equations (5), (10), (11) and (12), the result
$$z_{m,n+1} = rz_{m-1,n} + (1-2r)z_{m,n} + rz_{m+1,n} + kT_{m,n}$$
is obtained. The quantity
$$T_{m,n} = \frac{1}{k}\left[u_{m,n+1} - ru_{m-1,n} - (1-2r)u_{m,n} - ru_{m+1,n}\right] \qquad (13)$$
is defined as the local truncation error of formula (10), and the principal part of the truncation error is
$$\frac{k}{2}\frac{\partial^2 u}{\partial t^2} - \frac{h^2}{12}\frac{\partial^4 u}{\partial x^4}. \qquad (14)$$
Implicit Formulae:
An implicit formula involves more than one grid point at the advanced time level $t = (n+1)k$. These formulae can often be obtained from equation (2) written in the central form
$$\exp\left(-\tfrac{1}{2}kL\right)u_{m,n+1} = \exp\left(\tfrac{1}{2}kL\right)u_{m,n}. \qquad (15)$$
For the heat equation (5), $L = D^2$ and equation (15) becomes
$$\exp\left(-\tfrac{1}{2}kD^2\right)u_{m,n+1} = \exp\left(\tfrac{1}{2}kD^2\right)u_{m,n}. \qquad (16)$$
Correct to second differences, equation (7) gives $kD^2 = r\delta_x^2$, and substitution in equation (16) followed by expansion leads to the central difference formula
$$\left(1 - \tfrac{1}{2}r\delta_x^2\right)u_{m,n+1} = \left(1 + \tfrac{1}{2}r\delta_x^2\right)u_{m,n}, \qquad (17)$$
with a principal truncation error of order $O\left(k^2 + h^2\right)$. This is the Crank-Nicolson formula and may be written in the form
$$-rU_{m-1,n+1} + (2+2r)U_{m,n+1} - rU_{m+1,n+1} = rU_{m-1,n} + (2-2r)U_{m,n} + rU_{m+1,n}. \qquad (18)$$
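A sketch of how the Crank-Nicolson formula (18) is used in practice (this example is not from the original text; a dense linear solve is used purely for clarity, and the model problem $u(x,0)=\sin\pi x$ with zero boundary values is an assumption):

```python
import numpy as np

def crank_nicolson_step(U, r):
    """One step of formula (18), written as the linear system A U^{n+1} = B U^n
    for the interior unknowns (zero boundary values assumed).
    A dense solve is used here purely for clarity."""
    m = len(U) - 2                      # number of interior points
    A = (np.diag((2 + 2 * r) * np.ones(m))
         + np.diag(-r * np.ones(m - 1), 1)
         + np.diag(-r * np.ones(m - 1), -1))
    B = (np.diag((2 - 2 * r) * np.ones(m))
         + np.diag(r * np.ones(m - 1), 1)
         + np.diag(r * np.ones(m - 1), -1))
    V = U.copy()
    V[1:-1] = np.linalg.solve(A, B @ U[1:-1])
    return V

# Assumed model problem, with r deliberately larger than 1/2.
M, r = 20, 2.0
h = 1.0 / M
k = r * h**2
x = np.linspace(0.0, 1.0, M + 1)
U = np.sin(np.pi * x)
for n in range(10):
    U = crank_nicolson_step(U, r)
exact = np.exp(-np.pi**2 * 10 * k) * np.sin(np.pi * x)
err = np.max(np.abs(U - exact))
```

Here $r = 2$ exceeds $\frac{1}{2}$, yet the computed solution remains accurate: the implicit formula is not restricted by the mesh ratio.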
Solution of Tridiagonal Systems:
The implicit difference formula given above involves three unknown values of U at the advanced time level $t = (n+1)k$. The system of linear algebraic equations arising from the implicit difference formulae, which must be solved at each time step, is a special case of the tridiagonal system
$$-a_m U_{m-1} + b_m U_m - c_m U_{m+1} = d_m \quad \text{for } m = 1, 2, \ldots, M-1,$$
where $U_0$ and $U_M$ are known from the boundary conditions. If
$$a_m, b_m, c_m > 0 \quad \text{and} \quad b_m \ge a_m + c_m,$$
a highly efficient method is known for solving the tridiagonal system. The method is given as follows: consider the difference relation
$$U_m = w_m U_{m+1} + g_m \quad \text{for } m = 0, 1, \ldots, M-1,$$
from which it follows that
$$U_{m-1} = w_{m-1}U_m + g_{m-1}.$$
If this is used to eliminate $U_{m-1}$ from the original difference formula defining the tridiagonal system, the result
$$\left(b_m - a_m w_{m-1}\right)U_m - c_m U_{m+1} = d_m + a_m g_{m-1}$$
is obtained, and so
$$w_m = \frac{c_m}{b_m - a_m w_{m-1}}, \qquad g_m = \frac{d_m + a_m g_{m-1}}{b_m - a_m w_{m-1}}.$$
If $U_0$ is known, then
$$w_0 = 0, \qquad g_0 = U_0,$$
in order that the difference relation $U_0 = w_0 U_1 + g_0$ holds for any $U_1$. The remaining coefficients can now be computed in the order
$$w_1, g_1; \quad w_2, g_2; \quad \ldots; \quad w_{M-1}, g_{M-1}.$$
If $U_M$ is known, then $U_{M-1}, U_{M-2}, \ldots, U_1$ are computed as
$$U_{M-1} = w_{M-1}U_M + g_{M-1}, \quad U_{M-2} = w_{M-2}U_{M-1} + g_{M-2}, \quad \ldots, \quad U_1 = w_1 U_2 + g_1.$$
In using this method, substantial errors will appear in the computed values of $w_m$, $g_m$ and $U_m$ unless $|w_m| \le 1$. Now
$$|w_1| = \frac{c_1}{b_1 - a_1 w_0} = \frac{c_1}{b_1} \le 1,$$
and so on, since $b_m \ge a_m + c_m$. This leads to
$$|w_m| \le 1 \quad \text{for } m = 1, 2, \ldots, M-1.$$
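The recurrences for $w_m$ and $g_m$ and the back substitution translate directly into code. A minimal sketch, assuming the system is supplied as coefficient arrays $a_m, b_m, c_m, d_m$ for $m = 1, \ldots, M-1$ together with the known boundary values:

```python
import numpy as np

def solve_tridiagonal(a, b, c, d, U0, UM):
    """Solve -a_m U_{m-1} + b_m U_m - c_m U_{m+1} = d_m, m = 1..M-1,
    with known boundary values U0 and UM, by the forward recurrence for
    w_m, g_m and the back substitution U_m = w_m U_{m+1} + g_m."""
    M = len(d) + 1                      # a, b, c, d hold entries for m = 1..M-1
    w = np.zeros(M)
    g = np.zeros(M)
    w[0], g[0] = 0.0, U0                # so that U_0 = w_0 U_1 + g_0
    for m in range(1, M):
        denom = b[m - 1] - a[m - 1] * w[m - 1]
        w[m] = c[m - 1] / denom
        g[m] = (d[m - 1] + a[m - 1] * g[m - 1]) / denom
    U = np.zeros(M + 1)
    U[0], U[M] = U0, UM
    for m in range(M - 1, 0, -1):       # back substitution
        U[m] = w[m] * U[m + 1] + g[m]
    return U

# Usage sketch: a small diagonally dominant system with a known answer.
M = 5
a = np.ones(M - 1)
b = 4.0 * np.ones(M - 1)
c = np.ones(M - 1)
U_true = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
d = np.array([-a[i] * U_true[i] + b[i] * U_true[i + 1] - c[i] * U_true[i + 2]
              for i in range(M - 1)])
U = solve_tridiagonal(a, b, c, d, U_true[0], U_true[-1])
```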
Convergence:
The problem of convergence of a finite difference method for solving equation (1) consists of finding the conditions under which $z_{m,n} = u_{m,n} - U_{m,n}$, the difference between the exact solutions of the differential and difference equations at a fixed point $(x,t)$, tends to zero uniformly as the net is refined in such a way that $h \to 0$ and $k \to 0$, with $x = mh$ and $t = nk$ remaining fixed. The fixed point $(x,t)$ is anywhere within the region under consideration, and it is sometimes convenient in the convergence analysis to assume that $h$ and $k$ do not tend to zero independently but according to some relationship like
$$k = rh^2, \qquad (19)$$
where $r$ is a constant.
As an example of a convergence analysis for difference formula (10), we introduce $z_{m,n} = u_{m,n} - U_{m,n}$, the difference between the theoretical (exact) solutions of the differential and difference equations at the grid point $x = mh$, $t = nk$. From equation (12), this satisfies the equation
$$z_{m,n+1} = rz_{m-1,n} + (1-2r)z_{m,n} + rz_{m+1,n} + kT_{m,n}. \qquad (20)$$
If
$$r \le \tfrac{1}{2}, \qquad (21)$$
the coefficients on the right hand side of equation (20) are all non-negative and so
$$|z_{m,n+1}| \le Z_n + kA\left(k + h^2\right),$$
where $A$ depends on the upper bounds for $\left|\partial^2 u/\partial t^2\right|$ and $\left|\partial^4 u/\partial x^4\right|$, and $Z_n$ is the maximum modulus value of $z_{m,n}$ over the required range of $m$. Thus
$$Z_{n+1} \le Z_0 + (n+1)kA\left(k + h^2\right),$$
and so if $Z_0 = 0$ (the same initial data for differential and difference equations),
$$Z_n \le tA\left(k + h^2\right) \to 0$$
as $h, k \to 0$ for fixed $t = nk$. This establishes convergence if condition (21) is satisfied.
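A numerical illustration of this convergence result (a sketch, not part of the original text): refining the net with $k = rh^2$ as in relation (19), and using the assumed model problem $u(x,0)=\sin\pi x$ with zero boundary values, the maximum error at a fixed time decreases like $h^2$:

```python
import numpy as np

def max_error(M, r, t_final=0.1):
    """Maximum |z| = |u - U| at t = t_final for scheme (10) applied to
    u_t = u_xx, u(x,0) = sin(pi x), refined with k = r h^2."""
    h = 1.0 / M
    k = r * h**2
    nsteps = int(round(t_final / k))
    x = np.linspace(0.0, 1.0, M + 1)
    U = np.sin(np.pi * x)
    for _ in range(nsteps):
        V = U.copy()
        V[1:-1] = r * U[:-2] + (1 - 2 * r) * U[1:-1] + r * U[2:]
        U = V
    exact = np.exp(-np.pi**2 * nsteps * k) * np.sin(np.pi * x)
    return np.max(np.abs(U - exact))

# Halving h (with r fixed) should divide the error by roughly four.
errors = [max_error(M, r=0.4) for M in (10, 20, 40)]
```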
Stability:
The problem of stability of a finite difference scheme for solving equation (1) consists of finding conditions under which $z_{m,n}$, the difference between the theoretical and numerical solutions of the difference equation, remains bounded as $n$ increases, $k$ remaining fixed. There are two methods which are commonly used for examining the stability of a finite difference scheme.
The Von Neumann Method:
In this method, a harmonic decomposition is made of the error $Z$ at the grid points at a given time level, leading to the error function
$$E(x) = \sum_j A_j e^{i\beta_j x},$$
where in general the frequencies $\beta_j$ and amplitudes $A_j$ are arbitrary. It is necessary to consider only the single term $e^{i\beta x}$, where $\beta$ is any real number. For convenience, suppose that the time level being considered corresponds to $t = 0$. To investigate the error propagation as $t$ increases, it is necessary to find a solution of the finite difference equation which reduces to $e^{i\beta x}$ when $t = 0$. Let such a solution be
$$e^{\alpha t}e^{i\beta x},$$
where $\alpha = \alpha(\beta)$ is, in general, complex. The original error component will not grow with time if
$$\left|e^{\alpha k}\right| \le 1$$
for all $\alpha$. This is the Von Neumann criterion for stability. As an example, let us examine the stability of finite difference scheme (10). Since $e^{\alpha t}e^{i\beta x}$ satisfies the original difference equation, we get
$$e^{\alpha(t+k)}e^{i\beta x} = re^{\alpha t}e^{i\beta(x-h)} + (1-2r)e^{\alpha t}e^{i\beta x} + re^{\alpha t}e^{i\beta(x+h)}. \qquad (22)$$
Let $\xi = e^{\alpha k}$, the factor by which the error component is multiplied in one time step. Then equation (22) gives
$$\xi\, e^{\alpha t}e^{i\beta x} = \left[1 - 2r\left(1 - \cos\beta h\right)\right]e^{\alpha t}e^{i\beta x}.$$
Cancelling $e^{\alpha t}e^{i\beta x}$ on both sides leads to
$$\xi = 1 - 4r\sin^2\frac{\beta h}{2}.$$
The quantity $\xi$ is called the amplification factor. For stability, $|\xi| \le 1$ for all values of $\beta$, and so
$$-1 \le 1 - 4r\sin^2\frac{\beta h}{2} \le 1.$$
The right hand side of the inequality is satisfied if $r > 0$, and the left hand side gives $4r\sin^2(\beta h/2) \le 2$ for all $\beta$, leading to the stability condition
$$r \le \tfrac{1}{2}.$$
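The inequality above can be confirmed numerically. A small sketch (illustrative, not from the original text) sampling the amplification factor $\xi = 1 - 4r\sin^2(\beta h/2)$ over the frequencies:

```python
import numpy as np

def max_amplification(r, n_beta=1000):
    """Maximum modulus of the amplification factor xi = 1 - 4 r sin^2(beta h / 2)
    of scheme (10), sampled over theta = beta h / 2 in [0, pi]."""
    theta = np.linspace(0.0, np.pi, n_beta)
    xi = 1.0 - 4.0 * r * np.sin(theta) ** 2
    return np.max(np.abs(xi))
```

For $r \le \frac{1}{2}$ the sampled maximum stays at 1, while for $r = 0.6$ it rises to about $|1 - 4r| = 1.4$, so the corresponding error component grows at every step.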
The Matrix Method:
If $U_{0,n} = U_{M,n} = 0$, the totality of difference equations connecting values of $U$ at two neighbouring time levels can be written in the matrix form
$$A\mathbf{U}_{n+1} = B\mathbf{U}_n, \qquad (23)$$
where $\mathbf{U}_n$ denotes the column vector
$$\mathbf{U}_n = \left(U_{1,n}, U_{2,n}, \ldots, U_{M-1,n}\right)^T$$
and $A$, $B$ are square matrices of order $M-1$. If the difference formula is explicit, $A = I$. Now equation (23) can be written in the explicit form
$$\mathbf{U}_{n+1} = C\mathbf{U}_n,$$
where $C = A^{-1}B$, provided $\det A \neq 0$. The error vector $\mathbf{z}_n$ satisfies
$$\mathbf{z}_{n+1} = C\mathbf{z}_n,$$
from which it follows that
$$\|\mathbf{z}_{n+1}\| \le \|C\|\,\|\mathbf{z}_n\|,$$
where $\|\cdot\|$ denotes a suitable norm. The necessary and sufficient condition for the stability of a finite difference scheme based on a constant time step and proceeding indefinitely in time is
$$\|C\| \le 1.$$
When $C$ is symmetric,
$$\|C\|_2 = \max_s |\lambda_s|,$$
where $\lambda_s$ ($s = 1, 2, \ldots, M-1$) are the eigenvalues of $C$, and $\|\cdot\|_2$ denotes the $L_2$ norm. As an example of the matrix method for examining stability, we consider the finite difference scheme (10). Here we have
$$C = \begin{pmatrix} 1-2r & r & & \\ r & 1-2r & r & \\ & \ddots & \ddots & \ddots \\ & & r & 1-2r \end{pmatrix}.$$
The eigenvalues of this matrix are
$$\lambda_s = 1 - 4r\sin^2\frac{s\pi}{2M}, \qquad s = 1, 2, \ldots, M-1,$$
and thus the method is stable if
$$\left|1 - 4r\sin^2\frac{s\pi}{2M}\right| \le 1 \quad \text{for all } s,$$
which leads to $r \le \tfrac{1}{2}$, a condition identical to that obtained by the method of Von Neumann.
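As a numerical check on this analysis (an illustration, not part of the original), the eigenvalues of $C$ can be computed and compared with the formula $\lambda_s = 1 - 4r\sin^2(s\pi/2M)$:

```python
import numpy as np

def error_propagation_matrix(M, r):
    """The matrix C of scheme (10): (1 - 2r) on the diagonal, r on the
    off-diagonals, of order M - 1 (zero boundary values assumed)."""
    m = M - 1
    return (np.diag((1 - 2 * r) * np.ones(m))
            + np.diag(r * np.ones(m - 1), 1)
            + np.diag(r * np.ones(m - 1), -1))

M, r = 10, 0.3
C = error_propagation_matrix(M, r)
computed = np.sort(np.linalg.eigvalsh(C))          # C is symmetric
s = np.arange(1, M)
analytic = np.sort(1 - 4 * r * np.sin(s * np.pi / (2 * M)) ** 2)
```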
A difference approximation to a parabolic equation is consistent if the truncation error $T_{m,n} \to 0$ as $h, k \to 0$.
Hyperbolic Equation in One Space Variable:
The simplest hyperbolic problem is that of the vibrating string,
$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}, \qquad (24)$$
in the domain $R: 0 \le x \le 1$, $t \ge 0$, satisfying the following initial conditions
$$u(x,0) = f(x), \qquad \frac{\partial u}{\partial t}(x,0) = g(x), \qquad 0 \le x \le 1, \qquad (25)$$
and boundary conditions
$$u(0,t) = v(t), \qquad u(1,t) = w(t), \qquad t \ge 0. \qquad (26)$$
We place a mesh of points $(mh, nk)$ on R, where $m = 0, 1, \ldots, M$ (with $Mh = 1$) and $n = 0, 1, 2, \ldots$. The exact difference replacement of (24) at the nodal points is given by
$$u_{m,n+1} - 2\cosh\left(2p\sinh^{-1}\frac{\delta_x}{2}\right)u_{m,n} + u_{m,n-1} = 0, \qquad (27)$$
where $p = k/h$ is the mesh ratio and $\delta_x$ is the central difference operator in the x-direction. The explicit and implicit difference schemes for (24) will be obtained by approximating equation (27).
Explicit Difference Schemes:
An explicit difference scheme for (24) is given by
$$\delta_t^2 U_{m,n} = p^2\delta_x^2 U_{m,n},$$
which may be written in the form
$$U_{m,n+1} = p^2\left(U_{m+1,n} + U_{m-1,n}\right) + 2\left(1-p^2\right)U_{m,n} - U_{m,n-1}, \qquad (28)$$
where $U_{m,n}$ is the approximation to $u(mh, nk)$. If each term in (28) is expanded in Taylor's series about the nodal point $(mh, nk)$ and the function $u$ satisfies (24), then we find the truncation error
$$T_{m,n} = \frac{h^2}{12}\left(p^2 - 1\right)\frac{\partial^4 u}{\partial x^4} + \cdots.$$
For $p = 1$, the truncation error vanishes and so the exact difference representation of (24) is obtained as
$$U_{m,n+1} = U_{m+1,n} + U_{m-1,n} - U_{m,n-1}.$$
In order to start the computation, we require data on the two lines $t = 0$ and $t = k$. The first condition in (25) gives $U$ on the initial line as
$$U_{m,0} = f(mh), \qquad m = 0, 1, \ldots, M.$$
We can use the second condition in (25) to find values on the line $t = k$. Using the central difference approximation for the derivative, i.e.
$$\frac{U_{m,1} - U_{m,-1}}{2k} = g(mh),$$
in the second condition in (25) and eliminating $U_{m,-1}$ from (28) for $n = 0$, we get the formula
$$U_{m,1} = \tfrac{1}{2}p^2\left[f\left((m+1)h\right) + f\left((m-1)h\right)\right] + \left(1-p^2\right)f(mh) + kg(mh)$$
to give the values on the first time level. The boundary conditions (26) become
$$U_{0,n} = v(nk) \quad \text{and} \quad U_{M,n} = w(nk).$$
Formula (28) may now be used to advance the computation for $n = 1, 2, \ldots$.
To examine the finite difference formula (28) for stability, we replace $U_{m,n}$ by $\xi^n e^{i\beta mh}$ and get
$$\xi^{n+1}e^{i\beta mh} = p^2\xi^n\left(e^{i\beta(m+1)h} + e^{i\beta(m-1)h}\right) + 2\left(1-p^2\right)\xi^n e^{i\beta mh} - \xi^{n-1}e^{i\beta mh},$$
or
$$\xi^2 - 2A\xi + 1 = 0, \qquad (29)$$
where
$$A = 1 - 2p^2\sin^2\frac{\beta h}{2}.$$
The solutions of (29) are given by
$$\xi_1 = A + \sqrt{A^2 - 1} \quad \text{and} \quad \xi_2 = A - \sqrt{A^2 - 1}.$$
This gives $\xi_1\xi_2 = 1$. Thus for stability we require $|\xi_1| = |\xi_2| = 1$, which holds when $A^2 \le 1$, or
$$\left|1 - 2p^2\sin^2\frac{\beta h}{2}\right| \le 1 \quad \text{for all } \beta,$$
which gives $p \le 1$. This is the condition of stability.
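The starting procedure and formula (28) combine into a short marching program. A sketch assuming the model data $f(x) = \sin\pi x$, $g(x) = 0$ (so the exact solution is $\cos\pi t\,\sin\pi x$) and homogeneous boundary conditions; with $p = 1$ the scheme reproduces the exact solution at the grid points:

```python
import numpy as np

def wave_explicit(f, g, M, p, nsteps):
    """March scheme (28) for u_tt = u_xx on 0 <= x <= 1 with u = 0 at both
    ends; f, g are the initial displacement and velocity of (25)."""
    h = 1.0 / M
    k = p * h
    x = np.linspace(0.0, 1.0, M + 1)
    U_prev = f(x)
    # First time level from the starting formula derived above.
    U = U_prev.copy()
    U[1:-1] = (0.5 * p**2 * (U_prev[2:] + U_prev[:-2])
               + (1 - p**2) * U_prev[1:-1] + k * g(x[1:-1]))
    U[0] = U[-1] = 0.0
    for n in range(1, nsteps):
        U_next = U.copy()
        U_next[1:-1] = (p**2 * (U[2:] + U[:-2])
                        + 2 * (1 - p**2) * U[1:-1] - U_prev[1:-1])
        U_next[0] = U_next[-1] = 0.0
        U_prev, U = U, U_next
    return x, U, nsteps * k

x, U, t = wave_explicit(lambda x: np.sin(np.pi * x), lambda x: 0.0 * x,
                        M=20, p=1.0, nsteps=40)
exact = np.cos(np.pi * t) * np.sin(np.pi * x)
err = np.max(np.abs(U - exact))
```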
Implicit Difference Scheme:
In order to improve stability, we now consider an implicit difference replacement of equation (24). This takes the form
$$\delta_t^2 U_{m,n} = p^2\delta_x^2\left[\theta U_{m,n+1} + (1-2\theta)U_{m,n} + \theta U_{m,n-1}\right].$$
When $\theta \neq 0$, one can examine this scheme for stability and find that the scheme is stable for all $p$ if $\theta \ge \tfrac{1}{4}$. Thus, if $\theta \ge \tfrac{1}{4}$, the implicit difference formula is unconditionally stable.
The consistency of a finite difference approximation to a hyperbolic equation can be defined briefly: a difference scheme for a hyperbolic equation is consistent if the truncation error $T_{m,n} \to 0$ as $h, k \to 0$.
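The stability claim for the $\theta$-scheme can be checked numerically. Applying the same Von Neumann substitution as before and writing $q = 4p^2\sin^2(\beta h/2)$, the amplification factors are the roots of $(1+q\theta)\,\xi^2 - \left[2 - q(1-2\theta)\right]\xi + (1+q\theta) = 0$. A sketch (not from the original text) that scans these roots over the frequencies:

```python
import numpy as np

def theta_scheme_growth(p, theta, n_beta=400):
    """Largest root modulus of (1 + q*theta) xi^2 - (2 - q(1-2*theta)) xi
    + (1 + q*theta) = 0 with q = 4 p^2 sin^2(beta h / 2), over sampled
    frequencies. A value <= 1 indicates a stable scheme."""
    worst = 0.0
    for t in np.linspace(0.0, np.pi / 2, n_beta):
        q = 4.0 * p**2 * np.sin(t) ** 2
        coeffs = [1.0 + q * theta,
                  -(2.0 - q * (1.0 - 2.0 * theta)),
                  1.0 + q * theta]
        worst = max(worst, np.max(np.abs(np.roots(coeffs))))
    return worst
```

With $\theta = \tfrac{1}{4}$ the growth stays at 1 even for large $p$, while $\theta = 0$ (the explicit scheme) is stable only for $p \le 1$.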
Elliptic Equations in Two Dimensions:
Suppose that R is a bounded region in the $(x,y)$ plane with boundary $\partial R$. The equation
$$a\frac{\partial^2 u}{\partial x^2} + 2b\frac{\partial^2 u}{\partial x\partial y} + c\frac{\partial^2 u}{\partial y^2} = e\left(x, y, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right) \qquad (30)$$
is said to be elliptic in R if $b^2 < ac$ for all points $(x,y)$ in R. Three distinct problems involving equation (30) arise, depending on the boundary conditions prescribed on $\partial R$. The first boundary value problem, or Dirichlet problem, requires a solution $u$ of equation (30) which takes on prescribed values
$$u = f(x,y) \qquad (31)$$
on the boundary $\partial R$. The second boundary value problem, or Neumann problem, requires
$$\frac{\partial u}{\partial n} = g(x,y) \qquad (32)$$
on the boundary $\partial R$. Here $\partial/\partial n$ refers to the derivative along the normal to $\partial R$, directed away from the interior of R. The third boundary value problem, or Robbins problem, requires
$$\frac{\partial u}{\partial n} + \sigma(x,y)\,u = g(x,y) \qquad (33)$$
on $\partial R$, where $\sigma(x,y) \ge 0$ for $(x,y)$ on $\partial R$.
Before developing finite difference methods of solving elliptic equations, a most useful analytical tool in the study of elliptic partial differential equations will be introduced. This is the Maximum Principle, which will be stated for the linear elliptic equation
$$a\frac{\partial^2 u}{\partial x^2} + 2b\frac{\partial^2 u}{\partial x\partial y} + c\frac{\partial^2 u}{\partial y^2} + d\frac{\partial u}{\partial x} + e\frac{\partial u}{\partial y} = 0,$$
where a, b, c, d and e are functions of the independent variables $x$ and $y$. It is clear that in this case any constant represents a solution of the equation. The maximum principle states that the constants are the only solutions which can assume a maximum or minimum value in the interior of the bounded region R. Alternatively, it states that every solution of the elliptic equation achieves its maximum and minimum values on the boundary $\partial R$ of R.
Laplace's Equation in a Square:
We consider Laplace's equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad (34)$$
subject to $u = f(x,y)$ on the boundary of the unit square $0 \le x, y \le 1$. The square region is covered by a grid with sides parallel to the coordinate axes and grid spacing $h$. If $h = 1/N$, the number of internal grid points or nodes is $(N-1)^2$. The coordinates of a typical internal grid point are $(lh, mh)$ ($l$ and $m$ integers), and the value of $u$ at this grid point is denoted by $u_{l,m}$. Using Taylor's Theorem, we obtain
$$u_{l+1,m} = u_{l,m} + h\frac{\partial u}{\partial x} + \frac{h^2}{2}\frac{\partial^2 u}{\partial x^2} + \frac{h^3}{6}\frac{\partial^3 u}{\partial x^3} + \cdots$$
and
$$u_{l-1,m} = u_{l,m} - h\frac{\partial u}{\partial x} + \frac{h^2}{2}\frac{\partial^2 u}{\partial x^2} - \frac{h^3}{6}\frac{\partial^3 u}{\partial x^3} + \cdots,$$
and after addition
$$u_{l+1,m} + u_{l-1,m} = 2u_{l,m} + h^2\frac{\partial^2 u}{\partial x^2} + O\left(h^4\right).$$
Similarly,
$$u_{l,m+1} + u_{l,m-1} = 2u_{l,m} + h^2\frac{\partial^2 u}{\partial y^2} + O\left(h^4\right),$$
and so
$$u_{l+1,m} + u_{l-1,m} + u_{l,m+1} + u_{l,m-1} - 4u_{l,m} = h^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + O\left(h^4\right),$$
leading to the five point finite difference scheme
$$U_{l+1,m} + U_{l-1,m} + U_{l,m+1} + U_{l,m-1} - 4U_{l,m} = 0 \qquad (35)$$
for Laplace's equation, with a local truncation error whose principal part is
$$\frac{h^4}{12}\left(\frac{\partial^4 u}{\partial x^4} + \frac{\partial^4 u}{\partial y^4}\right),$$
where $U_{l,m}$ denotes the function satisfying the difference equation at the grid point $(lh, mh)$.
The totality of equations (35) at the $(N-1)^2$ internal grid points of the unit square leads to the matrix equation
$$A\mathbf{U} = \mathbf{K}, \qquad (36)$$
where $A$ is a matrix of order $(N-1)^2$ given by
$$A = \begin{pmatrix} B & -I & & \\ -I & B & -I & \\ & \ddots & \ddots & \ddots \\ & & -I & B \end{pmatrix},$$
with $I$ the unit matrix of order $N-1$ and $B$ a matrix of order $N-1$ given by
$$B = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 4 \end{pmatrix}.$$
The vectors $\mathbf{U}$ and $\mathbf{K}$ are given by
$$\mathbf{U} = \left(U_{1,1}, U_{2,1}, \ldots, U_{N-1,1}, U_{1,2}, \ldots, U_{N-1,N-1}\right)^T$$
and
$$\mathbf{K} = \left(K_{1,1}, K_{2,1}, \ldots, K_{N-1,N-1}\right)^T$$
respectively, where $T$ denotes the transpose. The elements of the vector $\mathbf{U}$ constitute the unknowns $U_{l,m}$ at the internal grid points, and the elements of the vector $\mathbf{K}$ depend on the boundary values of $f(x,y)$ at the grid points on the perimeter of the unit square. Because of the large number of zero elements in the matrix A, iterative methods are often used to solve the system (36).
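A Gauss-Seidel sketch of such an iterative solution of the five point scheme (35) (illustrative, not from the original text; the boundary data $x^2 - y^2$ is an assumed test case, chosen because it is harmonic and the scheme reproduces it exactly at the grid points):

```python
import numpy as np

def solve_laplace(f_boundary, N, sweeps=500):
    """Gauss-Seidel iteration for the five point scheme (35) on the unit
    square with h = 1/N; f_boundary(x, y) supplies the Dirichlet data."""
    U = np.zeros((N + 1, N + 1))
    xs = np.linspace(0.0, 1.0, N + 1)
    # Impose the boundary values on the perimeter.
    for i, x in enumerate(xs):
        U[i, 0] = f_boundary(x, 0.0)
        U[i, N] = f_boundary(x, 1.0)
        U[0, i] = f_boundary(0.0, xs[i])
        U[N, i] = f_boundary(1.0, xs[i])
    # Each sweep replaces U_{l,m} by the average of its four neighbours.
    for _ in range(sweeps):
        for l in range(1, N):
            for m in range(1, N):
                U[l, m] = 0.25 * (U[l + 1, m] + U[l - 1, m]
                                  + U[l, m + 1] + U[l, m - 1])
    return U

# Usage sketch: the harmonic function u = x^2 - y^2 as boundary data.
N = 8
U = solve_laplace(lambda x, y: x * x - y * y, N)
xs = np.linspace(0.0, 1.0, N + 1)
exact = xs[:, None] ** 2 - xs[None, :] ** 2
err = np.max(np.abs(U - exact))
```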