
Convergence:


The problem of convergence of a finite difference method for solving equation (1) consists of finding the conditions under which

$\displaystyle u(X,T)-U(X,T),$

the difference between the exact solutions of the differential and difference equations at a fixed point $(X,T)$, tends to zero uniformly as the net is refined, that is, as $h,k\longrightarrow 0$ and $m,n\longrightarrow\infty$ with $mh(=X)$ and $nk(=T)$ remaining fixed. The fixed point $(X,T)$ may be anywhere within the region $R$ under consideration, and it is sometimes convenient in the convergence analysis to assume that $h,k$ do not tend to zero independently but according to some relationship such as

$\displaystyle k=rh^{2}$ (19)

where $ r$ is a constant.
As an example of a convergence analysis for difference formula (10), we introduce

$\displaystyle Z^{n}_{m}=u^{n}_{m}-U^{n}_{m}$

the difference between the theoretical (exact) solutions of the differential and difference equations at the grid point $x=mh$, $t=nk$. From equation (12), this satisfies the equation

$\displaystyle Z^{n+1}_{m}=(1-2r)Z^{n}_{m}+r\left(Z^{n}_{m+1}+Z^{n}_{m-1}\right)+k\left[\frac{k}{2}\frac{\partial^{2}u}{\partial t^{2}}-\frac{h^{2}}{12}\frac{\partial^{4}u}{\partial x^{4}}\right]$ (20)

where the derivatives are evaluated at suitable intermediate points. If

$\displaystyle 0<r\leq\frac{1}{2},$ (21)

the coefficients on the right hand side of equation (20) are all non-negative, and so

$\displaystyle \vert Z^{n+1}_{m}\vert\leq(1-2r)\vert Z^{n}_{m}\vert+r\left(\vert Z^{n}_{m+1}\vert+\vert Z^{n}_{m-1}\vert\right)+Ak(k+h^{2})\leq Z^{(n)}+Ak(k+h^{2})$

where $A$ depends on the upper bounds for $\displaystyle\frac{\partial^{2}u}{\partial t^{2}}$ and $\displaystyle\frac{\partial^{4}u}{\partial x^{4}}$, and $Z^{(n)}$ is the maximum modulus value of $Z^{n}_{m}$ over the required range of $m$. Thus

$\displaystyle Z^{(n+1)}\leq Z^{(n)}+Ak(k+h^{2}),$

and so if $Z^{(0)}=0$ (the same initial data for the differential and difference equations),

$\displaystyle Z^{(n)}\leq nkA(k+h^{2})=AT(k+h^{2})$

$\rightarrow 0$ as $h,k\rightarrow 0$ for fixed $X,T$. This establishes convergence when condition (21) is satisfied.
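
To see this convergence numerically, the following Python sketch may help; it is an illustration added here, not part of the original notes. The model problem $u_{t}=u_{xx}$ on $0<x<1$ with $u(x,0)=\sin\pi x$ and $u=0$ at both ends, the value $r=0.4$, and the function name explicit_heat are all assumptions made for the demonstration. The code marches the explicit formula and compares the result with the exact solution $u=e^{-\pi^{2}t}\sin\pi x$ at a fixed time as the net is refined with $k=rh^{2}$.

import numpy as np

def explicit_heat(h, r, T):
    # March U^{n+1}_m = (1 - 2r) U^n_m + r (U^n_{m+1} + U^n_{m-1}) for u_t = u_xx
    # on 0 <= x <= 1 with u(x, 0) = sin(pi x) and u = 0 at both ends.
    k = r * h ** 2                          # refine according to k = r h^2
    x = np.linspace(0.0, 1.0, round(1.0 / h) + 1)
    U = np.sin(np.pi * x)                   # initial data
    n_steps = round(T / k)
    for _ in range(n_steps):
        U[1:-1] = (1.0 - 2.0 * r) * U[1:-1] + r * (U[2:] + U[:-2])
    return x, U, n_steps * k                # time level actually reached

r, T = 0.4, 0.1
for h in (0.1, 0.05, 0.025):
    x, U, t = explicit_heat(h, r, T)
    exact = np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)
    print(f"h = {h:6.3f}   max |u - U| = {np.max(np.abs(U - exact)):.2e}")

With $r\leq\frac{1}{2}$ the maximum error should fall by roughly a factor of four each time $h$ is halved, consistent with the bound $AT(k+h^{2})$ above.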

Stability:
The problem of stability of a finite difference scheme for solving equation (1) consists of finding conditions under which $Z^{n}_{m}$, the difference between the theoretical and numerical solutions of the difference equation, remains bounded as $n$ increases, $k$ remaining fixed, for all $m$. There are two methods which are commonly used for examining the stability of a finite difference scheme.

Von Neumann Method:
In this method, a harmonic decomposition is made of the error $Z$ at the grid points at a given time level, leading to the error function

$\displaystyle E(x)=\sum_{j} A _{j} e ^{i \beta_{j} x}$

where, in general, the amplitudes $A_{j}$ and the frequencies $\beta_{j}$ are arbitrary. It is necessary to consider only the single term $e^{i\beta x}$, where $\beta$ is any real number. For convenience, suppose that the time level being considered corresponds to $t=0$. To investigate the error propagation as $t$ increases, it is necessary to find a solution of the finite difference equation which reduces to $e^{i\beta x}$ when $t=0$. Let such a solution be

$\displaystyle e^{\alpha t} e^{i\beta x}$

where $\alpha=\alpha(\beta)$ is, in general, complex. The original error component $e^{i\beta x}$ will not grow with time if $\vert e^{\alpha k}\vert\leq 1$ for all real $\beta$. This is the von Neumann criterion for stability. As an example, let us examine the stability of the finite difference scheme (10). Since $Z^{n}_{m}$ satisfies the original difference equation, we get

$\displaystyle Z^{n+1}_{m}=(1-2r)Z^{n}_{m}+r\left(Z^{n}_{m+1}+Z^{n}_{m-1}\right)$ (22)

Let $Z^{n}_{m}=e^{\alpha n k}e^{i\beta m h}=\xi^{n}e^{i\beta m h}$, where $\xi=e^{\alpha k}$. Then equation (22) gives

$\displaystyle \xi^{n+1}e^{i\beta m h}=(1-2r)\xi^{n}e^{i\beta m h}+r\xi^{n}\left(e^{i\beta(m+1)h}+e^{i\beta(m-1)h}\right)$

Cancelling $\xi^{n}e^{i\beta m h}$ on both sides leads to
$\displaystyle \xi=(1-2r)+r\left(e^{i\beta h}+e^{-i\beta h}\right)=1-2r(1-\cos\beta h)=1-4r\sin^{2}\frac{\beta h}{2}$

The quantity $\xi$ is called the amplification factor. For stability, we require $\vert\xi\vert\leq 1$ for all values of $\beta h$, and so

$\displaystyle -1\leq 1-4r\sin^{2}\frac{\beta h}{2}\leq 1\quad(\forall\,\beta h)$

The right-hand inequality is satisfied for all $r>0$, and the left-hand inequality gives

$\displaystyle r\leq \frac{1}{2\sin^{2}\frac{\beta h}{2}}$

leading to the stability condition $ 0<r\leq\frac{1}{2}$.
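
The criterion can be checked directly by evaluating the amplification factor over a range of $\beta h$. The short Python sketch below (illustrative only, and not part of the original notes; the sample values of $r$ are arbitrary choices) reports the largest $\vert\xi\vert$ for each $r$.

import numpy as np

def max_amplification(r, samples=2001):
    # Largest |xi| of xi = 1 - 4 r sin^2(beta h / 2) over 0 <= beta h <= 2 pi
    beta_h = np.linspace(0.0, 2.0 * np.pi, samples)
    xi = 1.0 - 4.0 * r * np.sin(beta_h / 2.0) ** 2
    return np.max(np.abs(xi))

for r in (0.25, 0.5, 0.75):
    g = max_amplification(r)
    print(f"r = {r:4.2f}   max |xi| = {g:.3f}   "
          f"{'stable' if g <= 1.0 + 1e-12 else 'unstable'}")

For $r=0.25$ and $r=0.5$ the maximum modulus does not exceed one, while for $r=0.75$ it reaches two, in agreement with the condition $0<r\leq\frac{1}{2}$.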

The Matrix Method:
If $Mh=1$, the totality of difference equations connecting the values of $U$ at two neighboring time levels can be written in the matrix form

$\displaystyle AU^{n+1}=BU^{n}$ (23)

where $U^{s}$ denotes the column vector

$\displaystyle \left[U^{s}_{1},U^{s}_{2},\ldots,U^{s}_{M-1}\right]^{T}$

and $A,B$ are square matrices of order $(M-1)$. If the difference formula is explicit, then $A=I$. Now equation (23) can be written in the explicit form

$\displaystyle U^{n+1}=C U^{n}$

where $C=A^{-1}B$, provided $\vert A\vert\neq 0$. The error vector

$\displaystyle Z^{s}=\left[Z^{s}_{1},Z^{s}_{2},\ldots,Z^{s}_{M-1}\right]^{T}$

satisfies

$\displaystyle Z^{n+1}=CZ^{n}$

from which it follow that

$\displaystyle \vert\vert Z^{n+1}\vert\vert\leq \vert\vert C\vert\vert\;\vert\vert Z^{n}\vert\vert$

where $\vert\vert\cdot\vert\vert$ denotes a suitable norm. The necessary and sufficient condition for the stability of a finite difference scheme based on a constant time step and proceeding indefinitely in time is $\vert\vert C\vert\vert\leq 1$. When $C$ is symmetric, $\vert\vert C\vert\vert_{2}=\displaystyle\max_{s}\vert\lambda_{s}\vert$, where $\lambda_{s}\;(s=1,2,\ldots,M-1)$ are the eigenvalues of $C$ and $\vert\vert\cdot\vert\vert_{2}$ denotes the $L_{2}$ norm. As an example of the matrix method for examining stability, we consider the finite difference scheme (10). Here we have

$\displaystyle C=\begin{bmatrix}1-2r & r & & & \\ r & 1-2r & r & & \\ & \ddots & \ddots & \ddots & \\ & & r & 1-2r & r \\ & & & r & 1-2r\end{bmatrix},$

a symmetric tridiagonal matrix of order $(M-1)$. The eigenvalues of this matrix are

$\displaystyle \lambda_{s}=1-4r\sin^{2}\frac{s\pi}{2M},\qquad s=1,2,\ldots,M-1,$

and thus the method is stable if

$\displaystyle -1\leq 1-4r\sin^{2}\frac{s\pi}{2M}\leq 1,\qquad s=1,2,\ldots,M-1$

which leads to

$\displaystyle 0<r\leq\frac{1}{2},$

which is the same condition as that obtained by the von Neumann method.
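
For a concrete check of the matrix method (an illustrative Python sketch, not part of the notes; the order $M=20$ and the value $r=0.5$ are arbitrary choices), the code below assembles the tridiagonal matrix $C$ with $1-2r$ on the diagonal and $r$ on the off-diagonals, then compares its numerically computed eigenvalues with the formula $\lambda_{s}=1-4r\sin^{2}\frac{s\pi}{2M}$ and reports the spectral radius.

import numpy as np

def amplification_matrix(M, r):
    # Tridiagonal matrix of order M-1: 1-2r on the diagonal, r on the off-diagonals
    C = np.diag((1.0 - 2.0 * r) * np.ones(M - 1))
    C += np.diag(r * np.ones(M - 2), 1) + np.diag(r * np.ones(M - 2), -1)
    return C

M, r = 20, 0.5
C = amplification_matrix(M, r)
computed = np.sort(np.linalg.eigvalsh(C))
s = np.arange(1, M)
formula = np.sort(1.0 - 4.0 * r * np.sin(s * np.pi / (2.0 * M)) ** 2)
print("max |computed - formula| :", np.max(np.abs(computed - formula)))
print("spectral radius          :", np.max(np.abs(computed)))  # <= 1 precisely when 0 < r <= 1/2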
A difference approximation to a parabolic equation is consistent if the truncation error tends to zero as $h,k\rightarrow 0$.
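
As a numerical illustration of consistency (a sketch added here, not part of the original notes; the exact solution used, the sample point $(0.3,\,0.2)$ and the value $r=0.4$ are arbitrary choices), the residual obtained by substituting the exact solution $u=e^{-\pi^{2}t}\sin\pi x$ of $u_{t}=u_{xx}$ into the explicit formula can be evaluated directly and observed to tend to zero as $h,k\rightarrow 0$ with $k=rh^{2}$.

import numpy as np

def truncation_error(h, k, x=0.3, t=0.2):
    # Residual of the explicit formula evaluated on the exact solution
    # u(x, t) = exp(-pi^2 t) sin(pi x) of u_t = u_xx
    u = lambda x, t: np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)
    return ((u(x, t + k) - u(x, t)) / k
            - (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / h ** 2)

for h in (0.1, 0.05, 0.025):
    k = 0.4 * h ** 2                       # refine with k = r h^2, r = 0.4
    print(f"h = {h:6.3f}   truncation error = {truncation_error(h, k): .3e}")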
