Good morning! Welcome to the new lecture series on Computational Electromagnetics and its Applications. This topic is quite interesting for people working in industry or in academia, mostly focusing on modeling, and also on modeling related to experiments. Computational Electromagnetics has been gaining attention over the last 50 years, owing to various changes in computer architecture, the availability of fast computers, and also numerical solvers that are able to model quite complex problems quite accurately.
In this particular module series we will focus on finite difference methods. We will see what our motivation for this lecture is going to be, and we will follow the order as it is given here. Following the motivation, we will look into the background, particularly the background of the finite differencing method, and we will introduce certain finite differencing schemes. So let us go directly to the motivation.
The motivation for numerical methods is that we cannot use analytical methods in a straightforward way for most practical problems, because practical problems always have a certain degree of nonlinearity. Analytical methods fail when the partial differential equations are not linear, and we see that linearizing them creates serious errors, in other words, inaccuracy in the solution space. That is the first motivation for going into non-analytical, or any kind of numerical, methods.
The second thing is what happens when the computational domain is complex. What I mean by that is, for example, let us say we are talking about a spiral antenna, which has several arms of spirals, metallic, with certain properties, say conductivity, permittivity and permeability, backed by certain other materials, which could be a dielectric medium or whatsoever, and which also has metallic surfaces on the side. This is quite a complex structure for us to model analytically. So analytical methods also fail when the domain becomes complex.
There is also the issue of boundary conditions. Let us say we have a set of boundary conditions defined on gamma 1, which is equal to u of 0; this is a very hard boundary condition. And then another part of the boundary of the domain, let us say gamma 2, has a certain flux, the normal component of the flux, defined; this is called a mixed boundary condition. When we have two or three different types of boundaries, in those conditions too it is very difficult, if not impossible, to use analytical methods. Likewise, when the boundary conditions are also time dependent: in the previous case we had a straightforward constant, whereas in the other case, instead of a constant, we have a time variation on the boundaries as well. Then also the analytical methods become quite difficult. For example, here we have an aluminium plate divided into various units, and we see that the four corners have different boundary conditions which are dependent on time.
Last but not least, there is the motivation of inhomogeneous and also anisotropic media. For example, when you are trying to model a medium which is anisotropic in a particular direction and isotropic elsewhere, what you see is that it is very difficult to model such media accurately using analytical methods.
So with this we see that the finite difference method, or any numerical method, gives us quite a bit of flexibility and phases out the disadvantages, or the lack of flexibility, of analytical methods, allowing us to model complex problems more accurately. That being said, I am going to go into one of the most basic methods, which is called the finite difference method. Without specifying whether it is frequency domain or time domain, we will just look at the spatial discretization for the time being.
The method itself was introduced in the 1920s by Thom, and he actually named it the method of squares. It was mainly used for nonlinear hydrodynamics equations, because it was found very difficult to use classical methods, as we discussed before. So he invented a new method, which he called the method of squares, to model nonlinear hydrodynamic problems. That is the background.
But in electromagnetics the scheme itself was pioneered by Kane Yee. Yee introduced the method in electrodynamics, as you can see, for the Maxwell equations, using two staggered grids. What I mean by a staggered grid will become clearer later on, but the pictorial representation here gives us a little bit of understanding: there are two grids, one is the green grid and the other is the brown grid, and they are staggered in space; in fact, they are also staggered in time. This is something we will see later on, but it is the most important point. I also want to give a little bit of background on this method. I mentioned that the algorithm itself was introduced in 1966 by Yee, but the method did not gain attention for almost a decade; nobody really bothered about using it for almost a decade.
That being said, what were the other methods, or what was keeping numerical scientists and computational scientists busy? They were busier with a well-established method called the method of moments, which we will see later on. It was much more evolved and had many more numerical and mathematical tools built around it, so people were basically using the method of moments; they did not pay attention to finite difference time domain, or finite difference methods in general, for almost a decade. There were also other problems related to the finite difference method. The method itself works fine, but if you want to do any practical problem you need to define the method along with proper boundary conditions; if not, you are going to simulate a very large problem although what you are interested in is only a very small area of the computational domain.
For example, if I wanted to simulate a scattering problem, let us say I want to understand the scattering from a particular object, say a car, and I am talking about electromagnetic waves scattered by the car. What I would do is model the car surrounded by a certain atmosphere, maybe standard air or free space. But since I do not have a proper termination, I have to simulate the car surrounded by a very, very big volume, and although I am only interested in what is going on around the car, I need to simulate a very big problem, because I do not have accurate and stable boundary conditions. That was one of the biggest problems of this method for a long, long time. That changed over a period of time. In 1975, Taflove and Brodwin brought certain improvements to the stability of this method: they basically computed the stability criterion, and they also improved the method's functioning for a steady-state solution with a sinusoidal input. And in 1977, Holland, Kunz and Lee applied this method to broadband applications: they basically sent in a pulse and simulated a broadband response. But all these things were still working on the method itself. Later on, someone called Mur came and introduced something called an absorbing boundary condition.
Let me explain this in a slide. Let us say this is a scatterer and the scattering is happening; there was now a possibility to truncate the entire problem using certain absorbing boundary conditions. We will not focus too much on absorbing boundary conditions right now, we will talk about them later on, but it is important to know that the computational domain, let us say we call it omega, is being truncated using certain conditions here at the boundary, and this is what we call an ABC: Absorbing Boundary Condition. And then someone called Berenger, a French engineer working for the electricity corporation of France, put forward an idea which revolutionized and further popularized this method, called the perfectly matched layer.
We will also talk during this course about what a perfectly matched layer means and how it works. Right now, for the motivation, it is enough to know that instead of putting one single boundary, what Berenger did was quite revolutionary; we will see why later on. He put a truncation layer around the domain, and this is the perfectly matched layer. We will discuss all these terms as we go forward, but I wanted to give you a little bit of the historical background of the development of this method. So it is quite clear that quite a lot of other things were required for a method to be popularized or widely applied: not just the method itself but also the tools around the method. It could be the stability conditions, it could be the requirement of certain truncation techniques or the perfectly matched layer, and so on and so forth. With this, we will start looking at what this finite difference method basically means; we will look at its structure in the next slides.
The finite difference method itself is algebraic in form. Let us explain this a little further. Let us say I am interested in finding the value of a certain differential on a particular grid.

Let us say I have a grid and I have to compute the value of my function on that grid. Let me say I am interested in finding out what the function's first derivative will be at each of these points: at point number 1 this is the derivative, at point number 2 this will be the derivative, and so on and so forth. So what is happening here is, if this is x and this is f of x, I am computing the value of f of x at each of these points and then I am differentiating.
What I am interested in is df by dx. If these are the two axes, where the x axis is the independent variable and the y axis is the dependent variable, I am computing df by dx at each point. Algebraically, this is equivalent to saying that I am writing it down as a combination of certain weights.

This will become clearer later on, but what essentially happens is that if I am interested in the first differential, the second differential, and so on, of a function, each of these is given as a sum of certain weights, which I call here c_i, multiplied by the function itself defined at those points. So i goes from 0 to n; if i equals 0, we are talking about only one term, and so on and so forth.
So what is happening in this case is that there are certain weights multiplying the nodal values, so you can basically write the k-th derivative, d^k f by dx^k, as a sum containing the values f of x_i multiplied by certain weight coefficients c_i. This is essentially an algebraic equation where the weights are multiplied by those values, and that is what we mean by saying that finite difference approximations are algebraic in nature.
Right now, if you are not able to understand this, it is totally fine; we will explain it step by step later on. What you need to know right now is that finite difference approximations are basically algebraic: they are in the form of an algebraic equation.
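To make the weighted-sum idea concrete, here is a minimal sketch in Python; it is not from the lecture, and the particular weights used are an assumption for illustration (they correspond to a central difference stencil, which is introduced only later in this lecture).

```python
import numpy as np

# Minimal sketch: a derivative written as an algebraic, weighted sum of
# nodal function values, df/dx at x_i ~ sum_j c_j * f(x_j).
# The weights below are an assumed central-difference stencil, used only
# to illustrate the "weighted sum of neighbouring values" idea.

dx = 0.1
x = np.arange(0.0, 1.0 + dx, dx)      # grid of nodes
f = np.sin(x)                          # function values sampled on the grid

i = 5                                  # an interior node
weights = {i - 1: -1.0 / (2 * dx),     # weight c_{i-1}
           i + 1:  1.0 / (2 * dx)}     # weight c_{i+1}

df_dx = sum(c * f[j] for j, c in weights.items())
print(df_dx, np.cos(x[i]))             # algebraic approximation vs. exact derivative
```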
The second thing, as you can see here, is that the value at a point depends on the values at some neighboring points. What I mean by this is: I know the values at the points given in black, and I am interested in knowing the value at, let us say, this point; the value at this point can be found using the values at the neighboring points. We are talking here only in terms of the spatial aspect, but if you also bring the time axis into play, you can compute the value at certain points in space and time using the values at certain neighboring points in space and time as well. The entire process of finite differencing goes through three simple steps. What are those three steps? That is what we are going to see. Let us take a simple one-dimensional example to illustrate the logic; we can then expand it to multiple dimensions.
Let us say we have a one-dimensional domain given by this line. What we essentially do first is divide this solution domain into a grid of nodes, say for example 3 nodal points. Of course, when you do numerical methods your domain will be much larger and your number of nodes will also be much larger; here we are illustrating it with three points only for the sake of simplicity.
The second step is to approximate the differentials, and by differentials I mean the differential equations, by certain difference equations. For example, in this case, f prime at x equal to x_0, which is nothing but the first derivative of f with respect to x at x_0, is approximated by the value of f at x_0 plus delta x minus f at x_0, divided by the step size in x; the step size in x is the value shown here. In other words, we can write this more simply as the difference quotient below.
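Written out, a reconstruction of the formula as spoken above:

$$
f'(x_0) \approx \frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}
$$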
So let us say this is the grid, and what we have here is the size of delta x. What we do next is use such a difference equation to approximate the value of the differential at a certain point. Likewise, as you can see, we have to have certain boundary conditions and initial conditions: BCs are the boundary conditions and ICs are the initial conditions.
When we say boundary conditions, we are interested in the x coordinates of the boundary. Since it is a one-dimensional problem, what we are talking about is, let us say, x equal to 0 here and x equal to 1 there. So we are setting the values of u at x equal to 0 and at x equal to 1 to 0 for all times. Similarly, we are talking about certain initial conditions: if it is a static problem we are not interested in the time variable, but in the case of a time-varying problem we set the initial conditions at time equal to 0 to a certain value. So this value is at t equal to 0.
As you can see, the number of initial conditions also changes depending on the order of the problem itself; that we will see in the next slides. What you need to know is that when we have a problem, the solution process for the finite difference method is as follows: in step 1 you divide the domain into certain nodal points; in step 2 you approximate the differentials using certain difference equations; and in step 3 you use the given boundary conditions and initial conditions so that you can compute the value of the problem space, or the solution space, serially in time.
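As an illustration of these three steps, here is a minimal sketch in Python; the model problem (a 1D diffusion-type equation), the grid sizes, and the parameter values are assumptions made purely for the example and are not specified in the lecture.

```python
import numpy as np

# Minimal sketch of the three-step finite difference workflow, applied to an
# assumed model problem u_t = alpha * u_xx on 0 <= x <= 1 (illustration only).

# Step 1: divide the solution domain into a grid of nodes.
nx, nt = 21, 200
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
alpha = 1.0
dt = 0.4 * dx**2 / alpha              # small time step (stability is discussed later)

# Step 3 (conditions): initial condition at t = 0 and boundary values u(0,t) = u(1,t) = 0.
u = np.sin(np.pi * x)                 # IC
u[0] = u[-1] = 0.0                    # BCs

# Step 2 + marching: approximate the derivatives by differences of neighbouring
# nodal values and update the interior nodes serially in time.
for n in range(nt):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_new[0] = u_new[-1] = 0.0        # re-impose the boundary conditions
    u = u_new

print(u.max())                        # peak value after nt time steps
```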
Let us look now at the fundamental forms of finite differencing. Let us say we have a problem where the solution is f of x, and the x coordinate is discretized with a step size of delta x: this point is x_0, the step before it is x_0 minus delta x, and the step forward is x_0 plus delta x.

Let us say we are interested in knowing the value of certain differentials of this f of x, for instance the value of the first differential of f of x at the point x_0. We can do this in different ways.
The first way is forward differencing. What we mean by forward differencing is that the value of f prime, that is the first differential, at x equal to x_0 depends only on the value one step in the forward direction from x_0 and on x_0 itself. It does not really matter what the value is on the backward side. The derivative is given by this equation, where we take the value at x_0 plus delta x minus the value at x_0, divided by the step size itself. This is a very simple forward differencing scheme.
Likewise, we can do the same thing with a backward differencing scheme. Here we say that the first differential of f at x_0 with respect to x depends only on the value at x_0 minus delta x and on x_0 itself; it does not matter what the value is on the forward side.
As you can see, with both these methods, whether you are doing forward differencing or backward differencing, there is a kind of bias: whatever the value of the differential at a point is going to be, it depends only on what lies in the forward direction or only on what lies in the backward direction. A safer bet is to take the values at both x_0 minus delta x and x_0 plus delta x, and that is exactly what we do in the central differencing scheme, as you can see here. Instead of doing the differencing by focusing only on x_0 plus delta x or only on x_0 minus delta x, let us take both of them. We are saying that the value of the first differential at x_0 depends on the values at both x_0 minus delta x and x_0 plus delta x, and of course the denominator here is twice delta x: delta x on one side plus delta x on the other. This is the central differencing method.
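Written out, a reconstruction of the central difference as described above:

$$
f'(x_0) \approx \frac{f(x_0 + \Delta x) - f(x_0 - \Delta x)}{2\,\Delta x}
$$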
So let us summarize all of them in one slide: we have the forward differencing scheme, which uses only the forward value; the backward differencing scheme, which uses only the backward value; and the central differencing scheme, which takes both, over twice delta x.
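As a small numerical illustration, not part of the lecture, the three schemes can be compared side by side; the test function f(x) = sin(x), the test point, and the step size are arbitrary assumptions.

```python
import numpy as np

# Compare forward, backward, and central differencing for f(x) = sin(x),
# whose exact derivative is cos(x). Test point and step size are arbitrary.
f = np.sin
x0, dx = 1.0, 0.1

forward  = (f(x0 + dx) - f(x0)) / dx
backward = (f(x0) - f(x0 - dx)) / dx
central  = (f(x0 + dx) - f(x0 - dx)) / (2 * dx)
exact    = np.cos(x0)

for name, val in [("forward", forward), ("backward", backward), ("central", central)]:
    print(f"{name:8s}: {val:.6f}  error = {abs(val - exact):.2e}")
# The central difference error is noticeably smaller, hinting at its higher
# order of accuracy, which is discussed in later lectures.
```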
With that being said, we can stop here and summarize what we have seen. We have seen the motivation for going into the finite differencing method, why we approach the finite difference scheme as compared to analytical methods, and we have also given the historical background of the method itself. And we have introduced some very basic notions of differencing. Of course, we have to develop them further if we want to model any meaningful problems for applications in electromagnetics. With that being said, we will come back again and focus on further techniques in finite differencing. Thank you.