This video is taken from the FDTD Learning Track on Ansys Innovation Courses.
As mentioned in the introduction, the finite-difference time-domain (FDTD) method is used to solve
Maxwell's equations in the time domain.
The equations are solved numerically on a discrete grid in both space and time; this
means that the electric and magnetic fields, E and H respectively, are discrete in both space and time.
The grid step in space is generally called delta x (or delta y or delta z) and in time
it is called delta t.
For convenience of notation, we often drop the delta x and delta t, and simply refer
to the spatial grid as i (or j or k) and the time step as n.
Furthermore, we often use a subscript and superscript notation for the spatial position
and time step respectively.
The E fields are solved at time n+1/2 while the H fields are solved at time n.
The derivatives in both time and space are handled with finite differences and are second
order accurate when the grid is uniform.
The electric and magnetic field components are distributed in space over a unit cell
called the Yee cell.
This staggering of the fields in space is ideal for calculating spatial derivatives
of the curl terms at the correct positions in space.
At the beginning of the simulation the E and H fields are typically 0.
From there, we update E at time n+1/2, which is a function of E at the previous time step
plus a term proportional to the curl of H at time step n.
Once we have the E field updated to time n+1/2, we can proceed to update H at time n+1, which
is a function of H at the previous timestep and the curl of E taken at timestep n+1/2.
In this way, we can leapfrog the update of E, then H, then E and so on until we choose
to stop the simulation.
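The leapfrog cycle described above can be sketched in a few lines of Python. This is an illustrative 1D vacuum example in normalized units, not Lumerical's implementation; the grid size, Courant number and Gaussian source are all assumptions made for the sketch.

```python
import numpy as np

# Minimal 1D FDTD leapfrog sketch in normalized units (c = 1), with an
# illustrative Courant number S = c*dt/dx = 0.5. E and H are staggered by
# half a cell in space and half a step in time, as described above.
nx, nt = 200, 400           # number of spatial cells and time steps (assumed)
S = 0.5                     # Courant number c*dt/dx (assumed, stable in 1D)
Ez = np.zeros(nx)           # E sampled at integer grid points
Hy = np.zeros(nx - 1)       # H staggered half a cell from E

for n in range(nt):
    # update H from the curl of E taken half a time step earlier
    Hy += S * (Ez[1:] - Ez[:-1])
    # update E from the curl of H taken half a time step earlier
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])
    # soft source: inject a Gaussian pulse at the grid center (assumed)
    Ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)
```

Each pass through the loop advances E, then H, by one full time step, so the two fields are never evaluated at the same instant.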
It is important to understand that we never calculate E and H at the same point in
time; they are offset by one half time step.
In fact, if you record the E field as a function of time with a monitor and plot it, the monitor
will interpolate the original FDTD E fields to the same time step as H so you may never
notice this offset.
Despite this issue, this leapfrog approach has the advantage of allowing us to obtain
second order accuracy in time, which means that the error between the electromagnetic
fields calculated by FDTD and the correct solution scales with the time step squared.
This means, for example, that if we reduce the time step by a factor of 2, our error
diminishes by a factor of 4.
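This factor-of-4 behavior is easy to check numerically for the central differences that FDTD is built on; the test function and step sizes below are illustrative choices.

```python
import numpy as np

# Second-order accuracy in miniature: the error of a central difference
# approximation to a derivative drops by roughly 4x when the step halves.
f, x0 = np.sin, 1.0
exact = np.cos(x0)                 # exact derivative of sin at x0

def err(h):
    return abs((f(x0 + h) - f(x0 - h)) / (2 * h) - exact)

ratio = err(1e-3) / err(5e-4)      # expect a value close to 4
```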
The simulation geometry is discretized into Yee cells which, as mentioned previously,
are the fundamental unit cell of the FDTD method.
As mentioned before, the E and H field components are all located at different positions within
the Yee cell.
This allows us to calculate spatial derivatives by finite differences at the optimal spatial
locations, and gives us second order accuracy in space on a uniform mesh.
Still, it is important to understand that the E and H components are never known at
the same spatial location, which can have consequences for many types of results we
may wish to calculate, such as the energy density.
By default monitors that record these fields will automatically interpolate all field components
to the corner of the Yee cell, so you can visualize and analyze your results at the
same spatial location.
However, this interpolation can be disabled and this is useful for some advanced calculations
such as determining the optical absorption in a metal near a metal/dielectric interface.
In addition to E and H, we must also discretize the electric permittivity, epsilon.
In most regions, this is very straightforward because we can assign a permittivity of either
material A or material B. However, it gets much more complicated near interfaces because
the interface can pass through a Yee cell at any position and orientation and, furthermore,
each electric field component is in a different spatial location.
As a result, we need a different value for the x, y and z components of the permittivity,
even if all materials are isotropic.
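The following sketch shows why each field component can see a different permittivity: Ex, Ey and Ez sample the structure at different points of the Yee cell. The cell size, interface position and material values are illustrative assumptions, and this is the simple staircased assignment, not the conformal method discussed below.

```python
# Staircased permittivity assignment near a planar interface (sketch).
# Because Ex and Ey sit at different points of the Yee cell, they can be
# assigned different permittivities even though both materials are isotropic.
dx = 40e-9                        # 40 nm grid step (assumed)
x_interface = 50e-9               # interface position, normal to x (assumed)
eps_a, eps_b = 1.0, 12.0          # e.g. air and silicon (assumed values)

def eps_at(x):
    return eps_a if x < x_interface else eps_b

i = 1                             # cell index
x0 = i * dx                       # cell corner at 40 nm
eps_x = eps_at(x0 + dx / 2)       # Ex sits half a cell along x -> 60 nm -> eps_b
eps_y = eps_at(x0)                # Ey sits at the corner in x  -> 40 nm -> eps_a
```

Here eps_x and eps_y differ for the same Yee cell, which is exactly why the discretized permittivity needs separate x, y and z components.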
Once we have discretized the permittivity in this manner, there are still challenges.
First, the position of the interfaces is not well defined within the Yee cell.
For example, if we have a spatial grid step of 40 nm and we move an interface
by as much as 20 nm, we may not see any difference in the FDTD results because the permittivity discretization
could be identical.
Second, we do not treat the normal E field components, which are discontinuous, any differently
than the tangential ones which are continuous.
Third, we can experience staircasing effects when interfaces are at an angle with respect
to the Cartesian axes, which can lead to problems such as unphysical hot spots in plasmonic
simulations.
The conformal mesh technology is a method to deal with these issues by modifying the
standard FDTD update near interfaces to use an integral solution to Maxwell's equations.
This is equivalent to introducing an effective permittivity that is anisotropic and can provide
a much more accurate solution.
We will discuss the conformal mesh technology in more detail later.
FDTD simulations can be run in 3D or in 2D.
It is important to remember that for 2D simulations, the structure is infinite in the z dimension.
In some cases, 2D simulations are approximations that can be run quickly, but in other cases
they can be a fully accurate solution to the problem.
For example, simulating line gratings, like the one shown here on the right, illuminated
by plane waves should be done in 2D.
Because 2D FDTD simulations make the assumption that the structure is infinite in the z dimension,
we know that the permittivity and fields are the same for all values of z.
This allows us to separate Maxwell's equations into two independent polarization states,
often called transverse electric, or TE, with fields Ex, Ey and Hz and transverse magnetic,
or TM, with fields Ez, Hx and Hy.
In FDTD Solutions we try to avoid the terms TE and TM because they can lead to a great
deal of confusion.
Instead, it is best to look at the blue arrows of sources in the FDTD design environment
to see the direction of E field polarization.
In certain cases, such as anisotropic materials with off-diagonal elements, this separation
of polarizations is no longer valid.
In general, you simply set your desired source polarization and, in 2D, FDTD Solutions will
run a TE, TM or combined TE and TM simulation as required.
It is important to think about how the memory and simulation time scale with the grid size.
If we assume that the spatial grid size is uniform and that delta x is the same as delta
y and delta z, then the memory requirements scale like one over delta x cubed in
3D and one over delta x squared in 2D.
This is obviously expected but what is surprising is that the simulation time increases like
one over delta x to the fourth power in 3D, and to the third power in 2D.
The reason for this, which we will discuss in more detail later, is that you cannot reduce
the spatial grid size without also reducing the size of the time step, delta t.
This means that there is a big penalty to pay for using a smaller grid step.
For example, if you reduce your grid size by a factor of 2, your simulation time will
increase by a factor of 16.
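The scaling rules above can be written as a tiny helper; the function name is just for illustration.

```python
# Cost scaling for a uniform 3D FDTD grid when the step dx is reduced by
# some refinement factor: memory ~ 1/dx**3 (number of Yee cells) and
# time ~ 1/dx**4 (cells times the extra time steps forced by a smaller dt).
def scaling_3d(refinement):
    memory_factor = refinement ** 3
    time_factor = refinement ** 4
    return memory_factor, time_factor

mem, runtime = scaling_3d(2)   # halving dx: mem == 8, runtime == 16
```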
Also, we often discuss the grid size not in absolute terms but relative to the wavelength.
We call the ratio of the wavelength, lambda, to the grid size, delta x, the number of grid
points per wavelength, or sometimes just the points per wavelength.
This is a key factor that determines the accuracy of the FDTD simulation.
As a coarse rule of thumb, you can get initial FDTD results with 6 points per wavelength,
and many results such as transmission and reflection will be within 10 to 20% of the
correct value.
By 10 points per wavelength, many results will be within 1 to 2% of the correct result,
and it is rare to require more than 20 points per wavelength.
I should note that this is a rule of thumb only and sometimes much smaller meshes are
required to resolve geometric features or to correctly simulate plasmonic effects where
high light confinement can occur.
It is also important to note that the points per wavelength should be defined with respect
to the wavelength in the medium and not the free space wavelength, therefore, for similar
accuracy a smaller grid size should be used in materials with higher refractive index.
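Choosing the grid step from a points-per-wavelength target, using the wavelength in the medium, can be sketched as follows; the function name and the example values are illustrative assumptions.

```python
# Grid step from a points-per-wavelength target, measured against the
# wavelength *in the medium* (free-space wavelength / refractive index).
def grid_step(wavelength_vacuum, refractive_index, points_per_wavelength):
    return wavelength_vacuum / (refractive_index * points_per_wavelength)

dx_air = grid_step(600e-9, 1.0, 10)   # 60 nm in air
dx_si  = grid_step(600e-9, 3.5, 10)   # roughly 17 nm in silicon
```

The same target accuracy therefore demands a much finer mesh in high-index materials.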
Lumerical's FDTD Solutions provides a simple mesh accuracy setting that targets a
minimum points per wavelength in all regions of the simulation and automatically adapts
to the refractive index of the different materials.
The mesh accuracy of 1 through 8 corresponds to points per wavelength targets of 6, 10,
14, 18 and so on up to 34.
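The mapping just described (6, 10, 14, ... up to 34 in steps of 4) can be written as a one-line formula; this is inferred from the sequence stated above, and the function name is illustrative.

```python
# Target points per wavelength for mesh accuracy settings 1 through 8,
# assuming the arithmetic sequence 6, 10, 14, ..., 34 stated above.
def target_ppw(mesh_accuracy):
    return 6 + 4 * (mesh_accuracy - 1)

targets = [target_ppw(m) for m in range(1, 9)]
# targets == [6, 10, 14, 18, 22, 26, 30, 34]
```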
The default value is 2 and this is appropriate for most simulations.
Initially, you may even want to use a setting of 1 until everything else is set up properly.
It is rarely useful to use values larger than 3 or 4, as there is typically something
else that limits the accuracy of the simulation at that point.
For fine geometric structures, or situations such as plasmonic structures where you know
the light confinement will be high, the mesh can be controlled locally using mesh override
regions.
In the two figures on the lower right hand side, you can see some results of convergence
testing for a 3D CMOS image sensor application.
The first figure shows the fraction of source light that is incident on the silicon surface,
and the fraction incident under the green pixel, as a function of mesh accuracy.
You can see that good results can be obtained with a mesh accuracy setting of 1, that by
2 you have almost achieved a converged result and that there is no point in going beyond
a setting of 4.
In the second figure, you can see the cost in time and memory of the higher mesh accuracy
setting, normalized to a mesh accuracy setting of 1.
You can see that it takes almost 600 times longer to run with a mesh accuracy setting
of 8 compared to 1.
In this application, where a mesh accuracy setting of 1 can run on a single workstation
in a minute or less, this can be the difference between waiting 1 minute and waiting 10 hours
for your simulation to complete, yet the results are almost identical!
Clearly, you want to be working where each simulation takes only 1 minute, which makes
it easy to perform complex parameter sweeps and optimization.