This article provides details on running your simulations from the Linux command line (terminal).
Using the design environment (CAD/GUI)
- FDTD
fdtd-solutions [options]
- MODE
mode-solutions [options]
- CHARGE, HEAT, DGTD and FEEM
device [options]
- INTERCONNECT
interconnect [options]
Options
filename
- optional, Opens the specified simulation or project file.
-v
- optional, Outputs the product version number.
scriptFile.lsf
- optional, Opens the specified script file.
-safe-mode
- optional, Turns on safe mode.
-trust-script
- optional, Turns off safe mode.
-run <scriptfile>
/opt/lumerical/[[verpath]]/bin/fdtd-solutions -run <scriptfile> <simulationfile>
- optional, Runs the specified script file.
- If a simulation file is required, it is added after the script file.
-nw
-hide
- optional, -nw for FDTD only; -hide for the other solvers.
- Prevents the CAD window from appearing on the desktop.
- Note: see the notes on the -nw and -hide command options for additional details.
-use-solve
- optional, Runs the simulation in non-interactive (engine) mode.
- See the related documentation page for details.
-logall
- optional, Generates a log file for each running process of the simulation job.
-exit
- optional, Exits the application after running the script file.
-o
- optional, Changes the location where log files are saved.
- All log files will be saved to the relative or absolute directory passed to -o.
- If the path ends with .log, the last segment is treated as a file name.
- Useful when running INTERCONNECT with the -logall option, as shown in the example below.
Examples
Opening FDTD with a specific simulation project
$ /opt/lumerical/[[verpath]]/bin/fdtd-solutions simulationfile.fsp
Running a script with a simulation file while 'hiding' the CAD window and disabling safe mode.
$ /opt/lumerical/[[verpath]]/bin/fdtd-solutions -nw -trust-script -run scriptfile.lsf simulationfile.fsp
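Running a script on an INTERCONNECT project while hiding the CAD window and saving a log file for each process to a chosen directory. This is a sketch combining the -hide, -logall and -o options described above; the file names, project extension and log directory are placeholders.
$ /opt/lumerical/[[verpath]]/bin/interconnect -hide -logall -o "$HOME/logs/" -run scriptfile.lsf simulationfile.icp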
Run simulations without MPI
- FDTD
fdtd-engine-mpich2nem [options]
- FDE
fd-engine [options]
- EME
eme-engine-mpich2nem [options]
- varFDTD
varfdtd-engine-mpich2nem [options]
- CHARGE
device-engine [options]
- HEAT
thermal-engine [options]
- DGTD
dgtd-engine [options]
- FEEM
feem-engine [options]
- MQW
mqw-engine [options]
Options
filename
- required, The name of the simulation or project file to run.
-t
- optional, Controls the number of threads used. If omitted or left blank, 1 thread/processor is used.
-v
- optional, Outputs the product version number.
-fullinfo
- optional, Prints more detailed time-benchmarking information to the log file, based on wall-time and CPU-time measurements.
-log-stdout
- optional, Redirects the log data to standard output instead of saving it to a file.
- This option is ignored when the simulation runs in graphical mode.
-mesh-only
- optional, Mesh the geometry without running the simulation.
-inmaterialfile <file>
- optional, Loads simulation mesh data from <file>.
-outmaterialfile <file>
- optional, Save simulation mesh data to <file> for use on another project.
-logall
- optional, Create a log file for each simulation or sweep.
- Log files are named filename_p0.log, filename_p1.log, filename_p2.log, and so on.
- By default, only filename_p0.log is created.
-mr
- optional, Prints a simple memory usage report for the given simulation file to standard output. The output can be redirected or saved as a text file (see "Pipe standard output to a text file" below).
-o
- optional, Changes the location where log files are saved.
- All log files will be saved to the relative or absolute directory passed to -o.
- If the path ends with .log, the last segment is treated as a file name.
-resume
- optional, available for FDTD simulations only.
- Resumes the simulation from the last checkpoint.
- If no checkpoint is found, the simulation job starts from the beginning.
- Enable the simulation checkpoint feature in the "Advanced Options" of the FDTD solver object.
Examples
Running on the local computer with the -resume flag when the checkpoint feature is enabled in FDTD.
/opt/lumerical/[[verpath]]/bin/fdtd-engine-mpich2nem -t 8 -resume /path/simulationfile.fsp
Run with 4 threads and save the log files to a different path.
/opt/lumerical/[[verpath]]/bin/fdtd-engine-mpich2nem -t 4 $HOME/temp/filename.fsp -o "~/Documents/logfiles/"
Run an EME simulation using 2 threads and create a log file for each process.
/opt/lumerical/[[verpath]]/bin/eme-engine-mpich2nem -t 2 -logall $HOME/temp/example.lms
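Mesh the geometry only, without running the simulation. A hedged example of the -mesh-only option described above; the file path is a placeholder.
/opt/lumerical/[[verpath]]/bin/fdtd-engine-mpich2nem -t 4 -mesh-only $HOME/temp/filename.fsp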
Running simulations via MPI
Using MPI to run the simulation job with the solver is done for the following use cases:
- Running several simulations at the same time on different machines or nodes. (Concurrent computing)
- Using several machines to run a single simulation, taking advantage of their combined memory (RAM) as required by the simulation. (Distributed computing)
- Launching and running a simulation from a local machine on a remote machine or node.
MPI is a complex application with many configuration options and versions. On Linux, Lumerical supports a wide variety of MPI versions, with MPICH2 Nemesis being the default.
General MPI Syntax
mpiexec [mpi_options] solver [solver_options]
MPI Options
-n <#>
- FDTD, varFDTD and EME; specifies the number <#> of MPI processes.
-hosts <hostlist>
- FDTD, varFDTD and EME; used to send the job across multiple computers.
-hosts <hostfile>
- Overrides the '-n' option.
Where:
- hostlist: comma-separated list of host names or IP addresses, each followed by its corresponding number of processes.
- hostfile: text file with 1 hostname/IP per line, with the corresponding number of processes separated by a colon ':'. See the illustration after this list.
-nice -n19
- all solvers, specifies the process priority for load balancing.
For additional information on MPI options, consult the MPI product documentation:
"/opt/lumerical/mpich2/bin/mpiexec" -help
Stopping your simulation (Quit and Save)
To stop your simulation job (similar to 'Quit and save' in the CAD):
- When in the active terminal window where your simulation job is running:
# use the keyboard keys
CTRL + C
- If the engine process is running in the background, find the process ID <PID> and kill the process:
# using pgrep to show the list of PID for "fdtd-engine"
pgrep fdtd-engine
# from the list, kill one of the PIDs
kill <PID>
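If several engine processes need to be stopped at once, the standard Linux pkill utility can be used instead (a hedged alternative to the steps above; it terminates every matching process, so use it with care):
# terminate all running fdtd-engine processes
pkill fdtd-engine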
Supported MPI Variants
It is necessary to use the solver executable that matches the version of MPI used to run it. The following table lists the available options.
MPICH2 (nemesis)
Default MPICH2 libraries included with Lumerical.
- fdtd-engine-mpich2nem
- varfdtd-engine-mpich2nem
- fd-engine
- eme-engine-mpich2nem
- device-engine-mpich2nem
- thermal-engine-mpich2nem
- dgtd-engine-mpich2nem
- feem-engine-mpich2nem
- mqw-engine-mpich2nem
Intel MPI
Any parallel system with Intel MPI libraries installed.
- fdtd-engine-impi-lcl
- varfdtd-engine-impi-lcl
OpenMPI
Clusters using TCP, Myrinet or OpenFabrics hardware with OpenMPI library installed.
- fdtd-engine-ompi-lcl
- varfdtd-engine-ompi-lcl
MPICH / MPICH2
Any parallel system with a variant of the MPICH2 libraries installed. (not supplied by our installer)
- fdtd-engine-mpich2-lcl
- varfdtd-engine-mpich2-lcl
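As a quick check of which MPI variant and version is installed on your system (standard MPI behaviour, not a Lumerical-specific option), most MPI distributions report their version with:
mpiexec --version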
Using Intel MPI
Starting with the 2021 R2.1 release, you can run your simulation using Intel MPI 2019 as the "Custom" job launching preset in the advanced resource configuration on supported RHEL/CentOS. This applies to FDTD and varFDTD using the 'fdtd-engine-impi-lcl' and 'varfdtd-engine-impi-lcl' executables.
- Install Intel MPI using yum on supported RHEL/CentOS:
sudo yum-config-manager --add-repo https://yum.repos.intel.com/mpi/setup/intel-mpi.repo
sudo rpm --import https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo yum -y install intel-mpi-rt-2019.9-304.x86_64
- Copy "libfabric.so.1" and "libmpi.so.12" to the Lumerical "lib" installation folder:
sudo cp /opt/intel/impi/2019.9.304/intel64/libfabric/lib/libfabric.so.1 /opt/lumerical/[[verpath]]/lib/
sudo cp /opt/intel/impi/2019.9.304/intel64/lib/release/libmpi.so.12 /opt/lumerical/[[verpath]]/lib/
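As a quick sanity check (an optional step, not part of the official instructions), confirm that both libraries are now present in the Lumerical "lib" folder:
ls -l /opt/lumerical/[[verpath]]/lib/libfabric.so.1 /opt/lumerical/[[verpath]]/lib/libmpi.so.12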
Examples
Shown using the default install path on supported Linux systems.
- Run a simulation on the local computer with 4 processes (-n 4) and the -resume flag when 'checkpoint' is enabled in FDTD using the bundled MPICH2 Nemesis.
$ /opt/lumerical/[[verpath]]/mpich2/nemesis/bin/mpiexec -n 4 /opt/lumerical/[[verpath]]/bin/fdtd-engine-mpich2nem -t 1 -resume /path/to/simulationfile.fsp
- Run FDTD using Intel MPI with 4 processes.
$ /opt/intel/impi/2019.9.304/intel64/bin/mpirun -n 4 /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -t 1 simulation.fsp
- Using OpenMPI and a different variant of MPICH to run a varFDTD simulation.
$ /<install_path_OpenMPI>/mpiexec -n 12 /opt/lumerical/[[verpath]]/bin/varfdtd-engine-ompi-lcl -t 1 simulation.fsp
$ /<install_path_MPI>/mpiexec -n 4 /opt/lumerical/[[verpath]]/bin/varfdtd-engine-mpich2-lcl -t 1 simulation.fsp
- Run a simulation distributed between 3 computers with Intel MPI, with a different number of processes on each machine (4, 8 and 16), using 1 thread and creating a log file for each process. Note that the total number of processes is indicated with "-n #" (-n 28).
$ /opt/intel/impi/2019.9.304/intel64/bin/mpirun -n 28 -hosts node01:4,node02:8,node03:16 /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -logall -t 1 simulationfile.fsp
Note: Use the IP address of each node instead of the hostname if MPI is unable to resolve the host names.
Pipe standard output to a text file
- The standard output does not appear in the terminal window. To see the report, redirect the output to a text file using the redirection operator ">".
- For example, to output the engine version number or memory usage report to a file, use the following syntax.
/opt/lumerical/[[verpath]]/bin/device-engine -v > $HOME/temp/version.txt
/opt/lumerical/[[verpath]]/bin/dgtd-engine -mr $HOME/temp/example.ldev > $HOME/temp/example_mem_usage.txt
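To save the report to a file and print it in the terminal at the same time, the standard tee utility can be piped in instead of using ">" (a hedged variation of the example above; the paths are placeholders).
/opt/lumerical/[[verpath]]/bin/dgtd-engine -mr $HOME/temp/example.ldev | tee $HOME/temp/example_mem_usage.txt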
CPi - MPI test program
This test application allows users to ensure that MPI is properly configured, without the additional complication of running a Lumerical solver.
For example, this avoids any potential problems with product licensing, since neither MPI nor CPi is a licensed feature.
MPICH2 Nemesis
Run CPi using 4 processes on the local computer.
/opt/lumerical/[[verpath]]/mpich2/nemesis/bin/mpiexec -n 4 /opt/lumerical/[[verpath]]/mpitest/cpi-mpich2nem
The output of the CPI test should look something like this:
Process 2 on localhost
Process 1 on localhost
Process 3 on localhost
Process 0 on localhost
pi is approximately 3.1416009869231249, Error is 0.0000083333333318
wall clock time = 0.000049
Run CPi distributed between two computers. The -hosts option is used to specify the computer names. The syntax is:
-hosts hostName1:processesOnHost1,hostName2:processesOnHost2
Example:
/opt/lumerical/[[verpath]]/mpich2/nemesis/bin/mpiexec -hosts <node1_IP>:4,<node2_IP>:4 /opt/lumerical/[[verpath]]/mpitest/cpi-mpich2nem
Note: The computer's IP address is used instead of the hostname, since MPICH2 may not always be able to resolve host names.
The output of the CPI test should look something like this:
Process 0 on node1
Process 5 on node2
Process 7 on node2
Process 2 on node1
Process 1 on node1
Process 3 on node1
Process 4 on node2
Process 6 on node2
pi is approximately 3.1416009869231249, Error is 0.0000083333333318
wall clock time = 0.000000