This article provides details on running simulations from the Windows command prompt.
Using the design environment (CAD/GUI)
- FDTD
fdtd-solutions [options]
- MODE
mode-solutions [options]
- CHARGE, HEAT, DGTD and FEEM
device [options]
- INTERCONNECT
interconnect [options]
Options
filename
- optional, Opens the specified simulation or project file.
-v
- optional, Outputs the product version number.
scriptFile.lsf
- optional, Opens the specified script file.
-safe-mode
- optional, Turns on safe mode.
-trust-script
- optional, Turns off safe mode.
-run <scriptfile>
..\fdtd-solutions.exe -run <scriptfile> <simulationfile>
- optional, Runs the specified script file.
- If a simulation file is required, add it after the script file.
-nw
-hide
- optional, -nw for FDTD only; -hide for the other solvers.
- Prevents the CAD window from appearing on the desktop.
-use-solve
-run script.lsf -use-solve
- optional, Runs scripts in non-interactive engine mode.
- Added after the -run and script file options.
- The script runs based on the configuration in the CAD.
- See this page for details.
-logall
- optional, Generates a log file for each simulation or sweep.
-exit
- optional, Exits the application after running the script file.
-o
- optional, Changes the location where log files are saved.
- All log files are saved to the relative or absolute directory passed to -o.
- If the path ends with .log, the last segment is treated as a file name. This is useful when running INTERCONNECT with the -logall option.
Examples
Running a script while 'hiding' the CAD window and saving the log file in a different location.
"C:\Program Files\Lumerical\[[verpath]]\bin\interconnect.exe" -hide -run scriptfile.lsf -logall -o "C:\temp\logfiles\"
Running a script with a simulation file while 'hiding' the CAD window and disabling safe mode.
"C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-solutions" -nw -trust-script -run scriptfile.lsf simulationfile.fsp
Opening a simulation file.
"C:\Program Files\Lumerical\[[verpath]]\bin\mode-solutions.exe" simulationfile.lms
"C:\Program Files\Lumerical\[[verpath]]\bin\device.exe" simulationfile.ldev
Run simulations without using MPI
- FDTD
fdtd-engine [options]
- FDE
fde-engine [options]
- EME
eme-engine [options]
- varFDTD
varfdtd-engine [options]
- CHARGE
device-engine [options]
- HEAT
thermal-engine [options]
- DGTD
dgtd-engine [options]
- FEEM
feem-engine [options]
- MQW
mqw-engine [options]
Options
filename
- required, The name of the simulation or project file to run.
-t
- optional, Controls the number of threads used. If omitted, all available threads are used.
-fullinfo
- optional, Prints more detailed time benchmarking information to the log file, based on wall-time and CPU-time measurements.
-log-stdout
- optional, Redirects the log file data to the standard output, rather than saving it to file.
- This option will be ignored when the simulation runs in graphical mode.
-mesh-only
- optional, Mesh the geometry without running the simulation.
-inmaterialfile <file>
- optional, Loads simulation mesh data from <file>.
-outmaterialfile <file>
- optional, Saves simulation mesh data to <file> for use in another project.
-logall
- optional, Create a log file for each simulation or sweep.
- Log files are named filename_p0.log, filename_p1.log, filename_p2.log, and so on.
- By default, only filename_p0.log is created.
-mr
- optional, Prints a simple memory usage report for the given simulation file to the standard output. The output can be redirected or saved as a text file.
Note: On Windows, execute this using Git Bash.
-o
- optional, Change the location that log files are saved to.
- All log files will be saved to the relative or absolute directory passed to -o.
- If the path ends with .log, the last segment is treated as a file name.
-resume
- optional, available for FDTD simulations only.
- Resumes the simulation from the last check point.
- If no check point is found it will start the simulation job from the beginning.
- Enable the simulation checkpoint feature in the "Advanced Options" of the FDTD Solver object.
Examples
Run a simulation using 12 threads (cores).
"C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine-msmpi.exe" -t 12 "C:\temp\example.fsp"
Running with 4 threads (cores) and saving the log files into a specific path.
"C:\Program Files\Lumerical\[[verpath]]\bin\varfdtd-engine.exe" -t 4 "C:\temp\example.lms" -o "C:\temp\logfiles\"
Running on the local computer with the -resume flag when the checkpoint feature is enabled in FDTD.
"C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine.exe" -t 4 -resume "\path\simulationfile.fsp"
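The engine options above compose naturally in batch runs. Since the -mr note above already assumes Git Bash is available on Windows, a dry-run sketch in that shell might look like the following; the engine path and project folder are placeholders, not part of the product documentation.

```shell
#!/bin/bash
# Dry-run sketch: print (rather than execute) the command line for every
# project file, so the queue can be reviewed before committing compute time.
# ENGINE and the project folder are placeholders -- adjust to your setup.
ENGINE='/c/Program Files/Lumerical/[[verpath]]/bin/fdtd-engine.exe'
PROJECTS=/tmp/sweep
mkdir -p "$PROJECTS"
touch "$PROJECTS/a.fsp" "$PROJECTS/b.fsp"   # stand-in project files

for fsp in "$PROJECTS"/*.fsp; do
    # Remove the leading 'echo' to actually launch each job:
    # 4 threads per job, one log per run, logs collected via -o.
    echo "$ENGINE" -t 4 -logall -o "$PROJECTS/logs/" "$fsp"
done
```

Keeping the loop as a dry run first makes it easy to confirm thread counts and log paths before starting long simulations.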
Running simulations via the MPI
Using MPI to run the simulation job with the solver covers the following use cases:
- Run several simulations at the same time on different machines or nodes. (Concurrent computing)
- Use several machines to run a single simulation, taking advantage of their combined memory (RAM) as required by the simulation. (Distributed computing)
- Launch and simulate from a local machine to a different remote machine or node.
MPI is a complex application with many configuration options and versions. On Windows, Microsoft MPI and Intel MPI are the supported MPI frameworks.
General MPI Syntax
mpiexec [mpi_options] solver [solver_options]
MPI Options
-n <#>
- FDTD, varFDTD, and EME; specifies the number <#> of MPI processes.
-hosts <hostlist>
- FDTD, varFDTD, and EME; sends jobs across multiple computers.
-hosts <hostfile>
- Overrides the -n option.
Where:
- hostlist: comma-separated list of hosts or IPs with the corresponding number of processes.
- hostfile: text file with one hostname/IP per line, with the corresponding number of processes separated by a colon ':'.
-nice -n19
- All solvers; specifies the process priority for load balancing.
-affinity
- Microsoft MPI only, all solvers.
- Locks processes to specific cores.
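The hostfile format described above is easy to get wrong, so here is a small sketch; the hostnames node1/node2 and the temporary path are fictitious placeholders, and the final command is echoed rather than executed.

```shell
#!/bin/bash
# Write a hypothetical hostfile: one hostname (or IP) per line, with the
# number of MPI processes for that host after a ':' (node1/node2 are
# placeholders for real machines on your network).
cat > /tmp/hosts.txt <<'EOF'
node1:8
node2:8
EOF

# The file is then passed to mpiexec in place of an explicit -n count;
# echoed here rather than run, since the hosts above do not exist.
echo 'mpiexec -hosts hosts.txt fdtd-engine-msmpi -t 1 sim.fsp'
```

With 8 processes listed for each of the two hosts, this corresponds to a 16-process distributed job.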
For additional information on MPI options, consult the MPI product documentation:
"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -help
OR
"C:\Program Files (x86)\IntelSWToolsMPI\mpi\2018.4.274\intel64\bin\mpiexec.exe" -help
Supported MPI variants
It is necessary to use the version of the solver that is matched to the version of MPI being used to run the solver. See the list below for details.
Microsoft MPI
The following executables are included with the installation and are suitable for use with Microsoft MPI.
- fdtd-engine-msmpi.exe
- eme-engine-msmpi.exe
- varfdtd-engine-msmpi.exe
- device-engine.exe
- dgtd-engine.exe
- fd-engine.exe
- feem-engine.exe
- mqw-engine.exe
- rcwa-engine.exe
- thermal-engine.exe
Intel MPI
Intel MPI is included with the installation and is used to run simulations locally or on remote machines that have more than 64 cores or multiple CPU/NUMA nodes.
- fdtd-engine-impi.exe
- eme-engine-impi.exe
- varfdtd-engine-impi.exe
- device-engine.exe
- dgtd-engine.exe
- fd-engine.exe
- feem-engine.exe
- mqw-engine.exe
- rcwa-engine.exe
- thermal-engine.exe
Simple Multi-purpose Daemon (SMPD)
The SMPD service is part of the MPI (Message Passing Interface) library's process management system used for running simulations. SMPD is an application that runs in the background of each computer. Your local machine sends a request to the SMPD on the remote computer, asking it to start the simulation there.
Microsoft MPI
- The Microsoft MPI version of SMPD does not start automatically when the computer starts. This is why Microsoft MPI is not recommended for remote job launching.
- Log into each computer used to run the simulation and run SMPD in debug mode.
Open a command prompt and execute:
"C:\Program Files\Microsoft MPI\Bin\smpd.exe" -debug
Intel MPI
- This version of SMPD starts automatically when the computer starts.
- Register your user credentials from the local machine/localhost with Intel MPI.
Examples
Microsoft MPI
- 32 MPI processes with the -resume argument, with checkpointing enabled in FDTD.
"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 32 "C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine-msmpi.exe" -t 1 -resume "\path\simulationfile.fsp"
- 12 MPI processes
"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 12 "C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine-msmpi.exe" -t 1 "C:\temp\simulation.fsp"
- 8 MPI processes with job priority = 1
"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 8 -priority 1 "C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine-msmpi.exe" "C:\temp\example.fsp"
Intel MPI
- 4 MPI processes on the "localhost"
"C:\Program Files (x86)\IntelSWToolsMPI\mpi\2018.4.274\intel64\bin\mpiexec.exe" -n 4 "C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine-impi.exe" -t 1 "C:\temp\simulationfile.fsp"
- Distributed across 2 computers with 8 MPI processes each
"C:\Program Files (x86)\IntelSWToolsMPI\mpi\2018.4.274\intel64\bin\mpiexec.exe" -n 16 -hosts 2 node1 node2 "C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine-impi.exe" -t 1 "C:\temp\simulation.fsp"
Pipe standard output to a text file
- The standard output does not appear in the Command Prompt window. To see the report, redirect the output to a text file using the redirection operator ">".
- For example, to output the engine version number or memory usage report to a file, use the following syntax.
"C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine.exe" -v > "C:\temp\engine_version.txt"
"C:\Program Files\Lumerical\[[verpath]]\bin\fdtd-engine.exe" -mr "C:\temp\example.fsp" > "C:\temp\mem_reg.txt"
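The redirection itself can be tried without the solver: ">" creates or overwrites the target file, and ">>" appends to it. This is generic shell behavior rather than anything Lumerical-specific, so a stand-in command demonstrates it; the report text below is invented for illustration.

```shell
#!/bin/bash
# Stand-in command that writes a report to standard output, captured with
# '>' (create or overwrite) -- analogous to redirecting fdtd-engine -mr.
echo "memory usage report: 2.1 GB peak" > /tmp/mem_report.txt

# '>>' appends a second run's report instead of overwriting the first.
echo "memory usage report: 2.3 GB peak" >> /tmp/mem_report.txt

cat /tmp/mem_report.txt
```

Using ">>" is handy when collecting reports from several runs into a single file.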
CPi - MPI test program
- This test application allows users to ensure that MPI is properly configured, without the additional complication of running any Lumerical solver.
- For example, this avoids any potential problems with product licensing, since neither MPI nor CPi is a licensed feature.
Run CPi using 4 processes on the local computer:
"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 4 "C:\Program Files\Lumerical\FDTD\mpitest\cpi-msmpi.exe"
The output of the CPi test should look something like this:
Process 2 on localhost
Process 1 on localhost
Process 3 on localhost
Process 0 on localhost
pi is approximately 3.1416009869231249, Error is 0.0000083333333318
wall clock time = 0.000049
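As a sanity check, the reported Error is simply the absolute difference between the printed approximation and the true value of pi. awk (available in Git Bash, which the -mr option above already assumes) can confirm the two sample numbers are consistent:

```shell
#!/bin/bash
# Recompute |approximation - pi| from the sample CPi output above and
# compare it against the reported Error value.
awk 'BEGIN {
    approx = 3.1416009869231249
    err    = 0.0000083333333318
    pi     = 4 * atan2(1, 1)        # pi in double precision
    diff   = approx - pi
    if (diff < 0) diff = -diff
    printf "recomputed error: %.16f\n", diff
    delta = diff - err
    if (delta < 0) delta = -delta
    # Exit non-zero if the recomputed error disagrees with the report.
    if (delta > 1e-12) exit 1
}'
```

A matching result gives some confidence that the MPI processes really did cooperate on the computation rather than each reporting independently.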