These guidelines show how to change the multithreading settings for the Multiphysics (Finite Element IDE) solvers (CHARGE, HEAT, DGTD, FEEM, and MQW) and the Finite Difference IDE solvers (RCWA and FDE).
Unlike the FDTD, varFDTD, and EME solvers, the solvers listed above run as a single MPI process, so a simulation cannot be distributed across several machines.
The simulation runs in parallel on the local machine, using the number of threads set in the CAD/IDE (see changing the thread count below). By default, the solver uses all available cores on the local machine.
Multiple runs of the solver (called "jobs") can execute concurrently when using the parameter sweep utility, the optimization utility, or the addjob/runjobs script commands. The maximum number of jobs that run concurrently is set by the Capacity field of the Resources table. For full resource utilization, it is generally best to make the total number of jobs a multiple of the Capacity value.
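Queuing jobs from the script environment might look like the following sketch in Lumerical script. The file names are placeholders, and the exact addjob/runjobs arguments can vary between releases, so check the script command reference for your version.

```
# Queue three CHARGE simulation files; at most "Capacity" jobs run at once.
addjob("sweep_run1.ldev", "CHARGE");   # file names here are hypothetical
addjob("sweep_run2.ldev", "CHARGE");
addjob("sweep_run3.ldev", "CHARGE");
runjobs("CHARGE");                     # returns when all queued jobs finish
```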
- Open Resources.
- Select the resource and click "Edit".
- Uncheck "use processor binding when available" when Remote: Microsoft MPI or Remote: Intel MPI is selected as the Job launching preset.
- Otherwise, use Local Computer as the Job launching preset.
- Set Threads to a value less than or equal to the number of cores on your computer.
- Save and close the Resource configuration utility.
Changing the thread count in the FE IDE
The default multithreading setting for the Multiphysics solvers (FE IDE) is 'Let Solver Choose', which runs the simulation on all available cores of the local machine. If you do not want to use all cores, set the thread count to a value less than the total number of cores on the computer.
- Open your simulation and edit the simulation object/solver properties.
- Go to the Advanced tab, choose "set thread count" from the multithreading drop-down option, and specify the number of threads.
- Click OK to save and apply your changes.
Run from the command line
Run the simulation with the desired number of threads using the "-t #" argument, which overrides the multithreading settings saved in the simulation file.
Examples are shown using the current release's default installation path, without using MPI.
Windows: CHARGE simulation using 8 threads
"C:\Program Files\Lumerical\[[verpath]]\bin\device-engine.exe" -t 8 "simulationfile.ldev"
Linux: HEAT simulation using 4 threads
/opt/lumerical/[[verpath]]/bin/thermal-engine -t 4 "simulationfile.ldev"
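To avoid oversubscribing the machine, you can query the core count before launching. A minimal Linux shell sketch follows; the engine path mirrors the HEAT example above, the file name is a placeholder, and the launch line is left commented out since it requires a licensed installation.

```shell
#!/bin/sh
# Detect the number of cores available on this Linux machine.
CORES=$(nproc)
echo "Launching with $CORES threads"
# Pass the detected count via -t; path and file name as in the example above.
# /opt/lumerical/[[verpath]]/bin/thermal-engine -t "$CORES" "simulationfile.ldev"
```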