As of release 2024 R1, MPICH2 is no longer included in the Lumerical installation package on Linux. For best performance and distributed solve capability, it is recommended to install either Open MPI or Intel MPI.
This document outlines how to install these packages and configure Ansys Lumerical to use them.
Important
- Root or sudo access is required for installing packages from your system package manager.
- For distributed computations (multiple nodes), you will need to either install these packages on every node or ensure they are available on a shared file system mount.
- Installation instructions are for the latest released versions of the Linux distributions used in the examples.
- Use only one MPI implementation. Do not install both Open MPI and Intel MPI on the same machine; having both can cause conflicts.
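If you are not sure whether an MPI implementation is already present, a quick check such as the following can help (a sketch; package names vary by distribution):
which mpirun mpiexec
rpm -qa | grep -i -E 'openmpi|intel-mpi'    # RHEL/Rocky/Amazon Linux
dpkg -l | grep -i -E 'openmpi|intel-mpi'    # Ubuntu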
Intel MPI Installation
The following instructions walk you through:
- Installing required packages.
- Adding the Intel MPI repository to your package manager.
- Installing Intel MPI (the example uses the 2019 release of Intel MPI).
- Configuring your environment to run Lumerical.
RHEL/Rocky/Amazon Linux (yum package manager)
sudo yum install yum-utils
sudo yum-config-manager --add-repo https://yum.repos.intel.com/mpi/setup/intel-mpi.repo
sudo rpm --import https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo yum install intel-mpi-rt-2019.9-304.x86_64
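To confirm the runtime installed, you can query the package and check for the install directory (a sketch assuming the default /opt/intel install prefix):
rpm -q intel-mpi-rt-2019.9-304.x86_64
ls /opt/intel/impi/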
Ubuntu (apt package manager)
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | sudo tee /usr/share/keyrings/intel-mpi-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/intel-mpi-archive-keyring.gpg] https://apt.repos.intel.com/mpi all main" | sudo tee /etc/apt/sources.list.d/intel-mpi.list
sudo apt-get update
sudo apt-get install -y intel-mpi-rt-2019.9-304
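As on the yum-based distributions, you can verify the package and its install location (assuming the default /opt/intel prefix):
dpkg -s intel-mpi-rt-2019.9-304
ls /opt/intel/impi/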
Configure your environment
The easiest way to configure your environment for Intel MPI is to source the "mpivars.sh" script provided by the installation and export the UCX_TLS variable before running Lumerical simulations.
source /opt/intel/impi/2019.9.304/intel64/bin/mpivars.sh
export UCX_TLS=ud,sm,self
You can make these settings permanent by appending them to your "~/.bashrc",
echo "source /opt/intel/impi/2019.9.304/intel64/bin/mpivars.sh" >> ~/.bashrc
echo "export UCX_TLS=ud,sm,self" >> ~/.bashrc
Copy libraries
Copy the "libfabric.so.1" and "libmpi.so.12" libraries into the "lib" folder of the Lumerical installation. The default install path for Ansys Lumerical [[ver]] is shown below; change the path according to the Lumerical installation on your machine.
sudo cp /opt/intel/impi/2019.9.304/intel64/libfabric/lib/libfabric.so.1 /opt/lumerical/[[verpath]]/lib/
sudo cp /opt/intel/impi/2019.9.304/intel64/lib/release/libmpi.so.12 /opt/lumerical/[[verpath]]/lib/
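To check that the engine can resolve these libraries, you can inspect it with ldd; the libmpi and libfabric entries should resolve to a path rather than show "not found" (path is a sketch based on the default install location):
ldd /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl | grep -E 'libmpi|libfabric'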
Open MPI installation
Lumerical solvers support running with Open MPI versions 3 and 4. The following instructions walk you through:
- Installing Open MPI (the examples install versions 3 and 4 of Open MPI).
- Configuring your environment to run Lumerical.
RHEL/Rocky/Amazon Linux (yum package manager):
On RHEL 7 use the package name openmpi3, which will install version 3
sudo yum install openmpi3
On RHEL 8 and up use the package name openmpi, which will install version 4
sudo yum install openmpi
Ubuntu (apt package manager):
sudo apt-get install openmpi-bin
Configure your environment
When using Open MPI, the solver must be able to find the MPI runtime libraries. Assuming a default install location of "/usr/lib64/openmpi/lib", one way to configure this is to add that path to your LD_LIBRARY_PATH environment variable,
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib
Adding the "/usr/lib64/openmpi/bin" location to your PATH environment variable will let you call "mpiexec" without specifying the entire path.
export PATH=/usr/lib64/openmpi/bin:$PATH
The above commands can be added to your "~/.bashrc" so you do not have to run them in every new shell, as shown below. Detailed documentation can be found on the Open MPI website: https://www.open-mpi.org/doc/.
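For example (note the single quotes, so the variables are expanded when the shell starts rather than when the lines are written):
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib' >> ~/.bashrc
echo 'export PATH=/usr/lib64/openmpi/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
mpiexec --version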
Matching Your MPI to a Solver
You must use the solver executable that matches the MPI implementation used to launch it. The following table lists the engine executables for each supported MPI.

| Intel MPI | Open MPI |
| --- | --- |
| fdtd-engine-impi-lcl | fdtd-engine-ompi-lcl |
| varfdtd-engine-impi-lcl | varfdtd-engine-ompi-lcl |
| eme-engine-impi-lcl | eme-engine-ompi-lcl |
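To confirm which engine variants are present in your installation, you can list the bin folder (default path shown; adjust for your installation):
ls /opt/lumerical/[[verpath]]/bin/ | grep -E 'impi|ompi'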
Running simulations with MPI
- Configure your environment as shown above for the MPI you will be using.
CAD/Design Environment
- Open "Resources" from the Lumerical CAD/GUI.
- Select and "Edit" the resource.
- Set "Custom" as the 'Job launching preset' in the Resource advanced options.
- Enter the full paths to the MPI launcher and the Lumerical engine executable in their corresponding "executable" fields (example values are shown below).
- If the "executable" is not found, the text remains red. Once the correct path and binary are entered, the text turns black.
- Apply/Save your settings.
[Screenshots: example resource configurations for Open MPI and Intel MPI]
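As an example, with the default install paths the two "executable" fields might contain the following values (illustrative; adjust the MPI path and engine to match your setup):
Open MPI: /usr/lib64/openmpi/bin/mpiexec and /opt/lumerical/[[verpath]]/bin/fdtd-engine-ompi-lcl
Intel MPI: /opt/intel/impi/2019.9.304/intel64/bin/mpirun and /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl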
Terminal/Command prompt
The examples below use the default install paths.
- Run FDTD using Intel MPI with 4 processes.
$ /opt/intel/impi/2019.9.304/intel64/bin/mpirun -n 4 /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -t 1 simulation.fsp
- Run FDTD using Open MPI with 12 processes.
$ /usr/lib64/openmpi/bin/mpiexec -n 12 /opt/lumerical/[[verpath]]/bin/fdtd-engine-ompi-lcl -t 1 simulation.fsp
- Run a simulation distributed across 3 computers with Intel MPI, with a different number of processes on each machine (4, 8, and 16), using 1 thread per process and creating a log file for each process. Note that the total number of processes is given with "-n #" (-n 28).
$ /opt/intel/impi/2019.9.304/intel64/bin/mpirun -n 28 -hosts node01:4,node02:8,node03:16 /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -logall -t 1 simulationfile.fsp
Note: If the host names cannot be resolved, use the IP address of each node instead.
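Open MPI uses a different syntax for multi-node runs. A sketch using a hostfile (node names and slot counts are hypothetical):
$ printf 'node01 slots=4\nnode02 slots=8\nnode03 slots=16\n' > hosts.txt
$ /usr/lib64/openmpi/bin/mpiexec -n 28 --hostfile hosts.txt /opt/lumerical/[[verpath]]/bin/fdtd-engine-ompi-lcl -logall -t 1 simulationfile.fsp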
Remote jobs using Intel MPI
See this KB for running remote simulation jobs using Intel MPI.
See also
Running simulations using Terminal in Linux
Resource configuration elements and controls
Compute resource configuration use cases