Job Submission Script
A job submission script has two parts: the first sets up the environment in which the job will run, and the second is the command that runs the job. The examples below leverage the 'mpirun' wrapper scripts provided by Intel MPI and OpenMPI. Because the environment is set up before execution, there is no need to provide the full path to 'mpirun'. These wrappers automatically read the environment variables set by the job scheduler and execute the appropriate 'mpiexec' command.
Example 1 (AWS ParallelCluster):
#!/bin/sh
# First half: set up the environment by loading the Intel MPI module.
module load intelmpi
# Second half: run the job, giving the full path to the FDTD engine.
mpirun /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -logall -fullinfo fdtd_100mb.fsp
Example 2:
Many MPI distributions include a script such as 'mpivars.sh' that sets the environment variables required by that distribution. Sourcing it replaces the 'module load' step from the previous example.
#!/bin/sh
source /opt/intel_2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/bin/mpivars.sh
mpirun /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -logall -fullinfo fdtd_100mb.fsp
Example 3 (Advanced):
You can use environment modules to manage multiple versions of the Lumerical software. Modules are a common tool in cluster administration for managing job environments. You can find more information on how to create modules here: http://www.admin-magazine.com/HPC/Articles/Environment-Modules
#!/bin/sh
module load intel-mpi
module load lumerical-2019b
mpirun fdtd-engine-impi-lcl -logall -fullinfo fdtd_100mb.fsp
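The examples above all use Intel MPI, but the same pattern applies to OpenMPI. Below is a minimal sketch assuming your cluster provides an OpenMPI module named 'openmpi' and that the OpenMPI build of the engine (typically named 'fdtd-engine-ompi-lcl'; check your installation's bin directory) is on the PATH. Adjust the module and engine names to match your installation.
#!/bin/sh
# Set up the environment (module names are assumptions; adjust as needed).
module load openmpi
module load lumerical-2019b
# OpenMPI's mpirun also reads the scheduler's environment automatically.
mpirun fdtd-engine-ompi-lcl -logall -fullinfo fdtd_100mb.fsp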
Run command
Slurm:
sbatch -N {nodes} --ntasks-per-node={ppn} {submit.sh}
Torque:
qsub -l nodes={nodes}:ppn={ppn} {submit.sh}
SGE:
qsub -pe mpi {nodes*ppn} {submit.sh}
IBM Platform/Spectrum LSF:
bsub -Ip -n {nodes*ppn} {submit.sh}
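For example, to run the job on 2 nodes with 16 processes per node under Slurm (the node and process counts here are placeholders; substitute values appropriate for your cluster and license):
sbatch -N 2 --ntasks-per-node=16 submit.sh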
Bonus:
You can also submit jobs with a single command:
printf '#!/bin/sh\nmodule load intelmpi\nmpirun /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -logall -fullinfo fdtd_100mb.fsp' | {job_scheduler_command}
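For instance, with Slurm the script can be piped straight into 'sbatch', which reads the job script from standard input when no file is given (the resource counts below are placeholders):
printf '#!/bin/sh\nmodule load intelmpi\nmpirun /opt/lumerical/[[verpath]]/bin/fdtd-engine-impi-lcl -logall -fullinfo fdtd_100mb.fsp' | sbatch -N 2 --ntasks-per-node=16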