strange allocation of nodes on the calculations using yambo-5.0.1
Posted: Wed Apr 07, 2021 2:14 pm
I recently started using yambo-5.0.1 for my excitonic calculations with BSE. The PBS submission script is as follows:
"#!/bin/sh
#PBS -l nodes=2:ppn=48
#PBS -l walltime=48:00:00
#PBS -q batch
#PBS -V
#PBS -S /bin/bash
module load yambo/5.0.1-hdf-sp-mix
cd $PBS_O_WORKDIR
NP=`cat $PBS_NODEFILE | wc -l`                                    # total number of MPI tasks (slots)
NN=`cat $PBS_NODEFILE | sort | uniq | tee /tmp/nodes.$$ | wc -l`  # number of distinct nodes
cat $PBS_NODEFILE > /tmp/nodefile.$$                              # machinefile passed to mpirun
mpirun -rdma -machinefile /tmp/nodefile.$$ -np $NP yambo -F ./Inputs/ljbse -J ljbse -C ljbse > $PBS_JOBID.log 2> log"
Today I ran into a very strange issue: as above, I requested 2 nodes (96 cores) for this calculation, but the job actually ran on only 1 node (48 cores). In fact, no matter how many nodes I request in the submission script, the calculation always ends up on a single node. Is this related to some setting? The same PBS submission script works fine with yambo-4.5.3. This has been troubling me a lot; could you help me spot and fix it? The configure options are as follows:
./configure FC=ifort F77=ifort --enable-yaml-output --enable-par-linalg --enable-mpi --enable-open-mp --enable-memory-profile --enable-uspp --enable-netcdf-hdf5 --enable-hdf5-compression --enable-hdf5-p2y-support --enable-hdf5-par-io --enable-logging --enable-memory-profile --enable-time-profile --enable-debug-flags --with-blas-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core" --with-lapack-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core"
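For debugging, here is a minimal sanity check I can put just before the yambo line (only a sketch; it assumes mpirun will launch a plain hostname command the same way it launches yambo, and it reuses the $NP/$NN/nodefile variables from the script above):
"echo "PBS reports $NN node(s) and $NP slot(s)"
cat /tmp/nodefile.$$
# launch one dummy task per slot and count how many land on each host
mpirun -rdma -machinefile /tmp/nodefile.$$ -np $NP hostname | sort | uniq -c"
If this already reports a single host, the problem would be in the PBS/MPI setup rather than in yambo itself; in my case it shows both nodes, so the restriction seems to appear only with yambo-5.0.1.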
Another question: as far as I know, the first step of a BSE calculation is to compute the static inverse dielectric matrix, but I found that the "-b" option has been removed. Does this mean that the new BSE calculation no longer needs the static inverse dielectric matrix?
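For reference, this is roughly the two-step input generation I was using with yambo-4.5.3 (written from memory, so the exact flags may be slightly off):
"# step 1: static inverse dielectric matrix (screening), the old "-b" runlevel
yambo -b -F ./Inputs/ljem1s -J ljbse
# step 2: BSE with the statically screened (SEX) kernel, diagonalization solver
yambo -o b -k sex -y d -F ./Inputs/ljbse -J ljbse"
Thanks a lot!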
"#!/bin/sh
#BS -l nodes=2:ppn=48
#PBS -l walltime=48:00:00
#PBS -q batch
#PBS -V
#PBS -S /bin/bash
module load yambo/5.0.1-hdf-sp-mix
cd $PBS_O_WORKDIR
NP=`cat $PBS_NODEFILE | wc -l`
NN=`cat $PBS_NODEFILE | sort | uniq | tee /tmp/nodes.$$ | wc -l`
cat $PBS_NODEFILE > /tmp/nodefile.$$
mpirun -rdma -machinefile /tmp/nodefile.$$ -np $NP yambo -F ./Inputs/ljbse -J ljbse -C ljbse >$PBS_JOBID.log>log"
Today, I encountered a very strange issue: as above, I allocated 2 nodes 96 cores to do this calculation, however, the actually called number of nodes is 1 (48 cores). In fact, no matter how many nodes I set in the submission script, the number of nodes in the final calculation is on one node. Is it related to any setting? The above PBS submission script works well for yambo-4.5.3. It troubled me very much, could you help me to spot and fix it? The configure options are as follows:
./configure FC=ifort F77=ifort --enable-yaml-output --enable-par-linalg --enable-mpi --enable-open-mp --enable-memory-profile --enable-uspp --enable-netcdf-hdf5 --enable-hdf5-compression --enable-hdf5-p2y-support --enable-hdf5-par-io --enable-logging --enable-memory-profile --enable-time-profile --enable-debug-flags --with-blas-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core" --with-lapack-libs="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core"
Another question: as I known, the 1st step for BSE calculation is to calculate the static inverse Dielectric Matrix, but I found this option "-b" has been removed out. Does it mean that the new BSE calculation does not need the static inverse Dielectric Matrix anymore? Thanks a lot