Inconsistent GW quasiparticle energies in v4.0.1
Dear Yambo developers,
I would like to report an issue with the latest release (v4.0.1).
To test the new version, I have run a small calculation comparing parallel and serial runs of Yambo for the QP energies of silicon.
The calculations are not meant to be converged, so I have used only 8 bands (4 occupied, 4 empty), a 6x6x6 Gamma-centered k-point mesh, and small cutoffs.
In particular, I have set:
EXXRLvcs= 1 RL
NGsBlkXp= 1 RL
All input/output files are attached.
The calculation with 4 CPUs (input file: yambo.4cpu.in; output files: o-4CPU.qp, r-4CPU_em1d_ppa_HF_and_locXC_gw0) and the one with 1 CPU (input file: yambo.1cpu.in; output files: o-1CPU.qp, r-1CPU_em1d_ppa_HF_and_locXC_gw0) give different quasiparticle energies; both the exchange and the correlation parts seem to differ.
The calculations were run with the following commands:
mpirun -np 4 yambo -I ../ -F yambo.4cpu.in -J 4CPU
yambo -I ../ -F yambo.1cpu.in -J 1CPU
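For reference, a quick way to look at the discrepancy is to strip the header lines from the two QP output files and compare them directly (a minimal sketch; it assumes the usual o-*.qp layout, where comment lines start with '#'):
Code: Select all
# Sketch: compare the serial and parallel QP outputs (assumes '#'-prefixed headers).
grep -v '^#' o-1CPU.qp > qp_1cpu.dat
grep -v '^#' o-4CPU.qp > qp_4cpu.dat
diff qp_1cpu.dat qp_4cpu.dat            # show the lines that differ
paste qp_1cpu.dat qp_4cpu.dat | less    # or inspect the columns side by side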
Is there anything wrong with my input files?
Thank you in advance for your help.
Best,
Fabio Caruso
Department of Materials
University of Oxford
Parks Road
Oxford, OX1 3PH, UK
Re: Inconsistent GW quasiparticle energies in v4.0.1
Dear Fabio,
that sounds really strange; something seems to be failing in your parallel run.
Did you compile the code using OpenMPI? If so, we have experienced some problems that are solved by adding the following flag when configuring the code:
Code: Select all
--enable-openmpi
If this is not the case, can you also post your config.log file?
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
Re: Inconsistent GW quasiparticle energies in v4.0.1
Hi Daniele,
thanks for the quick reply. No, I am not using OpenMPI. I configured using the following command:
./configure \
F77=ifort \
FC=ifort \
FCFLAGS="-O3 -xW -assume bscc -nofor_main" \
--with-fft-path="/usr/lib" \
--with-iotk-path="/home/caruso/QE/espresso-5.0.2/iotk" \
--with-blas-libs="-L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
--with-lapack-libs="-L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
--with-blacs-libs="-L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
--with-scalapack-libs="-L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential"
This is the summary printed at the end of ./configure:
# [VER] 4.0.1 r.88
#
# - GENERAL CONFIGURATIONS -
#
# [SYS] linux@x86_64
# [SRC] /home/caruso/QE/espresso-5.0.2/yambo-4.0.1-rev.89
# [BIN] /home/caruso/QE/espresso-5.0.2/yambo-4.0.1-rev.89/bin
# [-] Double precision
# [X] Redundant compilation
# [-] Run-Time timing profile
#
# - PARALLEL SUPPORT -
#
# [X] MPI (not open-mpi kind)
# [-] OpenMP
# [-] Blue-Gene specific procedures
#
# - LIBRARIES (E=external library; I=internal library; -=not used;) -
#
# I/O
# [ E ] IOTK : /home/caruso/QE/espresso-5.0.2/iotk/src/libiotk.a (QE 5.0)
# [ - ] ETSF_IO:
# [ I ] NETCDF : -lnetcdff -lnetcdf (No large files support)
# [ - ] HDF5 :
#
# MATH
# [ I ] FFT : Internal Goedecker FFT with 0 cache
# [ E ] BLAS : -L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential
# [ E ] LAPACK : -L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential
# [ - ] SCALAPACK: -L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential
#
# OTHER
# [ I ] LibXC : -lxc
# [ - ] MPI library:
#
# - COMPILERS, MAKE and EDITOR -
#
# [ CPP ] gcc -E -P -D_NETCDF_IO -D_MPI -D_FFTSG
# [ C ] gcc -g -O2 -D_C_US -D_FORTRAN_US
# [MPICC] mpicc -g -O2 -D_C_US -D_FORTRAN_US
# [ F90 ] ifort -O3 -xW -assume bscc -nofor_main
# [MPIF ] mpif90 -O3 -xW -assume bscc -nofor_main
# [ F77 ] ifort -O3 -xW -assume bscc -nofor_main
# [Cmain] -nofor_main
# [NoOpt] -assume bscc -g -O0
#
# [ MAKE ] make
# [EDITOR] vim
I have also attached the full config.log, which may contain relevant information that I have overlooked.
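For reference, one can double-check which MPI implementation the wrappers actually point to with something like this (a minimal sketch; '-show' is the MPICH-style option and '--showme' the OpenMPI-style one):
Code: Select all
# Sketch: identify which MPI implementation the wrappers really use.
which mpif90 mpirun                            # locate the wrappers in PATH
mpirun --version                               # OpenMPI replies "mpirun (Open MPI) x.y.z"
mpif90 -show 2>/dev/null || mpif90 --showme    # print the underlying compiler/link line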
Thanks!
Best,
Fabio
Fabio Caruso
Department of Materials
University of Oxford
Parks Road
Oxford, OX1 3PH, UK
Re: Inconsistent GW quasiparticle energies in v4.0.1
Dear Daniele,
a quick update: I have reconfigured and recompiled Yambo with the flag --enable-openmpi. The calculations with 1 and 4 CPUs are now in good agreement (discrepancies are of the order of ~1 meV or smaller), and at the end of the configure summary I now get:
(...)
# [X] MPI (open-mpi kind)
(...)
(Before I was getting # [X] MPI (not open-mpi kind)).
This seems to solve the issue for now.
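For completeness, the rebuild amounted to re-running configure with the same options as in my previous post plus the extra flag, and then recompiling (a sketch; 'make yambo' is assumed to be the usual build target, adjust if your tree differs):
Code: Select all
# Sketch: the same configure options as before, plus --enable-openmpi.
MKL="-L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential"
./configure \
  F77=ifort FC=ifort \
  FCFLAGS="-O3 -xW -assume bscc -nofor_main" \
  --with-fft-path="/usr/lib" \
  --with-iotk-path="/home/caruso/QE/espresso-5.0.2/iotk" \
  --with-blas-libs="$MKL"  --with-lapack-libs="$MKL" \
  --with-blacs-libs="$MKL" --with-scalapack-libs="$MKL" \
  --enable-openmpi
make clean     # rebuild from scratch after changing the configuration
make yambo     # assumed build target; adjust if needed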
Many thanks for your help.
Fabio
Fabio Caruso
Department of Materials
University of Oxford
Parks Road
Oxford, OX1 3PH, UK