yambo-5.0.4 on Cray XC40 system

Having trouble compiling the Yambo source? Using an unusual architecture? Problems with the "configure" script? Problems on GPU architectures? This is the place to look.

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano, Conor Hogan, Nicola Spallanzani

Forum rules
If you have trouble compiling Yambo, please make sure to list:
(1) the compiler (vendor and release: e.g. intel 10.1)
(2) the architecture (e.g. 64-bit IBM SP5)
(3) if the problems occur compiling in serial/in parallel
(4) the version of Yambo (revision number/major release version)
(5) the relevant compiler error message
Xiaoming Wang
Posts: 67
Joined: Fri Dec 18, 2020 7:14 am

yambo-5.0.4 on Cray XC40 system

Post by Xiaoming Wang » Mon Oct 11, 2021 2:11 am

Hello,

I'm testing my yambo-5.0.4 installation on a Cray XC40 system. I tried the RPA optical properties (yambo -o c) of silicon. The calculated eps and eel output files contain only NaNs. However, if I disable the [r, Vnl] part, the results are fine. Does the evaluation of [r, Vnl] use libxc? I suspect my libxc compilation has a problem.

My configure:

Code:

./configure \
  MPIFC=ftn \
  MPIF77=ftn \
  FC=ifort \
  F77=ifort \
  MPICC=cc \
  CC=icc \
  CPP="cpp -P" \
  --enable-hdf5-par-io \
  --enable-par-linalg \
  --with-fft-libs="-mkl" \
  --with-blas-libs="-L$MKLROOT -lmkl_intel_lp64  -lmkl_sequential -lmkl_core " \
  --with-lapack-libs="-L$MKLROOT -lmkl_intel_lp64  -lmkl_sequential -lmkl_core " \
  --with-scalapack-libs="-L$MKLROOT -lmkl_scalapack_lp64 " \
  --with-blacs-libs="-L$MKLROOT -lmkl_blacs_intelmpi_lp64 " \
  --with-hdf5-path=$HDF5_DIR \
  --with-netcdf-path=$NETCDF_DIR \
  --enable-open-mp \
  --enable-time-profile \
  --enable-memory-profile \
  --enable-msgs-comps
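Note that with the standard MKL layout the libraries live under $MKLROOT/lib/intel64 rather than in $MKLROOT itself; if the link step cannot find MKL, a variant of the library flags along these lines (a sketch, assuming the usual MKL directory structure) may be needed:

Code:

  # point -L at the actual MKL library directory
  --with-blas-libs="-L$MKLROOT/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core" \
  --with-lapack-libs="-L$MKLROOT/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core" \
  --with-scalapack-libs="-L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64" \
  --with-blacs-libs="-L$MKLROOT/lib/intel64 -lmkl_blacs_intelmpi_lp64"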
My configure log file (config.log) is attached.
Best,
Xiaoming
Xiaoming Wang
The University of Toledo

andrea.ferretti
Posts: 206
Joined: Fri Jan 31, 2014 11:13 am

Re: yambo-5.0.4 on Cray XC40 system

Post by andrea.ferretti » Mon Oct 11, 2021 7:25 am

Dear Xiaoming,

the optics run-level does not make use of libxc. Most likely the problem is related to the calculation of the dipoles.
To confirm, can you use ncdump to inspect the content of the yambo databases?
Something like:

Code:

  ncdump  ndb.dipoles_fragment_1
Do you see sensible real numbers there, or not?
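If the file is large, you can also dump just the dipole variable, e.g. (a sketch, assuming the fragment sits in the SAVE directory):

Code:

# dump only the DIP_iR variable and show the first lines of data
ncdump -v DIP_iR SAVE/ndb.dipoles_fragment_1 | head -n 60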

Then, to go any further, I think you may need to send the input files required to reproduce the problem.
Any other comments concerning the problem are welcome.

Andrea
Andrea Ferretti, PhD
CNR-NANO-S3 and MaX Centre
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322; Skype: andrea_ferretti
URL: http://www.nano.cnr.it

Xiaoming Wang
Posts: 67
Joined: Fri Dec 18, 2020 7:14 am

Re: yambo-5.0.4 on Cray XC40 system

Post by Xiaoming Wang » Mon Oct 11, 2021 7:45 am

Dear Andrea,

Thanks for your suggestions. Here is the beginning of the ndb.dipoles file

Code:

netcdf ndb {
dimensions:
        D_0000000003 = 3 ;
        D_0000000001 = 1 ;
        D_0000000002 = 2 ;
        D_0000000004 = 4 ;
        D_0000000011 = 11 ;
        D_0000000100 = 100 ;
        D_0000000008 = 8 ;
variables:
        float HEAD_VERSION(D_0000000003) ;
        float HEAD_REVISION(D_0000000001) ;
        float SERIAL_NUMBER(D_0000000001) ;
        float SPIN_VARS(D_0000000002) ;
        float HEAD_R_LATT(D_0000000004) ;
        float HEAD_WF(D_0000000001) ;
        float FRAGMENTED(D_0000000001) ;
        float TEMPERATURES(D_0000000002) ;
        float PARS(D_0000000011) ;
        char APPROACH(D_0000000001, D_0000000100) ;
        char KINDS(D_0000000001, D_0000000100) ;
        char WAVE_FUNC_XC(D_0000000001, D_0000000100) ;
        float DIP_iR(D_0000000001, D_0000000008, D_0000000008, D_0000000008, D_0000000003, D_0000000002) ;
        float DIP_P(D_0000000001, D_0000000008, D_0000000008, D_0000000008, D_0000000003, D_0000000002) ;
        float DIP_v(D_0000000001, D_0000000008, D_0000000008, D_0000000008, D_0000000003, D_0000000002) ;
data:

 HEAD_VERSION = 5, 0, 4 ;

 HEAD_REVISION = 19595 ;

 SERIAL_NUMBER = 7703 ;

 SPIN_VARS = 1, 1 ;

 HEAD_R_LATT = 8, 8, 8, 8 ;

 HEAD_WF = 725 ;

 FRAGMENTED = 1 ;

 TEMPERATURES = 0, 0 ;

 PARS = 1, 8, 8, 1, -0.03674932, -0.03674932, 725, 1, 0, 0, _ ;

 APPROACH =
  "G-space v                                                                                           " ;

 KINDS =
  "R V P                                                                                               " ;

 WAVE_FUNC_XC =
  "Perdew, Burke & Ernzerhof(X)+Perdew, Burke & Ernzerhof(C)                                           " ;

 DIP_iR =
  0, 0,
  0, 0,
  0, 0,
  NaNf, 7032317,
  NaNf, -4.138217e+31,
  NaNf, 3.633074e+27,
  NaNf, -2.17826e+34,
  NaNf, NaNf,
  NaNf, -8.234444e+36,
  NaNf, -1.615762e+38,
  NaNf, 2.900094e+30,
  NaNf, 4.231533e+34,
  NaNf, -5.340305e+19,
  NaNf, 2.120004e+35,
  NaNf, 3.092986e+27,
  NaNf, -2.054787e+32,
  NaNf, -17293.31,
  NaNf, -2.524569e+30,
  NaNf, -4.058295e+36,
  NaNf, 1.419916e+14,
  NaNf, -7.108337e+28,
  NaNf, 6.101875e+24,
  NaNf, -1565.716,
  NaNf, Infinityf,
  NaNf, 7032317,
  NaNf, -4.138217e+31,
  NaNf, 3.633074e+27,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  NaNf, 1.41281e+30,
  NaNf, -9.411717e+22,
  NaNf, -1.233786e+37,
  NaNf, 1.975341e+30,
  NaNf, Infinityf,
  NaNf, Infinityf,
  NaNf, 3.459008e+30,
  NaNf, -2.745708e+22,
  NaNf, -4783920,
  NaNf, 1.024004e+30,
  NaNf, -1.718405e+15,
  NaNf, -1.080253e+13,
  NaNf, -2.17826e+34,
  NaNf, NaNf,
  NaNf, -8.234444e+36,
  -0, 0,
  -0, 0,
  -0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  0, 0,
  NaNf, 1.395455e+30,
  NaNf, 4.552217e+21,
  NaNf, 1.198692e+19,
  NaNf, -7.286745e+37,
  NaNf, 2.13818e+09,
  NaNf, -2.55966e+20,
  NaNf, -2.721015e+35,
  NaNf, 8.323434e+36,
  NaNf, -4.707813e+25,
  NaNf, -4.648105e+33,
  NaNf, Infinityf,
  NaNf, -8.04895e+22,
  NaNf, -1.615762e+38,
  NaNf, 2.900094e+30,
  NaNf, 4.231533e+34,
  -0, 0,
  -0, 0,
  -0, 0,
  -0, 0,
  -0, 0,
  -0, 0,
  0, 0,
As can be seen, the dipoles are not correct.
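A quick way to quantify the corruption (a sketch; NaNf and Infinityf are the literal tokens ncdump prints for single-precision NaN and Inf):

Code:

# count dump lines containing NaN or Inf tokens
ncdump ndb.dipoles_fragment_1 | grep -Ec 'NaNf|Infinityf'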
My scf input is

Code:

&CONTROL
  calculation = 'scf'
  etot_conv_thr =   2.0000000000d-05
  forc_conv_thr =   1.0000000000d-04
  outdir = './out/'
  prefix = 'si'
  pseudo_dir = '~/soft/pseudo/DOJO/'
  tprnfor = .true.
  tstress = .true.
  verbosity = 'high'
/
&SYSTEM
  ecutwfc =  24
  ibrav = 0
  nat = 2
  nosym = .true.
  noinv = .true.
  ntyp = 1
  occupations = 'fixed'
  nbnd = 8
/
&ELECTRONS
  conv_thr =   4.0000000000d-10
  electron_maxstep = 80
  mixing_beta =   4.0000000000d-01
/
ATOMIC_SPECIES
Si     28.0855 Si.upf
ATOMIC_POSITIONS crystal
Si           0.0000000000       0.0000000000       0.0000000000
Si           0.2500000000       0.2500000000       0.2500000000
K_POINTS automatic
2 2 2 0 0 0
CELL_PARAMETERS angstrom
      2.7154800000       2.7154800000       0.0000000000
      2.7154800000       0.0000000000       2.7154800000
      0.0000000000       2.7154800000       2.7154800000
and the yambo input is

Code:

optics                           # [R] Linear Response optical properties
infver                           # [R] Input file variables verbosity
chi                              # [R][CHI] Dyson equation for Chi.
dipoles                          # [R] Oscillator strenghts (or dipoles)
DIP_Threads=0                    # [OPENMP/X] Number of threads for dipoles
X_Threads=0                      # [OPENMP/X] Number of threads for response functions
Chimod= "IP"                     # [X] IP/Hartree/ALDA/LRC/PF/BSfxc
% DipBands
  1 |  8 |                           # [DIP] Bands range for dipoles
%
DipBandsALL                   # [DIP] Compute all bands range, not only valence and conduction
DipApproach= "G-space v"         # [DIP] [G-space v/R-space x/Covariant/Shifted grids]
DipComputed= "R P V"             # [DIP] [default R P V; extra P2 Spin Orb]
#DipPDirect                    # [DIP] Directly compute <v> also when using other approaches for dipoles
% QpntsRXd
 1 | 1 |                             # [Xd] Transferred momenta
%
% BndsRnXd
  1 |  8 |                           # [Xd] Polarization function bands
%
% EnRngeXd
 0.000000 | 9.999999 |         eV    # [Xd] Energy range
%
% DmRngeXd
 0.100000 | 0.100000 |         eV    # [Xd] Damping range
%
ETStpsXd= 1000                    # [Xd] Total Energy steps
% LongDrXd
 1.000000 | 0.000000 | 0.000000 |        # [Xd] [cc] Electric Field
%
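For reference, I generated and ran this input with something like (file and job names are placeholders):

Code:

# -o c generates the optics input; -F selects the input file, -J labels the job
yambo -o c -F yambo_optics.in
yambo -F yambo_optics.in -J rpa_si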
Best,
Xiaoming
Xiaoming Wang
The University of Toledo

andrea.ferretti
Posts: 206
Joined: Fri Jan 31, 2014 11:13 am

Re: yambo-5.0.4 on Cray XC40 system

Post by andrea.ferretti » Sat Oct 16, 2021 2:07 pm

Dear Xiaoming,

indeed, the problem is then in the dipoles.
Using your input files (scf and yambo), I tried to reproduce the error, but could not: the code works correctly here.
I tried yambo 5.0.4 + QE 6.7, compiled with both gfortran (7.3.1) and Intel ifort (19.1.0.166).
The only difference (perhaps) with respect to your workflow is that I ran scf + nscf + p2y + yambo, while according to your scf input file it seems you ran yambo directly on top of the scf calculation (it shouldn't really matter, though).
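In case it helps, my workflow was roughly the following (a sketch; file names are placeholders):

Code:

# standard QE -> yambo chain
pw.x < si.scf.in  > si.scf.out      # ground-state SCF
pw.x < si.nscf.in > si.nscf.out     # NSCF providing the bands yambo needs
cd out/si.save
p2y                                 # convert the QE data into a yambo SAVE
yambo                               # initialization (setup) run
yambo -o c -F yambo.in              # generate the optics input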

My best guess is then that the problem is related to a miscompilation (issues with the Cray compiler?)...

take care
Andrea
Andrea Ferretti, PhD
CNR-NANO-S3 and MaX Centre
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322; Skype: andrea_ferretti
URL: http://www.nano.cnr.it

Xiaoming Wang
Posts: 67
Joined: Fri Dec 18, 2020 7:14 am

Re: yambo-5.0.4 on Cray XC40 system

Post by Xiaoming Wang » Sun Oct 17, 2021 4:35 pm

Dear Andrea,

Thanks for the information. It could indeed be a miscompilation. I'm working on the NERSC Cori system, which is a Cray XC40. I managed to get yambo working with the PrgEnv-gnu environment, under which everything is compiled with the GNU compilers. With the Intel compilers I always ran into errors at runtime. Does anyone have successful experience on Cori or other Cray systems? If not, I'll stick with the GNU build.
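For the record, the working GNU build was set up roughly like this (a sketch; module names may differ on other Cray systems):

Code:

# switch from the Intel to the GNU programming environment
module swap PrgEnv-intel PrgEnv-gnu
module load cray-hdf5-parallel cray-netcdf-hdf5parallel
./configure MPIFC=ftn MPIF77=ftn MPICC=cc FC=gfortran F77=gfortran CC=gcc ...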

Best,
Xiaoming
Xiaoming Wang
The University of Toledo
