
NaN problem

Posted: Wed Jul 31, 2024 2:15 am
by yunho.ahn
Dear Yambo community,

Hello, I'm a beginner with Yambo and started following the tutorials after compiling the code.
I calculated optical spectra at the independent particle level for bulk hBN.

https://www.yambo-code.eu/wiki/index.ph ... icle_level

Following the instructions went fine, but I couldn't get the optical spectra: the calculated values were all NaN.
Could you help me figure out and solve the problem? Please find the file attached.

Code:

# Version 5.2.3 Revision 22799 Hash (prev commit) bad66dc080
#                        Branch is
#            MPI+OpenMP+SLK+HDF5_MPI_IO Build
#                http://www.yambo-code.eu
#
# Absorption @ Q(1):  0.999999940      0.00000000      0.00000000    [q->0 direction]
#
# [ X ] Hartree size                              :  1
# [GEN] GF Energies                               : Slater exchange(X)+Perdew & Zunger(C)
# [GEN] Green`s Function                          : T-ordered
#
# [GEN] Gauge                                     : Length
# [GEN] [r,Vnl] included                          : yes
#
#    E[1] [eV]          Im(eps)            Re(eps)
#
        0.000000                   NaN                NaN
        0.010000                   NaN                NaN
        0.020000                   NaN                NaN
        0.030000                   NaN                NaN
        0.040000                   NaN                NaN
        0.050000                   NaN                NaN
        0.060000                   NaN                NaN
        0.070000                   NaN                NaN
        0.080000                   NaN                NaN
......

Best,
Yunho Ahn

Re: NaN problem

Posted: Sat Aug 03, 2024 10:00 am
by Daniele Varsano
Dear Yunho,

That's weird. Most probably there is some problem with the compilation, e.g. with the MPI libraries or the linear algebra.
Can you try the following tests?
1) Run the code in serial.
If the problem is solved, please report it here together with the config.log file, and we can check whether there is some inconsistency with the MPI libraries.

If the problem persists:
2) Recompile the code using the option:
./configure --enable-int-linalg

and see if this solves the problem.
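In practice, the two tests could look like the following (just a sketch: the input file name yambo.in and the job label serial_test are assumptions, adapt them to your actual run):

Code:

# 1) Serial test: invoke yambo directly, without mpirun/srun
yambo -F yambo.in -J serial_test

# 2) If the NaNs persist, rebuild using the internal linear algebra
make clean
./configure --enable-int-linalg
make yambo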

Best,
Daniele

Re: NaN problem

Posted: Tue Aug 06, 2024 8:15 am
by yunho.ahn
Dear Daniele,

I recompiled the code with the option you recommended, but the problem still persists:
./configure --enable-int-linalg

Please find the config.log file attached.

Best,
Yunho

Re: NaN problem

Posted: Tue Aug 06, 2024 9:41 am
by Daniele Varsano
Dear Yunho,

Have you tried to run a simple test in serial mode?

Best,

Daniele

Re: NaN problem

Posted: Tue Aug 06, 2024 10:31 am
by Davide Sangalli
Dear Yunho,
two suggestions:

1) Is there any reason why you are using the following configuration flag?

Code:

--enable-netcdf-classic
It is an old I/O mode that is not much tested anymore; I'd suggest removing it.

2) You are running with 16 OpenMP threads and 4 MPI tasks, so I assume the nodes of your cluster have at least 64 cores.
While this is fine in principle, using fewer threads might solve the issue.
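In practice, this could look like the following (a sketch only: the 4x4 MPI/OpenMP split and the input file name are just examples, adapt them to your cluster and launcher):

Code:

# Use fewer OpenMP threads per MPI task, e.g. 4 MPI tasks x 4 threads each
export OMP_NUM_THREADS=4
mpirun -np 4 yambo -F yambo.in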

Best,
D.