Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Various technical topics, such as parallelism and efficiency, netCDF problems, and the Yambo code structure itself, are posted here.

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano, Conor Hogan, Nicola Spallanzani

Garden.Z
Posts: 3
Joined: Tue Apr 21, 2020 4:01 am
Location: Guangzhou

Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Post by Garden.Z » Tue Apr 21, 2020 8:20 am

Dear Experts,
I am starting to use yambo by following the example 23 in wannier90 tutorial, the QP energies calculation step with yambo code appears to be a memory eater. Previously a pure carbon system I calculated was always terminated until I change the node to another one with 128G memory. But this issue has occurred again in my present doped system, and the 128G node cannot worked again this time. My computational environment is 128G memory per node with 24 cores. I have tried one node and two or three parallel nodes, the feedbacks from technique support are always be memory problem. I really have no idea about how to solve this problem, the only hint I found in log file is this word: <01m-19s> P0010: Self_Energy parallel ENVIRONMENT is incomplete. Switching to defaults.
I have learned a little from here: http://www.yambo-code.org/wiki/index.ph ... n_parallel. The technical support suggests MPI+OpenMP parallelism. On our cluster, -N, -c, and -n represent the number of nodes, CPUs per task, and number of tasks, respectively. The first issue is that, no matter what number I set after -c, the thread count is always 1 (e.g. "* THREADS (max): 1" in the r_em1d_ppa_HF_and_locXC_gw0 file: with -N 1 -c 24 -n 1, the number of threads stays at 1 instead of reaching 24). The other issue is the 'memory overflow' described in the first paragraph. The job can run when I use only one CPU on one node, but this is obviously very inefficient. I am still unfamiliar with this code and sincerely seek help. I have uploaded the directory of my jobs; it does not include the SAVE directory from the previous step due to its large size.
taskfile.rar
Kan Zhang
State Key Laboratory of Optoelectronic Materials and Technologies, Nanotechnology Research Center, School of Materials Science and Engineering, Sun Yat-sen University
Guangzhou 510275, China

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm

Re: Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Post by Daniele Varsano » Tue Apr 21, 2020 8:42 am

Dear Garden.Z,
Welcome to the Yambo forum. First, let me ask you to sign your posts with your name and affiliation. This is a rule of the forum, and you can do it once and for all by filling in the signature field of your user profile.

From the report file, I can see that you successfully calculated the screening and that the code stops when evaluating the quasiparticle corrections. The problem is that you are calculating 16000 QP corrections at the same time (100 bands for 80 k-points for 2 spin channels): this is a big number!

Code: Select all

 QP @ K 1 - 80 : b 1 - 100
Here are some suggestions:
1) Please check whether this is really the number of QP corrections you want to calculate. I can see from the input that you aim to select some particular k-points, but the input contains comments and is not complete.
2) Use a well-defined parallelization strategy (activate it using -V par when building up the input). You will have to fill in these variables:

Code: Select all

SE_CPU= " 1 1 #ncpu"       # [PARALLEL] CPUs for each role
SE_ROLEs= "q qp b"         # [PARALLEL] CPUs roles (q,qp,b)
SE_Threads=  #nthreads    
You can have a look here for the meaning and usage of the variables governing the parallelization.

The parallelization over "b" allows the memory to be distributed. In order to use threads, be sure you have compiled Yambo using the --enable-open-mp option.
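For illustration only (this particular layout is not taken from the thread): on a single 24-core node one could run, say, 4 MPI tasks with 6 OpenMP threads each, assigning all MPI tasks to the "b" role, which is the one that distributes the memory:

Code: Select all

SE_CPU= "1 1 4"        # [PARALLEL] CPUs for each role: all 4 MPI tasks on the band role
SE_ROLEs= "q qp b"     # [PARALLEL] CPUs roles (q,qp,b)
SE_Threads= 6          # [OPENMP/GW] Number of threads for self-energy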

3) In any case, if this does not solve your memory problem (which is possible, as 16000 corrections are probably too many), you can split them into different runs and merge the QP databases at the end using the ypp utility. You can have a look at this post for a how-to.

Best,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

Garden.Z
Posts: 3
Joined: Tue Apr 21, 2020 4:01 am
Location: Guangzhou

Re: Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Post by Garden.Z » Tue Apr 21, 2020 10:33 am

Thank you for the reply, dear Daniele. I previously filled in the name and affiliation in the wrong place; sorry for the inconvenience.
In the example, silicon needs 100 bands, and for a GW calculation it seems that 50-100 bands are needed for each orbital.
How do I add '-V par' to the input file? Is it in this form: #SBATCH -V par, placed in the yambo.in file? It seems more like something that belongs in a .sh file, but I don't think the Linux server I rent supports this.
I compiled the Yambo code from QE 6.4 some time ago; how can I check whether the --enable-open-mp option is enabled? We do run this (module load fftw/3.3.8-icc-15-mpi) before submitting the job; is this what we are talking about?
Kan Zhang
State Key Laboratory of Optoelectronic Materials and Technologies, Nanotechnology Research Center, School of Materials Science and Engineering, Sun Yat-sen University
Guangzhou 510275, China

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm

Re: Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Post by Daniele Varsano » Tue Apr 21, 2020 10:55 am

Dear Kan,
How do I add '-V par' to the input file? Is it in this form: #SBATCH -V par, placed in the yambo.in file? It seems more like something that belongs in a .sh file, but I don't think the Linux server I rent supports this.
No, the -V par option is needed when building the yambo.in file, e.g.
yambo -p p -g n -V par
will create the yambo.in for a GW calculation, adding the variables needed to tune the parallelism.
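For instance, a minimal sketch of the whole sequence (the input-file name yambo_gw.in and the job label gw_run are placeholders chosen here, not part of the original post):

Code: Select all

yambo -p p -g n -V par -F yambo_gw.in         # generate the GW input, including the parallel variables
# edit yambo_gw.in (SE_CPU, SE_ROLEs, SE_Threads, QPkrange, ...), then run it, e.g.
mpirun -np 4 yambo -F yambo_gw.in -J gw_run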
I compiled the Yambo code from QE 6.4 some time ago; how can I check whether the --enable-open-mp option is enabled?
OK, but I suggest you download the most recent snapshot from the Yambo website (Yambo 4.5).
You can have a look at the ./config/report file and see whether OpenMP is marked, e.g.

Code: Select all

# - PARALLEL SUPPORT -
#
# [X] MPI
# [X] OpenMP
We do run this (module load fftw/3.3.8-icc-15-mpi) before submitting the job; is this what we are talking about?
This seems to be the module for the fast Fourier transform; in general, you want to load all the modules you used when compiling the code.
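Purely as a sketch (assuming a SLURM scheduler, as the -N/-c/-n flags and the #SBATCH syntax mentioned above suggest; the module name, input-file name, and job label are placeholders), a hybrid MPI+OpenMP submission script could look like this:

Code: Select all

#!/bin/bash
#SBATCH -N 1                                  # nodes
#SBATCH -n 4                                  # MPI tasks
#SBATCH -c 6                                  # CPUs (OpenMP threads) per task
module load fftw/3.3.8-icc-15-mpi             # plus every other module used to compile Yambo
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # let each MPI task spawn 6 OpenMP threads
mpirun -np $SLURM_NTASKS yambo -F yambo_gw.in -J gw_run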

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

Garden.Z
Posts: 3
Joined: Tue Apr 21, 2020 4:01 am
Location: Guangzhou

Re: Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Post by Garden.Z » Wed Apr 22, 2020 7:01 am

Many thanks, dear Daniele.
I have recompiled the code following your instructions. I have now noticed that I probably put the band range for the QP calculation in the wrong place (the green table should be inserted before the '%', not after it, is that right?).
无标题.jpg
As for what is depicted in the wannier90 tutorial, I am ashamed that I do not actually understand why it is done as the red-labelled line says. Does it mean it is a single-shot GW calculation rather than one at the GW0 level? If the green table is not included, does it mean QP corrections will be calculated for all bands?
Is it proper to put the blue and green tables together, or should only one of them be kept in this place?
All best,
Kan Zhang
Kan Zhang
State Key Laboratory of Optoelectronic Materials and Technologies, Nanotechnology Research Center, School of Materials Science and Engineering, Sun Yat-sen University
Guangzhou 510275, China

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm

Re: Frequent 'Memory Overflow' when calculating quasi-particle energies with Yambo in GW calculations

Post by Daniele Varsano » Wed Apr 22, 2020 8:45 am

Dear Kan Zhang,
the input file you posted is just an example that needs to be edited; you cannot use it as it is, since it does not have the correct syntax: it is full of comments that are not recognized by Yambo.

You should remove all the comments and use the QPkrange variable.
Here are some examples:

Code: Select all

%QPkrange                      # # [GW] QP generalized Kpoint/Band indices
1|80|1|100|
%
Using this, Yambo calculates corrections for 100 bands and 80 k-points.

Code: Select all

%QPkrange                      # # [GW] QP generalized Kpoint/Band indices
1|1|10|20|
%
Calculates bands from 10 to 20 for kpt=1 only

Code: Select all

%QPkrange                      # # [GW] QP generalized Kpoint/Band indices
1|1|10|20|
3|3|10|20|
5|5|10|20|
%
Calculates bands from 10 to 20 for kpt=1,3,5
and so on....
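To connect this with point 3) of the earlier post (splitting the corrections over several runs and merging the QP databases with ypp afterwards), a possible split of the full range, chosen here only as an illustration, would be:

Code: Select all

# run 1: first half of the k-points
%QPkrange                      # # [GW] QP generalized Kpoint/Band indices
1|40|1|100|
%
# run 2: second half of the k-points
%QPkrange                      # # [GW] QP generalized Kpoint/Band indices
41|80|1|100|
%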

Here you can find a complete walkthrough for a GW calculation.

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
