G0W0 calculation of a large system

Concerns issues with computing quasiparticle corrections to the DFT eigenvalues - i.e., the self-energy within the GW approximation (-g n), or considering the Hartree-Fock exchange only (-x)

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano

TusharWaghmre
Posts: 11
Joined: Mon Mar 11, 2024 2:26 pm

G0W0 calculation of a large system

Post by TusharWaghmre » Mon May 26, 2025 2:27 pm

Dear Yambo Community,

I am performing a G0W0 calculation for a large 2D carbon-allotrope system using Yambo 5.2.4. Because of the system size, I have set up a parallelization scheme and split the quasiparticle corrections over band ranges using QPkrange, with the intention of merging the resulting databases afterwards. However, the calculations progress too slowly and fail to complete within the 72-hour job runtime limit.

Below is the parallelization setup I am currently using:

Code: Select all

PAR_def_mode= "memory"           # [PARALLEL] Default distribution mode
X_and_IO_CPU= "1 1 1 16 5"       # [PARALLEL] CPUs for each role
X_and_IO_ROLEs= "q g k c v"      # [PARALLEL] CPUs roles (q,g,k,c,v)
X_and_IO_nCPU_LinAlg_INV=-1      # [PARALLEL] CPUs for Linear Algebra (automatic)
DIP_CPU= "1 16 5"                # [PARALLEL] CPUs for each role
DIP_ROLEs= "k c v"               # [PARALLEL] CPUs roles (k,c,v)
SE_CPU= "q qp b"                 # [PARALLEL] CPUs for each role
SE_ROLEs= "1 1 80"               # [PARALLEL] CPUs roles (q,qp,b)
I have attached the input and report files for reference. The quasiparticle corrections are distributed over a few bands per job via QPkrange (as sketched below), but the runtime remains the bottleneck.
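For reference, the per-job band split follows the pattern sketched below; the k-point range and band window shown are placeholders rather than the actual values. The plan is then to merge the per-job ndb.QP databases with ypp's QP-database merging option (ypp -qpdb m in recent versions, if I recall the flag correctly).

Code: Select all

# Each job computes corrections for a different band window
# (format: kpt_first|kpt_last|band_first|band_last); values below are placeholders.
%QPkrange                        # [GW] QP generalized Kpoint/Band indices
  1|64|101|110|
%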

Could you please suggest any strategies to optimize the calculation? For example, are there adjustments to the parallelization scheme, alternative distribution modes, or other parameters that could improve performance for this system?

Best regards,
Tushar Waghmare
Tushar Waghmare
Graduate M.Tech Student
Department of Metallurgical and Materials Engineering
IIT Kharagpur, India - 721302

Daniele Varsano
Posts: 4229
Joined: Tue Mar 17, 2009 2:23 pm

Re: G0W0 calculation of a large system

Post by Daniele Varsano » Tue May 27, 2025 7:40 am

Dear Tushar,

Your system has many k-points but does not seem extremely large. I cannot see any particular problem in your input file, and the parallelization is fine, so I wonder whether there is some problem in your compilation that makes the job so slow.

Anyway, you have already calculated the dipoles and the screening successfully (ndb.pp_fragment_*), so you do not need to repeat this part of the calculation if you place those databases in the SAVE directory or point to the directory where they are with -J.
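As a sketch, assuming the existing dipole and screening databases sit in a job directory named screening_run (the directory names here are only examples), the GW run can pick them up through the job string; if I recall the convention correctly, the first entry of -J is where new databases are written and the following ones are searched for existing databases.

Code: Select all

# Reuse ndb.dipoles / ndb.pp* found in screening_run, write new databases to GW_run;
# both directory names are placeholders.
yambo -F gw.in -J "GW_run,screening_run"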

Given the 200 bands in the GW summation, the following is a more balanced choice (but more memory intensive).

Code: Select all

SE_CPU= "q qp b"                       # [PARALLEL] CPUs for each role
SE_ROLEs= "1 2 40"  
These calculations should be quite fast; if the timings are not reasonable, please post one of your log files and, if needed, your config.log.

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

TusharWaghmre
Posts: 11
Joined: Mon Mar 11, 2024 2:26 pm

Re: G0W0 calculation of a large system

Post by TusharWaghmre » Mon Jun 02, 2025 10:35 pm

Dear Daniele,

Thank you for the previous suggestions. I have applied the recommended changes and re-run YAMBO, but I’m still encountering the same formatted-read error. I have attached a ZIP file containing the details of my setup, the exact error message, and the relevant input and log files. Any further guidance would be greatly appreciated.

Sincerely,
Tushar Waghmare
Tushar Waghmare
Graduate M.Tech Student
Department of Metallurgical and Materials Engineering
IIT Kharagpur, India - 721302

Daniele Varsano
Posts: 4229
Joined: Tue Mar 17, 2009 2:23 pm

Re: G0W0 calculation of a large system

Post by Daniele Varsano » Tue Jun 03, 2025 8:01 am

Dear Tushar,

I cannot see anything wrong in your input file, and the calculation should not be critical.
I strongly suggest you update to a more recent version of the code (5.3) and run the calculations again.
If it fails please post here the details and we will try to reproduce your problem.

Best,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
