Job terminated when considering higher KPOINTS

Concerns issues with computing quasiparticle corrections to the DFT eigenvalues - i.e., the self-energy within the GW approximation (-g n), or considering the Hartree-Fock exchange only (-x)

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano

Ponnappa
Posts: 4
Joined: Sat Sep 07, 2024 7:26 pm

Job terminated when considering higher KPOINTS

Post by Ponnappa » Sun Sep 08, 2024 5:56 pm

Dear YAMBO developers and users,
I was trying to converge a GW calculation with respect to the k-point mesh. However, the job terminates with a denser k-point mesh of 30x30x1 (an 18x18x1 mesh shows the same issue). I am using 128 cores (4 nodes with 32 processors each). The last lines written in the report file read:
[07] Local Exchange-Correlation + Non-Local Fock
================================================

[VXC] Plane waves : 22403
[EXS] Plane waves : 22403

QP @ state[ 1 ] K range: 1 1
QP @ state[ 1 ] b range: 38 39

[FFT-HF/Rho] Mesh size: 25 25 114

The last few lines of the first LOG file read:
<05h-35m> P1-c13node15: [PARALLEL Self_Energy for QPs on 1 CPU] Loaded/Total (Percentual):2/2(100%)
<05h-35m> P1-c13node15: [PARALLEL Self_Energy for Q(ibz) on 4 CPU] Loaded/Total (Percentual):23/91(25%)
<05h-35m> P1-c13node15: [PARALLEL Self_Energy for G bands on 32 CPU] Loaded/Total (Percentual):2/39(5%)
<05h-35m> P1-c13node15: [PARALLEL distribution for Wave-Function states] Loaded/Total(Percentual):48/3549(1%)
<05h-35m> P1-c13node15: [FFT-HF/Rho] Mesh size: 25 25 114
I have attached the script file and the input files.
Thank you in advance.
input.txt
script.txt
Ponnappa K. P.
Phd Student
Harish-Chandra Research Institute, India

Daniele Varsano
Posts: 3975
Joined: Tue Mar 17, 2009 2:23 pm

Re: Job terminated when considering higher KPOINTS

Post by Daniele Varsano » Mon Sep 09, 2024 9:16 am

Dear Ponnappa K. P.,

This is most probably a memory issue.
You can try to distribute the memory better among the MPI processes by setting:

Code: Select all

SE_CPU= "1 2 64"                       # [PARALLEL] CPUs for each role
SE_ROLEs= "q, qp, b"  
If this does not solve the problem, you can run with fewer CPUs per node, always assigning most or all of the CPUs to the "b" role and avoiding the "q" role.
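
As an illustration of the advice above, a run with 2 nodes and 32 CPUs per node (64 MPI tasks in total) could use a distribution like the following. The numbers here are only a sketch: the product of the three factors must equal the total number of MPI tasks, and each factor cannot exceed the number of elements available for that role (see the error reported later in this thread).

Code: Select all

SE_CPU= "1 2 32"                       # [PARALLEL] CPUs for each role
SE_ROLEs= "q, qp, b"                   # no CPUs on "q", most on "b"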

Best,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

Ponnappa
Posts: 4
Joined: Sat Sep 07, 2024 7:26 pm

Re: Job terminated when considering higher KPOINTS

Post by Ponnappa » Wed Sep 11, 2024 6:49 am

Dear Daniele,

Thank you for the response. I tried as suggested but got an error saying "USER parallel structure does not fit the current run parameters. 64 CPU for 39 elements (ROLE is 'b')". When I reduce the "b" role to 32 CPUs, I face the memory issue again.

However, reducing EXXRLvcs and VXCLvcs to 0.95 times their default values solved the problem. I hope reducing these by a small amount is reasonable. Once again, thank you for the response.
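
For reference, the corresponding input lines would look something like the sketch below. The value shown is illustrative only (roughly 0.95 of the 22403 plane waves reported in the output above); the actual default and the units (RL components or Ry) should be taken from your own generated input file.

Code: Select all

EXXRLvcs= 21283        RL    # [XX] Exchange RL components (illustrative, ~0.95 of default)
VXCLvcs=  21283        RL    # [XC] XCpotential RL components (illustrative)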

Regards,
Ponnappa K. P.
Ponnappa K. P.
Phd Student
Harish-Chandra Research Institute, India
