memory problem while running hf

Concerns issues with computing quasiparticle corrections to the DFT eigenvalues - i.e., the self-energy within the GW approximation (-g n), or considering the Hartree-Fock exchange only (-x)
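For orientation, the exchange-only (HF) run discussed below is driven from the yambo command line. A minimal sketch, assuming an existing SAVE database; the file name hf.in and the job label are illustrative only, not taken from this thread:

Code: Select all

# generate the Hartree-Fock / exchange-only input
yambo -x -F hf.in
# run it in parallel; match -np to the parallel setup requested in the input
mpirun -np 4 yambo -F hf.in -J hf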

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano

niloufar
Posts: 34
Joined: Thu Oct 08, 2020 3:53 pm

memory problem while running hf

Post by niloufar » Wed Aug 31, 2022 5:28 am

Hello dear developers,
I have a problem with an HF calculation. I ran it before for other systems and it was fine; now I have a different crystal and I am running it on a new computer with 48 GB of RAM. My setup report shows:

[05] Memory Overview
====================

Memory Usage: global (Only MASTER cpu here). [O] stands for group 'O'
Memory treshold are: 619.0520 [Mb] (basic treshold) 6.190520 [Gb] (SAVEs treshold)


Max memory used : 139.3540 [Mb]

When I run the HF calculation, it crashes with:


--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 2 with PID 0 on node uK4D8Q441-01 exited on signal 9 (Killed).
--------------------------------------------------------------------------
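A rank killed with signal 9 usually means the Linux out-of-memory killer terminated the process. Assuming a standard Linux node with readable kernel logs, a generic check such as the one below can confirm this:

Code: Select all

# look for OOM-killer activity around the time of the crash
dmesg -T | grep -i -E "out of memory|oom-killer|killed process"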


How can I solve this problem? If it is a memory problem, how much memory do I need?
Niloufar Dezashibi
Physics and Energy Engineering Department
Amirkabir University of Technology
Tehran, Iran
niloufardezashib@aut.ac.ir
nilo.dezashibi@gmail.com

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm

Re: memory problem while running hf

Post by Daniele Varsano » Thu Sep 01, 2022 5:14 pm

Dear Niloufar,
in order to spot the problem, it would be useful to have a look at your input/report/log files.
You can attach the files to your post using the "Attachments" button below; if needed, rename your files with an allowed suffix, e.g. .txt, .zip, etc.

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

niloufar
Posts: 34
Joined: Thu Oct 08, 2020 3:53 pm

Re: memory problem while running hf

Post by niloufar » Sun Sep 04, 2022 7:10 am

Thanks for responding.
I have attached the files; it would be kind of you to take a look at them. @>-
Niloufar Dezashibi
Physics and Energy Engineering Department
Amirkabir University of Technology
Tehran, Iran
niloufardezashib@aut.ac.ir
nilo.dezashibi@gmail.com

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm

Re: memory problem while running hf

Post by Daniele Varsano » Fri Sep 09, 2022 8:59 am

Dear Niloufar,

Actually, it is not clear to me why your calculation is so memory intensive; in any case, there are some inconsistencies.
I suggest you repeat your calculation with a properly set parallel environment.
You are running on 4 CPUs but setting 24 MPI processes for the dipoles evaluation, which is inconsistent.

Can you repeat your calculation using 24 CPUs, setting in your input file:

Code: Select all

SE_CPU= "  1 1  24"               # [PARALLEL] CPUs for each role
SE_ROLEs= "q qp b"               # [PARALLEL] CPUs roles (q,qp,b)
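
Note that the product of the SE_CPU entries must match the number of MPI tasks actually launched, so the job should be started with 24 processes. A sketch, assuming the input file is named hf.in (the actual name is not given in the thread):

Code: Select all

# 1 x 1 x 24 = 24 MPI tasks, parallelized over the band index (role "b")
mpirun -np 24 yambo -F hf.in -J hf_24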

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
