
memory problem while running hf

Posted: Wed Aug 31, 2022 5:28 am
by niloufar
Hello dear developers,
I have a problem with an HF calculation. I ran it before for other systems and it was fine; now I have a different crystal and I am running it on a new computer with 48 GB of RAM. My setup reports:

[05] Memory Overview
====================

Memory Usage: global (Only MASTER cpu here). [O] stands for group 'O'
Memory treshold are: 619.0520 [Mb] (basic treshold) 6.190520 [Gb] (SAVEs treshold)


Max memory used : 139.3540 [Mb]

When I run the HF calculation, it crashes:


--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 2 with PID 0 on node uK4D8Q441-01 exited on signal 9 (Killed).
--------------------------------------------------------------------------


How can I solve this problem? If it is a memory issue, how much memory do I need?

Re: memory problem while running hf

Posted: Thu Sep 01, 2022 5:14 pm
by Daniele Varsano
Dear Niloufar,
in order to spot the problem, it would be useful to have a look at your input/report/log files.
You can attach the files to your post using the "Attachments" button below; if necessary, rename your files with an allowed suffix, e.g. .txt, .zip, etc.

Best,
Daniele

Re: memory problem while running hf

Posted: Sun Sep 04, 2022 7:10 am
by niloufar
Thanks for responding.
I have attached the files; it would be kind of you to take a look at them. @>-

Re: memory problem while running hf

Posted: Fri Sep 09, 2022 8:59 am
by Daniele Varsano
Dear Niloufar,

actually, it is not clear to me why your calculation is so memory intensive; in any case, there are some inconsistencies.
I suggest you repeat your calculation after setting up the parallel environment properly.
You are running on 4 CPUs but assigning 24 MPI processes to the dipoles evaluation, which is inconsistent.
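Incidentally, signal 9 (Killed) usually means the kernel terminated the process, most often the out-of-memory killer when the RAM is exhausted. If you have access to the compute node, a quick check (just a sketch; the exact message wording varies with the kernel version) is:

Code: Select all

# look for out-of-memory kills in the kernel log
dmesg | grep -i -E "out of memory|killed process"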

Can you repeat your calculation using 24 CPUs and setting this in your input file:

Code: Select all

SE_CPU= "1 1 24"                 # [PARALLEL] CPUs for each role
SE_ROLEs= "q qp b"               # [PARALLEL] CPUs roles (q,qp,b)
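With that setting, the run should be launched with a matching number of MPI tasks, since the product of the SE_CPU entries (1 x 1 x 24) must equal the total MPI task count. A minimal launch line (yambo.in and the -J job label are placeholders; substitute your actual input file and job name):

Code: Select all

# 24 MPI tasks, matching the 1 x 1 x 24 role assignment above
mpirun -np 24 yambo -F yambo.in -J HF_24cpu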

Best,
Daniele