Memory issue when using more than 1 core
Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano, Conor Hogan, Nicola Spallanzani
- Ponnappa K. P.
- Posts: 12
- Joined: Sat Sep 07, 2024 7:26 pm
Memory issue when using more than 1 core
Dear Yambo Developers and Users,
I recently installed Yambo 5.2.3. The configuration I used was (./configure FC=ifort CC=icc MPIFC=mpiifort --enable-memory-profile). The installation completed, but when I try to use more than 1 core I run into a memory issue and the calculation does not start (it starts only when I use 1 core). I have attached the error file. Kindly help me resolve the issue. Thank you in advance.
You do not have the required permissions to view the files attached to this post.
Ponnappa K. P.
PhD Student
Harish-Chandra Research Institute, India
- Daniele Varsano
- Posts: 4060
- Joined: Tue Mar 17, 2009 2:23 pm
Re: Memory issue when using more than 1 core
Dear Ponnappa,
It could be that there are some problems with your MPI compilation; can you please provide some more information, e.g. the report and log files, if any?
If not, can you try to run the code interactively?
> mpirun -np 2 $path/yambo
Also, having a look at the config.log file can be useful.
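For instance, a quick check along these lines (the grep pattern is only a suggestion; any MPI-related lines in config.log are of interest):
Code: Select all
# interactive test on two MPI tasks
mpirun -np 2 $path/yambo
# check how MPI support was detected at configure time
grep -i mpi config.log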
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
- Ponnappa K. P.
- Posts: 12
- Joined: Sat Sep 07, 2024 7:26 pm
Re: Memory issue when using more than 1 core
Dear Daniele,
The cluster I am using has 24 cores per node. Following your suggestion I first ran on fewer cores, which worked, and then gradually increased the number. I found that the segmentation fault appears only if I use all 24 cores of a node: I can run the calculation with n*22 cores but not with n*24 cores.
Currently I used 8*22 cores; the calculation started but ended with some other memory or parallelization issue. I am attaching the input and log files of this run. Could I get more insight into how to solve memory or parallelization issues in general? Thank you for the reply.
Regards,
Ponnappa
You do not have the required permissions to view the files attached to this post.
Ponnappa K. P.
PhD Student
Harish-Chandra Research Institute, India
- Daniele Varsano
- Posts: 4060
- Joined: Tue Mar 17, 2009 2:23 pm
Re: Memory issue when using more than 1 core
Dear Ponnappa,
Most probably, by using all the cores in the node you are filling all the available memory of the node: each MPI task keeps its own copy of the data that is not distributed, so the memory footprint per node grows with the number of tasks.
In order to distribute the memory, try to parallelize over bands as much as possible, as you did in X_and_IO_CPU.
You can fine-tune the distribution by moving CPUs from "v" to "c"; since I cannot have a look at the report file I do not know how many bands are occupied, but in any case try to balance the CPUs according to the number of occupied and empty states.
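For instance, assuming the usual "q g k c v" roles string for the response part and the 8*22 = 176 MPI tasks of your run, a purely illustrative split putting most CPUs on bands could look like the following (the 22/8 division between conduction and valence is hypothetical and should follow the actual band counts):
Code: Select all
X_and_IO_CPU= "1 1 1 22 8"    # 1 on q, 1 on g, 1 on k, 22 on conduction (c), 8 on valence (v) bands
X_and_IO_ROLEs= "q g k c v"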
Regarding the self-energy, avoid assigning CPUs to "q" and assign them to "b" as much as possible, e.g.
Code: Select all
SE_CPU= "1 2 88"
If it fails, the strategy is to use fewer CPUs per node.
Best,
Daniele
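For reference, and assuming the standard "q qp b" roles string for the self-energy (to be checked against the ROLEs line in your own input), "1 2 88" would map the CPUs as follows:
Code: Select all
SE_CPU= "1 2 88"      # 1 on q-points, 2 on qp corrections, 88 on bands (1*2*88 = 176 tasks)
SE_ROLEs= "q qp b"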
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/