YAMBO_parallel

Concerns issues with computing quasiparticle corrections to the DFT eigenvalues - i.e., the self-energy within the GW approximation (-g n), or considering the Hartree-Fock exchange only (-x)

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano

ranber14
Posts: 3
Joined: Sun Sep 04, 2011 2:39 pm
Location: stuttgart

YAMBO_parallel

Post by ranber14 » Tue Sep 06, 2011 8:32 am

Dear Yambo developers,

I am a new user of YAMBO. I am trying some calculations for an infinite graphene sheet. I am running it in parallel using mpirun:

mpirun -machinefile machinefile -np $num_tasks ~/bin/yambo

It runs well when the number of k-points is small, but when I increase the k-point grid to something like 128x128x1 it crashes without any clear message:

<---> [01] Files & I/O Directories
<---> [02] CORE Variables Setup
<---> [02.01] Unit cells
<---> [02.02] Symmetries
<---> [02.03] RL shells
<---> [02.04] K-grid lattice
<---> [02.05] Energies [ev] & Occupations
<---> [03] Transferred momenta grid
<---> [M 0.396 Gb] Alloc qindx_X qindx_S (0.375)
<---> [04] Bare local and non-local Exchange-Correlation
<---> [M 2.671 Gb] Alloc WF (2.224)
<---> [FFT-HF/Rho] Mesh size: 15 15 72


Best Regards,
Ranber
Dr. Ranber Singh
Postdoc, MPI for Solid State Physics, Heisenbergstr 1, Stuttgart, Germany

Daniele Varsano
Posts: 3824
Joined: Tue Mar 17, 2009 2:23 pm
Contact:

Re: YAMBO_parallel

Post by Daniele Varsano » Tue Sep 06, 2011 9:04 am

Dear Ranber,
what kind of calculation are you doing, and on what kind of machine?
Most probably this is a memory issue: you already have 2.6 Gb allocated
before the crash, and the next allocation most likely asks for more memory
than your machine has. Try to estimate the memory you need, for instance
by looking at the output of a run with a smaller sampling.
If this is the problem and you really need such a dense sampling, you have to resort
to a bigger machine (in terms of RAM). Depending on the kind of calculation
you are performing, in parts of the code the memory is distributed, so you may
also be able to solve the problem by increasing the number of processors.
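
For instance, something along these lines (just a rough sketch of mine: it assumes the wavefunction allocation grows linearly with the number of k-points, with bands and FFT mesh kept fixed, and the figure for the small run is only a placeholder to be read from your own report file):

Code: Select all

# Rough sketch: linearly extrapolate the "[M ... Gb] Alloc WF" figure of a
# small test run to the target k-point grid (bands and FFTGvecs kept fixed).
# The 0.7 Gb value for a hypothetical 64x64x1 test grid is only a placeholder.
def scale_wf_memory(gb_small, nk_small, nk_big):
    return gb_small * nk_big / nk_small

estimate = scale_wf_memory(gb_small=0.7, nk_small=64 * 64, nk_big=128 * 128)
print(f"estimated WF allocation: {estimate:.1f} Gb per task")   # ~ 2.8 Gb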

Cheers,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

ranber14
Posts: 3
Joined: Sun Sep 04, 2011 2:39 pm
Location: stuttgart

Re: YAMBO_parallel

Post by ranber14 » Thu Sep 08, 2011 9:43 am

Dear Daniele Varsano

Thank you for the reply.
I am actually trying to calculate the absorption spectrum of graphene.
I am running on a Linux cluster. I increased the number of processors, but it still crashes.
Do you think it could be a problem with 'mpirun'?

I also want to ask you one more thing.
If I calculate the absorption spectrum using BSE with YAMBO on a 32x32x1 k-point grid,
is it possible with YAMBO to interpolate the data onto a 128x128x1 k-point grid?

Best Regards,
Ranber
Dr. Ranber Singh
Postdoc, MPI for Solid State Physics, Heisenbergstr 1, Stuttgart, Germany

Daniele Varsano
Posts: 3824
Joined: Tue Mar 17, 2009 2:23 pm
Contact:

Re: YAMBO_parallel

Post by Daniele Varsano » Thu Sep 08, 2011 10:39 am

Dear Ranber,
if you are doing Bethe-Salpeter, unfortunately you cannot distribute the memory ... for the moment.
You have a very large k-point grid, so I suspect your problem is a memory issue.
Are you sure you need such a large grid to converge the Bethe-Salpeter calculation? And how many bands
are you taking into account? The matrix size is given by Nk x Nc x Nv (k-points times conduction bands times valence bands).
Check your convergences and see whether you really need that number of k-points and bands. You can also
reduce the memory needed by lowering FFTGvecs. In any case there is no way to interpolate the spectra;
once the spectrum is converged, I would say it is just a question of broadening.
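
As a rough guide (just a sketch of mine, not something Yambo prints), you can turn the Nk x Nc x Nv counting into a memory estimate for a dense, double-complex BSE matrix; spin, the coupling block and any symmetry reduction are ignored, and the grid and bands below are only placeholder numbers:

Code: Select all

# Sketch: memory of a dense, double-complex (16 bytes/element) BSE matrix.
# Spin, the coupling block and any symmetry reduction are ignored.
def bse_matrix_memory_gb(nk, nc, nv, bytes_per_element=16):
    dim = nk * nc * nv                    # matrix dimension Nk*Nc*Nv
    return dim, dim**2 * bytes_per_element / 1024**3

# placeholder example: 32x32x1 grid, 4 valence and 8 conduction bands
dim, gb = bse_matrix_memory_gb(nk=32 * 32, nc=8, nv=4)
print(f"dimension = {dim}, memory ~ {gb:.0f} Gb")   # -> 32768, ~16 Gb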

Hope it helps,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

ranber14
Posts: 3
Joined: Sun Sep 04, 2011 2:39 pm
Location: stuttgart

Re: YAMBO_parallel

Post by ranber14 » Tue Sep 13, 2011 8:13 am

Dear Daniele,

Thanks for the reply. I don't know whether I actually need such a dense grid or not.
I take 12 bands: 4 valence bands and 8 conduction bands.
The RPA optical spectrum gets well converged only with at least 128x128x1 k-points,
so I was trying the same with BSE, but it crashes; as you say, it is probably a memory problem.
I will try lowering FFTGvecs.
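
Just as a rough sanity check on my side (a back-of-the-envelope estimate along the lines of your Nk x Nc x Nv formula, ignoring spin, symmetry and the coupling part), a dense BSE matrix on that grid would indeed be far too large:

Code: Select all

# Rough check with my numbers: dense, double-complex BSE matrix,
# ignoring spin, the coupling block and any symmetry reduction.
nk, nv, nc = 128 * 128, 4, 8
dim = nk * nv * nc
print(f"dimension = {dim}, memory ~ {dim**2 * 16 / 1024**4:.1f} TB")
# -> dimension = 524288, memory ~ 4.0 TB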


Best,
Ranber
Dr. Ranber Singh
Postdoc, MPI for Solid State Physics, Heisenbergstr 1, Stuttgart, Germany
