Yambopy parallelization

Post here any questions you encounter when running the scripts of the yambo-py suite. Post here only problems strictly related to the python interface, as problems coming from the yambo runs should go in the appropriate subforum.

Moderators: palful, amolina, mbonacci

sitangshu
Posts: 175
Joined: Thu Jan 05, 2017 8:08 am

Yambopy parallelization

Post by sitangshu » Fri Mar 10, 2017 12:12 pm

Dear Sir,

I have an HPC with 40 processors and almost 380 GB RAM in a master-slave configuration (one master node and 2 compute nodes).
Now, when I ran a GW calculation using PPApprox, I observed that even though I wrote the following:

y['X_all_q_ROLEs'] = 'q.k.c.v'
y['X_all_q_CPU'] = '4.4.2.1'
y['SE_ROLEs'] = 'q.qp.b'
y['SE_CPU'] = '4.4.2'
y['X_all_q_nCPU_invert'] = '4'
y['X_all_q_nCPU_LinAlg_INV'] = '2'

the 32 CPUs are not used. However, the program runs without any errors and the report file shows:
* CPU : 1
* THREADS (max): 1
* THREADS TOT(max): 1
* I/O NODES : 1
* Fragmented WFs :yes

CORE databases in .
Additional I/O in .
Communications in reference
Input file is reference.in
Report file is reference/r-reference_em1d_ppa_HF_and_locXC_gw0_rim_cut
Job string(main): reference
Log files in reference/LOG


Am I missing something that I should write in addition? I am also wondering whether this scheme of CPU division is an optimized parallelism. :roll:
Should I be writing something in this line:
os.system('cd gw_conv; %s -F %s -J %s -C %s 2> %s.log'%(yambo,filename,folder,folder,folder))

With regards,
Sitangshu
Sitangshu Bhattacharya
Indian Institute of Information Technology-Allahabad
India
Web-page: http://profile.iiita.ac.in/sitangshu/
Institute: http://www.iiita.ac.in/

miranda.henrique
Posts: 16
Joined: Thu Jul 23, 2015 2:34 pm

Re: Yambopy parallelization

Post by miranda.henrique » Mon Mar 13, 2017 2:30 pm

Dear Sitangshu,

The code you are using will run yambo in serial only.
To run it in parallel, you have to change the command to:

os.system('cd gw_conv; mpirun -np %d %s -F %s -J %s -C %s 2> %s.log'%(totprocs,yambo,filename,folder,folder,folder))

where totprocs is the total number of CPUs that you will be using in your job.
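
For context, inside the gw_conv example the full run line would then look roughly like the sketch below; yambo, filename and folder are assumed to be defined earlier in the script as in the original example, and the value of totprocs here is only a placeholder:

import os

totprocs = 32                # placeholder: total number of MPI tasks for the job
yambo    = 'yambo'           # placeholder: path to the yambo executable
filename = 'yambo_run.in'    # placeholder: input file written by yambopy
folder   = 'reference'       # placeholder: job string / output folder

# run yambo through mpirun so the X_all_q_CPU / SE_CPU settings are distributed over MPI tasks
os.system('cd gw_conv; mpirun -np %d %s -F %s -J %s -C %s 2> %s.log' % (totprocs, yambo, filename, folder, folder, folder))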

Kind Regards,
Henrique
Henrique Pereira Coutada Miranda
Institute of Condensed Matter and Nanosciences
http://henriquemiranda.github.io/
UNIVERSITÉ CATHOLIQUE DE LOUVAIN

sitangshu
Posts: 175
Joined: Thu Jan 05, 2017 8:08 am

Re: Yambopy parallelization

Post by sitangshu » Tue Mar 14, 2017 7:24 am

Thank you, Henrique,

It worked smoothly. ;)
Will this work for electron-phonon calculations using pw? :roll: I am wondering where I should put this.

Regards,
Sitangshu
Sitangshu Bhattacharya
Indian Institute of Information Technology-Allahabad
India
Web-page: http://profile.iiita.ac.in/sitangshu/
Institute: http://www.iiita.ac.in/

miranda.henrique
Posts: 16
Joined: Thu Jul 23, 2015 2:34 pm

Re: Yambopy parallelization

Post by miranda.henrique » Tue Mar 14, 2017 2:30 pm

Dear Sitangshu,

To apply the same parallelization to QE, you also have to add mpirun in the os.system line.
In the examples we provide, the execution is serial (since in principle we don't need mpirun).
However, modifying them is just a matter of adding the mpirun command.
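
For example, a pw.x run line in one of the example scripts could be changed along the lines of the sketch below; the folder and file names are placeholders rather than the ones actually used in the scripts:

import os

totprocs = 32  # placeholder: total number of MPI tasks

# serial version, as distributed in the examples (placeholder names):
#   os.system('cd scf; pw.x -inp scf.in > scf.log')

# parallel version: simply prepend mpirun, as done above for yambo
os.system('cd scf; mpirun -np %d pw.x -inp scf.in > scf.log' % totprocs)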

Kind Regards,
Henrique
Henrique Pereira Coutada Miranda
Institute of Condensed Matter and Nanosciences
http://henriquemiranda.github.io/
UNIVERSITÉ CATHOLIQUE DE LOUVAIN

sitangshu
Posts: 175
Joined: Thu Jan 05, 2017 8:08 am

Re: Yambopy parallelization

Post by sitangshu » Tue Mar 21, 2017 8:31 am

Dear Sir,

Thank you for your previous comment. I have the parallel GW working now.
However, I am trying to make the electron-phonon calculation run in parallel as well, but it seems the parallelization is failing.
I have tried to insert the same mpirun command in the lines for the dVscf potential and the e-p matrix elements, but the parallelism is not working...
Could you please comment on this?
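
For illustration, the kind of modification described above would look roughly like the sketch below; the folder name, input file and ph.x invocation are placeholders and not the actual lines of the electron-phonon example script:

import os

totprocs = 32  # placeholder: total number of MPI tasks

# ph.x run that generates the dVscf potential / e-p matrix elements,
# with mpirun prepended in the same way as for pw.x and yambo above
# (placeholder folder and file names)
os.system('cd phonon; mpirun -np %d ph.x -inp ph.in > ph.log' % totprocs)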

Regards,
Sitangshu
Sitangshu Bhattacharya
Indian Institute of Information Technology-Allahabad
India
Web-page: http://profile.iiita.ac.in/sitangshu/
Institute: http://www.iiita.ac.in/
