elphondb.py 'HEAD_KPT' error!
Posted: Fri Aug 16, 2024 12:40 pm
by sitangshu
Hi yamboers,
I want to plot the electron-phonon matrix elements over the BZ using yambopy. My Python version is 3.10.14. The yambo version used for this calculation was obtained from https://github.com/attacc/yambo.git (yambo-excph).
My SAVE folder contains: ndb.elph_gkkp_expanded* (from 1-324), ndb.PH_Double_Grid, ns.nlcc_pp_pwscf, ns.db1, ndb.gops, ndb.kindx, ns.kb_pp_pwscf*, ns.wf and ns.wf_fragments*
When I run python elph_plot.py, I get this error:
(yambopy) sitangshubhattacharya@Sitangshus-iMac databases_yambopy % python elph_plot.py
37 kpoints expanded to 324
Traceback (most recent call last):
File "/Users/sitangshubhattacharya/yambopy/tutorial/databases_yambopy/elph_plot.py", line 34, in <module>
yelph = YamboElectronPhononDB(ylat,folder_gkkp=save_path+'/SAVE',save=save_path+'/SAVE')
File "/opt/anaconda3/envs/yambopy/lib/python3.10/site-packages/yambopy/dbs/elphondb.py", line 97, in __init__
self.ibz_kpoints_elph = database.variables['HEAD_KPT'][:].T
KeyError: 'HEAD_KPT'
I have checked that the gkkp* files are not corrupted: I can use them (unexpanded) to obtain all the QP energies at a variety of temperatures.
I used these parameters in the elph_plot.py script:
i_n, i_m = [25,26] # i_n = 25 -> valence band, i_m = 26 -> conduction band
i_nu = 3 # LA phonon mode at K (ZO mode at Gamma); LO, TO modes are i_nu=4,5
i_q = 323 # This is the K-point in the hexagonal BZ
i_k = 323
I tried with the hBN database and it did not give me this error. Could this be related to the yambo version? I realized that the hBN database was generated with a different version; it also does not contain an ns.nlcc_pp_pwscf file.
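For reference, a quick way to check which variables the head expanded database actually contains is to open it directly with netCDF4 (a minimal sketch; the path below is just an example and should point to the head file, the one without _fragment in the name):
Code:
from netCDF4 import Dataset

# Open the head (non-fragment) expanded el-ph database and list its variables,
# to see directly whether 'HEAD_KPT' is present.
db = Dataset('SAVE/ndb.elph_gkkp_expanded')  # example path, adjust to your SAVE folder
print(sorted(db.variables.keys()))
db.close()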
A little advice would be very much appreciated.
Regards,
Sitangshu
Re: elphondb.py 'HEAD_KPT' error!
Posted: Mon Sep 02, 2024 10:55 am
by palful
Dear Sitangshu,
This is strange. I have checked with a recent yambo version and the 'HEAD_KPT' variable seems to be there in the expanded databases. Are you reading the expanded "ndb.elph_gkkp_expanded*" files or the unexpanded matrix elements?
If you attach the head database (the one without *_fragment* in the name) and the ns.db1 file from the SAVE directory, I could investigate further. The presence or absence of other databases beyond ns.db1 in the SAVE does not affect this part of yambopy.
Best,
Fulvio
Re: elphondb.py 'HEAD_KPT' error!
Posted: Fri Sep 06, 2024 8:04 pm
by sitangshu
Thank you, Fulvio, for your response. I checked this and found that I needed to comment out gkkp_db in the ypp_ph input. After doing this, I can plot the matrix elements.
Regards,
Sitangshu
Re: elphondb.py 'HEAD_KPT' error!
Posted: Wed Jun 11, 2025 10:54 am
by csk
Hi!
I am facing the same problem and commenting out 'gkkp_db' in the ypp_ph input file does not work for me. Do you have any suggestions?
The yambo version I use is 5.2.3, compiled on the Leonardo cluster. I also attach the db files 'ndb.elph_gkkp_expanded', as well as the input, output, and report files from ypp_ph (remove the '.txt' extension, which I added for the forum upload).
What I find strange is that in the output file of ypp_ph, 77 k-points are detected, while there are 27 in the report file (which is correct for the IBZ)...
Thanks for your help!
Christian
Re: elphondb.py 'HEAD_KPT' error!
Posted: Wed Jun 11, 2025 2:42 pm
by csk
Update: the error can also be reproduced with the electron-phonon tutorial on bulk Si (https://wiki.yambo-code.eu/wiki/index.p ... n_coupling).
If I uncomment #GkkpExpand in the last step and try to read the DB with the following Python code (in the folder dvscf/si.save), I get the same error.
Code:
from yambopy import *

# Load the lattice information from the SAVE folder
ylat = YamboLatticeDB.from_db_file(filename="SAVE/ns.db1")
# Read the expanded el-ph databases; this is where the 'HEAD_KPT' KeyError is raised
yelph = YamboElectronPhononDB(ylat, folder_gkkp='SAVE', save='SAVE')
Greetings,
Christian
Re: elphondb.py 'HEAD_KPT' error!
Posted: Mon Jun 16, 2025 6:06 pm
by palful
Dear Christian and Sitangshu,
You are right. Somehow, the variable "HEAD_KPT" (containing the k-point coordinates in the irreducible BZ in yambo units) stopped being printed in the el-ph database, but only in the expanded case.
I assume this was a "sneaky" change introduced at some point in the new yambo versions, as indeed that variable was not very useful.
I have patched yambopy to handle this; you can find the updated version in the GitHub repository.
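Schematically, that information can anyway be recovered from the ns.db1 database; the idea is along these lines (only a sketch, not the actual commit, and the variable names are indicative):
Code:
from netCDF4 import Dataset

# Sketch: if the expanded el-ph database lacks 'HEAD_KPT', take the
# IBZ k-points from ns.db1 instead (example paths).
elph = Dataset('SAVE/ndb.elph_gkkp_expanded')
if 'HEAD_KPT' in elph.variables:
    kpts = elph.variables['HEAD_KPT'][:].T
else:
    db1 = Dataset('SAVE/ns.db1')
    kpts = db1.variables['K-POINTS'][:].T   # indicative variable name
    db1.close()
elph.close()
print(kpts.shape)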
Thank you for spotting this.
Let me also mention that we are completely revamping the way electron-phonon coupling matrix elements are calculated, making use of the new auxiliary "LetzElPhC" code (you can find it on the yambo GitHub page as well). There has not been any "official" release yet -- and no update to the wiki so far, it will come at some point -- but it can already be used. This makes a large part of the current wiki tutorial obsolete.
Best,
Fulvio
Re: elphondb.py 'HEAD_KPT' error!
Posted: Mon Jan 26, 2026 1:52 pm
by abbas
Please help me solve this issue.
/elph/MoS2.save$ yambopy l2y -ph /home/hongchang/elph/MoS2.dvscf -b 15 22 -par 5 5 --lelphc /opt/software/LetzElphC-main/src/lelphc
:: LetzElPhC pre-processing ::
:: LetzElPhC el-ph calculation ($> tail -f lelphc.log) ::
*************************************
# [ Error !!!] : File : common/parallel.c, in function : create_parallel_comms at line : 106
Error msg : product of kpools and qpools must divide total cpus.
*************************************
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[ ... the same LetzElPhC error and Open MPI MPI_ABORT messages are repeated by the remaining MPI ranks ... ]
Error: Command 'mpirun -np 25 /opt/software/LetzElphC-main/src/lelphc -F lelphc.in' returned non-zero exit status 1.
Description: None
:: Load el-ph database ::
Traceback (most recent call last):
File "/opt/software/anaconda3/lib/python3.13/site-packages/yambopy/letzelphc_interface/lelphcdb.py", line 37, in __init__
try: database = Dataset(filename)
~~~~~~~^^^^^^^^^^
File "src/netCDF4/_netCDF4.pyx", line 2517, in netCDF4._netCDF4.Dataset.__init__
File "src/netCDF4/_netCDF4.pyx", line 2154, in netCDF4._netCDF4._ensure_nc_success
FileNotFoundError: [Errno 2] No such file or directory: 'ndb.elph'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/software/anaconda3/bin/yambopy", line 5, in <module>
from yambocommandline.scripts.yambopy import YambopyCmd
File "/opt/software/anaconda3/lib/python3.13/site-packages/yambocommandline/scripts/yambopy.py", line 766, in <module>
ycmd = YambopyCmd(*sys.argv)
File "/opt/software/anaconda3/lib/python3.13/site-packages/yambocommandline/scripts/yambopy.py", line 758, in __init__
self.cmd = cmdclass(args[2:])
~~~~~~~~^^^^^^^^^^
File "/opt/software/anaconda3/lib/python3.13/site-packages/yambocommandline/scripts/yambopy.py", line 671, in __init__
lelph_interface.letzelph_to_yambo()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/software/anaconda3/lib/python3.13/site-packages/yambocommandline/commands/lelph_interface.py", line 134, in letzelph_to_yambo
lelph_obj = LetzElphElectronPhononDB('ndb.elph',div_by_energies=False)
File "/opt/software/anaconda3/lib/python3.13/site-packages/yambopy/letzelphc_interface/lelphcdb.py", line 38, in __init__
except: raise FileNotFoundError("error opening %s in LetzElphElectronPhononDB"%filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: error opening ndb.elph in LetzElphElectronPhononDB
Abbas
Master Student
IIUI, Islamabad, Pakistan