Not enough states to converge the Fermi Level

Concerns any physical issues arising during the setup step (-i option). This includes problems with symmetries, k/q-point sets, and so on. For technical problems (running in parallel, etc), refer to the Technical forum.

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, Daniele Varsano

clin
Posts: 9
Joined: Fri Sep 17, 2021 10:27 am

Not enough states to converge the Fermi Level

Post by clin » Fri Jan 12, 2024 4:50 am

Dear developers,

I encountered the error "Not enough states to converge the Fermi Level" during the initialization of the Yambo database. My system is copper, so it is a metallic case. The initialization with 600 bands from the QE NSCF run was successful, but it failed with 1000 bands. The k-grid is 12x12x12.

The detailed output is as follows:

Code: Select all

 <---> [01] MPI/OPENMP structure, Files & I/O Directories
 <---> MPI Cores-Threads   : 1(CPU)-256(threads)
 <---> [02] CORE Variables Setup
 <---> [02.01] Unit cells
 <---> [02.02] Symmetries
 <---> [02.03] Reciprocal space
 <---> Shells finder |########################################| [100%] --(E) --(X)
 <---> [02.04] K-grid lattice
 <---> Grid dimensions      :  12  12  12
 <---> [02.05] Energies & Occupations
[ERROR] STOP signal received while in[02.05] Energies & Occupations
[ERROR] Not enough states to converge the Fermi Level
The corresponding r_setup is also attached.
r_setup.txt
Is there anything I can do to avoid this error?

I have another question regarding the electron-temperature setting ElecTemp in Yambo. For metals, according to previous posts, I find that Yambo by default sets it to 0.025852 eV, i.e. 300 K. Should I keep this default, set it to 0 K, or set it to a value corresponding to the exact smearing used in QE? For example, I used mv smearing with degauss=0.01 Ry; should I then set it to 1579 K (i.e. 0.01 Ry), which is a rather large value? How does ElecTemp affect the screening and GW calculations for metals?

Thanks,
Changpeng
You do not have the required permissions to view the files attached to this post.
Changpeng Lin
Doctoral Assistant, EPFL

Davide Sangalli
Posts: 614
Joined: Tue May 29, 2012 4:49 pm
Location: Via Salaria Km 29.3, CP 10, 00016, Monterotondo Stazione, Italy
Contact:

Re: Not enough states to converge the Fermi Level

Post by Davide Sangalli » Fri Jan 12, 2024 9:20 am

Dear Changpeng,
for the error with the Fermi level I'm not sure what is happening.
You can try to comment out the line generating the error and recompile (look for the error message in the file src/common/OCCUPATIONS_Fermi.F).

Can you also attach your DFT input files?
I'll try to reproduce the behavior.

For the temperature, there is no unique answer. The temperature should be used mostly as a parameter to speed up the convergence w.r.t. the k-point sampling.
The GW theory implemented in yambo (and in most other codes) is based on the zero-temperature formalism.
Accordingly, the exact result is the one obtained with zero temperature in input. However, the k-point sampling is very hard to converge without a temperature, so in practice you set a small value in input.
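To picture where the error and the temperature meet, here is a minimal, hypothetical sketch (in Python, not Yambo's actual OCCUPATIONS_Fermi.F routine) of a bisection search for the chemical potential with Fermi-Dirac occupations. With smeared occupations the stored bands must extend well past the Fermi level; if the occupation tail is cut off by too few bands, the electron count can never be matched and the search cannot converge:

```python
import numpy as np

def fermi_dirac(e, mu, kT):
    # Fermi-Dirac occupation, argument clipped to avoid overflow in exp
    return 1.0 / (np.exp(np.clip((e - mu) / kT, -60.0, 60.0)) + 1.0)

def find_fermi(eigs, n_electrons, kT, spin_deg=2.0, tol=1e-10):
    """Bisect for the chemical potential mu such that the k-averaged,
    spin-degenerate occupations sum to n_electrons per cell.
    eigs: (nk, nbands) band energies with equal k-point weights."""
    nk = eigs.shape[0]
    lo, hi = eigs.min() - 1.0, eigs.max() + 1.0
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        n = spin_deg * fermi_dirac(eigs, mu, kT).sum() / nk
        if abs(n - n_electrons) < tol:
            break
        if n < n_electrons:
            lo = mu
        else:
            hi = mu
    return mu

# toy metallic "band structure": 8 k-points, 12 bands (eV)
rng = np.random.default_rng(0)
eigs = np.sort(rng.uniform(-5.0, 5.0, size=(8, 12)), axis=1)
mu = find_fermi(eigs, n_electrons=10.0, kT=0.025852)  # kT ~ 300 K in eV
```

The toy band energies and electron count are made up; the point is only the structure of the search.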

Best,
D.
Davide Sangalli, PhD
CNR-ISM, Division of Ultrafast Processes in Materials (FLASHit) and MaX Centre
https://sites.google.com/view/davidesangalli
http://www.max-centre.eu/

clin
Posts: 9
Joined: Fri Sep 17, 2021 10:27 am

Re: Not enough states to converge the Fermi Level

Post by clin » Fri Jan 12, 2024 10:52 am

Hi Davide,

Thanks a lot for your explanation. I've attached my pwscf inputs and also the pseudopotential, in case you want to perform a similar calculation. Meanwhile, I will test what happens when the line generating that error is commented out.

Regarding the temperature, I ran a few GW calculations for copper with ElecTemp set to 0. In the report file, Yambo generated a tiny value for it, 0.7 K. Since it's just a small parameter to speed up the convergence w.r.t. the k-grid, can I trust my results as long as there is no error? Or is it safer to keep the default (300 K)?

Additional info, maybe not relevant: in my SCF input, I set the convergence threshold to a very small value, conv_thr=1E-20, because only in this way is the NSCF run with 1000 bands successful. Otherwise, e.g. with conv_thr=1E-14 for the SCF, it throws an error in the subroutine cdiaghg: "S matrix not positive definite" or "problems computing cholesky". This seems to imply that the Hamiltonian is singular or that an orthogonal solution cannot be found.

Thanks,
Changpeng
Changpeng Lin
Doctoral Assistant, EPFL

Davide Sangalli
Posts: 614
Joined: Tue May 29, 2012 4:49 pm
Location: Via Salaria Km 29.3, CP 10, 00016, Monterotondo Stazione, Italy
Contact:

Re: Not enough states to converge the Fermi Level

Post by Davide Sangalli » Fri Jan 12, 2024 12:08 pm

Regarding the temperature, I ran a few GW calculations for copper with ElecTemp set to 0. In the report file, Yambo generated a tiny value for it, 0.7 K. Since it's just a small parameter to speed up the convergence w.r.t. the k-grid, can I trust my results as long as there is no error? Or is it safer to keep the default (300 K)?
You can compare how much the results change. Yes, 0.7 K might be too small; 300 K is probably safer.
What I would expect (never tried): reducing the value from a high number (say 1000 K), the results should change smoothly down to a point where there is a jump in the value.
This might happen when the temperature becomes too low w.r.t. the k-point grid.
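A toy illustration of why things can jump (hypothetical numbers, not an actual Yambo run): on a coarse grid at low temperature, the electron count as a function of the trial Fermi level becomes a near-staircase, because every occupation is already essentially 0 or 1. A finite temperature restores a smooth, well-conditioned dependence:

```python
import numpy as np

# a handful of discrete eigenvalues near the Fermi region,
# as produced by a coarse k-grid (eV)
eigs = np.array([-0.30, -0.10, 0.05, 0.20, 0.45])

def n_of_mu(mu, kT):
    # total Fermi-Dirac occupation for a trial chemical potential mu
    x = np.clip((eigs - mu) / kT, -60.0, 60.0)
    return np.sum(1.0 / (np.exp(x) + 1.0))

# scan the trial Fermi level between two adjacent eigenvalues
mus = np.linspace(-0.05, 0.0, 5)
n_cold = np.array([n_of_mu(m, kT=1e-4) for m in mus])  # ~1 K
n_warm = np.array([n_of_mu(m, kT=0.1) for m in mus])   # ~1160 K

plateau = n_cold.max() - n_cold.min()  # flat: occupations pinned at 0 or 1
spread  = n_warm.max() - n_warm.min()  # smooth, finite variation
```

On a dense k-grid the eigenvalues crowd together and the plateaus shrink, which is why a smaller temperature becomes tolerable as the sampling improves.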

Best,
D.
Davide Sangalli, PhD
CNR-ISM, Division of Ultrafast Processes in Materials (FLASHit) and MaX Centre
https://sites.google.com/view/davidesangalli
http://www.max-centre.eu/

clin
Posts: 9
Joined: Fri Sep 17, 2021 10:27 am

Re: Not enough states to converge the Fermi Level

Post by clin » Fri Jan 12, 2024 12:19 pm

Hi Davide,

Thanks. An update: with the line generating the error commented out, the initialization never completes. It gets stuck in the section "[02.05] Energies & Occupations", like a loop that never ends. I waited at least one hour, but the code never exited.

Changpeng
Changpeng Lin
Doctoral Assistant, EPFL

Davide Sangalli
Posts: 614
Joined: Tue May 29, 2012 4:49 pm
Location: Via Salaria Km 29.3, CP 10, 00016, Monterotondo Stazione, Italy
Contact:

Re: Not enough states to converge the Fermi Level

Post by Davide Sangalli » Fri Jan 19, 2024 5:10 pm

Dear Changpeng,
in the master branch we have released some upgrades to the subroutine that computes the electronic occupations:

Code: Select all

src/common/OCCUPATIONS_Fermi.F
Can you try to compile the master branch and see if the initialization works with it?

Best,
D.
Davide Sangalli, PhD
CNR-ISM, Division of Ultrafast Processes in Materials (FLASHit) and MaX Centre
https://sites.google.com/view/davidesangalli
http://www.max-centre.eu/

clin
Posts: 9
Joined: Fri Sep 17, 2021 10:27 am

Re: Not enough states to converge the Fermi Level

Post by clin » Fri Jan 26, 2024 8:14 am

Dear Davide,

Sorry for the late reply; I don't check the forum frequently. I just ran the initialization with the Yambo compiled from the master branch. Unfortunately, the same error occurred again: "not enough states to converge the Fermi level". I have attached the src/common/OCCUPATIONS_Fermi.F file so that you can check whether I used the correct modified version. I can also share a Google Drive link to the SAVE database generated by p2y if that helps with debugging.

I have a few other questions about using Yambo:
1) In some cases, the initialization has the following warnings:

Code: Select all

<---> [WARNING] Re-defining variable Qpts in file ./SAVE//ndb.kindx
<---> [WARNING] Re-defining variable Qindx in file ./SAVE//ndb.kindx
<---> [WARNING] Re-defining variable Sindx in file ./SAVE//ndb.kindx
Why is this warning issued? Will it actually affect any Yambo calculation?

2) Sometimes the value of NGsBlkX at the end of the report file differs from the one I set in the input. For example, if I set 50 Ry it becomes 53 Ry; if I set 10 Ry, it remains 10 Ry. Is this because Yambo sometimes needs to close the G-shell? I wouldn't expect the chosen energy cutoff to happen to fall on an unclosed G-shell.

3) In which cases will there be a large difference between results from single- and double-precision Yambo? Is it always recommended to use double-precision Yambo?

4) For BSE calculations, the kernel has units of energy (Hartree), right? What I don't understand is the factor of 1/(\Omega N_q) in the expression [see Eqs. 19 and 20 of the 2009 Yambo code paper]. I derived the kernel from scratch; to me the factor seems to come from the Coulomb potential being normalized over the whole real space, not just a unit cell. By performing BSE calculations on different k/q-grids, I found that the BSE matrix elements (those present on both the small and the large grid) become smaller and smaller as the k/q-grid grows. Since the dielectric matrix is already converged in my calculations, their ratio turned out to be N_q_small_grid / N_q_large_grid. I want to use the screened kernel to compute the Coulomb scattering rate via Fermi's golden rule; if the BSE matrix elements are normalized by the grid, the strength of the electron-hole interaction will depend on the grid size. Could you comment on this?
I also checked the expressions for the BSE kernel in the BerkeleyGW and Abinit codes. There is no such factor of 1/(\Omega N_q) in the BerkeleyGW expression, Eqs. (37) and (38) of their paper:
https://linkinghub.elsevier.com/retriev ... 5511003912
For Abinit, the expression is the same as in Yambo: see https://docs.abinit.org/theory/bse/#4-k ... ocal-space

Many thanks,
Changpeng
You do not have the required permissions to view the files attached to this post.
Changpeng Lin
Doctoral Assistant, EPFL

Davide Sangalli
Posts: 614
Joined: Tue May 29, 2012 4:49 pm
Location: Via Salaria Km 29.3, CP 10, 00016, Monterotondo Stazione, Italy
Contact:

Re: Not enough states to converge the Fermi Level

Post by Davide Sangalli » Wed Jan 31, 2024 12:23 pm

Dear Changpeng,
I've generated the yambo SAVE folder with your input file and the setup worked smoothly:

Code: Select all

 ___ __  _____  __ __  _____   _____
|   Y  ||  _  ||  Y  ||  _  \ |  _  |
|   |  ||. |  ||.    ||. |  / |. |  |
 \_  _/ |. _  ||.\_/ ||. _  \ |. |  |
  |: |  |: |  ||: |  ||: |   \|: |  |
  |::|  |:.|:.||:.|:.||::.   /|::.  |
  `--"  `-- --"`-- --"`-----" `-----"


 <---> [01] MPI/OPENMP structure, Files & I/O Directories
 <---> MPI Cores-Threads   : 1(CPU)-32(threads)
 <---> [02] CORE Variables Setup
 <---> [02.01] Unit cells
 <---> [02.02] Symmetries
 <---> [02.03] Reciprocal space
 <---> Shells finder |########################################| [100%] --(E) --(X)
 <---> [02.04] K-grid lattice
 <---> Grid dimensions      :  12  12  12
 <---> [02.05] Energies & Occupations
 <---> [WARNING] [X] Metallic system
 <---> [03] Transferred momenta grid and indexing
 <---> BZ -> IBZ reduction |########################################| [100%] --(E) --(X)
 <---> [03.01] X indexes
 <---> X [eval] |########################################| [100%] --(E) --(X)
 <---> X[REDUX] |########################################| [100%] --(E) --(X)
 <---> [03.01.01] Sigma indexes
 <---> Sigma [eval] |########################################| [100%] --(E) --(X)
 <---> Sigma[REDUX] |########################################| [100%] --(E) --(X)
 <---> [04] Timing Overview
 <---> [05] Memory Overview
 <---> [06] Game Over & Game summary
and in the r_setup

Code: Select all

  [02.05] Energies & Occupations
  ==============================

  [X] === General ===
  [X] Electronic Temperature                        :  0.258606E-1   300.100    [eV K]
  [X] Bosonic    Temperature                        :  0.258606E-1   300.100    [eV K]
  [X] Finite Temperature mode                       : yes
  [X] El. density                                   :  0.15600E+25 [cm-3]
  [X] Fermi Level                                   :  16.63943 [eV]

  [X] === Gaps and Widths ===
  [X] Conduction Band Min                           :  16.63943 [eV]
  [X] Valence Band Max                              :  16.63943 [eV]
  [X] Filled Bands                                  :   9
  [X] Metallic Bands                                :  10  10
  [X] Empty Bands                                   :    11  1000

  [X] === Metallic Characters ===
  [X] N of el / N of met el                         :  19.00000   1.14736
  [X] Average metallic occ.                         :  0.286839
So the code looks fine to me. I suspect there is something going on in your setup.
One point I notice is that you are running with 1 MPI task and 256 threads. See this line in your log:

Code: Select all

<---> MPI Cores-Threads   : 1(CPU)-256(threads)
In general it is not a good idea to use so many threads, and it may slow down the simulations significantly.
In this case it might be causing the issue you are experiencing. The suggestion is to set OMP_NUM_THREADS=1 in your shell environment and re-run the code.
This time you should see:

Code: Select all

<---> MPI Cores-Threads   : 1(CPU)-1(threads)
I'll reply later to the other questions.

D.
Davide Sangalli, PhD
CNR-ISM, Division of Ultrafast Processes in Materials (FLASHit) and MaX Centre
https://sites.google.com/view/davidesangalli
http://www.max-centre.eu/

Davide Sangalli
Posts: 614
Joined: Tue May 29, 2012 4:49 pm
Location: Via Salaria Km 29.3, CP 10, 00016, Monterotondo Stazione, Italy
Contact:

Re: Not enough states to converge the Fermi Level

Post by Davide Sangalli » Wed Jan 31, 2024 2:05 pm

Other points:
1) In some cases, the initialization has the following warnings:
Code: Select all
<---> [WARNING] Re-defining variable Qpts in file ./SAVE//ndb.kindx
<---> [WARNING] Re-defining variable Qindx in file ./SAVE//ndb.kindx
<---> [WARNING] Re-defining variable Sindx in file ./SAVE//ndb.kindx
Why would there be this warning? Will it actually affect any Yambo calculation?
No, you can ignore this.
2) Sometimes the value of NGsBlkX in the end of report file is different from the one I set in the input. For example, I set 50 Ry then it becomes 53 Ry; if I set 10 Ry, it is still 10 Ry in the end of report file. Is it because Yambo needs to close the G-shell sometimes? I don't think the set energy cutoff will happen to be any unclosed G shell.
Yes, the rounding is because of that. The rounding can be rather large if you set the input in Ry. Just set it in mRy, e.g. 50000 mRy, and it will be smaller.
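The effect can be mimicked with a toy model (a simple cubic lattice in arbitrary units, not Yambo's actual shell finder, assuming the cutoff is rounded up to the next complete shell): some shell energies simply do not exist, so the reported cutoff can jump, just like 50 Ry becoming 53 Ry:

```python
import numpy as np

def shell_energies(nmax=8):
    # distinct |G|^2 / 2 values ("shells") for a simple cubic lattice
    # with reciprocal lattice constant b = 1 (toy units)
    n = np.arange(-nmax, nmax + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    g2 = nx**2 + ny**2 + nz**2
    return np.unique(g2) / 2.0

def closed_shell_cutoff(ecut, nmax=8):
    """Round a requested cutoff up to the nearest closed G-shell."""
    shells = shell_energies(nmax)
    idx = np.searchsorted(shells, ecut, side="left")
    return float(shells[min(idx, len(shells) - 1)])

# |G|^2 = 7 is not a sum of three squares, so no shell exists at 3.5:
# a request of 3.4 falls between the |G|^2 = 6 and |G|^2 = 8 shells
# and is rounded up to 4.0, while 2.5 (|G|^2 = 5) is already closed
```

Requesting the cutoff in finer units (mRy in the real code) simply lands you closer to an existing shell boundary, so the rounding step is smaller.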
3) In which case will there be a large difference in the results using single and double precision Yambo? Is it alwasy recommended to use double precision Yambo?
Double-precision results are more reliable; however, the code uses twice as much memory and also takes more time.
So the suggestion would be: use double precision, but if you run into memory issues, switch to the single-precision version.
Addendum: for numerical reasons, single-precision simulations using many MPI tasks are more reliable and closer to the double-precision results than serial simulations.
4) For BSE calculations, the kernel has units of energy (Hartree), right? What I don't understand is the factor of 1/(\Omega N_q) in the expression [see Eqs. 19 and 20 of the 2009 Yambo code paper]. I derived the kernel from scratch; to me the factor seems to come from the Coulomb potential being normalized over the whole real space, not just a unit cell. By performing BSE calculations on different k/q-grids, I found that the BSE matrix elements (those present on both the small and the large grid) become smaller and smaller as the k/q-grid grows. Since the dielectric matrix is already converged in my calculations, their ratio turned out to be N_q_small_grid / N_q_large_grid. I want to use the screened kernel to compute the Coulomb scattering rate via Fermi's golden rule; if the BSE matrix elements are normalized by the grid, the strength of the electron-hole interaction will depend on the grid size. Could you comment on this?
I also checked the expressions for the BSE kernel in the BerkeleyGW and Abinit codes. There is no such factor of 1/(\Omega N_q) in the BerkeleyGW expression, Eqs. (37) and (38) of their paper:
https://linkinghub.elsevier.com/retriev ... 5511003912
For Abinit, the expression is the same as in Yambo: see https://docs.abinit.org/theory/bse/#4-k ... ocal-space
That is correct: the individual matrix elements of the kernel become smaller and smaller as the k-grid size increases.
This happens because what matters is, in a sense, the "k-integral" of the matrix elements. There is no "renormalization" of the Coulomb interaction over the whole space; the prefactor is a d³k factor in a discrete integral over k space.
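As a one-dimensional analogue (a toy Python sketch with a made-up smooth "matrix element", not actual BSE data): each weighted term carries the dk = 1/N_k prefactor of the discrete integral, so individual terms shrink as the grid is refined, while the k-summed quantity converges to a grid-independent value:

```python
import numpy as np

def weighted_terms(nk):
    """Weighted 'matrix elements' on a uniform k-grid over [0, 1):
    each term carries the dk = 1/N_k prefactor of the discrete integral."""
    k = np.arange(nk) / nk
    f = 1.0 / (1.0 + np.cos(2.0 * np.pi * k) ** 2)  # smooth model function
    return f / nk

t16 = weighted_terms(16)
t64 = weighted_terms(64)

# at the shared k-point k = 0 the weighted terms scale as N_large / N_small
ratio = t16[0] / t64[0]                 # -> 4.0
# but the k-summed ("integrated") quantity is stable under grid refinement
total16, total64 = t16.sum(), t64.sum()
```

This mirrors the observation in the question: element-by-element the kernel scales like N_q_small / N_q_large, yet any physical quantity built from the full k-sum is grid-converged.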

Best,
D.
Davide Sangalli, PhD
CNR-ISM, Division of Ultrafast Processes in Materials (FLASHit) and MaX Centre
https://sites.google.com/view/davidesangalli
http://www.max-centre.eu/

clin
Posts: 9
Joined: Fri Sep 17, 2021 10:27 am

Re: Not enough states to converge the Fermi Level

Post by clin » Thu Feb 01, 2024 9:28 pm

Dear Davide,

Thanks for the help. I tested using only 1 thread, but it doesn't work either. Anyhow, I finally worked it out: it was actually due to the pwscf calculations, although I am not sure what happened in my previous NSCF run. This time I used exactly the same input files but a different parallel structure for pwscf, and the Yambo initialization succeeded. I guess it is somehow related to the numerical stability of the parallelization.

I have a further question regarding your reply below about the BSE kernel. If such a factor comes from the discrete k-integral, does it actually affect the BSE Hamiltonian? For example, looking at Eq. 22 of the 2009 Yambo code paper, the BSE Hamiltonian also contains the electron-hole energy difference as the kinetic term. That seems odd, since the electron-hole energy difference does not depend on the k-grid while the BSE kernel does. My guess is that the diagonalization of the BSE Hamiltonian in Yambo is performed only on the kernel part, with the kinetic term added later, since it is purely diagonal. Could you explain this point a bit more? What is the k-space integral of the BSE matrix elements you mentioned? I don't directly see such an integral over the q- or k-grid in the BSE.
That is correct: the individual matrix elements of the kernel become smaller and smaller as the k-grid size increases.
This happens because what matters is, in a sense, the "k-integral" of the matrix elements. There is no "renormalization" of the Coulomb interaction over the whole space; the prefactor is a d³k factor in a discrete integral over k space.
Best,
Changpeng
Changpeng Lin
Doctoral Assistant, EPFL
