Unphysically large HF corrections

Concerns issues with computing quasiparticle corrections to the DFT eigenvalues, i.e., the self-energy within the GW approximation (-g n), or considering the Hartree-Fock exchange only (-x)

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano

riemann
Posts: 5
Joined: Sun Oct 04, 2015 9:54 am

Unphysically large HF corrections

Post by riemann » Thu May 20, 2021 9:21 am

Dear All,

I'm running Yambo to calculate the GW correction to the energy gap of PbTe, and the calculated HF corrections are unphysically large. During the convergence tests of the Yambo parameters the code was producing reasonable values, but once I replaced the parameters with the converged ones I got unphysically large HF corrections, which affect the size of the gap dramatically: the expected gap in the presence of the spin-orbit interaction would be around 0.25 eV, whereas I now obtain about 4 eV. My input and output files (r_setup, input and output) are attached. I would highly appreciate any guidance on where this issue comes from and how to fix it.

Thank you in advance.

Regards,
Vahid

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm
Contact:

Re: Unphysically large HF corrections

Post by Daniele Varsano » Thu May 20, 2021 10:06 am

Dear Vahid,
please sign your posts with your full name and affiliation; this is a rule of the forum, and you can do it once and for all by filling in the signature in your user profile.

It is not totally clear to me what you mean by "While during the Yambo parameters convergence it was producing the reasonable values and once I replaced the parameters with the converged one I got unphysical high HF corrections".

Indeed those are unphysical values. There are some issues, which I report below, but most probably the code got confused by the parallel strategy:

Code: Select all

 SE_CPU= "40.1.60"                # [PARALLEL] CPUs for each role
 SE_ROLEs= "q.qp.q"               # [PARALLEL] CPUs roles (q,qp,b)
Note that there is a misspelling: q is repeated twice, while the last role should be b.
As a general remark, I suggest you avoid parallelisation over q, as it is largely unbalanced. Maybe you can calculate the HF part in a separate run using fewer CPUs.
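A corrected assignment could look like the following (the CPU split is only illustrative; the product of the three numbers must match your total number of MPI tasks):

Code: Select all

 SE_CPU= "1.40.60"                # [PARALLEL] CPUs for each role
 SE_ROLEs= "q.qp.b"               # [PARALLEL] CPUs roles (q,qp,b)

Here all the q parallelism is switched off, as suggested above.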


Anyway, looking at your report I can see two issues:
1) You are using a pseudopotential containing non-linear core corrections (NLCC). Even if this is allowed, it is somewhat discouraged: while NLCC can be taken into account in the local term Vxc, they are not taken into account in the Fock term; for the same reason they are discouraged in QE when using hybrid functionals. In any case, in order to take them into account you need to activate the keyword UseNLCC in the input file. This anyway does not guarantee that the final results are accurate, for the reason explained above. The best option is to use a PP without NLCC.
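If you do decide to keep the NLCC pseudopotential, the keyword is added as a simple flag in the yambo input file (the descriptive comment here is mine, not the code's):

Code: Select all

 UseNLCC                          # [xc] Include NLCC corrections in the Vxc term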

2) You are reducing the VXCRLvcs value with respect to the default (the cutoff of the density); this could be critical in the case of the PBE functional, where the gradient of the density needs to be evaluated.

Note that yambo reports the xc energy in the report file:

Code: Select all

[xc] E_xc :  -17.21048 [Ha]
      E_xc :  -34.42095 [Ry]
This should be compared with the one reported in the QE calculation; in your case they will match only if you include NLCC.

Note also that your EXXRLvcs could be rather small.

Another issue, even if not dramatic, is in the Monte Carlo average of the Coulomb potential: RandQpts should be of the order of 1-3 million, while you are using a rather small sampling.

Having said that, please note that you have an extremely low Z factor (it is usually of the order of 0.7-1), which makes the whole calculation questionable; but this is probably due to the wrongly large HF values you are obtaining.

In any case, I would first concentrate on understanding what went wrong in the HF calculation.
I do not think that all these issues can be responsible for such large values; most probably the code got confused by the parallel strategy.


Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

riemann
Posts: 5
Joined: Sun Oct 04, 2015 9:54 am

Re: Unphysically large HF corrections

Post by riemann » Fri May 21, 2021 7:28 am

Dear Daniele,

Thank you so much for your prompt reply.

I've modified my input file accordingly (except for UseNLCC, so as to track what is really causing these unphysical corrections), but I got another error, as below:
====================================================================================
[ERROR] STOP signal received while in[06] Local Exchange-Correlation + Non-Local Fock
[ERROR]USER parallel structure does not fit the current run parameters. 60 CPU for 32 elements (ROLE is 'b'
====================================================================================

I would be highly thankful if you could guide me on where this issue comes from and how to fix it. My input and output files are attached for your consideration.

Thank you in advance.

Regards,
Vahid

--Dr. Vahid Derakhshan Maman
Postdoctoral Research Associate
Utrecht University, Debye Institute for Nanomaterials Science
Heidelberglaan 8, 3584 CS Utrecht

Daniele Varsano
Posts: 3773
Joined: Tue Mar 17, 2009 2:23 pm
Contact:

Re: Unphysically large HF corrections

Post by Daniele Varsano » Fri May 21, 2021 9:05 am

Dear Vahid,
this is simply because the HF term requires a summation over occupied states only.
You have 30 occupied bands, so you cannot assign 60 CPUs to the "b" role of the self-energy.

Actually, you are using a very large number of CPUs; I suggest you reduce it for these testing purposes, and use more resources for the production runs later.
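For example, with 40 MPI tasks in total, one could use something like this (the numbers are only illustrative; the b entry must not exceed the number of occupied bands, 30 in your case):

Code: Select all

 SE_CPU= "1.4.10"                 # [PARALLEL] CPUs for each role
 SE_ROLEs= "q.qp.b"               # [PARALLEL] CPUs roles (q,qp,b)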

Also, since you want to check HF first, you may want to run an HF calculation directly; right now you are calculating the screening first. In order to do that, just remove these lines from the input:

Code: Select all

ppa                                               # [R Xp] Plasmon Pole Approximation
gw0                                               # [R GW] GoWo Quasiparticle energy levels
em1d                                              # [R Xd] Dynamical Inverse Dielectric Matrix

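Equivalently, an HF-only input can be generated from scratch with the -x flag mentioned in this forum's description (the file name yambo_hf.in is just an example):

Code: Select all

 yambo -x -F yambo_hf.in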
Next, you are running on top of a previous calculation:

Code: Select all

[RD./CONV//ndb.HF_and_locXC]----------------------------------------------------
RD stands for read; next, yambo checks whether it is compatible with the actual input parameters (see ERR) and then recalculates it. In general, for the sake of clarity, I suggest you remove that file first: while an ERR will make yambo recalculate, in the case of a WARNING the code does not do that.
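For instance, assuming the same CONV directory that appears in your report:

Code: Select all

 rm ./CONV/ndb.HF_and_locXC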

Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
