Dirac point in graphene
Posted: Wed Feb 17, 2010 5:28 pm
Greetings all
I have been unable to generate a proper GW band structure for graphene and would appreciate help from anyone on this forum who has done it before. Let me begin by describing the process (a rough sketch of the commands follows the list):
(1) Run a PW scf calculation (Quantum Espresso)
(2) Run a PW nscf calculation
(3) p2y -N at command line (not submitted to parallel process queue)
(4) yambo -i -V 2 (to see the number of plane waves being used in init run)
(5) yambo
(6) yambo -x -p p -g n -V 2 (generate the input file)
(7) yambo
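For reference, the full sequence looks roughly like the commands below. The file and directory names (graphene.scf.in, graphene.nscf.in, graphene.save) are only placeholders, not my actual files:

# PW ground-state and non-self-consistent runs (placeholder file names)
pw.x < graphene.scf.in > graphene.scf.out
pw.x < graphene.nscf.in > graphene.nscf.out
# convert the QE save directory into the Yambo databases
cd graphene.save
p2y -N
# initialization; -V 2 exposes the plane-wave variables in the generated input
yambo -i -V 2
yambo
# generate and then run the exchange + plasmon-pole GW input (Newton solver)
yambo -x -p p -g n -V 2
yambo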
Attached is a tarball containing representative (though by no means complete) files from these calculations.
The problem is a persistent gap at the Dirac point. The gap has improved from my initial runs (from 0.8 eV down to 0.02 eV), but I am unable to close it completely.
QP [eV] @ K [469] (iku): 0.333333 0.500000 0.000000
B=1 Eo=-12.44 E=-18.35 E-Eo= -5.91 Z=0.86 So= 1.62981 xx=-24.17079 Vxc=-15.69168
B=2 Eo=-12.44 E=-18.28 E-Eo= -5.83 Z=0.86 So= 1.61910 xx=-23.90832 Vxc=-15.53331
B=3 Eo=-10.78 E=-14.96 E-Eo= -4.18 Z=0.87 So= 1.56363 xx=-21.25340 Vxc=-14.87560
B=4 Eo= 0.00 E= 1.14 E-Eo= 1.14 Z=0.89 So= 0.14934 xx=-11.73359 Vxc=-12.85598
B=5 Eo= 0.00 E= 1.16 E-Eo= 1.16 Z=0.89 So=-.1587 xx=-11.46 Vxc=-12.92
The variables I am changing (which I believe are the most relevant to convergence) are FFTGvecs, EXXRLvcs, GbndRange, and NGsBlkXp. I have tested many permutations of these variables and have watched the band structure (specifically the Dirac point) converge, only for the runs to fail with WF allocation errors (which I have mentioned in previous posts). Lately I have had access to an XT system with more memory and have seen better results, but I still have not closed the gap, and I fear that my FFTGvecs and EXXRLvcs values are now so small that I am not converged and am fooling myself with a spurious gap: the result is what I want by accident, not by convergence. As before, however, whenever I increase the number of G-vectors, bands, or NGsBlkXp, the run crashes with an error message like the one below:
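To be concrete, the part of the generated GW input I keep editing looks roughly like the fragment below. The numbers are placeholders only (not my converged or even current values); the variable names and block syntax are what yambo -x -p p -g n -V 2 writes out (the bands range appears as the GbndRnge block), and the QPkrange indices simply mirror the k-point and bands in the QP output above:

FFTGvecs=  3000   RL    # [FFT] Plane-waves (placeholder value)
EXXRLvcs= 10000   RL    # [XX] Exchange RL components (placeholder value)
% GbndRnge
  1 | 100 |             # [GW] G[W] bands range (placeholder value)
%
NGsBlkXp=  1000   RL    # [Xp] Response block size (placeholder value)
% QPkrange              # [GW] QP k-point/band indices
  469 | 469 | 1 | 5 |
%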
<---> [01] Job Setup
<---> [02] Input variables setup
<---> [02.01] Unit cells
<---> [02.02] Symmetries
<---> [02.03] RL shells
<---> [02.04] K-grid lattice
<09s> [02.05] Energies [ev] & Occupations
<10s> [03] Transferred momenta grid
<10s> [M 0.046 Gb] Alloc qindx_X qindx_S (0.019)
<13s> [04] External QP corrections (X)
<13s> [05] External QP corrections (G)
<13s> [06] EX(change)S(elf-energy) and Vxc potential
<13s> [M 2.054 Gb] Alloc WF (2.002)_pmii_daemon(SIGCHLD): PE 276 exit signal Killed
At step (1) I use converged graphene input files. At step (2) I have tried different k-grid densities in order to have more QPs available in the GW calculation. I have run step (3) both in parallel and on a single processor, but a single processor is simpler since the system is small and the conversion does not take much time. At step (4) I have edited the input for smaller numbers of vectors as well as run with the defaults, and the final steps have been run with so many different configurations that I cannot hope to list them all here.
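For illustration, these configuration tests are roughly sweeps of the kind sketched below, here over NGsBlkXp. The values and the template file name yambo_gw.in are placeholders; in practice each value goes into its own batch job on the XT machine:

#!/bin/bash
# sweep NGsBlkXp (placeholder values) and re-run the GW step for each one
for blk in 500 1000 2000; do
    dir=run_NGsBlkXp_${blk}
    mkdir -p "$dir"
    ln -sf ../SAVE "$dir/SAVE"    # reuse the databases from p2y / the init run
    sed "s/^NGsBlkXp.*/NGsBlkXp= ${blk} RL/" yambo_gw.in > "$dir/yambo.in"
    (cd "$dir" && yambo -F yambo.in -J "NGsBlkXp_${blk}")
done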
Any help is greatly appreciated, as I cannot move on to my actual topic of interest until I have successfully managed this planar system calculation.