G0W0 Segmentation fault

Deals with issues related to computation of optical spectra in reciprocal space: RPA, TDDFT, local field effects.


davood
Posts: 29
Joined: Tue Apr 12, 2011 8:07 am

G0W0 Segmentation fault

Post by davood » Sat May 25, 2013 12:58 pm

Dear all,
I ran the GW runlevel and got the following error:
G0W0 PPA |#####################| [104%] 05d-05h-20m-38s(E)
G0W0 PPA|######################Segmentation fault
------------------------------------GW.in---------------
gw0 # [R GW] GoWo Quasiparticle energy levels
ppa # [R Xp] Plasmon Pole Approximation
rim_cut # [R RIM CUT] Coulomb interaction
setup # [R INI] Initialization
HF_and_locXC # [R XX] Hartree-Fock Self-energy and Vxc
em1d # [R Xd] Dynamical Inverse Dielectric Matrix
StdoHash= 20 # [IO] Live-timing Hashes
Nelectro= 34.00000 # Electrons number
FFTGvecs= 911 RL # [FFT] Plane-waves
MaxGvecs= 3019 RL # [INI] Max number of G-vectors planned to use
RandQpts= 1000000 # [RIM] Number of random q-points in the BZ
RandGvec= 1 RL # [RIM] Coulomb interaction RS components
#QpgFull # [F RIM] Coulomb interaction: Full matrix
% Em1Anys
0.00 | 0.00 | 0.00 | # [RIM] X Y Z Static Inverse dielectric matrix
%
IDEm1Ref=0 # [RIM] Dielectric matrix reference component 1(x)/2(y)/3(z)
CUTGeo= "box z" # [CUT] Coulomb Cutoff geometry: box/cylinder/sphere
% CUTBox
0.00000 | 0.00000 | 66.00000 | # [CUT] [au] Box sides
%
CUTRadius= 0.000000 # [CUT] [au] Sphere/Cylinder radius
CUTCylLen= 0.000000 # [CUT] [au] Cylinder length
#CUTCol_test # [CUT] Perform a cutoff test in R-space
EXXRLvcs= 2000 RL # [XX] Exchange RL components
XfnQPdb= "none" # [EXTQP Xd] Database
XfnQP_N= 1 # [EXTQP Xd] Interpolation neighbours
% XfnQP_E
0.000000 | 1.000000 | 1.000000 | # [EXTQP Xd] E parameters (c/v) eV|adim|adim
%
% XfnQP_Wv
0.00 | 0.00 | 0.00 | # [EXTQP Xd] W parameters (valence) eV|adim|eV^-1
%
% XfnQP_Wc
0.00 | 0.00 | 0.00 | # [EXTQP Xd] W parameters (conduction) eV|adim|eV^-1
%
XfnQP_Z= ( 1.000000 , 0.000000 ) # [EXTQP Xd] Z factor (c/v)
% QpntsRXp
1 | 91 | # [Xp] Transferred momenta
%
% BndsRnXp
1 | 300 | # [Xp] Polarization function bands
%
NGsBlkXp= 100 RL # [Xp] Response block size
CGrdSpXp= 100.0000 # [Xp] [o/o] Coarse grid controller
% EhEngyXp
-1.000000 |-1.000000 | eV # [Xp] Electron-hole energy range
%
% LongDrXp
1.000000 | 0.000000 | 0.000000 | # [Xp] [cc] Electric Field
%
PPAPntXp= 27.21138 eV # [Xp] PPA imaginary energy
GfnQPdb= "none" # [EXTQP G] Database
GfnQP_N= 1 # [EXTQP G] Interpolation neighbours
% GfnQP_E
0.000000 | 1.000000 | 1.000000 | # [EXTQP G] E parameters (c/v) eV|adim|adim
%
% GfnQP_Wv
0.00 | 0.00 | 0.00 | # [EXTQP G] W parameters (valence) eV|adim|eV^-1
%
% GfnQP_Wc
0.00 | 0.00 | 0.00 | # [EXTQP G] W parameters (conduction) eV|adim|eV^-1
%
GfnQP_Z= ( 1.000000 , 0.000000 ) # [EXTQP G] Z factor (c/v)
% GbndRnge
1 | 400 | # [GW] G[W] bands range
%
GDamping= 0.100000 eV # [GW] G[W] damping
dScStep= 0.100000 eV # [GW] Energy step to evalute Z factors
DysSolver= "n" # [GW] Dyson Equation solver (`n`,`s`,`g`)
#NewtDchk # [F GW] Test dSc/dw convergence
#ExtendOut # [F GW] Print all variables in the output file
%QPkrange # [GW] QP generalized Kpoint/Band indices
1| 91| 1|400|
%
%QPerange # [GW] QP generalized Kpoint/Energy indices
1| 91| 0.0|-1.0|
%
---------------------------------------------------------------
The version of Yambo is yambo-3.2.5.
Can anyone help me solve this problem?
Any suggestion is appreciated!
Thank you!
Dept. of Physics, Faculty of Science, Iran.

Daniele Varsano
Posts: 4198
Joined: Tue Mar 17, 2009 2:23 pm

Re: G0W0 Segmentation fault

Post by Daniele Varsano » Sat May 25, 2013 1:24 pm

Dear Davood,
in order to help you, please also post your report file and the complete log file; even so, I think it will not be easy to find the problem.
You are running a very large calculation, and there may be a memory problem.
What I suggest is that you switch to the new version of the code. Also, are you sure you need to calculate 400 QP energies for all 91 k-points?
Usually one is interested in the QP gap and the band structure near the Fermi energy, so you can reduce the number of bands in:
%QPkrange # [GW] QP generalized Kpoint/Band indices:
1| 91| 1|400|
%
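As a purely hypothetical illustration (the band window below is my assumption, not something from this thread): with 34 electrons and no spin polarization the highest occupied band is number 17, so a window of a few bands around the gap would look like:

%QPkrange # [GW] QP generalized Kpoint/Band indices
1| 91| 15| 20| # hypothetical: all 91 k-points, but only bands 15-20 around the gap
%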
This is just speculation, as I cannot tell what kind of system it is without looking at the report file.

Best,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

davood
Posts: 29
Joined: Tue Apr 12, 2011 8:07 am

Re: G0W0 Segmentation fault

Post by davood » Sun May 26, 2013 8:35 am

Dear Daniele,
Thank you for your reply. I am sending you the report file. I have some other questions:
1. I have a PC with a Core i7 (12 processors and 16 GB of RAM). Can I run the input file (GW.in) with mpiexec?
2. Will the GW runlevel run faster if I use many processors instead of one? When?

------------------------------------------------------
<---> [01] Files & I/O Directories
<---> [02] CORE Variables Setup
<---> [02.01] Unit cells
<---> [02.02] Symmetries
<---> [02.03] RL shells
<---> [02.04] K-grid lattice
<---> [02.05] Energies [ev] & Occupations
<---> [03] Transferred momenta grid
<---> [04] Coloumb potential Random Integration (RIM)
<---> [05] Coloumb potential CutOff :box z
<---> [06] External QP corrections (X)
<---> [07] External QP corrections (G)
<---> [08] Bare local and non-local Exchange-Correlation
<---> [M 5.997 Gb] Alloc WF ( 5.980)
<02s> [FFT-HF/Rho] Mesh size: 15 15 49
<02s> [WF-HF/Rho loader]
Wfs (re)loading |####################| [100%] 31s(E) 31s(X)
<13h-10m-26s> EXS |####################| [100%] 13h-09m-52s(E) 13h-09m-52s(X)
<13h-10m-26s> [xc] Functional Perdew & Zunger (xc)
<13h-10m-28s> [M 0.017 Gb] Free WF ( 5.980)
<13h-10m-29s> [09] Dynamic Dielectric Matrix (PPA)
<13h-10m-29s> [M 1.030 Gb] Alloc WF ( 0.989)
<13h-10m-30s> [FFT-X] Mesh size: 9 9 30
<13h-10m-30s> [WF-X loader]
Wfs (re)loading |####################| [100%] 24s(E) 24s(X)
<13h-10m-55s> [X-CG] R(p) Tot o/o(of R) : 320798 1558764 100
<13h-12m-39s> Xo@q[1] 1-2 |####################| [100%] 01m-43s(E) 01m-43s(X)
<13h-12m-39s> X @q[1] 1-2 |####################| [100%] --(E) --(X)
<13h-12m-39s> [M 1.030 Gb] Free X_poles_tab RGi BGn CGp CGi ( 0.108)
<13h-12m-41s> [X-CG] R(p) Tot o/o(of R) : 663593 1558764 100
<18h-33m-47s> X @q[91] 1-2 |####################| [100%] --(E) --(X)
<18h-33m-47s> [M 1.030 Gb] Free X_poles_tab RGi BGn CGp CGi ( 0.112)
<18h-33m-47s> [M 0.017 Gb] Free WF ( 0.989)
<18h-33m-47s> [10] Dyson equation: Newton solver
<18h-33m-47s> [10.01] G0W0 : the P(lasmon) P(ole) A(pproximation)
<18h-33m-47s> [M 1.340 Gb] Alloc WF ( 1.318)
<18h-33m-47s> [FFT-SC] Mesh size: 9 9 30
<18h-34m-10s> [WF-SC loader] Wfs (re)loading |####################| [100%] 22s(E) 22s(X)
<05d-11h-58m-53s> G0W0 PPA |################### | [095%] 04d-17h-24m-42s(E)
<05d-17h-56m-37s> G0W0 PPA |####################| [100%] 04d-23h-22m-27s(E)
<05d-23h-54m-49s> G0W0 PPA |#####################| [104%] 05d-05h-20m-38s(E)
<06d-05h-52m-46s> G0W0 PPA |######################Segmentation fault
Dept. of Physics, Faculty of Science, Iran.

Daniele Varsano
Posts: 4198
Joined: Tue Mar 17, 2009 2:23 pm

Re: G0W0 Segmentation fault

Post by Daniele Varsano » Sun May 26, 2013 5:57 pm

Dear Davood,
sure, yambo is parallelized in many parts of the code, and if you can use 16 processors you will get a fairly linear speedup.
Of course, in order to run in parallel (with mpiexec or mpirun, depending on your machine) you have to compile it in parallel (with the mpif90 compiler).
When you run configure, a checklist is printed at the end; make sure that the MPI part is checked.
About your calculation, I strongly suggest you lower the number of bands for which you calculate the QP energies (%QPkrange) if you are not strictly interested in that very large number of bands.
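As a minimal sketch of such a build and run (the processor count and file names here are illustrative assumptions, to be adapted to your machine):

./configure FC=mpif90        # check the summary printed at the end: the MPI entry must be enabled
make yambo
mpirun -np 12 yambo -F GW.in # or mpiexec, depending on your MPI installation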
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

davood
Posts: 29
Joined: Tue Apr 12, 2011 8:07 am

Re: G0W0 Segmentation fault

Post by davood » Mon May 27, 2013 6:39 am

Dear Daniele,
Thank you for your reply.
Dept. of Physics, Faculty of Science, Iran.

davood
Posts: 29
Joined: Tue Apr 12, 2011 8:07 am

Re: G0W0 Segmentation fault

Post by davood » Tue May 28, 2013 1:32 pm

Dear Daniele,
We ran the GW runlevel with both 4 processors and a single processor.
The calculation with 4 processors takes much more time than the one with a single processor.
Where is the problem?
We are sending you the report files.
Dept. of Physics, Faculty of Science, Iran.

Daniele Varsano
Posts: 4198
Joined: Tue Mar 17, 2009 2:23 pm

Re: G0W0 Segmentation fault

Post by Daniele Varsano » Tue May 28, 2013 3:48 pm

Dear Davood,
it looks like your parallel run scales for the screening construction but fails to scale for the GW calculation. Maybe the people working on the parallelization can give you more details.

Cheers,

Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/

claudio
Posts: 526
Joined: Tue Mar 31, 2009 11:33 pm
Location: Marseille

Re: G0W0 Segmentation fault

Post by claudio » Mon Jun 03, 2013 9:31 am

Dear Davood,

your calculation is very small and short; the parallelism works better for large systems.

Try also adding the following options to your run: -M -S

-M distributes memory among the different processors

-S splits files into small pieces, a faster way to do I/O
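
A sketch of such a run, assuming an MPI build and the GW.in input discussed above:

mpirun -np 12 yambo -F GW.in -M -S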

Regards,
Claudio
Claudio Attaccalite
CNRS / Aix-Marseille Université / CINaM laboratory / TSN department
Campus de Luminy – Case 913
13288 MARSEILLE Cedex 09
web site: http://www.attaccalite.com
