Very low intensity in the imaginary part of the dielectric constant
Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano, Conor Hogan
- Zhou Liu-Jiang
- Posts: 85
- Joined: Fri May 03, 2013 10:20 am
Very low intensity in the imaginary part of the dielectric constant
Dear developers,
I am doing a BSE calculation on a heterostructure consisting of a 2D material (5×5×1 supercell) and a 0D cluster. The obtained eps_2 of the dielectric function is very strange, i.e., the intensity is very low. What is the reason? Is it due to the large cell? The QE and Yambo input and output files are attached. Please help me interpret them.
Dr. Zhou Liu-Jiang
Fujian Institute of Research on the Structure of Matter
Chinese Academy of Sciences
Fuzhou, Fujian, 350002
- Daniele Varsano
- Posts: 4198
- Joined: Tue Mar 17, 2009 2:23 pm
- Contact:
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Zhou,
this happens because you are dealing with a 2D/0D system, for which the macroscopic dielectric function eps is not well defined (it actually goes to zero in the limit of infinite supercell volume). The small numbers come from the use of the truncated Coulomb cutoff potential, which does not diverge as 1/q^2.
The quantity you should look at is the polarizability alpha.
Nevertheless, please note that, apart from a large multiplicative factor, the imaginary part of epsilon in the output is correct in terms of excitation energies.
You can have a look at this thread:
viewtopic.php?f=13&t=1663&sid=ce2e05525 ... 5f85fbd989
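As a minimal sketch of the eps-to-alpha rescaling described above (assuming a slab geometry with alpha_2D(w) = L_z/(4*pi) * (eps(w) - 1), where L_z is the supercell length along the non-periodic direction; the file name, column layout and L_z value are placeholders, not taken from the attached files):
Code: Select all

import numpy as np

# Minimal post-processing sketch (not the o-*.alpha file Yambo itself can write).
# Assumed eps output columns: E [eV], Im(eps), Re(eps).
L_z = 40.0                                                            # hypothetical cell length along z (bohr)
energy, im_eps, re_eps = np.loadtxt("o-07bse.eps", unpack=True)[:3]   # hypothetical file name

eps = re_eps + 1j * im_eps
alpha_2d = L_z / (4.0 * np.pi) * (eps - 1.0)   # slab polarizability per unit area

# Im(alpha_2D) shows the same excitation energies as Im(eps), but with a meaningful absolute scale.
np.savetxt("alpha_2d.dat", np.column_stack([energy, alpha_2d.imag, alpha_2d.real]))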
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
- Zhou Liu-Jiang
- Posts: 85
- Joined: Fri May 03, 2013 10:20 am
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Daniele,
Thanks for your kind answer. There is another issue that needs your kind help.
The input file is the same as in the previous attachment. It is very strange that if I use the Yambo version compiled with "MPI+OpenMP", no matter what parallelization strategy I use, the job gets aborted at the step "[06] Static Dielectric Matrix". It may suffer from a memory issue. However, if I switch to the Yambo version compiled without OpenMP, the job finishes normally. What is the reason?
I have posted the unfinished report and log files below. For the normal job, please refer to the previous attachment.
####### Report file:
=============================
[RD./SAVE//ns.kb_pp_pwscf]----------------------------------
Fragmentation :yes
- S/N 003732 -------------------------- v.04.04.00 r.00148 -
[WARNING] [x,Vnl] slows the Dipoles computation. To neglect it rename the ns.kb_pp file
[WF-Oscillators/G space] Performing Wave-Functions I/O from ./SAVE
[WF-Oscillators/G space loader] Normalization (few states) min/max :0.365E-10 1.00
[WR./SAVE//ndb.dip_iR_and_P]--------------------------------
Brillouin Zone Q/K grids (IBZ/BZ): 13 25 13 25
RL vectors (WF): 70619
Fragmentation :yes
Electronic Temperature [K]: 0.000000
Bosonic Temperature [K]: 0.000000
X band range : 1 770
X band range limits : 385 386
X e/h energy range [ev]:-1.000000 -1.000000
RL vectors in the sum : 70619
[r,Vnl] included :yes
Using shifted grids :no
Using covariant dipoles:no
Using G-space approach :yes
Using R-space approach :no
Direct v evaluation :no
Field momentum norm :0.1000E-4
Wavefunctions :Perdew, Burke & Ernzerhof(X)+Perdew, Burke & Ernzerhof(C)
- S/N 003732 -------------------------- v.04.04.00 r.00148 -
[WF-X] Performing Wave-Functions I/O from ./SAVE
[FFT-X] Mesh size: 44 44 84
######## Log file:
<---> P0001: [01] CPU structure, Files & I/O Directories
<---> P0001: CPU-Threads:12(CPU)-8(threads)-4(threads@X)-4(threads@DIP)-4(threads@K)
<---> P0001: CPU-Threads:X_all_q(environment)-1 1 4 3(CPUs)-q k c v(ROLEs)
<---> P0001: CPU-Threads:BS(environment)-1 1 12(CPUs)-k eh t(ROLEs)
<---> P0001: [02] CORE Variables Setup
....
....
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_9
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_10
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_11
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_12
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_13
<14h-43m-37s> P0001: [MEMORY] Alloc DIP_projected( 28.95000Mb) TOTAL: 445.1850Mb (traced) 462.1720Mb (memstat)
<14h-43m-37s> P0001: [MEMORY] Free DIP_iR( 45.16200Mb) TOTAL: 400.0230Mb (traced) 462.1720Mb (memstat)
<14h-43m-37s> P0001: [MEMORY] Free DIP_P( 45.16200Mb) TOTAL: 354.8610Mb (traced) 417.0080Mb (memstat)
<14h-43m-37s> P0001: [MEMORY] Alloc WF%c( 3.732729Gb) TOTAL: 4.092529Gb (traced)
<14h-43m-37s> P0001: [PARALLEL distribution for Wave-Function states] Loaded/Total(Percentual):2938/10010(29%)
<14h-43m-39s> P0001: [WF-X] Performing Wave-Functions I/O from ./SAVE
<14h-43m-39s> P0001: [FFT-X] Mesh size: 44 44 84
<14h-43m-39s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-39s> P0001: Reading wf_fragments_1_1
<14h-43m-39s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-39s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-39s> P0001: Reading wf_fragments_2_1
<14h-43m-40s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: Reading wf_fragments_3_1
<14h-43m-40s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: Reading wf_fragments_4_1
<14h-43m-41s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-41s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-41s> P0001: Reading wf_fragments_5_1
<14h-43m-42s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: Reading wf_fragments_6_1
<14h-43m-42s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: Reading wf_fragments_7_1
<14h-43m-43s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: Reading wf_fragments_8_1
<14h-43m-43s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: Reading wf_fragments_9_1
<14h-43m-44s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-44s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-44s> P0001: Reading wf_fragments_10_1
<14h-43m-45s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: Reading wf_fragments_11_1
<14h-43m-45s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: Reading wf_fragments_12_1
<14h-43m-46s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-46s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
"
Thanks for your kind answer. There is another issue need your kind help.
The input file is the same as the one in the previous attachment. It is very strange that if I use the Yambo version enabling "mpi+openmpi", no whatever what kind of parallelization strategy I used, the job will get aborted at the step of " [06] Static Dielectric Matrix". It may suffer from a memory issue. However, if I switch to the Yambo version disabling OpenMP, the job will end normally. What is the reason?
I posted the unfinished report and log files as follows. As for the normal job, please refer to the previous attachment.
#######Report file "
=============================
[RD./SAVE//ns.kb_pp_pwscf]----------------------------------
Fragmentation :yes
- S/N 003732 -------------------------- v.04.04.00 r.00148 -
[WARNING] [x,Vnl] slows the Dipoles computation. To neglect it rename the ns.kb_pp file
[WF-Oscillators/G space] Performing Wave-Functions I/O from ./SAVE
[WF-Oscillators/G space loader] Normalization (few states) min/max :0.365E-10 1.00
[WR./SAVE//ndb.dip_iR_and_P]--------------------------------
Brillouin Zone Q/K grids (IBZ/BZ): 13 25 13 25
RL vectors (WF): 70619
Fragmentation :yes
Electronic Temperature [K]: 0.000000
Bosonic Temperature [K]: 0.000000
X band range : 1 770
X band range limits : 385 386
X e/h energy range [ev]:-1.000000 -1.000000
RL vectors in the sum : 70619
[r,Vnl] included :yes
Using shifted grids :no
Using covariant dipoles:no
Using G-space approach :yes
Using R-space approach :no
Direct v evaluation :no
Field momentum norm :0.1000E-4
Wavefunctions :Perdew, Burke & Ernzerhof(X)+Perdew, Burke & Ernzerhof(C)
- S/N 003732 -------------------------- v.04.04.00 r.00148 -
[WF-X] Performing Wave-Functions I/O from ./SAVE
[FFT-X] Mesh size: 44 44 84"
########log file
"
<---> P0001: [01] CPU structure, Files & I/O Directories
<---> P0001: CPU-Threads:12(CPU)-8(threads)-4(threads@X)-4(threads@DIP)-4(threads@K)
<---> P0001: CPU-Threads:X_all_q(environment)-1 1 4 3(CPUs)-q k c v(ROLEs)
<---> P0001: CPU-Threads:BS(environment)-1 1 12(CPUs)-k eh t(ROLEs)
<---> P0001: [02] CORE Variables Setup
....
....
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_9
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_10
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_11
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_12
<14h-43m-37s> P0001: Writing dip_iR_and_P_fragment_13
<14h-43m-37s> P0001: [MEMORY] Alloc DIP_projected( 28.95000Mb) TOTAL: 445.1850Mb (traced) 462.1720Mb (memstat)
<14h-43m-37s> P0001: [MEMORY] Free DIP_iR( 45.16200Mb) TOTAL: 400.0230Mb (traced) 462.1720Mb (memstat)
<14h-43m-37s> P0001: [MEMORY] Free DIP_P( 45.16200Mb) TOTAL: 354.8610Mb (traced) 417.0080Mb (memstat)
<14h-43m-37s> P0001: [MEMORY] Alloc WF%c( 3.732729Gb) TOTAL: 4.092529Gb (traced)
<14h-43m-37s> P0001: [PARALLEL distribution for Wave-Function states] Loaded/Total(Percentual):2938/10010(29%)
<14h-43m-39s> P0001: [WF-X] Performing Wave-Functions I/O from ./SAVE
<14h-43m-39s> P0001: [FFT-X] Mesh size: 44 44 84
<14h-43m-39s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-39s> P0001: Reading wf_fragments_1_1
<14h-43m-39s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-39s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-39s> P0001: Reading wf_fragments_2_1
<14h-43m-40s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: Reading wf_fragments_3_1
<14h-43m-40s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-40s> P0001: Reading wf_fragments_4_1
<14h-43m-41s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-41s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-41s> P0001: Reading wf_fragments_5_1
<14h-43m-42s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: Reading wf_fragments_6_1
<14h-43m-42s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-42s> P0001: Reading wf_fragments_7_1
<14h-43m-43s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: Reading wf_fragments_8_1
<14h-43m-43s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-43s> P0001: Reading wf_fragments_9_1
<14h-43m-44s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-44s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-44s> P0001: Reading wf_fragments_10_1
<14h-43m-45s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: Reading wf_fragments_11_1
<14h-43m-45s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
<14h-43m-45s> P0001: Reading wf_fragments_12_1
<14h-43m-46s> P0001: [MEMORY] Free wf_disk( 334.2760Mb) TOTAL: 4.095110Gb (traced) 244.5520Mb (memstat)
<14h-43m-46s> P0001: [MEMORY] Alloc wf_disk( 334.2760Mb) TOTAL: 4.429386Gb (traced) 244.5520Mb (memstat)
"
Dr. Zhou Liu-Jiang
Fujian Institute of Research on the Structure of Matter
Chinese Academy of Sciences
Fuzhou, Fujian, 350002
- Daniele Varsano
- Posts: 4198
- Joined: Tue Mar 17, 2009 2:23 pm
- Contact:
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Zhou,
it sounds strange: up to the point shown in the log file, the run is using the same amount of memory as before.
Previous run without OpenMP:
Code: Select all
<01d-15h-28m-20s> P0010: Reading wf_fragments_12_2
<01d-15h-28m-20s> P0010: [MEMORY] Free wf_disk( 154.2200Mb) TOTAL: 4.100526Gb (traced) 66.01600Mb (memstat)
<01d-15h-28m-20s> P0010: [MEMORY] Alloc wf_disk( 207.2330Mb) TOTAL: 4.307759Gb (traced) 119.0280Mb (memstat)
It would help if you posted the last part of the log file before the run aborts, together with the complete report (you can upload it in the post after renaming it as .txt).
Next, how many cores does your node have? You are using 12 MPI tasks; depending on how many cores your node has, you can assign a certain number of threads, unless you want to run in multithreaded (oversubscribed) mode, but I do not know how efficient that would be.
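As a concrete illustration of that bookkeeping (a sketch with illustrative numbers only; the actual node size and the settings in ljrun.sh are not visible in this thread, while the log header reports 12 MPI tasks with 8 threads each):
Code: Select all

# Rough sanity check of a hybrid MPI+OpenMP layout (illustrative numbers).
cores_per_node = 48     # hypothetical node size
mpi_tasks = 12          # MPI tasks, as in the log header
omp_threads = 8         # OMP_NUM_THREADS, as in the log header

used = mpi_tasks * omp_threads
if used > cores_per_node:
    print(f"oversubscribed: {mpi_tasks} tasks x {omp_threads} threads = {used} > {cores_per_node} cores")
else:
    print(f"ok: {used} of {cores_per_node} cores in use")

# At most cores_per_node // mpi_tasks threads per task fit without oversubscription.
print("threads per task that fit:", cores_per_node // mpi_tasks)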
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
- Zhou Liu-Jiang
- Posts: 85
- Joined: Fri May 03, 2013 10:20 am
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Daniele,
I have attached the report, log, input (07bse) and submission script (ljrun.sh) files so that you can inspect this strange job.
I am using 1 node with 48 cores for this calculation. To assign more memory per task, I only launch 12 MPI tasks. You can see my submission script "ljrun.sh".
Dr. Zhou Liu-Jiang
Fujian Institute of Research on the Structure of Matter
Chinese Academy of Sciences
Fuzhou, Fujian, 350002
- Javad Exirifard
- Posts: 44
- Joined: Fri Feb 28, 2014 10:23 pm
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Daniele,
I also have a few questions about the imaginary part of the dielectric constant in 2D systems.
1) Does Yambo write in the output or report the scaling factor used to reduce the imaginary part?
2) If I want to compare BSE with independent-particle or Hartree results, how can I do it? The latter are not scaled.
3) Is it possible to calculate the effective dielectric constant of a 2D system? Experimentalists usually use an effective thickness equal to the interlayer distance of the corresponding bulk solid to define the effective 2D dielectric constant. Can I do this with Yambo?
Best regards,
Javad
Javad Exirifard
IPM - Institute for Research in Fundamental Sciences
P. O. Box 19395-5746
Niavaran Square
Tehran, Iran
- Daniele Varsano
- Posts: 4198
- Joined: Tue Mar 17, 2009 2:23 pm
- Contact:
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Javad,
1) There is no scaling factor in eps: the small intensity comes from the fact that the q^2 behaviour of X is not balanced by the Coulomb potential, which differs from 1/q^2 when the truncated potential is used. This is why you should look at alpha.
2) BSE and independent-particle spectra can be compared (again, by looking at alpha). What do you mean by Hartree? Calculated in G space or in transition space?
Maybe this tutorial can answer your questions:
http://www.yambo-code.org/wiki/index.ph ... al_systems
3) Probably yes: starting from the definition of alpha you should be able to do that.
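For point 3, a minimal sketch of the construction described in the question, assuming the 2D polarizability per unit area alpha_2D(w) is already available (e.g. from a rescaling like the one sketched earlier in this thread) and taking as effective thickness d_eff the interlayer distance of the corresponding bulk; d_eff, the file name and the column order are placeholders:
Code: Select all

import numpy as np

# Effective 2D dielectric function from the slab polarizability and an
# effective thickness: eps_eff(w) = 1 + 4*pi * alpha_2D(w) / d_eff
# (all lengths in the same units).
d_eff = 6.2   # hypothetical effective thickness (bohr), e.g. the bulk interlayer distance

energy, im_a, re_a = np.loadtxt("alpha_2d.dat", unpack=True)  # columns: E, Im(alpha), Re(alpha)
alpha_2d = re_a + 1j * im_a

eps_eff = 1.0 + 4.0 * np.pi * alpha_2d / d_eff
np.savetxt("eps_eff.dat", np.column_stack([energy, eps_eff.imag, eps_eff.real]))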
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
- Andrea Ferretti
- Posts: 214
- Joined: Fri Jan 31, 2014 11:13 am
Re: Very low intensity in the imaginary part of the dielectric constant
Dear Zhou,
the problem you observe may indeed be related to memory.
Actually, while the Yambo OpenMP implementation tends not to allocate extra memory, in some cases it does for the sake of performance. This is, for example, true for the calculation of the response function, where an Xo(G,G') workspace is allocated by each thread.
Since your calculation is already quite tight on memory, this may indeed result in a crash.
In general, if the dipoles already take 15 h, the response function may take even longer, and I would recommend increasing the number of nodes used in the calculation (if possible).
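A back-of-the-envelope estimate of the extra workspace mentioned above (a sketch only; the number of G-vectors kept in the response block is not visible in the posted excerpt, so the value below is a placeholder):
Code: Select all

# Rough estimate of the per-thread Xo(G,G') workspace in double complex.
n_threads = 4         # threads@X reported in the log header
n_g_block = 5000      # hypothetical response-block size (number of G-vectors)
bytes_per_entry = 16  # complex double precision

extra_gb = n_threads * n_g_block**2 * bytes_per_entry / 1024**3
print(f"extra threaded workspace ~ {extra_gb:.2f} GB per MPI task")
# This comes on top of the ~4.4 GB per task already traced in the log,
# which can be enough to exceed the node memory when OpenMP is enabled.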
Andrea
Andrea Ferretti, PhD
CNR-NANO-S3 and MaX Centre
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322; Skype: andrea_ferretti
URL: http://www.nano.cnr.it