Dear Yambo users,
I recently switched to yambo-4.0.1 and compiled it successfully on the supercomputer. However, when running the examples provided on the web page, I end up with the following error:
[05] Dynamical Dielectric Matrix
================================
[ERROR] STOP signal received while in :[05] Dynamical Dielectric Matrix
[ERROR]Impossible to define an appropriate parallel structure
Can anybody help me with this, please? I have attached the config.log file.
Best regards,
Javad Hashemi
Postdoctoral Researcher
University of Helsinki
Re: Impossible to define an appropriate parallel structure
Dear Javad,
can you please post the input/report file and the script you used to run the job?
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
Re: Impossible to define an appropriate parallel structure
Hello Daniele, and thanks for your answer.
Here is the input:
================================
gw0 # [R GW] GoWo Quasiparticle energy levels
em1d # [R Xd] Dynamical Inverse Dielectric Matrix
HF_and_locXC # [R XX] Hartree-Fock Self-energy and Vxc
X_all_q_CPU= "1 24 1 1" # [PARALLEL] CPUs for each role
X_all_q_ROLEs= "q k c v" # [PARALLEL] CPUs roles (q,k,c,v)
X_all_q_nCPU_invert=0 # [PARALLEL] CPUs for matrix inversion
SE_CPU= "1 24 1" # [PARALLEL] CPUs for each role
SE_ROLEs= "q qp b" # [PARALLEL] CPUs roles (q,qp,b)
EXXRLvcs= 949 RL # [XX] Exchange RL components
Chimod= "hartree" # [X] IP/Hartree/ALDA/LRC/BSfxc
% GbndRnge
1 | 250 | # [GW] G[W] bands range
%
GDamping= 0.10000 eV # [GW] G[W] damping
dScStep= 0.10000 eV # [GW] Energy step to evaluate Z factors
% BndsRnXd
1 | 250 | # [Xd] Polarization function bands
%
NGsBlkXd= 10 RL # [Xd] Response block size
% DmRngeXd
0.10000 | 0.10000 | eV # [Xd] Damping range
%
ETStpsXd= 100 # [Xd] Total Energy steps
% LongDrXd
1.000000 | 0.000000 | 0.000000 | # [Xd] [cc] Electric Field
%
GTermKind= "none" # [GW] GW terminator ("none","BG" Bruneval-Gonze,"BRS" Berger-Reining-Sottile)
DysSolver= "s" # [GW] Dyson Equation solver (`n`,`s`,`g`)
%QPkrange # [GW] QP generalized Kpoint/Band indices
1|500| 75|85|
%
%QPerange # [GW] QP generalized Kpoint/Energy indices
1|500| 0.0|-1.0|
%
==========================
The submit script:
#!/bin/bash
#SBATCH -J GW10
#SBATCH -p test
#SBATCH -n 24
#SBATCH -N 1
#SBATCH --constraint=hsw
#SBATCH -t 00:28:00
#SBATCH --mem=128000
#SBATCH --exclusive
#SBATCH -o out_%j.out
#SBATCH -e error_%j.err
srun ~/yambo/yambo-4.0.1-rev.89/bin/yambo -F yambo.in
======================================
And the header of the output:
[01] CPU structure, Files & I/O Directories
===========================================
* CPU-Threads :24(CPU)-1(threads)-1(threads@X)-1(threads@DIP)-1(threads@SE)-1(threads@RT)-1(threads@K)
* CPU-Threads :X_all_q(environment)-1 24 1 1(CPUs)-q k c v(ROLEs)
* CPU-Threads :SE(environment)-1 24 1(CPUs)-q qp b(ROLEs)
* MPI CHAINS : 4
* MPI CPU : 24
* THREADS (max): 1
* THREADS TOT(max): 24
* I/O NODES : 1
* Fragmented I/O :yes
CORE databases in .
Additional I/O in .
Communications in .
Input file is yambo.in
Report file is ./r-log_em1d_HF_and_locXC_gw0
Job string(main): log
Log files in ./LOG
[RD./SAVE//ns.db1]------------------------------------------
Bands : 250
K-points : 500
G-vectors [RL space]: 7593
Components [wavefunctions]: 6518
Symmetries [spatial+T-rev]: 2
Spinor components : 2
Spin polarizations : 1
Temperature [ev]: 0.000000
Electrons : 80.00000
WF G-vectors : 7376
Max atoms/species : 6
No. of atom species : 5
Magnetic symmetries : no
- S/N 003911 --------------------------- v.04.00.01 r.0088 -
These are the files for my own calculation; I wonder what can go wrong here.
I would also like to know how one should choose the CPU numbers for parallelization. Should they add up to the total number of CPUs in one node, or to the total number of CPUs we would like to use in the calculation?
Thanks again.
Best regards,
Javad
Postdoctoral Researcher
University of Helsinki
Re: Impossible to define an appropriate parallel structure
Dear Javad,
24 is not a power of two. Try setting something like:
X_all_q_CPU= "2 2 2 2" # [PARALLEL] CPUs for each role
X_all_q_ROLEs= "q k c v" # [PARALLEL] CPUs roles (q,k,c,v)
This corresponds to a 16-MPI run.
As for how to choose the CPU numbers: there is a tutorial on the parallel structure on the web page. It is surely not complete, but it should give you an idea. The numbers should correspond to the amount of CPUs you aim to use, but note that they should not *add up* to that total; their *product* should equal it (a worked sketch follows this post).
Next, please try to attach the input and the complete report file, together with the error you have in the log files. You can upload them as a .tar.gz file.
Best,
Daniele
Dr. Daniele Varsano
S3-CNR Institute of Nanoscience and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu/
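For reference, a minimal sketch of what Daniele's suggestion looks like end to end, reusing the job script posted above with only the task count and the parallel variables changed. Whether 16 tasks fit this node and queue is an assumption, and applying the same product rule to the SE_CPU line (e.g. "1 16 1") is an inference from the rule above, not something stated in the reply:
================================
#!/bin/bash
#SBATCH -J GW10
#SBATCH -p test
#SBATCH -n 16    # total MPI tasks must equal the product of the role counts: 2*2*2*2 = 16
#SBATCH -N 1
#SBATCH -t 00:28:00
srun ~/yambo/yambo-4.0.1-rev.89/bin/yambo -F yambo.in
================================
and, in yambo.in:
================================
X_all_q_CPU= "2 2 2 2"   # [PARALLEL] product 2*2*2*2 = 16 MPI tasks
X_all_q_ROLEs= "q k c v" # [PARALLEL] CPUs roles (q,k,c,v)
SE_CPU= "1 16 1"         # [PARALLEL] assumed: 1*16*1 = 16 to match the same allocation
SE_ROLEs= "q qp b"       # [PARALLEL] CPUs roles (q,qp,b)
================================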
Re: Impossible to define an appropriate parallel structure
Hi Daniele,
Thank you for your answer. I did read the tutorial, but I apparently missed it if "power of two" was mentioned there.
Kind regards,
Javad
Postdoctoral Researcher
University of Helsinki