em1s too time consuming
Posted: Fri Nov 13, 2015 9:26 am
Hello,
I am trying to calculate the screening, and I want to proceed afterwards with a BSE calculation.
I'm working on a cluster. Since the calculation needs a lot of memory, I have to use a fat node, on which I cannot use more than 16 processors.
The time limit for such calculations on the cluster is 3 days.
After these 3 days the calculation ends without being completed.
Is there any possibility to interrupt the run and restart it from the interruption point?
Or can anybody tell me how to reduce the computational cost in a sensible way?
In the report file it says that the Drude behaviour of my system is not recognized.
Did I make any mistake in the input file?
Here is my input file:
#
# ::: ::: ::: :::: :::: ::::::::: ::::::::
# :+: :+: :+: :+: +:+:+: :+:+:+ :+: :+: :+: :+:
# +:+ +:+ +:+ +:+ +:+ +:+:+ +:+ +:+ +:+ +:+ +:+
# +#++: +#++:++#++: +#+ +:+ +#+ +#++:++#+ +#+ +:+
# +#+ +#+ +#+ +#+ +#+ +#+ +#+ +#+ +#+
# #+# #+# #+# #+# #+# #+# #+# #+# #+#
# ### ### ### ### ### ######### ########
#
#
# GPL Version 4.0.1 Revision 88
# OpenMPI Build
# http://www.yambo-code.org
#
em1s # [R Xs] Static Inverse Dielectric Matrix
X_all_q_CPU= "4 2 2 1" # [PARALLEL] CPUs for each role
X_all_q_ROLEs= "q k c v" # [PARALLEL] CPUs roles (q,k,c,v)
X_all_q_nCPU_invert=0 # [PARALLEL] CPUs for matrix inversion
Chimod= "hartree" # [X] IP/Hartree/ALDA/LRC/BSfxc
% QpntsRXs
1 | 349 | # [Xs] Transferred momenta
%
% BndsRnXs
1 | 36 | # [Xs] Polarization function bands
%
NGsBlkXs= 300 RL # [Xs] Response block size
DrudeWXs= ( 0.80590 , 0.07439 ) eV # [Xd] Drude plasmon
% LongDrXs
1.000000 | 0.000000 | 0.000000 | # [Xs] [cc] Electric Field
%
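If it helps, my understanding of the parallel setup (which may well be an assumption on my side) is that the per-role CPU counts in X_all_q_CPU should multiply to the total number of MPI tasks, so I chose them to match the 16 cores of the fat node:

4 (q) x 2 (k) x 2 (c) x 1 (v) = 16 MPI tasks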
Thanks and regards
Stephan