Convergence and disk space of BSE calculation
Posted: Wed May 25, 2016 3:06 pm
Hello,
I am currently working on BSE calculations of the optical properties of monolayer black phosphorus, based on pwscf calculations. I did the required scf and nscf runs (with a 20x20 k-point grid) and converted with p2y without problems. That gave me about 600,000 G-vectors, which I reduced to 10,000 during the initialization. I have 121 Q-points.
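For context, the workflow I followed was roughly this (file names and the prefix are just placeholders for my actual inputs):

    pw.x < scf.in  > scf.out      # ground-state SCF run
    pw.x < nscf.in > nscf.out     # NSCF run on the 20x20 k-point grid
    cd prefix.save                # go to the pwscf output directory
    p2y                           # convert the pwscf data to the yambo SAVE
    yambo                         # initialization (this is where I cut the G-vectors down to ~10,000)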
Attached (BSE.in/txt) is the input I obtained with the command yambo -o b -k sex -y h. I reduced some parameters (BSENGexx, BSENGBlk, and the block size) in order to do a first run that is fast rather than precise.
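For reference, the reduced variables in BSE.in look schematically like this (the numbers here are only indicative, not my exact values):

    BSENGexx=  1000  RL     # G-vectors in the exchange part of the kernel (reduced)
    BSENGBlk=   500  RL     # block size of the screened interaction (reduced)
    % BSEBands
      10 | 20 |             # bands entering the BSE kernel
    %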
This is where I ran into the first problems. Even with these very reduced parameters, the calculation seems to need huge computing power and RAM: I had to run it on 16 nodes with 16 cores each and 16 GB of RAM per core. With fewer resources it would take an entire week, and this is only the first, imprecise run. The maximum job time on my cluster is about one week.
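For what it is worth, this is roughly how I submit the job (simplified; the directives are written for a SLURM-type scheduler just to show the layout):

    #!/bin/bash
    #SBATCH --nodes=16              # 16 nodes
    #SBATCH --ntasks-per-node=16    # 16 MPI tasks per node
    #SBATCH --mem-per-cpu=16G       # 16 GB of RAM per core
    #SBATCH --time=7-00:00:00       # one week, the cluster maximum
    mpirun -np 256 yambo -F BSE.in -J BSE_test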
(By the way, I also did GW calculations earlier, and they needed far fewer resources.)
Is this normal? Do I have to explicitly add parameters to optimize the parallelization? Are my parameters still too large?
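I have not set any parallelization variables explicitly so far. If I understand the documentation correctly, in yambo 4.x the BSE part can be distributed with something like the lines below, but I am not sure about the exact syntax or the best values for my case:

    BS_CPU=   "16 4 4"    # CPUs assigned to each role (product = total MPI tasks, 256 here)
    BS_ROLEs= "k eh t"    # distribute over k-points, electron-hole pairs and transitions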
So I launched the calculation on 16 nodes x 16 cores, and it ran quite well until about halfway through, when I saw that the SAVE directory already occupied about 600 GB of disk space. I had to abort it.
This is a problem for me, as I work on a cluster shared with other people and I only have about 1 TB of disk space available, of which I already use about 300 GB for other pwscf and GW calculations.
Is there a way to reduce the disk space used? How much disk space does a full-precision BSE calculation need? (By full precision I mean good enough to give reliable results.)
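The only related variable I found so far is the database I/O switch, i.e. something like the line below, but I do not know whether it applies to the BSE kernel databases or whether it would break restarts:

    DBsIOoff= "BS"    # (my guess) switch off writing of the BS kernel databases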
I can attach other logs and files if needed.
Thanks in advance.