Tutorials


Tutorial files

To follow the tutorials, you must first download or copy the data files for each system. Files are distributed as gzipped tarballs. Always extract the tarballs in the same place, so that everything unpacks into a single YAMBO_TUTORIALS directory.
Available systems: hBN.tar.gz and hBN-2D.tar.gz. You will need both.
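If you want to inspect an archive before unpacking (optional; the tar options shown are standard GNU tar):

$ tar -tzf hBN.tar.gz               (list the contents without extracting)
$ tar -zxvf hBN.tar.gz              (extract; both tarballs unpack into YAMBO_TUTORIALS)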

Instructions for CECAM students

The tutorials will be run on the CECAM Linux cluster.
  • If connecting from the CECAM iMac, your username is indicated on the terminal (tutoXY).

Standard tutorials: cecampc4 cluster

Log into the cluster via:

ssh -Y tutoXY@cecampc4.epfl.ch

replacing XY with the appropriate number.

Next you must log into the Linux cluster directly, using the node node0RS associated with your username, and set up the tutorial as follows:

$ ssh -Y node0RS 
$ pwd
/nfs_home/tutoXY
$ which pw.x yambo
/nfs_home/tutoadmin/bin/pw.x
/nfs_home/tutoadmin/bin/yambo
$ cd /home/scratch/                 (NB: do not run on the /nfs_home partition!)
$ mkdir yambo_YOUR_NAME             (there are more participants than accounts!)
$ cd yambo_YOUR_NAME
$ cp /nfs_home/tutoadmin/yambo-2017/tutorials/hBN.tar.gz .
$ cp /nfs_home/tutoadmin/yambo-2017/tutorials/hBN-2D.tar.gz  .
$ tar -zxvf hBN.tar.gz 
$ tar -zxvf hBN-2D.tar.gz   
$ ls 
YAMBO_TUTORIALS

If you used "ssh -Y", X-forwarding should work for plotting with gnuplot. If not, try setting DISPLAY on your local machine with export DISPLAY=:0.0; it may also help to keep one terminal open for plotting and another for running the codes. If all else fails, gnuplot can draw plots directly in the terminal: gnuplot> set terminal dumb.
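For example, on a local machine running bash (the gnuplot commands are standard):

$ export DISPLAY=:0.0               (bash; use "setenv DISPLAY :0.0" in tcsh)
$ gnuplot
gnuplot> set terminal dumb          (renders plots as ASCII art in the terminal)
gnuplot> plot sin(x)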

Parallel tutorial: bellatrix cluster

This cluster is equipped with 16-core nodes based on Intel processors. A tutorial-dedicated queue (cecam_course) allows participants to access up to 20 nodes.
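To check the state of the queue before submitting (a minimal sketch, assuming the cluster runs the SLURM scheduler):

$ sinfo -p cecam_course             (list the nodes in the cecam_course queue and their state)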

First log into the cecampc4 cluster via:

$ ssh -Y tutoXY@cecampc4.epfl.ch

replacing XY with the appropriate number.

Next, log into the bellatrix cluster and move to the scratch area:

$ ssh -Y cecam.schoolXY@bellatrix.epfl.ch
$ cd /scratch/${USER}                 (NB: do not run in the /home folder!)
$ mkdir yambo_YOUR_NAME               (there are more participants than accounts!)
$ cd yambo_YOUR_NAME

replacing XY with the appropriate number.

The Intel compiler and MPI environment can be obtained by loading the following modules:

module purge
module load intel/16.0.3
module load intelmpi/5.1.3
module load python
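
To verify that the environment loaded correctly (standard environment-modules commands):

$ module list                       (should show intel, intelmpi and python)
$ which mpirun                      (should point into the Intel MPI installation)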

To submit to this queue, use the submission script run.sh that you will find in the tarball provided for the tutorials.
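The distributed run.sh is the reference. Purely as an illustration of what such a script typically contains, here is a minimal sketch assuming a SLURM scheduler; the resource values and file names below are assumptions, not the contents of the distributed script:

#!/bin/bash
#SBATCH --partition=cecam_course    # tutorial-dedicated queue
#SBATCH --nodes=1                   # up to 20 nodes are available
#SBATCH --ntasks-per-node=16        # one MPI task per core of a 16-core node
#SBATCH --time=01:00:00

module purge
module load intel/16.0.3 intelmpi/5.1.3 python

srun yambo -F yambo.in -J parallel  # -F input file, -J job label (hypothetical names)

If SLURM is indeed the scheduler, the script is submitted with sbatch run.sh and monitored with squeue -u ${USER}.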

Full tutorials

Whether you are just starting out with Yambo or are an experienced user, we recommend that you complete the following tutorials before trying to use Yambo for your own system. Each tutorial is fairly self-contained, although some require that you have completed previous ones.

Day 1: Introduction

Day 2: Quasiparticles in the GW approximation

Day 3: Using Yambo in Parallel

  • Parallel GW: strategies for running Yambo in parallel
  • GW convergence: use Yambo in parallel to converge a GW calculation for a layer of hBN (hBN-2D)

Day 4: Excitons and the Bethe-Salpeter Equation

Day 5: Yambo-python driver

Modules

An alternative way to learn Yambo is through a more detailed look at our documentation modules. These focus on the input parameters, run-time behaviour, and underlying physics of each Yambo task or runlevel. Although the modules can be followed separately, they are best followed as part of the more structured tutorials given above.
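Each runlevel corresponds to a command-line option that generates the matching input file. For instance (option letters vary between Yambo versions, so check yambo -H for the list in your build):

$ yambo -i                          (generate input for the initialization runlevel)
$ yambo -x -F hf.in                 (Hartree-Fock runlevel, writing its input to hf.in)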