Compilation error Yambo 5.3 [GPU support]

Having trouble compiling the Yambo source? Using an unusual architecture? Problems with the "configure" script? Problems in GPU architectures? This is the place to look.

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano, Conor Hogan, Nicola Spallanzani

Forum rules
If you have trouble compiling Yambo, please make sure to list:
(1) the compiler (vendor and release: e.g. intel 10.1)
(2) the architecture (e.g. 64-bit IBM SP5)
(3) if the problems occur compiling in serial/in parallel
(4) the version of Yambo (revision number/major release version)
(5) the relevant compiler error message
Post Reply
harrier_class
Posts: 1
Joined: Tue May 13, 2025 4:27 pm

Compilation error Yambo 5.3 [GPU support]

Post by harrier_class » Thu Mar 19, 2026 5:03 pm

Dear all,

I am trying to configure and compile Yambo 5.3 on the GPU-cluster Alex HPC [FAU, Erlangen].

(1) Compiler:
FC=nvfortran
F77=nvfortran
CC=gcc
MPIFC=mpifort
MPIF77=mpifort
MPICC=mpicc


(2) Architecture:
- GPU: NVIDIA A100 (Ampere, compute capability 8.0, 40 GB)
- CPU: AMD EPYC 7713 (Zen3)

(3) if the problems occur compiling in serial/in parallel
serial [did not try parallel], using make all

(4) the version of Yambo (revision number/major release version)
5.3 [downloaded the source code from the official Yambo git repository and switched to branch 5.3 via git branch -a and git checkout 5.3]

Further, I would like to mention that the compilation was performed on an A100 compute node.

Code: Select all

module purge
module load nvhpc/21.11
module load openmpi/4.1.2-nvhpc21.11-cuda
module load mkl/2023.2.0
module load hdf5/1.10.7-nvhpc21.11


export NVHPC_CUDA_HOME=$NVHPC_ROOT/Linux_x86_64/21.11/cuda/11.5
export CUDA_HOME=$NVHPC_CUDA_HOME
export PATH=$NVHPC_CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$NVHPC_CUDA_HOME/lib64:$LD_LIBRARY_PATH

export FFLAGS="-cuda -gpu=cc80,cuda11.5 -cudalib=cufft,cublas,cusolver"
export FCFLAGS="-cuda -gpu=cc80,cuda11.5 -cudalib=cufft,cublas,cusolver"
MKL_LIBS="-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lm -ldl"

make distclean || true
rm -f config.cache
rm -rf lib/external config/setup config/report log
unset DEVXLIB_LIBS DEVXLIB_INCS DEVXLIB_PATH

./configure \
    FC=nvfortran \
    F77=nvfortran \
    CC=gcc \
    MPIFC=mpifort \
    MPIF77=mpifort \
    MPICC=mpicc \
    CPP="cpp -E -P" \
    FPP="nvfortran -Mpreprocess -E" \
    FFLAGS="$FFLAGS" \
    FCFLAGS="$FCFLAGS" \
    --with-blas-libs="$MKL_LIBS" \
    --with-lapack-libs="$MKL_LIBS" \
    --with-hdf5-libs="-lhdf5_fortran -lhdf5" \
    --with-cuda-path="$NVHPC_CUDA_HOME" \
    --with-cuda-cc=80 \
    --with-cuda-runtime=11.5 \
    --enable-cuda-fortran \
    --enable-mpi \
    --enable-open-mp \
    --enable-time-profile \
    --enable-memory-profile
make all 2>&1 | tee build_yambo.log

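Before running the script above, it can help to confirm that the expected toolchain is actually on PATH after the module loads. A minimal sanity check (a sketch only; the tool list simply mirrors the compilers used in the script, extend it for your site):

```shell
# Sanity check: confirm each compiler/tool used in the build script is on PATH.
# (Sketch only; "MISSING" entries point at a module that still needs loading.)
for tool in nvfortran nvc mpifort mpicc cpp make; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf 'found:   %s\n' "$tool"
  else
    printf 'MISSING: %s\n' "$tool"
  fi
done
```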
Earlier, I was also getting an error like this:

Code: Select all

NVFORTRAN-F-0004-Unable to open MODULE file devxlib_environment.mod (mod_gpu.f90: 12)
I fixed it by updating the following in lib/devxlib/Makefile.loc:

Code: Select all

CONFFLAGS=--prefix=$(LIBPATH) $(devxlib_flgs) --enable-cuda-env-check=no \
           --with-blas-libs="$(BLAS_LIBS)" --with-lapack-libs="$(LAPACK_LIBS)" \
           F90="$(fc)" MPIF90="$(fc)"

(5) the relevant compiler error message

However, now, after compilation, I get these errors:

Code: Select all

log/config_netcdf-fortran-4.6.0.log:125:checking dynamic linker characteristics... nvc-Error-Unknown switch: -print-search-dirs
log/compile_netcdf-fortran-4.6.0.log:145:/usr/bin/sed "s|implicit none|USE tests|" nf_error.F > nf03_error.F.tmp
log/compile_netcdf-fortran-4.6.0.log:146:/usr/bin/sed "s|#include \"tests.inc\"|      implicit none|" nf03_error.F.tmp > nf03_error.F
log/config_libxc-5.2.3.log:2:/bin/sh: ./configure: No such file or directory
log/config_fftw-3.3.10.log:75:checking dynamic linker characteristics... nvc-Error-Unknown switch: -print-search-dirs
log/config_devicexlib-0.8.5.log:6:configure: error: /bin/sh ./config/config.sub   failed
log/compile_devicexlib-0.8.5.log:7:make[5]: *** [Makefile:74: libsrc] Error 2
log/install_devicexlib-0.8.5.log:7:make[5]: *** [Makefile:74: libsrc] Error 2
log/compile_qe_pseudo.log:118:ar -r libqe_pseudo.a qe_errore.o
Further, I am sharing some snippets from the log files.

build_yambo.log

Code: Select all

[src/coulomb] coulomb (setup)
[src/interpolate] interpolate (setup)
[src/qp_control] qp_control (setup)
[src/setup] setup (setup)
[src/tddft] tddft (setup)
[src/dipoles] dipoles (setup)
[src/pol_function] pol_function (setup)
[src/qp] qp (setup)
[src/acfdt] acfdt (setup)
[src/bse] bse (setup)
[src/driver] driver
[src/driver] command_line
[src/driver] get_libraries
[src/driver] get_runlevel
[src/driver] get_running_project
[src/driver] get_running_tool
[src/driver] get_version
[src/driver] mod_C_driver
[src/driver] C_driver_transfer
[src/driver] input_file
[src/driver] launcher
[src/driver] load_environments
[src/driver] options_control
[src/driver] options_help
[src/driver] options_interfaces
[src/driver] options_maker
[src/driver] options_projects
[src/driver] options_yambo
[src/driver] options_ypp
[src/driver] title
[src/driver] tool_init
[src/driver] usage
[src/driver] use_me
[src/driver] winsize
[src/driver] lib_Y_driver.a (lib)
[src/tools] ct_cptimer
[src/tools] c_printing
[src/tools] io
[src/tools] stack
[src/tools] memstat
[src/tools] lib_Y_tools.a (lib)
[src/modules] mod_pars
[src/modules] mod_units
[src/modules] mod_lexical_sort
[src/modules] mod_stderr
[src/modules] mod_openmp
[src/modules] mod_memory
[src/modules] mod_parallel
[src/modules] mod_descriptors
[src/modules] mod_cufft
[src/modules] mod_cudafor
[src/modules] mod_cusolverdn_y
[src/modules] mod_hip
[src/modules] mod_hipfft
make[2]: *** [/home/woody/xyz/software/yambo_5.3_alex_gpu/config/mk/local/rules.mk:15: mod_gpu.o] Error 2
[driver] yambo (setup)
yambo linking failed. Check log/compile_yambo.log
make[1]: *** [config/mk/global/actions/compile_yambo.mk:43: yambo] Error 1
yambo build failed
make: *** [Makefile:54: all] Error 1

From config_devicexlib-0.8.5.log:

Code: Select all

./config/configure --prefix=/home/woody/bccc/bccc128h/software/yambo_5.3_alex_gpu/lib/external/nvfortran/mpifort/cudaf --enable-openmp --enable-cuda-fortran --with-cuda-cc=80 --with-cuda-runtime=11.5 --enable-cuda-env-check=no \           
configure: WARNING: you should use --build, --host, --target
configure: WARNING: invalid host type:  
checking build system type... config.sub: missing argument
Try `config.sub --help' for more information.
configure: error: /bin/sh ./config/config.sub   failed

From log/config_libxc-5.2.3.log:

Code: Select all

./configure --prefix=/home/woody/bccc/bccc128h/software/yambo_5.3_alex_gpu/lib/external/nvfortran/mpifort --libdir=/home/woody/bccc/bccc128h/software/yambo_5.3_alex_gpu/lib/external/nvfortran/mpifort/lib CC=mpicc CFLAGS=-O2 -D_C_US -D_FORTRAN_US  FC=mpifort CPP=cpp -E -P FCCPP=nvfortran -Mpreprocess -E
/bin/sh: ./configure: No such file or directory

From log/config_netcdf-fortran-4.6.0.log:

Code: Select all

checking for stdint.h... yes
checking for strings.h... yes
checking for sys/stat.h... yes
checking for sys/types.h... yes
checking for unistd.h... yes
checking for sys/time.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if mpicc supports -fno-rtti -fno-exceptions... no
checking for mpicc option to produce PIC... -fPIC -DPIC
checking if mpicc PIC flag -fPIC -DPIC works... yes
checking if mpicc static flag -static works... no
checking if mpicc supports -c -o file.o... yes
checking if mpicc supports -c -o file.o... (cached) yes
checking whether the mpicc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... nvc-Error-Unknown switch: -print-search-dirs
GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes
checking for mpifort option to produce PIC... 
checking if mpifort static flag  works... yes
checking if mpifort supports -c -o file.o... yes
checking if mpifort supports -c -o file.o... (cached) yes
checking whether the mpifort linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... (cached) GNU/Linux ld.so

From log/config_fftw-3.3.10.log:

Code: Select all

checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc supports -fno-rtti -fno-exceptions... no
checking for /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc option to produce PIC... -fPIC -DPIC
checking if /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc PIC flag -fPIC -DPIC works... yes
checking if /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc static flag -static works... no
checking if /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc supports -c -o file.o... yes
checking if /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc supports -c -o file.o... (cached) yes
checking whether the /apps/SPACK/0.17.0/opt/linux-almalinux8-zen/gcc-8.4.1/nvhpc-21.11-hm4rmtk5cylmeavh7qrk6uc36t7lfxbv/Linux_x86_64/21.11/compilers/bin/nvc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... nvc-Error-Unknown switch: -print-search-dirs
GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes
checking for ranlib... (cached) ranlib
checking for ocamlbuild... no
checking whether C compiler accepts -mtune=native... yes
checking whether C compiler accepts -malign-double... no
checking whether C compiler accepts -fstrict-aliasing... yes
checking whether C compiler accepts -fno-schedule-insns... no
checking whether C compiler accepts -O3 -fomit-frame-pointer -mtune=native -fstrict-aliasing... no

Can you help me fix these issues?

--
Thanks and best regards,
Vipul Kumar Ambasta
MSc. student
Friedrich Alexander Universitaet
Erlangen, Germany

Nicola Spallanzani
Posts: 103
Joined: Thu Nov 21, 2019 10:15 am

Re: Compilation error Yambo 5.3 [GPU support]

Post by Nicola Spallanzani » Mon Mar 23, 2026 9:57 am

Dear Vipul,
I recommend minimizing environment variable definitions.
Also note that you were using gcc as the C compiler instead of nvc.
Try this setup:

Code: Select all

module purge
module load nvhpc/21.11
module load openmpi/4.1.2-nvhpc21.11-cuda
module load mkl/2023.2.0
module load hdf5/1.10.7-nvhpc21.11

MKL_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"

make distclean

./configure \
    FC=nvfortran \
    F77=nvfortran \
    CC=nvc \
    MPIFC=mpifort \
    MPIF77=mpifort \
    MPICC=mpicc \
    CPP="cpp -E -P" \
    FPP="nvfortran -Mpreprocess -E" \
    --with-blas-libs="$MKL_LIBS" \
    --with-lapack-libs="$MKL_LIBS" \
    --with-hdf5-path=/path/to/the/library/installation/directory \
    --with-cuda-cc=80 \
    --with-cuda-runtime=11.5 \
    --enable-cuda-fortran \
    --enable-mpi \
    --enable-open-mp \
    --enable-time-profile \
    --enable-memory-profile \
    --with-extlibs-path=/path/to/where/you/want/to/install/the/libraries
 
Please note that the configure line I am providing contains two paths that you need to fill in.
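For illustration only, the two placeholders could be filled in along these lines (hypothetical values; HDF5_ROOT is assumed to be set by your hdf5 module, and the extlibs directory can be any writable location):

```shell
# Hypothetical values only; substitute paths valid on your cluster.
HDF5_PATH="${HDF5_ROOT}"               # root of the module-provided HDF5 install
EXTLIBS_PATH="${HOME}/yambo_ext_libs"  # any writable directory for the internal libraries
mkdir -p "${EXTLIBS_PATH}"
echo "--with-hdf5-path=${HDF5_PATH} --with-extlibs-path=${EXTLIBS_PATH}"
```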

Additionally, to compile libxc you need the autotools suite: autoconf, automake, and libtool. If they are not available on your compute node, you may find them as modules to load.
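A quick way to check whether the autotools are already available on the node is a sketch like this (version strings will vary; the "module avail" hint assumes an environment-modules setup like the one used above):

```shell
# Report which autotools components are present on this node.
for tool in autoconf automake libtool; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version | head -n1)"
  else
    printf '%s: not found (try "module avail %s")\n' "$tool" "$tool"
  fi
done
```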

Finally, to avoid problems with CUDA recognition in devicexlib, edit the file "libs/devxlib/Makefile" like this:

Code: Select all

CONFFLAGS=--prefix=$(LIBPATH) $(devxlib_flgs) --enable-cuda-env-check=no \
          --with-blas-libs="$(BLAS_LIBS)" --with-lapack-libs="$(LAPACK_LIBS)" \
          F90="$(fc)" MPIF90="$(fc)" 

Best regards,
Nicola
Nicola Spallanzani, PhD
S3 Centre, Istituto Nanoscienze CNR and MaX Center, Italy
MaX - Materials design at the Exascale
http://www.nano.cnr.it
http://www.max-centre.eu

Post Reply