NVHPC - multiple definitions of main

Having trouble compiling the Yambo source? Using an unusual architecture? Problems with the "configure" script? Problems in GPU architectures? This is the place to look.

Moderators: Davide Sangalli, andrea.ferretti, myrta gruning, andrea marini, Daniele Varsano, Conor Hogan, Nicola Spallanzani

Forum rules
If you have trouble compiling Yambo, please make sure to list:
(1) the compiler (vendor and release: e.g. intel 10.1)
(2) the architecture (e.g. 64-bit IBM SP5)
(3) if the problems occur compiling in serial/in parallel
(4) the version of Yambo (revision number/major release version)
(5) the relevant compiler error message
ruoshi
Posts: 5
Joined: Fri Feb 04, 2022 9:08 pm

NVHPC - multiple definitions of main

Post by ruoshi » Fri Feb 04, 2022 9:20 pm

We have NVHPC 21.9 and CUDA 11.4 on our CentOS 7.8 cluster. I'm using the following configure command for 5.0.3 according to the instructions:

Code: Select all

./configure FC=nvfortran F77=nvfortran  CC=nvc CPP="gcc -E -P" FPP="gfortran -E -P -cpp" --enable-open-mp --enable-par-linalg --enable-hdf5-par-io --enable-slepc-linalg --enable-cuda="cuda11.4,cc70"
I ran "make core". There were some bugs that I had to sort out, which are not critical (I can elaborate on that later). At the "Linking yambo" stage, I got this error:

Code: Select all

driver.o: In function `main':
/home/ejf5wk/yambo/5.0.3_rs7wz/lib/yambo/driver/src/driver/driver.c:36: multiple definition of `main'
/sfs/applications/202112_build/software/standard/core/nvhpc/21.9/Linux_x86_64/21.9/compilers/lib/f90main.o:nvcsdZ121AEfFQ7.ll:(.text+0x0): first defined here
/sfs/applications/202112_build/software/standard/core/nvhpc/21.9/Linux_x86_64/21.9/compilers/lib/f90main.o: In function `main':
nvcsdZ121AEfFQ7.ll:(.text+0x2f): undefined reference to `MAIN_'
pgacclnk: child process exit status 1: /bin/ld
make: *** [yambo] Error 2
I found two other posts on this issue from 2011, but they were using Intel compilers, and -nofor-main is not recognized by nvc/nvfortran.
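As far as I understand it, the clash comes from the fact that nvfortran, when it drives the link, adds its own f90main.o (which expects a Fortran MAIN_), and this collides with the C main in driver.c. A minimal sketch outside of Yambo, with made-up file names:

Code: Select all

# Illustrative sketch only (file names are made up): reproduce the clash between
# a C main and the Fortran runtime's f90main.o when nvfortran drives the link.
cat > cmain.c <<'EOF'
int main(int argc, char *argv[]) { return 0; }   /* plays the role of driver.c */
EOF
cat > fsub.f90 <<'EOF'
subroutine some_sub()   ! stands in for the Fortran sources
end subroutine some_sub
EOF
nvc -c cmain.c
nvfortran -c fsub.f90
nvfortran cmain.o fsub.o -o test_link
# -> "multiple definition of `main'" and "undefined reference to `MAIN_'",
#    because f90main.o is linked in addition to the C main.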

Thank you in advance for any suggestions.
Ruoshi Sun
Lead Scientist of Scientific Computing
Research Computing, University of Virginia

andrea.ferretti
Posts: 206
Joined: Fri Jan 31, 2014 11:13 am

Re: NVHPC - multiple definitions of main

Post by andrea.ferretti » Sat Feb 05, 2022 10:06 am

Dear Ruoshi,

In principle, the compiler flags set by yambo configure should avoid this problem (multiple mains).
One should look carefully at the config.log file but, from what I see here, there is a mix of different compilers
(nvhpc for CC and FC, and gcc/gfortran as the preprocessors), which I fear may confuse configure.

Below are some flags that I use to compile with nvhpc:

Code: Select all

./configure \
 FC=nvfortran \
 F77=nvfortran \
 CPP="cpp -E" \
 FPP="nvfortran -Mpreprocess -E" \
 PFC=mpif90 \
 CC=nvcc \
  --with-blas-libs="-lblas" \
  --with-lapack-libs="-llapack" \
  --enable-cuda=cuda10.1,cc70 \
  --enable-open-mp \
  --enable-mpi \
  --enable-time-profile \
  --enable-memory-profile \
  --enable-msgs-comps
take care
Andrea
Andrea Ferretti, PhD
CNR-NANO-S3 and MaX Centre
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322; Skype: andrea_ferretti
URL: http://www.nano.cnr.it

ruoshi
Posts: 5
Joined: Fri Feb 04, 2022 9:08 pm

Re: NVHPC - multiple definitions of main

Post by ruoshi » Sat Feb 05, 2022 1:31 pm

Hi Andrea,

Thank you for providing the new configure command. Perhaps it can be incorporated into https://www.yambo-code.org/wiki/index.p ... n_compiler?

The bad news is that I still ended up with the same multiple-main error. I extracted the source into a new directory and started from scratch. Here's the summary after running configure:

Code: Select all

# [VER] 5.0.3 r.19584
# 
# - GENERAL CONFIGURATIONS -
# 
# [SYS] linux@x86_64
# [SRC] /home/ejf5wk/yambo/5.0.3_GPU
# [BRN] 
# [CMP] /home/ejf5wk/yambo/5.0.3_GPU
# [TGT] /home/ejf5wk/yambo/5.0.3_GPU
# [BIN] /home/ejf5wk/yambo/5.0.3_GPU/bin
# [LIB] /home/ejf5wk/yambo/5.0.3_GPU/lib/external
#
# [EDITOR] vim
# [ MAKE ] make
#
# [-] Double precision
# [X] Keep object files
# [X] Run-Time timing profile 
# [X] Run-Time memory profile 
#
# - SCRIPTS -
#
# [-] YDB: Yambo DataBase
# [-] YAMBOpy: Yambo Python scripts
# 
# - PARALLEL SUPPORT -
#
# [X] MPI
# [X] OpenMP
# 
# - LIBRARIES [E=external library; I?=internal library (c=to be compiled / f=found already compiled); X=system default; -=not used;] -
#
# > I/O: (NETCDF with large files support) 
#
# [ - ] FUTILE  :  
#                  
# [ - ] YAML    :  
#                  
# [ Ic] IOTK    : /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/lib/libiotk.a (QE hdf5-support)
#                 -I/home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/include/
# [ - ] ETSF_IO :  
#                  
# [ Ic] NETCDF  : /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/lib/libnetcdff.a /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/lib/libnetcdf.a
#                 -I/home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/include -I/home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/include
# [ Ic] HDF5    : /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/lib/libhdf5hl_fortran.a /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/lib/libhdf5_fortran.a /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/lib/libhdf5_hl.a /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/lib/libhdf5.a -lz -lm -ldl -lcurl
#                 -I/home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/v4/serial/include
#
# > MATH: (Internal FFTW3) 
#
# [ Ic] FFT       : /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/lib/libfftw3.a
#                   -I/home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/include/
# [ E ] BLAS      : -lblas
# [ E ] LAPACK    : -llapack
# [ - ] SCALAPACK :  
# [ - ] BLACS     : 
# [ - ] PETSC     :  
#                    
# [ - ] SLEPC     :  
#                    
#
# > OTHERs
#
# [ Ic] LibXC     : /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/lib/libxcf90.a /home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/lib/libxc.a
#                   -I/home/ejf5wk/yambo/5.0.3_GPU/lib/external/unknown/mpifort/include
# [ E ] MPI       : -L/apps/software/standard/compiler/nvhpc/21.9/openmpi/3.1.6/lib -lmpi 
#                   -I/apps/software/standard/compiler/nvhpc/21.9/openmpi/3.1.6/include 
# [ Ic] Ydriver   : 1.0.0
#
# - COMPILERS -
#
# FC kind = unknown 
# MPI kind= Open MPI v3.1.6, package: Open MPI uvacse@udc-ba38-31c9 Distribution, ident: 3.1.6, repo rev: v3.1.6, Mar 18, 2020
#
# [ CPP ] cpp -E -traditional -D_HDF5_LIB -D_HDF5_IO -D_MPI -D_FFTW     -D_OPENMP -D_TIMING   -D_CUDA  -D_P2Y_QEXSD_HDF5
# [ FPP ] nvfortran -Mpreprocess -E -D_HDF5_LIB -D_HDF5_IO -D_MPI -D_FFTW     -D_OPENMP -D_TIMING   -D_CUDA 
# [ CC  ] mpicc -O2 -D_C_US -D_FORTRAN_US
# [ FC  ] mpifort -g -O   -Mcuda=cuda11.4,cc70 -Mcudalib=cufft,cublas,cusolver
# [ FCUF] -O0 -Mcuda=cuda11.4,cc70 -Mcudalib=cufft,cublas,cusolver
# [ F77 ] mpifort -g -O -Mcuda=cuda11.4,cc70 -Mcudalib=cufft,cublas,cusolver
# [ F77U] -O0 -Mcuda=cuda11.4,cc70 -Mcudalib=cufft,cublas,cusolver
# [Cmain]
There were a few "bugs" that I encountered during the build process:
  • For src/modules and src/Yio, I had to add -Mbackslash to fcflags in the Makefile. Otherwise I would get the error "NVFORTRAN-S-0026-Unmatched quote".
  • In src/modules/mod_functions.F, the function "isnan" is not recognized. I had to copy the ieee version from the if block to the else block (which defeats the purpose of having an if-else). Would this cause any problems at runtime?
I don't think these are related to the multiple main error, but I wanted to report all the changes I made, just in case.
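In case it is useful, here is a rough standalone illustration of both issues (made-up file names, not the actual Yambo code); the second snippet is the kind of portable ieee_arithmetic check I ended up using:

Code: Select all

# 1) Backslash handling: without -Mbackslash, nvfortran may treat \ in a quoted
#    string as an escape, so a backslash right before the closing quote gives
#    "NVFORTRAN-S-0026-Unmatched quote".
cat > bs.f90 <<'EOF'
program bs
  print *, 'trailing backslash\'
end program bs
EOF
nvfortran -c bs.f90              # may fail with the unmatched-quote error
nvfortran -Mbackslash -c bs.f90  # backslash treated as an ordinary character

# 2) isnan: the GNU isnan() extension is not provided by nvfortran; the standard
#    ieee_arithmetic module offers a portable alternative.
cat > nan.f90 <<'EOF'
program nan_check
  use, intrinsic :: ieee_arithmetic, only: ieee_is_nan, ieee_value, ieee_quiet_nan
  real :: x
  x = ieee_value(x, ieee_quiet_nan)
  print *, ieee_is_nan(x)
end program nan_check
EOF
nvfortran -c nan.f90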
Ruoshi Sun
Lead Scientist of Scientific Computing
Research Computing, University of Virginia

andrea.ferretti
Posts: 206
Joined: Fri Jan 31, 2014 11:13 am

Re: NVHPC - multiple definitions of main

Post by andrea.ferretti » Sat Feb 05, 2022 3:14 pm

Dear Ruoshi,

thanks for reporting.
* The problems with isnan and backslash are known (and fixed either in later releases, e.g. 5.0.4, or in the develop version).
The point here is that the compiler is not recognised as PGI, so the -D_PGI macro is not set (nor is it recognised as NVIDIA).
This is made automatic in the fix.
* Also, I think that, due to the unrecognised environment/compiler, some relevant flags, e.g. those needed to avoid the double-main problem, are not set (below you can find some examples).
* If you need to modify some compiler flags, you can do it once in config/setup, while macros can be added/modified (I think) in config/mk/global/defs.mk.
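For instance, just as a quick sanity check (the grep patterns and files below are only illustrative), you can look at what the compiler reports about itself and whether the macro and the no-main flag ended up anywhere in the generated build settings:

Code: Select all

nvfortran --version                        # compiler identification string
grep -n "_PGI"   config/setup config/mk/global/defs.mk
grep -n "nomain" config/setup config/mk/global/defs.mk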

Here are the flags automatically detected in my case:

Code: Select all

#
# FC kind = nvfortran nvfortran 20.9-0 LLVM 64-bit target on x86-64 Linux -tp skylake 
# MPI kind= Open MPI v3.1.5, package: Open MPI qa@build-lin64 Distribution, ident: 3.1.5, repo rev: v3.1.5, Nov 15, 2019
#
# [ CPP ] cpp -E -traditional -D_HDF5_LIB -D_HDF5_IO -D_MPI -D_FFTW   -D_PGI  -D_OPENMP -D_TIMING   -D_CUDA  -D_P2Y_QEXSD_HDF5
# [ FPP ] nvfortran -Mpreprocess -E -D_HDF5_LIB -D_HDF5_IO -D_MPI -D_FFTW   -D_PGI  -D_OPENMP -D_TIMING   -D_CUDA 
# [ CC  ] mpicc -O2 -D_C_US -D_FORTRAN_US
# [ FC  ] mpifort -O1 -gopt -Mnoframe -Mdalign -Mbackslash -cpp  -mp -Mcuda=cuda10.1,cc70,nollvm -Mcudalib=cufft,cublas,cusolver
# [ FCUF] -O0 -g -Mbackslash -cpp -Mcuda=cuda10.1,cc70,nollvm -Mcudalib=cufft,cublas,cusolver
# [ F77 ] mpifort -O1 -gopt -Mnoframe -Mdalign -Mbackslash -cpp -Mcuda=cuda10.1,cc70,nollvm -Mcudalib=cufft,cublas,cusolver
# [ F77U] -O0 -g -Mbackslash -cpp -Mcuda=cuda10.1,cc70,nollvm -Mcudalib=cufft,cublas,cusolver
# [Cmain] -Mnomain
#
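The [Cmain] entry is the relevant one for your link error: -Mnomain tells the nvfortran driver not to add its own f90main.o when the main program is written in C. With that flag, the small two-file sketch from your first post should link cleanly (again, purely illustrative):

Code: Select all

nvfortran -Mnomain cmain.o fsub.o -o test_link   # no Fortran main is added, so no clash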
Andrea
Andrea Ferretti, PhD
CNR-NANO-S3 and MaX Centre
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322; Skype: andrea_ferretti
URL: http://www.nano.cnr.it

ruoshi
Posts: 5
Joined: Fri Feb 04, 2022 9:08 pm

Re: NVHPC - multiple definitions of main

Post by ruoshi » Mon Feb 07, 2022 2:28 am

Thank you Andrea! 5.0.4 went smoothly. I didn't have to modify any files.
Ruoshi Sun
Lead Scientist of Scientific Computing
Research Computing, University of Virginia
