I think I have pinned down the netCDF problem in rev 14. As the number of k-points increases, the files get larger and larger, and I found this error during (actually at the end of) the setup run:
Code: Select all
 <03h-20m-02s> SE indexes |###############     | [075%] 05m-33s(E) 07m-24s(X)
 <03h-20m-24s> SE indexes |################    | [080%] 05m-55s(E) 07m-24s(X)
 <03h-20m-47s> SE indexes |#################   | [085%] 06m-18s(E) 07m-25s(X)
 <03h-21m-11s> SE indexes |##################  | [090%] 06m-42s(E) 07m-26s(X)
 <03h-21m-34s> SE indexes |################### | [095%] 07m-05s(E) 07m-27s(X)
 <03h-21m-57s> SE indexes |####################| [100%] 07m-28s(E) 07m-28s(X)
[ERROR] STOP signal received while in :[03] Transferred momenta grid
[ERROR][NetCDF] NetCDF: One or more variable sizes violate format constraints
I added a lot of print statements to trace the origin of the error, and I found that YAMBO (rev14) correctly completes all the loops required by the setup run without errors.
But when I checked the resulting database with the netCDF tools, I discovered that it was written in classic format:
Code: Select all
-bash-3.2$ ncdump -k ndb.kindx 
classic
-bash-3.2$ od -An -c -N4 ndb.kindx 
           C   D   F 001
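For reference, the same magic-byte check can be scripted. This is a minimal Python sketch (the file names in it are made up for illustration): the netCDF-3 format is identified by the first four bytes, `CDF\x01` for classic and `CDF\x02` for 64-bit offset, so a correctly created large-file database would show `C D F 002` in the `od` output above instead of `C D F 001`.

```python
def netcdf_kind(path):
    """Classify a netCDF-3 file by its 4-byte magic number.

    CDF\\x01 -> classic format
    CDF\\x02 -> 64-bit offset format
    """
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"CDF\x01":
        return "classic"
    if magic == b"CDF\x02":
        return "64-bit offset"
    return "unknown"
```

So `ncdump -k` reporting `classic` and `od` showing `C D F 001` agree: despite `--enable-largedb`, the file was created without the 64-bit offset flag.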
Note that, instead, I compiled rev14 with --enable-largedb, as this part of my config.log shows:
Code: Select all
(...)
enable_debug='yes'
enable_dp='no'
enable_largedb='yes'
(...)
I also added some debug prints inside the routine where the netCDF databases are created:
Code: Select all
(...)
#if defined _NETCDF_IO
         !
         ! Setting NF90_64BIT_OFFSET causes netCDF to create a 64-bit
         ! offset format file, instead of a netCDF classic format file.
         ! The 64-bit offset format imposes far fewer restrictions on very large
         ! (i.e. over 2 GB) data files. See Large File Support.
         !
         ! http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Large-File-Support.html
         ! http://www.unidata.ucar.edu/software/netcdf/faq-lfs.html
         !
         CREATE_MODE=nf90_share
         if ( present(ENABLE_LARGE_FILE)) CREATE_MODE=ior(nf90_share,nf90_64bit_offset)
         if ( present(ENABLE_LARGE_FILE)) then
           print*,'present ENABLE_LARGE_FILE', ENABLE_LARGE_FILE, desc
         else
           print*, 'not present ENABLE_LARGE_FILE'
         endif
         !
         if ( (io_action(ID)==OP_APP_WR_CL.or.io_action(ID)==OP_APP) ) then
           !
           if( file_exists(trim(io_file(ID))) ) then
             call netcdf_call(nf90_open(trim(io_file(ID)),&
&                             ior(nf90_write,nf90_share),io_unit(ID)))
           else
             call netcdf_call(nf90_create(trim(io_file(ID)),CREATE_MODE,io_unit(ID)))
             call netcdf_call(nf90_enddef(io_unit(ID)))
             if (io_action(ID)==OP_APP_WR_CL) io_action(ID)=OP_WR_CL
             if (io_action(ID)==OP_APP) io_action(ID)=OP_WR
           endif
           !
         else
           !
           call netcdf_call(nf90_create(trim(io_file(ID)),CREATE_MODE,io_unit(ID)))
           call netcdf_call(nf90_enddef(io_unit(ID)))
           !
         endif
#endif
(...)
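The create-mode selection above reduces to a bitwise OR of two flags. Here is a rough Python translation of that logic (a sketch only; the numeric values are the ones defined for NC_SHARE and NC_64BIT_OFFSET in the netCDF C header), which shows why the file comes out classic whenever the optional ENABLE_LARGE_FILE argument is not passed: the 64-bit-offset bit is simply never set.

```python
# Sketch of the CREATE_MODE selection in the Fortran excerpt above.
# Flag values as defined in netcdf.h (and mirrored by the F90 API).
NF90_SHARE = 0x0800          # nf90_share
NF90_64BIT_OFFSET = 0x0200   # nf90_64bit_offset

def create_mode(enable_large_file_present):
    """Mirror the Fortran: start from nf90_share, OR in the
    64-bit-offset bit only if ENABLE_LARGE_FILE was passed."""
    mode = NF90_SHARE
    if enable_large_file_present:
        mode |= NF90_64BIT_OFFSET  # -> 64-bit offset format file
    return mode
```

With `enable_large_file_present=False` the mode is just `nf90_share`, so `nf90_create` writes a classic-format file and any variable over the classic size limits triggers exactly the "variable sizes violate format constraints" error above.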
The kindx database is opened with:
Code: Select all
 ioQINDX=io_connect(desc='kindx',type=1,ID=io_db)
I hope this is clear and helps you resolve the problem.
Cheers!
Marco