Running in parallel
If you have compiled the gcm program in its parallel configuration with MPI (option -parallel mpi or -parallel mpi_omp of makelmdz_fcm), the program distributes the horizontal domain among the MPI processes as bands of latitudes. With two MPI processes, the optimal distribution simply gives each process a hemisphere. For a larger number of MPI processes at a given spatial resolution, the optimal distribution of latitude bands may change not only if you modify the source code, but also if you change machine, compiler or compilation options, or the execution parameters (set in the *.def files).
To find out what the optimal distribution is, first run the code for a few thousand time steps with adjust=y set in run.def. The program then creates a file Bands_...prc.dat, whose name includes, instead of the dots, the spatial resolution and the number of MPI processes, for instance Bands_96x72x19_4prc.dat. This file contains the optimal distribution of latitude bands for these settings. You can then run the gcm with adjust=n in run.def and with the Bands_...prc.dat file in the same directory.
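The two-phase procedure above can be sketched as the following command sequence; the executable name gcm.e, the process count, and the use of sed to toggle the flag are illustrative assumptions, not prescribed by the model:

```shell
# Phase 1: enable load-balancing adjustment and do a short tuning run
sed -i 's/adjust=n/adjust=y/' run.def
mpirun -np 4 ./gcm.e            # a few thousand time steps suffice
# This produces a Bands file named after the resolution and process count,
# e.g. Bands_96x72x19_4prc.dat

# Phase 2: switch adjustment off and run with the Bands file in place
sed -i 's/adjust=y/adjust=n/' run.def
mpirun -np 4 ./gcm.e
```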
When running in parallel, each MPI process creates its own hist* files. You will thus obtain, for instance, a file histday.0001.nc containing the data for the latitude band managed by process number 1. To combine the output files from the different processes into a single file containing the full dataset, use the IOIPSL rebuild script. When IOIPSL was installed, a modipsl directory was created and the rebuild script was placed in modipsl/bin. For each type of file (histday, histmth, etc.), simply issue a command along the lines of:
rebuild -o histday.nc histday.000*
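To process several file types at once, a simple loop works; this sketch assumes rebuild is on your PATH (for instance after adding modipsl/bin to it) and that the per-process files follow the histday.0001.nc naming shown above:

```shell
# Rebuild each history file type from its per-process pieces
for f in histday histmth; do
    rebuild -o ${f}.nc ${f}.0*.nc
done
```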