## CCLM compiled with IntelMPI

Hi all,

CCLM was working fine using OpenMPI. However, I started having problems due to some missing OpenMPI libraries on the computer cluster. While this is being fixed, I have compiled the CCLM model with Intel MPI, but I am having problems during execution. Has anyone in the community been able to use Intel MPI with the CCLM model? I would guess yes, and that it shouldn't be a problem, but I am not sure why it doesn't work here. The error message follows:

```
Fatal error in MPI_Send: Other MPI error, error stack:
MPI_Send(186): MPI_Send(buf=0x298eff0, count=5, MPI_INTEGER, dest=0, tag=6665, comm=0x84000003) failed
MPID_Send(52): DEADLOCK: attempting to send a message to the local process without a prior matching receive
```


Cecille

### Replies (7)

#### RE: CCLM compiled with IntelMPI - Added by Burkhardt Rockel about 3 years ago

Your question regarding Intel MPI was not sent due to RedC email problems.
I hope it works now and that someone in the CLM-Community has already tested Intel MPI. I have not, sorry!

#### RE: CCLM compiled with IntelMPI - Added by Hans-Juergen Panitz about 3 years ago

Which “computer cluster”?
I “played” a little bit with Intel MPI on the new machine “MISTRAL” at DKRZ in Hamburg.
It worked (however, I decided for myself to use BULL-MPI).
I don’t have any experience with Intel MPI on other machines.

Hans-Jürgen

#### RE: CCLM compiled with IntelMPI - Added by Andrew Ferrone about 3 years ago

Hi Hans-Jürgen

Thanks for sharing your experiences with using INTELMPI at Mistral.

I guess you were using COSMO 5.0 for these tests?

Regards

Andrew

#### RE: CCLM compiled with IntelMPI - Added by Hans-Juergen Panitz about 3 years ago

Hi Andrew,

Correct, I used COSMO_5.0.
I reported on these tests during the Assembly in Luxembourg.
My recommendation was to use BULL-MPI, since it was somewhat faster than Intel MPI, especially when using larger numbers of nodes (> 25 to 30).
Nevertheless, Intel MPI worked.

Of course, in the meantime I have also run cosmo_4.8_clm17/19 on MISTRAL (successfully), but only with BULL-MPI (I myself decided to stick with BULL-MPI).

Hans-Jürgen

#### RE: CCLM compiled with IntelMPI - Added by Cecille Villanueva-Birriel about 3 years ago

Thank you, Hans-Jürgen and Andrew, for your replies. I am running cosmo4.8_clm17 on the Tier-1 supercomputer at Cenaero. However, you only used BULL-MPI for this model version and did not test Intel MPI. I would guess that if it worked with COSMO 5.0, it should work for previous model versions as well. But I would like to hear from other model users whether previous model versions worked with Intel MPI. As far as I know, BULL-MPI is not available on this supercomputer. Thank you again.

Cecille

#### RE: CCLM compiled with IntelMPI - Added by Reinaldo Silveira about 3 years ago

Hello Cecille,

I have been using Intel MPI with COSMO from older versions up to 5.01 and have never had a problem. I also have OpenMPI installed on the machine I use (Dell PowerEdge x64, 652 CPUs) and have used it without a problem as well. So yes, Intel MPI can run any version of COSMO. Now, looking at your problem, I believe it is a matter of setting the environment variables correctly. But it all depends on which version of Intel MPI you are using, since some versions offer more compatibility with your older OpenMPI environment than others. Buffering can be one such incompatibility, and execution is then halted with an error if the communication library does not recognize the send. Usually, “export I_MPI_COMPATIBILITY=#” before mpiifort or mpirun will make it through, where # is either 3 or 4, depending on the Intel MPI version. You probably need to start over with the right environment for Intel (you might have a look at WRF’s forum to see what comes out of ./configure for Intel).
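As a minimal sketch of the suggestion above (assuming the variable meant is Intel MPI’s `I_MPI_COMPATIBILITY`; the value and the launch command are placeholders you will need to adapt to your own setup):

```shell
# Assumption: Intel MPI's compatibility switch; set it BEFORE compiling/launching.
# Use 3 or 4 depending on your Intel MPI version.
export I_MPI_COMPATIBILITY=4

# Then compile and launch as usual (hypothetical binary name):
# mpiifort ... -o lmparbin
# mpirun -np 64 ./lmparbin
```

The key point is that the variable must be set in the environment of the shell that invokes `mpirun`, not inside the job after startup.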
Best wishes,
Reinaldo

#### RE: CCLM compiled with IntelMPI - Added by Reinaldo Silveira about 3 years ago

Hello again Cecille,
I was looking at my previous comment and it might be confusing. In fact, you need to tell the loader which mpirun you are using
before running the scripts, otherwise it may look at the wrong environment variables. For example, in my case there is a small script,
/opt/intel/impi/4.1.1/bin64/mpivars.sh, that sets I_MPI_ROOT and then the other variables and the libraries. They may happen to be compatible
with the OpenMPI variables, in which case nothing needs to be done; that does not seem to be your case. When it is not, you need to
load that script (e.g. “source /opt/intel/impi/4.1.1/bin64/mpivars.sh”) as well as any other related library. Exporting I_MPI_COMPATIBILITY alone might
not work.
Reinaldo
