Dear @julienosman
I tried otbApplicationLauncherCommandLine, because I saw in the OTB source code that mpiConfig->Init()
is called from otbApplicationLauncherCommandLine.cxx. In that case, no MPI error is raised.
My understanding is that MPI is not properly initialized when using the Python bindings.
Can you confirm whether the error is in the SLIC module itself, or whether something needs fixing in the OTB Python bindings?
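If the Python bindings really skip MPI initialization, one possible workaround (only a sketch, assuming mpi4py is available on the cluster, which I have not verified) would be to initialize MPI from Python before running the application, since importing mpi4py.MPI calls MPI_Init():

```python
# Hypothetical workaround: importing mpi4py.MPI triggers MPI_Init(), so the
# MPI_Barrier() later called inside OTB would no longer run before MPI_INIT.
# mpi4py availability on the cluster is an assumption, hence the guard.
try:
    from mpi4py import MPI
    mpi_ready = MPI.Is_initialized()  # True once the import has initialized MPI
except ImportError:
    mpi_ready = False  # mpi4py absent; fall back to otbApplicationLauncherCommandLine
```

If this makes the error go away, it would confirm that a missing MPI_Init() in the Python path is the culprit.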
I installed the SLIC remote module following the official OTB documentation. I built it against an OTB installed on the CNES Hal cluster (OTB v7.2, Python 3.7.2):
```shell
mkdir /Path/to/Module/build && cd /Path/to/Module/build
cmake -DOTB_DIR=/PathTo/OTB/install -DOTB_BUILD_MODULE_AS_STANDALONE=ON -DCMAKE_INSTALL_PREFIX=/theModulePath/install -DCMAKE_INSTALL_RPATH=/theModulePath/install/lib -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE ../
make install
```
I also exported the following variables:
```shell
export OTB_APPLICATION_PATH=/theModuleInstallFolder/lib:$OTB_APPLICATION_PATH
export PATH=/theModuleInstallFolder/bin:$PATH
```
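As a sanity check that the exported paths are picked up, the application should now be visible to the Python bindings (a sketch; the import guard is only there because otbApplication is not importable outside the cluster):

```python
# Hedged sanity check: with OTB_APPLICATION_PATH exported, the SLIC
# application should appear in the OTB application registry.
try:
    import otbApplication
    slic_visible = "SLIC" in otbApplication.Registry.GetAvailableApplications()
except ImportError:
    slic_visible = None  # bindings not available on this machine
print(slic_visible)
```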
When I ran the SLIC remote module, I systematically got an MPI error:
```
[queyruto@node123 DATA]$ python
Python 3.7.2 (default, Aug 14 2019, 14:00:35)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import otbApplication
>>> app = otbApplication.Registry.CreateApplication("SLIC")
>>> params = {"in": "SENTINEL2X_all.tif", "out": "SLIC.tif" }
>>> app.SetParameters(params)
>>> app.ExecuteAndWriteOutput()
2021-05-07 15:00:04 (INFO) SLIC: Default RAM limit for OTB is 256 MB
2021-05-07 15:00:04 (INFO) SLIC: GDAL maximum cache size is 0 MB
2021-05-07 15:00:04 (INFO) SLIC: OTB will use at most 24 threads
Auto tiling mode selected, available RAM = 128MB
Number of x tiles = 1
Number of y tiles = 1
Starting segmentation
*** The MPI_Barrier() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[node123.sis.cnes.fr:24702] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
```
To reproduce, follow the steps above.
Note: the OTB installed on the CNES HPC is compiled with MPI support enabled.
OS: Redhat 7
OTB: v7.2 with Python 3.7.2
SLIC: cloned from the master branch on 2021/05/10