Request for Comments-26: MPI support
[Request for Comments - 26] MPI support
Status
- Author: Rémi Cresson
- Submitted on 09.02.2016
- Open for comments
Content
What changes will be made and why would they make a better Orfeo ToolBox?
Tired of waiting hours for your Bayesian fusion? Sleeping at work because your texture features are not ready?
We propose to introduce Message Passing Interface (MPI) support in OTB. The goal is to provide components that ease the development of parallel processing pipelines on HPC architectures with OTB, and to bring parallel applications which can be deployed on multiple processing nodes.
Basically, a typical OTB application runs from a shell using:

    otbcli_MyNicePipeline -myparams params

What we propose is that, when possible, the same application could be run on N processing nodes from a shell on the front-end node (or through any other job submission procedure) using:

    mpirun -np $N otbcli_MyNicePipeline -myparams params
A key component for this achievement is the MPIImageFileWriter, which I have already partially developed here at IRSTEA (actually mine is more of an MPIGeoTiffImageFileWriter, because it only works on GeoTiffs!). Another key component is an MPIApplication wrapper, to make an application MPI-compliant (i.e. mpirun-able). This one could differ from the original simply by using MPIImageFileWriters instead of ImageFileWriters. Here is a list of components which could be developed in the framework of this eventual RFC (a minimal sketch of the writer idea is given after the list):
- otb::MPIImageFileWriter
- otb::wrapper::MPIApplication
- ...
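To make the idea more concrete, here is a minimal sketch, in plain MPI plus OTB/ITK, of how the output region could be split across processes. The strip-per-rank decomposition, the one-file-per-rank output and all names here are assumptions for illustration only; the actual otb::MPIImageFileWriter would instead coordinate the ranks so that they all write into a single output file.

    // Sketch only: each MPI process streams and writes its own horizontal strip.
    #include <mpi.h>
    #include <cstdlib>
    #include <sstream>
    #include "otbImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"
    #include "itkExtractImageFilter.h"

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      int rank = 0, nbProcs = 1;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nbProcs);

      typedef otb::Image<float, 2>            ImageType;
      typedef otb::ImageFileReader<ImageType> ReaderType;
      typedef otb::ImageFileWriter<ImageType> WriterType;

      ReaderType::Pointer reader = ReaderType::New();
      reader->SetFileName(argv[1]);
      reader->UpdateOutputInformation(); // read metadata only, no pixels yet

      // Split the full image into one horizontal strip per process
      ImageType::RegionType fullRegion = reader->GetOutput()->GetLargestPossibleRegion();
      ImageType::RegionType strip = fullRegion;
      const unsigned long rowsPerProc = fullRegion.GetSize(1) / nbProcs;
      strip.SetIndex(1, fullRegion.GetIndex(1) + rank * rowsPerProc);
      strip.SetSize(1, (rank == nbProcs - 1)
                         ? fullRegion.GetSize(1) - rank * rowsPerProc
                         : rowsPerProc);

      // Extract and write only this process' strip (one tile per rank here;
      // the real MPIImageFileWriter would coordinate writes to a single file)
      typedef itk::ExtractImageFilter<ImageType, ImageType> ExtractType;
      ExtractType::Pointer extract = ExtractType::New();
      extract->SetInput(reader->GetOutput());
      extract->SetExtractionRegion(strip);
      extract->SetDirectionCollapseToSubmatrix(); // keep the direction matrix as-is

      WriterType::Pointer writer = WriterType::New();
      writer->SetInput(extract->GetOutput());
      std::ostringstream oss;
      oss << argv[2] << "_rank" << rank << ".tif";
      writer->SetFileName(oss.str());
      writer->Update();

      MPI_Barrier(MPI_COMM_WORLD);
      MPI_Finalize();
      return EXIT_SUCCESS;
    }

Compiled against OTB and an MPI implementation, such a program would be launched with mpirun -np $N, each rank streaming only its own strip of the output.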
An OTB application whose image processing pipeline is fully streamable can be easily and quickly MPI-ized, just by replacing the original application wrapper with the MPI-ized one. Here is a non-exhaustive list of OTB 5.2 applications which fulfill this requirement and thus could be so gently MPI-ized:
- otbcli_Quicklook
- otbcli_RadiometricIndices
- otbcli_BandMath
- otbcli_GrayScaleMorphologicalOperation
- otbcli_BandMathX
- otbcli_BinaryMorphologicalOperation
- otbcli_HaralickTextureExtraction
- otbcli_BlockMatching
- otbcli_Rescale
- otbcli_RigidTransformResample
- otbcli_HyperspectralUnmixing
- otbcli_SARDecompositions
- otbcli_ColorMapping
- otbcli_ImageClassifier
- otbcli_SARPolarMatrixConvert
- otbcli_KMeansClassification
- otbcli_SFSTextureExtraction
- otbcli_SarRadiometricCalibration
- otbcli_ConcatenateImages
- otbcli_Smoothing
- otbcli_Convert
- otbcli_MeanShiftSmoothing
- otbcli_Superimpose
- otbcli_TileFusion
- otbcli_MultivariateAlterationDetector
- otbcli_Despeckle
- otbcli_DimensionalityReduction
- otbcli_EdgeExtraction
- otbcli_OpticalCalibration
- otbcli_OrthoRectification
- otbcli_Pansharpening
- otbcli_PredictRegression
- ...
Other OTB applications could be parallelized, but would need a more specific implementation: e.g. otbcli_HomologousPointExtraction needs to dispatch the tiles to process, then gather the produced matching points. Not complicated, but it requires dedicated code (a sketch of this dispatch/gather pattern is given below).
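As a purely illustrative sketch (the tile count, the point encoding and the round-robin dispatch are assumptions, not the actual application code), this dispatch/gather step could be expressed with plain MPI collectives:

    // Sketch of the dispatch/gather pattern: each process handles a subset of
    // tiles, then rank 0 gathers the variable-size lists of matching points.
    #include <mpi.h>
    #include <cstdlib>
    #include <vector>

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      int rank = 0, nbProcs = 1;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nbProcs);

      const int nbTiles = 64; // hypothetical number of tiles to process

      // Dispatch: process takes tiles rank, rank+nbProcs, rank+2*nbProcs, ...
      std::vector<double> localPoints; // flattened (x1, y1, x2, y2, ...) matches
      for (int tile = rank; tile < nbTiles; tile += nbProcs)
      {
        // ... extract homologous points on this tile, then append them
        localPoints.push_back(10.0 * tile);     // x (dummy value)
        localPoints.push_back(10.0 * tile + 1); // y (dummy value)
      }

      // Gather: rank 0 first collects the size of each contribution,
      // then the coordinates themselves with MPI_Gatherv
      int localCount = static_cast<int>(localPoints.size());
      std::vector<int> counts(nbProcs), displs(nbProcs);
      MPI_Gather(&localCount, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

      std::vector<double> allPoints;
      if (rank == 0)
      {
        int total = 0;
        for (int p = 0; p < nbProcs; ++p)
        {
          displs[p] = total;
          total += counts[p];
        }
        allPoints.resize(total);
      }
      MPI_Gatherv(localPoints.data(), localCount, MPI_DOUBLE,
                  allPoints.data(), counts.data(), displs.data(), MPI_DOUBLE,
                  0, MPI_COMM_WORLD);

      // On rank 0, allPoints now holds every matching point found by all processes
      MPI_Finalize();
      return EXIT_SUCCESS;
    }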
Solutions must be found to generate the MPI applications when building OTB (if an OTB_USE_MPI option, or something like this, is set). One naive solution is the following: when building OTB, for each OTB application, if the application is flagged as "natively parallelized", then a derived MPI application is compiled, with the MPIApplication wrapper simply replacing the original one. Otherwise (if the application is not flagged as "natively parallelized"), a specific MPI implementation for this particular application could be provided. If there is none, the application is simply not built in its MPI flavour. This is just a proposition, which of course needs to be discussed!
There are also some very interesting discussions to be had about the strategies an MPIImageFileWriter could implement (queue management, tile dispatching, ...).
For the next releases, this functionality could be a completely experimental core addition, since it does not require any modification of the existing OTB code.
When will those changes be available (target release or date)?
Those changes could be available for the next major release.
Who will be developing the proposed changes?
Community
- Rémi Cresson
- ...
Comments
We are discussing it here [1]
Support
List here community members that support this Request for Comments.
Corresponding Requests for Changes
List here links to corresponding Requests for Changes if any.