Classification chain

From OTBWiki

The SVM classification framework

The aim of the new SVM classification framework is to provide a supervised pixel-wise classification chain based on learning from multiple images. It supports huge images through streaming and multi-threading.

Please note that this framework is still in development. We would be glad to receive feedback on these tools.

Building a training data set

The chain is supervised: one has to build a training set with positive examples of the different objects of interest. This can be done with the Vectorization Monteverdi module, by building a VectorData containing polygons centered on occurrences of the different objects of interest. This operation must be repeated for each image used as input to the training application.

Please note that the positive examples in the vector data should have a Class field with a label higher than 1, used consistently across the different images.

Training the classification tool

The classification chain will perform an SVM training step using the intensities of each pixel as features. Please note that the images must have the same number of bands to be comparable.

Images statistics

In order to make these features comparable between all the input images, the first step consists in estimating the statistics of the group of input images. These statistics will be used to center and reduce the intensities (mean of 0, std dev of 1) of samples based on the vector data produced by the user. To do so, the otbComputeImagesStatistics tool can be used:

otbComputeImagesStatistics-cli -il list_of_input_images -out statistics.xml

This tool will compute the mean of each band and its standard deviation (based on the pooled variance over all input images), then export them to an XML file.
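The centering/reduction that these statistics enable can be sketched as follows. This is a simplified stand-in for what otbComputeImagesStatistics estimates, assuming the images are held as NumPy arrays of shape (rows, cols, bands):

```python
import numpy as np

def pooled_band_statistics(images):
    """Per-band mean and standard deviation over all pixels of all images.

    Simplified sketch: pools every pixel of every image, which is what
    makes samples from different images comparable."""
    pixels = np.concatenate([img.reshape(-1, img.shape[-1]) for img in images])
    return pixels.mean(axis=0), pixels.std(axis=0)

def center_and_reduce(image, mean, std):
    """Shift each band to mean 0 and scale it to standard deviation 1."""
    return (image - mean) / std

# Two toy 2-band images of different sizes
rng = np.random.default_rng(0)
imgs = [rng.random((4, 4, 2)), rng.random((3, 3, 2))]
mean, std = pooled_band_statistics(imgs)
normalized = center_and_reduce(imgs[0], mean, std)
```

Standardizing every input image with the same pooled statistics gives the whole sample population zero mean and unit standard deviation per band, even though each individual image may deviate slightly.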

The features statistics XML file will be an input of the following tools.


Once the image statistics have been estimated, the learning scheme is the following:

  1. For each of the input images:
    1. Read the region of interest (ROI) inside the shapefile,
    2. Generate validation and training samples within the ROI,
    3. Add these vectors to the training sample set and the validation sample set respectively.
  2. Center and reduce each sample using the statistics from the XML statistics file,
  3. Enlarge and balance the training sample set by generating new noisy samples from the existing ones,
  4. Train the SVM with this training set,
  5. Estimate the performance of the SVM classifier on the validation sample set (confusion matrix, precision, recall).
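The sample-handling part of this scheme (splitting, then augmenting with noisy copies) can be sketched as below. The function names, the split strategy, and the noise level are illustrative assumptions, not the actual OTB implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_samples(samples, labels, vtr=0.5):
    """Randomly split samples into training and validation sets.

    vtr is the validation/training ratio (like -sample.vtr)."""
    idx = rng.permutation(len(samples))
    n_val = int(len(samples) * vtr)
    val, train = idx[:n_val], idx[n_val:]
    return samples[train], labels[train], samples[val], labels[val]

def augment_with_noise(samples, labels, factor=2, sigma=0.01):
    """Enlarge the training set by adding Gaussian-noise copies of the
    existing samples (step 3 of the scheme)."""
    copies = [samples] + [samples + rng.normal(0.0, sigma, samples.shape)
                          for _ in range(factor - 1)]
    return np.concatenate(copies), np.tile(labels, factor)

X = rng.random((100, 4))      # 100 pixel samples with 4 bands each
y = rng.integers(1, 4, 100)   # class labels in {1, 2, 3}
Xtr, ytr, Xval, yval = split_samples(X, y, vtr=0.5)
Xtr, ytr = augment_with_noise(Xtr, ytr)
# An SVM would now be trained on (Xtr, ytr) and scored on (Xval, yval).
```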

These steps can be performed by the otbTrainImagesClassifier application using the following command:

otbTrainImagesClassifier-cli -io.imstat images_statistics.xml list_of_input_images -io.vd list_of_positive_examples_shapefiles -io.out model.svm

Some options are available:

  • -elev.dem a DEM directory, used to keep the vector data reprojection accurate
  • -classifier.libsvm.k the SVM kernel (linear (default) / rbf / poly / sigmoid)
  • -classifier.libsvm.opt enable libsvm parameter optimization
  • maximum_training_samples_size
  • maximum_validation_samples_size
  • -sample.vtr the validation/training ratio

Validate the classification model

It is also possible to estimate the performance of the SVM model on a new set of validation samples and another image. The otbValidateImagesClassifier application computes the global confusion matrix and the precision, recall and F-score of each class, based on the ConfusionMatrixCalculator class:

otbValidateImagesClassifier-cli -imstat images_statistics.xml -model model.svm -il list_of_input_images -vd list_of_positive_examples_shapefiles

You can save these results with the -out output_filename option. You can also set a DEM directory (-elev.dem) to keep the vector data reprojection accurate.
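The metrics this validation step reports can be computed from a confusion matrix as sketched below (a simplified illustration of what the ConfusionMatrixCalculator class produces, not its actual code):

```python
import numpy as np

def confusion_matrix(ref, pred, labels):
    """Build a confusion matrix: rows = reference classes,
    columns = predicted classes."""
    index = {c: i for i, c in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)), dtype=int)
    for r, p in zip(ref, pred):
        m[index[r], index[p]] += 1
    return m

def per_class_scores(m):
    """Precision, recall and F-score per class from the matrix.

    Note: divides by zero for a class that is never predicted or never
    present in the reference; real implementations guard against this."""
    tp = np.diag(m).astype(float)
    precision = tp / m.sum(axis=0)   # column sums = predicted counts
    recall = tp / m.sum(axis=1)      # row sums = reference counts
    fscore = 2 * precision * recall / (precision + recall)
    return precision, recall, fscore

ref  = [1, 1, 2, 2, 3, 3]   # reference (validation) labels
pred = [1, 2, 2, 2, 3, 1]   # labels predicted by the classifier
m = confusion_matrix(ref, pred, labels=[1, 2, 3])
p, r, f = per_class_scores(m)
```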

Using the classification model

Once the classifier has been trained, one can apply the model to classify the pixels of a new image into the learned classes using the otbImageClassifier tool:

 otbImageClassifier-cli -imstat images_statistics.xml -model model.svm -in input_image -out labeled_image

You can set an input mask to restrict the classification to the pixels where the mask value is greater than 0.
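Masked pixel-wise classification can be sketched as below. The toy predictor stands in for the trained SVM model; the function name and the convention of writing 0 for unclassified pixels are illustrative assumptions:

```python
import numpy as np

def classify_with_mask(image, mask, predict):
    """Label each pixel where mask > 0; leave 0 (unclassified) elsewhere.

    `predict` maps an (n_samples, bands) array to n_samples labels,
    standing in for the trained model."""
    rows, cols, bands = image.shape
    labels = np.zeros((rows, cols), dtype=int)
    selected = mask > 0
    if selected.any():
        labels[selected] = predict(image[selected].reshape(-1, bands))
    return labels

# Toy stand-in predictor: class 1 if the first band is above 0.5, else 2.
predict = lambda X: np.where(X[:, 0] > 0.5, 1, 2)

img = np.full((2, 2, 3), 0.8)          # tiny 3-band image
mask = np.array([[1, 0], [1, 1]])      # one masked-out pixel
out = classify_with_mask(img, mask, predict)
```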

Fancy classification results

In order to get an RGB classification map instead of grey-level labels, one can use the otbColorMapping tool. This tool will replace each label with an 8-bit RGB color specified in a mapping file. The mapping file should look like this:

# Lines beginning with a # are ignored
1 255 0 0

In the previous example, 1 is the label and 255 0 0 is an RGB color (this one will be rendered as red). To use the mapping tool, enter the following:

otbColorMapping-cli -in labeled_image -out color_image -method.custom.lut mapping_file
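The mapping-file format and the label-to-color substitution can be sketched as below (a simplified illustration of the tool's behavior; the fallback color for unmapped labels is an assumption):

```python
import numpy as np

def parse_color_map(text):
    """Parse a mapping file: one `label R G B` entry per line,
    lines beginning with # are ignored."""
    lut = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        label, r, g, b = map(int, line.split())
        lut[label] = (r, g, b)
    return lut

def apply_color_map(labels, lut, default=(0, 0, 0)):
    """Replace each label with its 8-bit RGB color; unmapped labels get
    the default color (an assumption of this sketch)."""
    rgb = np.zeros(labels.shape + (3,), dtype=np.uint8)
    rgb[:] = default
    for label, color in lut.items():
        rgb[labels == label] = color
    return rgb

mapping = "# Lines beginning with a # are ignored\n1 255 0 0\n2 0 255 0\n"
lut = parse_color_map(mapping)
colored = apply_color_map(np.array([[1, 2], [2, 1]]), lut)
```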


We take 4 classes: water, roads, vegetation and buildings with red roofs. The data are available in the OTB-Data repository, and this image was produced with the commands inside this file.

Input image to classify (extract of a reprojected Quickbird image)
Result image with fusion (with monteverdi viewer) of original image and fancy classification
Result image: input image with fancy color classification from labeled image