Visualisation Refactoring

From OTBWiki


This page describes the design chosen for the refactoring of the visualisation classes.

The goals of this refactoring are the following:

  • Compatibility with the Model-View-Controller pattern in use for the applications,
  • More flexible and easy to use components,
  • Extensibility with minimal effort,
  • Clearer partition of the different features.

Overall design

Please note that some of the classes below have been split into base classes and subclasses for better generalisation. The overall scheme shown in VisuRefactOverallScheme.jpg remains valid, although some class names may have changed slightly.


The above design has been fully implemented.

The StandardImageViewer has been developed to provide a fully usable viewer class to plug at the end of a pipeline. It aims at replacing the current ImageViewer class. It contains all the necessary classes plugged together.

Still in progress:

  • Fixing various bugs,
  • Implementing new action handlers,
  • Cosmetic enhancement of the StandardImageViewer,
  • Interface enhancement of the StandardImageViewer,
  • Implementing new Curves (such as profiles or clouds of points),
  • Implementing new GlComponents, such as profile marks or size rules.


This section describes the three main classes of the View part, as well as their main interactions with the other parts.


The GlWidget is a base class for all widgets requiring an OpenGL context, such as ImageWidget or Curves2DWidget. It supports both GL_USE_ACCEL modes. This class also carries a string identifier, later used to determine which GlWidget an event comes from. Events, as well as resizing and moving actions, are forwarded to an instance of ImageWidgetController (if any) for processing.


The ImageWidget class derives from GlWidget and is responsible for basic rendering of an image buffer to the screen. As such, it provides a method to read the OpenGL buffer from an image pointer and display it on screen, as well as coordinate conversion helper methods. ImageWidget maintains a list of GlComponent instances to render additional elements over the image, such as polygons, vector data, or selection boxes.


The Curves2DWidget is a GlWidget whose purpose is to render a set of curves to the screen. As such, it provides axis and scale rendering, and iterates over a list of subclasses of Curves. HistogramCurves is a good example to start with. Curves2DWidget supports auto-scaling.
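Auto-scaling amounts to merging the value ranges of all curves so that every curve fits on screen. The sketch below is purely illustrative (the `Curve`, `AxisBounds` and `ComputeAxisBounds` names are hypothetical, not actual OTB API):

```cpp
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

// Hypothetical minimal curve: a list of (x, y) samples.
struct Curve {
  std::vector<std::pair<double, double>> samples;
};

// Axis bounds as [xMin, xMax] x [yMin, yMax].
struct AxisBounds {
  double xMin, xMax, yMin, yMax;
};

// Merge the ranges of every curve so that all of them fit on screen.
AxisBounds ComputeAxisBounds(const std::vector<Curve>& curves) {
  AxisBounds b{std::numeric_limits<double>::max(),
               std::numeric_limits<double>::lowest(),
               std::numeric_limits<double>::max(),
               std::numeric_limits<double>::lowest()};
  for (const Curve& c : curves)
    for (const auto& s : c.samples) {
      b.xMin = std::min(b.xMin, s.first);
      b.xMax = std::max(b.xMax, s.first);
      b.yMin = std::min(b.yMin, s.second);
      b.yMax = std::max(b.yMax, s.second);
    }
  return b;
}
```

The widget would recompute these bounds before each redraw and derive its axes from them.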


GlComponent and its subclasses deal with what needs to be displayed over an image. For instance, RegionGlComponent draws a rectangular region over an image, whereas VectorDataGlComponent renders a VectorData (imported from either a shapefile or a KML file) over an image.


ImageViewerModelListener is a pure abstract class defining the minimum needed to listen to ImageViewerModel updates. Any class deriving from it can register with the ImageViewerModel to receive notifications.


ImageView is a concrete implementation of ImageViewerModelListener. It contains three instances of ImageWidget, corresponding to the well-known scroll, full and zoom views of an image. Upon ImageViewerModel notifications, it updates one or more of these widgets to reflect the ImageViewerModel content.
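This listener mechanism is the classic Observer pattern. Here is a minimal self-contained sketch of the idea; the class names mirror, but greatly simplify, the actual OTB classes:

```cpp
#include <vector>

// Pure abstract listener: the minimum needed to receive model updates.
class ImageViewerModelListener {
public:
  virtual ~ImageViewerModelListener() = default;
  virtual void Notify() = 0;  // called on each model update
};

// Simplified model: keeps registered listeners and notifies them on update.
class ImageViewerModel {
public:
  void RegisterListener(ImageViewerModelListener* l) { m_Listeners.push_back(l); }
  void Update() {
    // ... rendering / rasterization would happen here ...
    for (ImageViewerModelListener* l : m_Listeners) l->Notify();
  }
private:
  std::vector<ImageViewerModelListener*> m_Listeners;
};

// A concrete view counting how many times it was refreshed.
class CountingView : public ImageViewerModelListener {
public:
  int refreshCount = 0;
  void Notify() override { ++refreshCount; }
};
```

In the real design, ImageView reacts to Notify() by refreshing its scroll, full and zoom widgets.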


The PixelDescriptionEventListener aims at listening to updates from the PixelDescriptionModel. Usually, these updates are triggered by hovering over the image widgets.


PixelDescriptionView derives from PixelDescriptionEventListener and contains an FLTK text output that it keeps up to date when receiving updates.


The controller consists of a main controller class, ImageWidgetController, which automatically dispatches events to dynamically loaded action plugins, which are specializations of ImageWidgetActionHandler. This scheme is based on the Chain of Responsibility pattern.


The ImageWidgetController manages a list of ImageWidgetActionHandler specializations. When receiving an event, a resizing action or a move action from a widget (identified by its string identifier), it looks through this list for suitable action handlers: all registered action handlers receive the event and may process it. If no suitable action handler is found, the event is discarded.


The ImageWidgetActionHandler is an abstract class defining the minimum needed to implement action handling. Any class deriving from it can be used as an action handler in the ImageWidgetController class. Three methods are defined, to handle events, resizing actions and move actions. These methods return true if the event is handled and false otherwise. Please note that these methods also receive the string identifier of the emitting widget, so that an action handler can respond only to events from a given widget. Specializations of ImageWidgetActionHandler may have full access to the ImageView and/or the ImageViewerModel.
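The dispatch scheme can be sketched as follows. This is a simplified, hypothetical version (the real handler interface has separate methods for events, resizing and move actions, and integrates with the OTB smart pointer machinery):

```cpp
#include <memory>
#include <string>
#include <vector>

// Abstract handler: returns true when it handles the event from widget `id`.
class ImageWidgetActionHandler {
public:
  virtual ~ImageWidgetActionHandler() = default;
  virtual bool HandleWidgetEvent(const std::string& id, int event) = 0;
};

// Controller: forwards each event to every registered handler (Chain of
// Responsibility); the event is simply discarded when no handler accepts it.
class ImageWidgetController {
public:
  void AddActionHandler(std::unique_ptr<ImageWidgetActionHandler> h) {
    m_Handlers.push_back(std::move(h));
  }
  bool HandleWidgetEvent(const std::string& id, int event) {
    bool handled = false;
    for (auto& h : m_Handlers)
      handled = h->HandleWidgetEvent(id, event) || handled;
    return handled;
  }
private:
  std::vector<std::unique_ptr<ImageWidgetActionHandler>> m_Handlers;
};

// Example handler responding only to events coming from the "Full" widget.
class FullViewHandler : public ImageWidgetActionHandler {
public:
  bool HandleWidgetEvent(const std::string& id, int) override {
    return id == "Full";
  }
};
```

The string identifier is how a handler restricts itself to a single widget, as described above.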

Standard subclasses of ImageWidgetActionHandler include:

  • WidgetResizingActionHandler: implements basic resizing control over the model for the scroll, full and zoom views,
  • ChangeExtractRegionActionHandler: changes the region displayed in the full view,
  • ChangeScaledExtractRegionActionHandler: changes the region displayed in the zoom view,
  • ChangeScaleActionHandler: changes the zoom factor of the zoom view,
  • PixelDescriptionActionHandler: triggers the update of the PixelDescriptionModel when hovering over the image.


The ImageViewerModel is in charge of rendering and rasterizing a stack of objects deriving from the abstract type Layer. An ImageLayer is one of the most obvious derived types, though there may be other types of layers, such as VectorLayer or PointSetLayer for instance (not yet implemented). A layer owns a specialization of BlendingFunction defining how to blend it with the next layer in the stack. In addition, an ImageLayer owns a specialization of RenderingFunction used to cast the input image spectral space into the displayable space (itk::RGBPixel<unsigned char> for instance).


A Layer is an abstract class representing data that can be rendered directly to the screen in a Scroll/Full/Zoom fashion. As such, it defines abstract methods such as Render(), to be implemented in specializations, as well as three rendered image pointers corresponding to each of the Scroll/Full/Zoom views.


An ImageLayer is a specialization of Layer designed to render images to the screen. It holds an input image pointer, as well as a quicklook pointer (the quicklook has to be generated outside of the layer) and a histogram pointer. It also owns a specialization of RenderingFunction responsible for converting the input image space to RGB displayable data. The implementation of the Render() method from the base class Layer regenerates the histogram if needed, extracts the Full (ExtractRegion) and Zoom (ScaledExtractRegion) views from the input image, and then renders the three views to RGB displayable data using the RenderingFunction along with the RenderingImageFilter. Convenience methods are provided to use the image histogram as a source for the minimum and maximum values used for rendering.

RenderingImageFilter and RenderingFunction

A RenderingFunction is a base class defining how to map the input image space to an RGB displayable space. Rendering is done through two Evaluate() methods, the first processing scalar pixels and the second VariableLengthVector ones. Additional methods allow passing the input minimum and maximum values, and an abstract Initialize() method covers work that only needs to be performed once before rendering the image. The RenderingFunction is to be used with the RenderingImageFilter (which derives from itk::UnaryFunctorImageFilter).

The first available specialization of RenderingFunction is StandardRenderingFunction.

The StandardRenderingFunction does the pixel rendering in two steps:

  • pixel representation: extracts some channels, or performs a simple computation on the channel values (amplitude, phase, etc.),
  • transfer function: scales the representation between 0 and 255 so that it can be viewed on screen. The transfer function can be linear or not, use user-specified or precomputed min/max values, etc.

Using the default template parameters, it performs a standard per-channel normalization based on a transfer function given as a template parameter. For rendering natural images, this function is likely to be the one you need.
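To illustrate the two-step scheme, here is a self-contained sketch of a per-channel linear transfer function. This is not the actual StandardRenderingFunction code: the function names are hypothetical, and in OTB the min/max values would typically come from the image histogram:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Linear transfer: map [min, max] to [0, 255], clamping values outside the range.
unsigned char LinearTransfer(double value, double min, double max) {
  if (max <= min) return 0;
  double scaled = 255.0 * (value - min) / (max - min);
  return static_cast<unsigned char>(std::clamp(scaled, 0.0, 255.0));
}

// Per-channel rendering of a multi-band pixel to displayable channel values.
std::vector<unsigned char> Evaluate(const std::vector<double>& pixel,
                                    const std::vector<double>& min,
                                    const std::vector<double>& max) {
  std::vector<unsigned char> rgb(pixel.size());
  for (std::size_t i = 0; i < pixel.size(); ++i)
    rgb[i] = LinearTransfer(pixel[i], min[i], max[i]);
  return rgb;
}
```

A non-linear transfer (logarithmic, square root, etc.) would only replace the body of LinearTransfer; the per-channel loop stays the same.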


This is the base class for models based on a layer list. It provides the layer list management facilities.


The ImageLayerRenderingModel manages a stack of abstract Layer instances (using standard methods such as Add/Remove/Clear/GetByName). By inheriting from MVCModel, it also holds a list of listeners (instances derived from the ImageLayerRenderingModelListener class) to be notified when the model changes. The public Update() method triggers the rendering of all visible layers, followed by the rasterization of the rendered layers. The rendering of each visible layer is done by first communicating to the Layer the extracted and scaled extracted regions to render, and then triggering the abstract Render() method from the Layer class. What is done to render a layer is left to specializations of the abstract Layer class (see ImageLayer for instance). Once all visible layers have been generated, each visible rendered layer is blended with the next visible layer (if any) in the stack, using its BlendingFunction along with a local instance of the BlendingImageFilter. When this is done, the model notifies all its listeners of the changes.

BlendingImageFilter and BlendingFunction

A BlendingFunction is a base class defining how to blend two RGB images into a single one. Blending is done through an Evaluate() method taking the two input pixels as arguments. The BlendingFunction is to be used along with the BlendingImageFilter (which derives from itk::BinaryFunctorImageFilter).

The first available specialization of BlendingFunction is UniformAlphaBlendingFunction. It performs a standard alpha blending between two layers and is likely to be the one you need for basic transparency between layers.
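A uniform alpha blend of two RGB pixels can be sketched as below. This is illustrative only: the real UniformAlphaBlendingFunction operates on itk::RGBPixel types and is invoked per pixel by the BlendingImageFilter:

```cpp
#include <array>
#include <cstddef>

using RGB = std::array<unsigned char, 3>;

// Uniform alpha blending: out = alpha * front + (1 - alpha) * back,
// applied independently to each of the three channels.
RGB Evaluate(const RGB& front, const RGB& back, double alpha) {
  RGB out{};
  for (std::size_t i = 0; i < 3; ++i)
    out[i] = static_cast<unsigned char>(alpha * front[i] + (1.0 - alpha) * back[i]);
  return out;
}
```

With alpha = 1 the front layer fully hides the back layer; with alpha = 0 it is fully transparent.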

The convenience class ImageLayerGenerator

Building an ImageLayer by hand can be quite tedious, so the ImageLayerGenerator is here to help you. It will generate a fully operational layer that you can fine-tune afterwards. In particular, it will generate the quicklook if you ask it to, or at least advise an appropriate subsampling rate based on your screen resolution.
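The advised subsampling rate could be computed along these lines. This is an assumption about the heuristic (fit the quicklook within the screen), not the actual ImageLayerGenerator code:

```cpp
#include <algorithm>
#include <cmath>

// Smallest integer rate such that the subsampled image fits on the screen.
// A rate of 1 means no subsampling is needed.
unsigned int ComputeSubsamplingRate(unsigned int imageWidth, unsigned int imageHeight,
                                    unsigned int screenWidth, unsigned int screenHeight) {
  double wRate = static_cast<double>(imageWidth) / screenWidth;
  double hRate = static_cast<double>(imageHeight) / screenHeight;
  double rate = std::max(wRate, hRate);
  return rate <= 1.0 ? 1u : static_cast<unsigned int>(std::ceil(rate));
}
```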


The PixelDescriptionModel aims at describing the pixel at the given location from a set of layers.


How do I use this new design in my application?

In this tutorial, we assume that your application follows the Model/View/Controller pattern. If it does not, the tutorial remains valid; simply ignore the code location advice (such as "In your ... code"). Please also note that if you have Model/View/Controller classes, you should define the following local variables as class members instead.

First, include the appropriate headers. In your Model code:

#include "itkRGBPixel.h"
#include "otbImageLayerGenerator.h"
#include "otbImageLayer.h"

In your View code (or anywhere else if you do not follow the Model/View/Controller pattern):

#include "otbImageView.h"

In your Controller code (or anywhere else if you do not follow the Model/View/Controller pattern):

#include "otbImageWidgetController.h"
#include "otbWidgetResizingActionHandler.h"
#include "otbChangeScaledExtractRegionActionHandler.h"
#include "otbChangeExtractRegionActionHandler.h"
#include "otbChangeScaleActionHandler.h"
#include "otbImageLayerRenderingModel.h"

Then, we will typedef each type (assuming there is already an ImageType defining the image type you are working with; whether it is an Image or a VectorImage does not matter). In the Model code:

typedef itk::RGBPixel<unsigned char>                   RGBPixelType;
typedef otb::Image<RGBPixelType,2>                     OutputImageType;
typedef otb::ImageLayer<ImageType>                     LayerType;
typedef otb::ImageLayerGenerator<LayerType>            LayerGeneratorType;
typedef otb::ImageLayerRenderingModel<OutputImageType> ModelType;

In the View code:

typedef otb::ImageView<ModelType>                   ViewType;

In the Controller code:

typedef otb::ImageWidgetController                   ControllerType;
typedef otb::WidgetResizingActionHandler
   <ModelType,ViewType>                             ResizingHandlerType;

In a standard case, you will at least need the WidgetResizingActionHandler to seamlessly handle widget resizing. Other handlers can be removed or replaced by custom handlers at will.

Now, we will create instances of each object. In the Model code:

// Instantiate the model
ModelType::Pointer model = ModelType::New();
// Generate an image layer
LayerGeneratorType::Pointer generator = LayerGeneratorType::New();
generator->SetImage( /** Place the image source here */);

In the View code:

// Instantiate the view
ViewType::Pointer view = ViewType::New();
// Set the model
view->SetModel(/** Place a pointer to the model here */);
view->SetController(/** Place a pointer to the controller here */);

In the Controller code:

// Build a controller
ControllerType::Pointer controller = ControllerType::New();
// Add the resizing handler
ResizingHandlerType::Pointer resizingHandler = ResizingHandlerType::New();
resizingHandler->SetModel(/** Place a pointer to the model here */);
resizingHandler->SetView(/** Place a pointer to the view here */);

More action handlers can be added to the controller at will.

Integrating a widget from the View into an FLTK widget of your own application is as simple as (view is assumed to be a pointer to the view):

view->GetFullWidget()->resize(0,0, /** Your size here */);

In that case, do not forget to remove it from the FLTK widget before destroying it.


How do I create my own RenderingFunction?

To be continued

How do I create my own BlendingFunction?

To be continued

How do I create my own ActionHandler?

To be continued