Note the Visualisation Refactoring details are on another page.
This page is intended to define the behaviour and features that the refactored visualisation classes should handle. Items are listed without any priority assessment.
- Visualisation classes should act like other filters: first plug the inputs and tune the parameters, then make a single call to the Update() method. Changing the input or parameters and calling Update() again should update everything without any restriction.
- For now, the visualisation classes derive from OpenGl windows. Pushing the previous point a little further, we could consider that a visualisation filter is a ProcessObject with images as inputs and OpenGl displays as outputs. This would mean creating DataObjects corresponding to the OpenGl windows.
- Normalization should be done using already existing OTB filters within some internal or external pipeline.
- Quicklook generation should be done using already existing OTB filters within some internal or external pipeline.
- It should be possible to display overlays (images and vector data) through a lookup table. => this also calls for a more global reflection on lookup tables in OTB in general
- Decomposing the viewer into its elements is necessary: the separation between the three windows, the histograms, and the pixel location and value display has to be respected. Some applications may need the scroll window only (e.g. orthorectification), or only the zoom window... However, since the three windows are quite often needed together, it would be easier to have a ready-to-use component with the interaction between windows already wired.
- The pixel location and value display may not be tied to one viewer: when several windows are displayed, only one pixel location should be shown. This will be easier with an MVC model.
- Where should object drawing go? Is drawing polygons, lines, etc. similar whether done on the viewer or directly on an image to be saved? Could this be factorized?
- Action plugin system: quite often, the only thing we need to change in the visualisation system between applications is the action triggered by a click. For some applications, successive clicks should add points to a point set; for others, successive clicks will define a line until a double click finishes it; yet others will close the line at that final click, yielding a polygon. The underlying models for these actions already exist (e.g. otb::Polygon), and the view only needs to expose generic callbacks. All we need is to define an action controller linking the two. This action controller would be called by the global application controller.
- We put too much responsibility on the base widget: the only thing this base widget should be responsible for is rendering a piece of RGB unsigned char data to the screen along with overlays (including a piece of RGBA unsigned char data, a labeled image with a color table, and vector data). It should certainly not trigger pipeline updates, add polygons to a list, or whatever else it is doing for now. The base widget is 'the View': it should render to screen and forward user interactions, nothing more, nothing less.
- More on action plugins: the list of possible actions on a base widget is really short: mouse clicks, mouse drags, mouse scrolls, and perhaps keystrokes. The action controller therefore has a really simple interface.
- What is the model? The model includes everything happening before rendering: a pipeline including ROI extraction, image normalization, etc.