Request for Comments-29: Testing, Benchmarking & Validation of Monteverdi

Status

  • Author: Stéphane ALBERT
  • Submitted on May ??, 2016
  • Open for comments

Content

What changes will be made and why they would make a better Orfeo ToolBox?

This Request for Comments is about the validation, testing and benchmarking of Monteverdi, which is an interactive Graphical User Interface (GUI) application rendering OpenGL scenes.

A classical way to test GUIs is to have human users manually test the software:

  • according to some written validation plan (atomic test-cases of functionalities); and
  • freely, without any constraints.

This method:

  • is lengthy and time-consuming
  • requires one or more people who have not been involved in the development/design process
  • is repetitive (which is a source of human error during validation).

Although manual validation of GUI applications can hardly be avoided, the goal of this RFC is to discuss ways and tools to automate some parts of the validation, testing and benchmarking of Monteverdi. This would save production time and increase the overall quality of the software.

To reach this goal, several topics should be addressed:

  1. How can rendering performance be measured (and possibly compared)?
  2. Which data should serve as reference testing data?
  3. How can GUI validation, testing and benchmarking be automated?
  4. What production cost will be needed to develop and maintain the testing scripts?
  5. How can validation, testing and benchmarking be integrated into the OTB/Monteverdi release process?

Data

Because Monteverdi cannot be tested with all user data, the validation, testing and benchmarking of Monteverdi should be performed on an agreed set of representative data and/or combinations of singular items, which will be considered the reference data:

  • supported formats (TIF, JP2K, etc.)
  • supported product encodings (PLEIADES, QUICKBIRD, SPOT5, etc.)
  • representatively sized data
  • some exceptional test-case data:
    • 1x1 pixel images
    • images with no representative pixels (e.g. an image filled only with the no-data value)
  • Compound Data Set
  • etc.

Rendering performance (basic benchmarking)

Complete benchmarking of the GUI, OpenGL rendering and data I/O could be a complex task. However, an accurate frame rate (the inverse of the rendering time), as in video games, could be measured using performance counters [1] [2] [3] and recorded into memory buffers for each otb::GlView and for each frame. When benchmarking/testing has finished, these results could be output to a textual trace file for archiving or non-regression comparison.
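
As an illustration, the per-frame recording could look like the sketch below. This is only a hedged sketch: the FrameTimeRecorder class and its methods are hypothetical names, not existing Ice/OTB API, and std::chrono is used here as a stand-in for the performance counters discussed in the notes that follow.

  // Hypothetical sketch: record one timing sample per rendered frame of an
  // otb::GlView into a memory buffer, then dump the buffer to a textual
  // trace file once benchmarking has finished.
  #include <chrono>
  #include <cstddef>
  #include <fstream>
  #include <string>
  #include <vector>

  class FrameTimeRecorder
  {
  public:
    // Call right before and right after rendering one frame.
    void StartFrame() { m_Start = Clock::now(); }
    void EndFrame()
    {
      const double seconds =
        std::chrono::duration<double>(Clock::now() - m_Start).count();
      m_FrameTimes.push_back(seconds);
    }

    // Write one line per frame (index, render time, frequency) so the trace
    // can be archived or compared for non-regression.
    void DumpTrace(const std::string& filename) const
    {
      std::ofstream trace(filename.c_str());
      for (std::size_t i = 0; i < m_FrameTimes.size(); ++i)
        trace << i << '\t' << m_FrameTimes[i] << '\t'
              << (m_FrameTimes[i] > 0.0 ? 1.0 / m_FrameTimes[i] : 0.0) << '\n';
    }

  private:
    // std::chrono stands in for the portable performance-counter API
    // discussed below.
    typedef std::chrono::steady_clock Clock;
    Clock::time_point m_Start;
    std::vector<double> m_FrameTimes;
  };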

Notes about Performance Counters:

A portable performance counter API should be added to the OTB library and implemented for each supported platform (Windows, OSX and GNU/Linux) [4].
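
A minimal sketch of what such a portable counter could look like is given below. The function name is hypothetical; the non-Windows branch assumes a POSIX system recent enough to provide clock_gettime() (older OSX versions would need mach_absolute_time() instead).

  #include <cstdint>

  #if defined(_WIN32)
    #include <windows.h>
  #else
    #include <time.h>  // clock_gettime(), CLOCK_MONOTONIC
  #endif

  // Returns a monotonic time stamp in nanoseconds, suitable for measuring
  // rendering durations.
  inline std::int64_t GetPerformanceCounterNs()
  {
  #if defined(_WIN32)
    LARGE_INTEGER frequency;
    LARGE_INTEGER counter;
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&counter);
    return static_cast<std::int64_t>(
      counter.QuadPart * (1000000000.0 / frequency.QuadPart));
  #else
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<std::int64_t>(ts.tv_sec) * 1000000000 + ts.tv_nsec;
  #endif
  }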

Moreover, otb::GlView::HeavyRender() [5] performs the data update before calling ::LightRender() [6]. So, the ::HeavyRender() time equals the ::LightRender() time plus the data-update time.

Frame-rate frequencies could be displayed as a text overlay on each otb::GlView in the Ice engine so that both IceViewer and Monteverdi (and possibly an external application) could benefit from it.

As an option, a multi-color polyline chart of frame-rate frequencies, such as in a system process manager, could be displayed as an overlay on the otb::GlView and included in the Ice rendering engine, to visually check performance during development and human testing.

Note about benchmarking and performance:

Benchmarking and performance measurements must be made on reference platforms (same OS, native or virtual machine, same 3D hardware, same OpenGL driver and libraries) with no other process running on the platform except system processes (OS and windowing system) and the benchmarking scripts. Otherwise, the measurements will be neither representative nor comparable.

Automation of Ice rendering engine benchmarking

The Ice rendering engine is used in Monteverdi through Qt OpenGL widgets. The automation of testing these OpenGL widgets (along with their content) is the same as for general GUI widgets (see Automation of GUI testing). However, a GUI-independent automation of Ice rendering engine benchmarking could be achieved with an integrated replayer, so that rendering benchmarks could be compared from build to build on the same basis.

A replayer API could be implemented in the Ice rendering engine in order to animate the movement of the view through given arbitrary test data along a pre-defined arbitrary polyline path (which simulates the user's navigation in the viewport). Several polyline paths could be saved in external files, given as input data along with arbitrary input products (see Reference data), and passed to the replayer API. Rendering frequencies could be recorded in memory during the replay and output to textual log files for post-processing after benchmarking has finished. The replay could be run nightly/automatically or on a tester's demand, with interactive display of the rendering frequencies.
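
To make this concrete, the replayer could expose an interface along the following lines. This is purely an illustrative sketch: the class and method names (GlViewReplayer, LoadPath, Play, DumpLog) are hypothetical and not part of the current Ice engine.

  #include <string>
  #include <vector>

  // One vertex of the navigation polyline (hypothetical structure).
  struct ViewPoint
  {
    double x;      // viewport center coordinates
    double y;
    double scale;  // zoom level at this point of the path
  };

  class GlViewReplayer
  {
  public:
    // Load a pre-defined polyline path simulating the user's navigation,
    // stored in an external file given as input data.
    void LoadPath(const std::string& pathFile);

    // Step the view along the path, render one frame per step through the
    // Ice engine, and record the measured frame frequency in memory.
    void Play();

    // Dump the recorded frequencies to a textual log file for
    // post-processing (e.g. build-to-build comparison).
    void DumpLog(const std::string& logFile) const;

  private:
    std::vector<ViewPoint> m_Path;
    std::vector<double>    m_Frequencies;  // one entry per rendered frame
  };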

Automation of GUI testing

Besides the testing framework provided within Qt, there are several other tools which might be of interest for testing the Monteverdi GUI [7]. Some of these tools are discussed hereafter.

QTestLib

The Qt framework provides the QTest module [8] within the QTestLib [9] framework to automate the testing of GUI components. It can simulate user events by calling C++ functions of the API, which activate the Qt signal/slot mechanism. Results are then checked via direct C++ function calls [10]. These seem to be useful tools to automate the unit testing of GUI components such as individual widgets. Even though they could be used to test the overall Monteverdi application, this may not be their primary goal, mostly because they require knowledge of the C++ GUI source code. QTestLib cannot be used to test nightly/release packages. However, QTestLib-based unit testing may be integrated into the OTB nightly CTest framework and dashboard. This testing framework also provides some benchmarking components.
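
For instance, a minimal QTestLib unit test could simulate a user click and check the resulting signal. The sketch below uses a plain QPushButton for illustration; a real Monteverdi test would instantiate one of its own GUI components instead.

  #include <QtTest/QtTest>
  #include <QPushButton>

  class ButtonClickTest : public QObject
  {
    Q_OBJECT

  private slots:
    void clickEmitsSignal()
    {
      QPushButton button("Apply");
      QSignalSpy spy(&button, SIGNAL(clicked()));

      // Simulate the user event through the QTest API.
      QTest::mouseClick(&button, Qt::LeftButton);

      // Check the result via a direct C++ call.
      QCOMPARE(spy.count(), 1);
    }
  };

  QTEST_MAIN(ButtonClickTest)
  #include "buttonclicktest.moc"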

Note: These tools could also be useful to test the GUIs of OTB-applications.

The QTestLib framework could be useful to:

  • Script unit testing of each Monteverdi component separately
  • Script validation of Monteverdi overall GUI by duplicating the Monteverdi main() into some test class
  • Script benchmarking of Monteverdi Ice views (see the QBENCHMARK sketch after these lists)
  • Include Qt unit testing into CTest nightly testing and dashboard overview
  • Optionally, script unit testing of the GUIs of OTB-applications

The QTestLib framework will not be useful to:

  • Test nightly and release packages
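
For the benchmarking of Ice views mentioned above, QTestLib provides the QBENCHMARK macro. The following is a hedged sketch: it benchmarks the repaint of a plain QWidget for illustration, whereas an actual test would use a Monteverdi Ice view widget.

  #include <QtTest/QtTest>
  #include <QWidget>

  class ViewRenderBenchmark : public QObject
  {
    Q_OBJECT

  private slots:
    void repaintBenchmark()
    {
      QWidget view;
      view.resize(512, 512);
      view.show();

      // QBENCHMARK runs its body several times and reports the measured
      // time per iteration in the test output.
      QBENCHMARK
      {
        view.repaint();
      }
    }
  };

  QTEST_MAIN(ViewRenderBenchmark)
  #include "viewrenderbenchmark.moc"
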
The Linux Desktop Testing Project

LDTP [11] is a cross-platform GUI test automation tool which uses the Assistive Technology API (AT-API) to communicate with the desktop and GUI applications, provided they are accessibility-enabled (which is a feature of the Qt framework and of the main GNU/Linux desktop environments). This design makes it desktop, GUI API, graphics and source code agnostic. It is based on the concepts of the Software Automation Framework Support [12].

It provides scripting-language APIs (one of which is Python [13]) to script tests and to access the components of the desktop and of GUI applications via the AT-API (see the extensive tutorial [14]). As with QTestLib, the API allows accessing GUI widgets and checking their content, but also waiting for some event or result, conditionally checking whether a dialog has popped up, and taking screenshots.

Compared to QTestLib, LDTP offers the same advantages but can also be used to test the nightly and release packages. Its disadvantages might be that it:

  • needs a language other than C++ (which is used in OTB and Monteverdi) to script the tests
  • may be more difficult to integrate into the CTest unit-testing process and dashboard.

Sikuli

Sikuli [15] is another tool, which is more of an automation-scripting facility than a testing tool. It works using image recognition, so it may require the testing scripts to be adapted for each desktop theme and/or environment.

LDTP seems more appropriate than Sikuli.

Development and maintenance of testing scripts

Independently of which testing framework is chosen, exhaustive testing would require each component and each feature to be unit tested as a single item, and also as a part of the whole Monteverdi OpenGL GUI application. If a given feature or component can be tested automatically, it would require one or more test classes/functions and/or input data.

Moreover, when the Monteverdi GUI or its features are improved or modified, the testing scripts would have to be maintained.

Finally, the output of the unit tests would have to be analyzed regularly.

Validation process

To be effective, the validation, testing and benchmarking of Monteverdi would best be integrated into the release process and run on frozen, non-evolving source code, such as a release or stable candidate, or a dedicated stable branch of the source-code repository. This point is important because any modification of the source code could invalidate previously run tests.

For example, a first release candidate #0 could be built and the whole validation process run on it to find bugs, regressions, crashes, etc. When this step is done, the results would be analyzed, registered in the bug tracker and tagged as major, minor, blocking/non-blocking, etc. They would then be fixed and a new release candidate #1 packaged. The same steps apply until the remaining issues are considered minor and non-blocking.

In this way, we ensure that the validation plan converges to an acceptable target and that the number of remaining issues decreases.

Moreover, even if automatic validation, testing and benchmarking are helpful, human testers who are neither the developers nor the designers involved should be included in the validation process to report bugs and give their feedback.

When will those changes be available (target release or date)?

OTB-5.6/Monteverdi-3.4

Who will be developing the proposed changes?

TBD

Community

Comments

Support

Corresponding Requests for Changes

TODO