Welcome to the Embedded Systems Testing Benchmarks Site

This site was created on 2011-05-23 and is continuously updated and maintained by the research group AGBS - Operating Systems and Distributed Systems - at the University of Bremen, Department of Mathematics and Computer Science (FB3).

Objectives

The Embedded Systems Testing Benchmarks Site provides benchmarks for automated model-based testing (MBT) tools. A suggestion for how to structure these benchmarks has been published in
    [1] Jan Peleska, Artur Honisch, Florian Lapschies, Helge Löding, Hermann Schmid, Peer Smuda, Elena Vorobev, and Cornelia Zahlten: A Real-World Benchmark Model for Testing Concurrent Real-Time Systems in the Automotive Domain. In: Burkhart Wolff and Fatiha Zaidi (eds.): Testing Software and Systems. Proceedings of the 23rd IFIP WG 6.1 International Conference, ICTSS 2011, Paris, France, November 2011. Springer, LNCS 7019, pp. 146-161 (2011).
An extended version of this article may be downloaded here (PDF). Suggestions from other researchers for extending or improving the benchmark criteria are welcome; to this end, please contact Jan Peleska, jp_at_informatik.uni-bremen.de.

Basically, the components of an MBT benchmark are

  • a formal model from which test cases and test data are to be derived automatically,
  • a test suite that has been generated by means of a reference tool, and
  • benchmark evaluation data associated with the tool and the test suite.
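
To make the expected structure of a contribution more concrete, here is a minimal Python sketch of one possible in-memory representation of such a benchmark package; the class and field names are our own illustration, not a prescribed format.

    from dataclasses import dataclass, field

    @dataclass
    class MBTBenchmark:
        """Hypothetical container for the three components of an MBT benchmark."""
        model: str                      # the formal model artefact (path or URL)
        test_suite: list[str]           # tests generated by the reference tool
        evaluation_data: dict[str, float] = field(default_factory=dict)
        # e.g. requirements coverage, mutation scores, generation times
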
As suggested by the authors in [1, Section 4], benchmarks for model-based testing should be structured into two sub-classes.
  • Test strength benchmarks investigate the error detection capabilities of concrete test cases and test data generated by MBT tools.
  • Test generation benchmarks take formal specifications of test cases as input (symbolic test cases in the terminology of [1]) and measure the time needed to generate concrete test data.
The first benchmark listed below is provided by the researchers named above; it also serves as an example of how other contributions should be structured.
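
As a deliberately simplified illustration of the two sub-classes, the following Python sketch first computes the mutation score of a small test suite against a seeded error (test strength) and then times a brute-force search for concrete test data satisfying a symbolic test case (test generation). The toy system under test and all names are our own assumptions; real benchmarks operate on full behavioural models and use constraint solvers rather than enumeration.

    import itertools, time

    # Toy system under test: the left indication lamps flash iff the
    # turn indicator lever is set to LEFT and the ignition is on.
    LEFT, NEUTRAL = "LEFT", "NEUTRAL"

    def sut(lever, ignition_on):
        return lever == LEFT and ignition_on

    # Hypothetical mutant with a seeded error: the ignition check is dropped.
    def mutant_1(lever, ignition_on):
        return lever == LEFT

    # Test strength: fraction of mutants killed by the test suite.
    test_suite = [(LEFT, True), (LEFT, False), (NEUTRAL, True)]

    def kills(mutant, suite):
        return any(sut(*tc) != mutant(*tc) for tc in suite)

    mutants = [mutant_1]
    score = sum(kills(m, test_suite) for m in mutants) / len(mutants)
    print(f"mutation score: {score:.2f}")

    # Test generation: time needed to find concrete test data satisfying
    # the symbolic test case "lamps are flashing".
    start = time.perf_counter()
    lever, ignition = next((lv, ig)
                           for lv, ig in itertools.product([LEFT, NEUTRAL],
                                                           [True, False])
                           if sut(lv, ig))
    elapsed = time.perf_counter() - start
    print(f"concrete test data: lever={lever}, ignition={ignition} "
          f"(found in {elapsed:.6f}s)")
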

Benchmarks

  • Turn Indicator Model Rev. 1.4 (last update: 2011-08-03)
    This model is currently used by Daimler for system tests of functionality related to the turn indicator lights. An introductory overview of the model is given in [1] above; more detailed explanations are contained in the model itself, in the form of notes.

  • Model Based Testing from Controlled Natural Language Requirements (last update: 2013-05-09)
    This benchmark is currently being elaborated; it presents extensive material for model-based testing against controlled natural language (CNL) requirements: specifications written in natural-language style are parsed and evaluated with respect to their behavioural semantics. From this evaluation a formal model is created automatically, so that the principles, algorithms, and tools of model-based testing can be applied to create test cases and test oracles verifying systems directly against their requirements. The purpose of this approach is to allow model-based testing to be applied in early project phases, where detailed formal test models - such as the one made available above - are still under construction. A toy sketch of this principle is given after this list.

    The material on CNL testing currently made available complements the article Model Based Testing from Controlled Natural Language Requirements, submitted by Gustavo Carvalho, Flavia Barros, Florian Lapschies, Uwe Schulze, and Jan Peleska to FMICS 2013, the 18th International Workshop on Formal Methods for Industrial Critical Systems.

    Further material and extensive expositions on CNL testing will follow soon.
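
    To illustrate the CNL principle with a toy example (the grammar, behavioural semantics, and tooling of the actual benchmark are far richer), the following Python sketch parses a single controlled-natural-language requirement into a guard/expected-output pair and uses it as a test oracle; the sentence pattern and all identifiers are our own assumptions.

        import re

        # One admissible sentence pattern of a hypothetical controlled grammar:
        # "When the <input> is <value>, the <output> shall be <value>."
        PATTERN = re.compile(
            r"When the (?P<inp>\w+) is (?P<inval>\w+), "
            r"the (?P<out>\w+) shall be (?P<outval>\w+)\."
        )

        def parse_requirement(sentence):
            """Map one CNL requirement to a (guard, expected-output) pair."""
            m = PATTERN.fullmatch(sentence)
            if m is None:
                raise ValueError(f"not in the controlled grammar: {sentence!r}")
            return (m["inp"], m["inval"]), (m["out"], m["outval"])

        def oracle(requirement, inputs, outputs):
            """Check one observed input/output snapshot against the requirement."""
            (inp, inval), (out, outval) = parse_requirement(requirement)
            if inputs.get(inp) == inval:            # guard holds ...
                return outputs.get(out) == outval   # ... so the output must match
            return True                             # guard is false: nothing to check

        req = "When the lever is left, the left_lamps shall be flashing."
        print(oracle(req, {"lever": "left"}, {"left_lamps": "flashing"}))  # True
        print(oracle(req, {"lever": "left"}, {"left_lamps": "off"}))       # False
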

 
   