Welcome to the Embedded Systems Testing Benchmarks Site

This site was created on 2011-05-23 and is continuously updated and maintained by the research group AGBS (Operating Systems and Distributed Systems) at the University of Bremen, Department of Mathematics and Computer Science (FB3).

Objectives

The Embedded Systems Testing Benchmarks Site provides benchmarks for automated model-based testing (MBT) tools. A suggestion for how to structure these benchmarks has been published in
    [1] Jan Peleska, Artur Honisch, Florian Lapschies, Helge Löding, Hermann Schmid, Peer Smuda, Elena Vorobev, and Cornelia Zahlten: A Real-World Benchmark Model for Testing Concurrent Real-Time Systems in the Automotive Domain. In: Burkhart Wolff and Fatiha Zaidi (Eds.): Testing Software and Systems. Proceedings of the 23rd IFIP WG 6.1 International Conference, ICTSS 2011, Paris, France, November 2011. Springer, LNCS 7019, pp. 146-161 (2011).
An extended version of this article may be downloaded here (PDF file). Suggestions from other researchers for extending or improving the benchmark criteria are welcome; to this end, please contact Jan Peleska, jp_at_informatik.uni-bremen.de.

The components of an MBT benchmark are

  • a formal model from which test cases and test data are to be derived automatically,
  • a test suite that has been generated by means of a reference tool, and
  • benchmark evaluation data associated with the tool and the test suite.
As suggested by the authors in [1; Section 4], benchmarks for model-based testing should be structured into two sub-classes.
  • Test strength benchmarks investigate the error detection capabilities of concrete test cases and test data generated by MBT tools.
  • Test generation benchmarks take formal specifications of test cases as input (symbolic test cases, in the terminology of [1]) and measure the time needed to generate concrete test data.
The first benchmark is provided by the researchers listed above; it also serves as an example of how other contributions should be structured.
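
To make the two sub-classes more concrete, the following minimal Python sketch (not prescribed by [1]; the mutant identifiers and results are purely hypothetical) shows how a test strength result could be summarised: a generated test suite is executed against a set of mutated implementations, and the fraction of detected ("killed") mutants is reported. For a test generation benchmark, one would instead record the time needed to derive concrete test data from each symbolic test case.

    # Minimal sketch of summarising a test strength benchmark run (hypothetical data).
    from typing import Dict

    def mutation_score(results: Dict[str, bool]) -> float:
        """results maps a mutant identifier to True if the test suite detected it."""
        if not results:
            raise ValueError("no mutants were executed")
        killed = sum(1 for detected in results.values() if detected)
        return killed / len(results)

    if __name__ == "__main__":
        # Hypothetical outcome of running one generated test suite against four mutants.
        outcome = {"mutant-01": True, "mutant-02": True, "mutant-03": False, "mutant-04": True}
        print(f"mutation score: {mutation_score(outcome):.2f}")  # prints 0.75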

Benchmarks

  • Turn Indicator Model Rev. 1.6 (last update: 2014-04-17)
    This model is currently used by Daimler for system tests of functionality related to the turn indicator lights. An introductory overview of the model is given in [1] above; more detailed explanations are contained in the model itself, in the form of notes.
  • Model Based Testing from Controlled Natural Language Requirements (last update: 2013-09-16)
    These benchmarks are currently being elaborated here to provide extensive material for model-based testing against controlled natural language (CNL) requirements: specifications written in natural-language style are parsed and evaluated with respect to their behavioural semantics. From this evaluation a formal model is created automatically, so that the principles, algorithms, and tools of model-based testing can be applied to create test cases and test oracles that verify systems directly against their requirements. The purpose of this approach is to allow model-based testing to be applied in early project phases, where detailed formal test models - such as the one made available above - are still under construction. A toy illustration of this requirements-to-model idea is sketched after this list.

    The material on CNL testing currently made available complements the article Model Based Testing from Controlled Natural Language Requirements, submitted by the authors Gustavo Carvalho, Flavia Barros, Florian Lapschies, Uwe Schulze, and Jan Peleska to FTSCS 2013, the Second International Workshop on Formal Techniques for Safety-Critical Systems.

  • Ceiling Speed Monitoring Model Rev. 1.0 (last update: 2014-05-11)
    The ceiling speed monitoring model describes a functionality of the European Vital Computer (EVC), the on-board controller specified in the public European Train Control System (ETCS) specification. Ceiling speed monitoring is part of the speed and distance monitoring function, which ensures “... the supervision of the speed of the train versus its position, in order to assure that the train remains within the given speed and distance limits.” [UNISIG, ERTMS/ETCS System Requirements Specification, Chapter 3, Sec. 13.1.1]. This particular sub-function supervises the observance of the maximum speed allowed according to the current most restrictive speed profile (MRSP). A simplified sketch of this supervision logic is given after this list.
  • Experimental data for Safety-complete H-Method
    This zip archive contains the experimental data presented and discussed in

    Wen-ling Huang, Sadik Ozoguz, and Jan Peleska: Safety-complete Test Suites.

    This article is currently under review for publication in the Software Quality Journal. We will present a detailed discussion of this data here after the article has been published.

  • Experimental data for Strong Reduction Testing
    This zip archive contains the experimental data presented and discussed in

    Robert Sachtleben and Jan Peleska: Effective Grey-Box Testing With Partial FSM Models.

    This article has not yet been published. We will present a detailed discussion of this data here after it has been published.
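
As a toy illustration of the CNL-based approach described above (and emphatically not the authors' actual tool chain), the following Python sketch matches a single hypothetical requirement pattern of the form "When <condition>, the <system> shall <action>." and turns it into a guarded transition. A real CNL front end supports a much richer controlled grammar and a proper behavioural semantics.

    # Toy CNL illustration: one hypothetical sentence pattern mapped to a guarded transition.
    import re
    from dataclasses import dataclass

    @dataclass
    class Transition:
        guard: str   # condition extracted from the requirement
        action: str  # reaction the system shall perform

    # Hypothetical CNL pattern; the actual controlled grammar is defined by the tool chain.
    PATTERN = re.compile(r"^When (?P<guard>.+?), the (?P<sys>.+?) shall (?P<action>.+?)\.$")

    def parse_requirement(sentence: str) -> Transition:
        m = PATTERN.match(sentence)
        if m is None:
            raise ValueError(f"sentence does not match the CNL pattern: {sentence!r}")
        return Transition(guard=m.group("guard"), action=m.group("action"))

    if __name__ == "__main__":
        req = "When the request button is pressed, the controller shall switch the lights on."
        print(parse_requirement(req))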
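
The following simplified Python sketch illustrates the ceiling speed supervision idea referenced in the ceiling speed monitoring entry above. The threshold offsets used here are illustrative assumptions only; the normative supervision limits and the complete trigger and revocation conditions are defined in the ETCS specification and in the benchmark model itself.

    # Simplified ceiling speed supervision sketch (illustrative margins, not the normative ones).
    from enum import Enum

    class SupervisionStatus(Enum):
        NORMAL = "normal"
        OVERSPEED = "overspeed"              # estimated speed exceeds the MRSP ceiling
        WARNING = "warning"                  # driver warning
        SERVICE_BRAKE = "service brake"      # service brake intervention
        EMERGENCY_BRAKE = "emergency brake"  # emergency brake intervention

    # Illustrative supervision margins in km/h (assumptions for this sketch).
    DV_WARNING = 4.0
    DV_SBI = 5.5
    DV_EBI = 7.5

    def ceiling_speed_status(v_est: float, v_mrsp: float) -> SupervisionStatus:
        """Classify the estimated train speed v_est against the MRSP ceiling v_mrsp."""
        if v_est > v_mrsp + DV_EBI:
            return SupervisionStatus.EMERGENCY_BRAKE
        if v_est > v_mrsp + DV_SBI:
            return SupervisionStatus.SERVICE_BRAKE
        if v_est > v_mrsp + DV_WARNING:
            return SupervisionStatus.WARNING
        if v_est > v_mrsp:
            return SupervisionStatus.OVERSPEED
        return SupervisionStatus.NORMAL

    if __name__ == "__main__":
        print(ceiling_speed_status(v_est=166.0, v_mrsp=160.0))  # SupervisionStatus.SERVICE_BRAKE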

 
   
Author: jp
Last updated: November 2, 2022