8.3.3 Assessment Specification (AsSpec)  


    Introduction

    The Assessment Specification contains the description of assessment requirements and goals, assessment methods, assessment criteria derived from the requirements, and the test cases. Coverage of the requirements by the test cases is documented in a coverage matrix. With the help of the Assessment Specification it must be possible to decide if the assessment has been successful or not.

    An Assessment Specification is generated for each object to be assessed; cf. the definition of Objects to be Assessed and Qualification Requirements in the Assessment Plan.

    It is possible to combine several Assessment Specifications into one document.

    Document Index

    1. General Information
    2. Requirements
          2.1. Classification of the Functional Unit with regard to Criticality and IT Security
          2.2. Assessment Requirements
    3. Assessment Methods
    4. Assessment Criteria
          4.1. Coverage
          4.2. Check Lists
          4.3. Termination Criteria
    5. Test Cases
          5.1. Test Case Description
          5.2. Coverage Matrix
                5.2.1. Architecture Elements and Interfaces
                5.2.2. User-level and Technical Requirements

    Document Structure

    1. General Information

    See schema 1. General Information.

    2. Requirements

    2.1. Classification of the Functional Unit with regard to Criticality and IT Security

    The classification of the functional unit (System, Segment, SW Unit/HW Unit, SW Component, SW Module, Database) with regard to criticality and IT security has been defined in the development documents. It is adopted from these documents.

    2.2. Assessment Requirements

    This chapter includes general requirements for an assessment. Examples are:

    3. Assessment Methods

    An assessment is divided into the phases preparation, execution, and evaluation. If the preparation and the evaluation are sufficiently described in the Assessment Procedure, they can be omitted here.

    The preparation for an assessment includes the generation of test data. The methods and procedures for realizing this are determined and described, provided that they are not assumed to be known already.

    Methods to execute the assessment include static analysis, testing, simulation, proof of correctness, symbolic program execution, review, and inspection. The methods for executing the assessment are derived from the classification of the object to be assessed with respect to criticality and IT security, from the measures assigned to the individual levels, and from other quality requirements.

    It is important to specify how results are saved and evaluated, in particular with regard to repeated assessments. It is described which data have to be stored during and after the assessment.

    The methods and procedures for this are determined and described, e.g. the use of automated comparison routines, personal expertise, or maintaining a chronological log.

    4. Assessment Criteria

    This structural item states the criteria of each assessment. They must be established in such a way that the success of the assessment can be evaluated.

    4.1. Coverage

    It is determined how thorough the assessment has to be (e.g. information about path coverage) so that the suitability of the object to be assessed can be guaranteed. In general, the degree of coverage depends on the criticality of the object to be assessed.
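    The dependence of coverage depth on criticality can be sketched as a simple lookup table. The level names, coverage measures, and thresholds below are assumptions chosen for illustration, not values prescribed by the V-Model:

```python
# Illustrative sketch only: the criticality level names and the percentage
# thresholds are invented for this example, not taken from the V-Model.
REQUIRED_COVERAGE = {
    "low":    {"measure": "statement coverage", "minimum": 0.80},
    "medium": {"measure": "branch coverage",    "minimum": 0.95},
    "high":   {"measure": "path coverage",      "minimum": 1.00},
}

def coverage_goal(criticality: str) -> str:
    """Return a human-readable coverage goal for an object's criticality."""
    goal = REQUIRED_COVERAGE[criticality]
    return f"{goal['measure']} >= {goal['minimum']:.0%}"

print(coverage_goal("medium"))  # branch coverage >= 95%
```

    In a real Assessment Specification, such a table would be derived from the criticality classification adopted in chapter 2.1.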

    4.2. Check Lists

    This contains a list of questions to be processed in the course of the product and activity assessment. The following assessment check lists may have to be updated at least for each generic object to be assessed; in critical cases they have to be updated for individual objects to be assessed as well. The questions must be formulated so that potential errors are easily discovered, and the check lists have to cover the various error sources sufficiently.

    The following includes examples of some basic questions for the check lists of the various objects to be assessed. As already mentioned above, they have to be completed and interpreted with regard to the individual project. The general "basic" check list and the corresponding "product" check list have to be used as a starting point for objects to be assessed.

    "Basic" Check List for each product to be assessed:

    (The following questions have in common that they can be answered merely on the basis of the present object to be assessed.)

    (The following questions refer to the development process of a product, i. e. other related products have to be used as input for the assessment.)

    "Product" Check Lists for the individual representatives of the corresponding product classes:

    (These questions refer to error sources typical for that kind of product, i. e. they refer to the content and the particularities of the corresponding products.)

    "Activity" Check List for each activity to be assessed:

    4.3. Termination Criteria

    Termination criteria state the conditions under which the assessment can be considered successfully terminated. This structural item contains both a description of termination criteria for a successful assessment (e. g. the required precision has been reached with a maximum deviation of +/- 0.0005) and for an unsuccessful assessment (e. g. the messages "overflow", "division by zero", "insufficient storage").
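    The two kinds of termination criteria can be sketched as a single check, using the examples from the text: a precision criterion (maximum deviation +/- 0.0005) for success, and runtime messages for failure. The function name and data shapes are illustrative assumptions:

```python
# Sketch of termination-criteria evaluation. The success criterion and the
# failure messages are taken from the examples in the text; the function
# itself is an invented illustration.
MAX_DEVIATION = 0.0005
FAILURE_MESSAGES = {"overflow", "division by zero", "insufficient storage"}

def assessment_terminated_successfully(deviation: float, messages: set) -> bool:
    """True if the success criterion holds and no failure criterion fired."""
    if messages & FAILURE_MESSAGES:          # any failure message terminates
        return False
    return abs(deviation) <= MAX_DEVIATION   # required precision reached

print(assessment_terminated_successfully(0.0003, set()))        # True
print(assessment_terminated_successfully(0.0003, {"overflow"})) # False
```

    The point of the sketch is that both the success and the failure conditions must be stated precisely enough to be mechanically decidable.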

    5. Test Cases

    5.1. Test Case Description

    This contains a description of the test cases. With the test cases listed, the above-mentioned termination criteria must be sufficiently met and also decidable.

    Test case descriptions can be specified for each of the following products:

    5.2. Coverage Matrix

    This matrix documents the coverage of the requirements for the object to be assessed by the individual test cases.
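    A coverage matrix of this kind can be sketched as a mapping from requirements to the test cases that cover them; the requirement and test-case identifiers below are invented for illustration:

```python
# Minimal sketch of a coverage matrix: each requirement maps to the set of
# test cases covering it. The identifiers are made up for this example.
coverage = {
    "REQ-1": {"TC-1", "TC-3"},
    "REQ-2": {"TC-2"},
    "REQ-3": set(),            # not yet covered by any test case
}

def uncovered(matrix: dict) -> list:
    """Requirements not covered by any test case."""
    return sorted(req for req, tcs in matrix.items() if not tcs)

def print_matrix(matrix: dict) -> None:
    """Print the matrix as a requirements-by-test-cases grid."""
    test_cases = sorted({tc for tcs in matrix.values() for tc in tcs})
    print("       " + " ".join(test_cases))
    for req, tcs in sorted(matrix.items()):
        print(req + "  " + "    ".join("X" if tc in tcs else "." for tc in test_cases))

print_matrix(coverage)
print("uncovered:", uncovered(coverage))  # uncovered: ['REQ-3']
```

    Checking for empty rows directly supports the purpose stated in the introduction: deciding from the Assessment Specification whether the assessment can be successful at all.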

    5.2.1. Architecture Elements and Interfaces

    This chapter contains a documentation of the coverage of architecture elements of an object to be assessed (e. g. overlap of integrated items by SW Modules, external and internal interfaces) and of code elements (e. g. branch/condition/path coverage) by the test cases.

    It is important that the interfaces are covered by corresponding test cases, in addition to the coverage of the individual objects to be assessed.

    5.2.2. User-level and Technical Requirements

    This chapter contains a documentation of the coverage of user-level and technical requirements (e. g. by covering equivalence classes and limit values or time and quantity requirements) by the test cases.
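    Deriving test inputs that cover equivalence classes and limit values can be sketched for a numeric input with a valid range; the range 1..100 and both helper functions are invented for illustration:

```python
# Sketch of boundary-value analysis and equivalence partitioning for a
# numeric input with a valid range [lo, hi]. Range and helpers are invented.

def boundary_values(lo: int, hi: int) -> list:
    """Test inputs immediately around the limits of the valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_class_representatives(lo: int, hi: int) -> list:
    """One representative per class: below, inside, and above the range."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

print(boundary_values(1, 100))                    # [0, 1, 2, 99, 100, 101]
print(equivalence_class_representatives(1, 100))  # [-9, 50, 110]
```

    Each derived input would then appear as a test case in the coverage matrix of chapter 5.2, marked against the requirement that defines the valid range.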

    Links to the V-Model Mailinglist

    Mail 0723 - Abdeckungsmatrix (coverage matrix) (723)
    Mail 0373 - Pruefkriterien V-Modell'97 (assessment criteria, V-Model'97) (573)

Last Updated 07.Mar.2004 by C. Freericks