Traceability Configuration

A reference for the Ketryx traceability configuration

Ketryx allows full customization of the displayed columns, column relationships (traces), traceability checks (including their corresponding error status messages), and more.

This page acts as a reference for the configurations possible beyond the default traceability setup described in MAN-07 Traceability.

How to configure

The traceability configuration is set in the Advanced Settings >> Traceability configuration field and must be stored as a valid JSON structure.

The configuration described in this document requires {"version": "3"} to be enabled. If version is not set or {"version": "2"} is set, refer to MAN-07 Traceability.
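
For example, the minimal configuration enabling the version 3 format described on this page is:

{
  "version": "3"
}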

Version 2 vs 3

All of the configurations discussed in this document require the "version": "3" attribute to activate the new RTM customizations. Existing projects that do not enable version 3 will continue to use the version 2 configuration format described in MAN-07 Traceability.

Base example

The following example represents the default configuration for the strictest Ketryx project schema and will be used to highlight possible configurations in the system.

{
  "version": "3",
  "rtmApprovalEnabled": false,
  "enforceGreenCheck": true,
  "defaultColumnId": "designInput",
  "statusDefinitions": {
    "REQUIREMENT_MISSING": {
      "level": "ERROR"
    },
    "SPEC_MISSING": {
      "level": "ERROR"
    },
    "MANUAL_TEST_EXECUTION_MISSING": {
      "level": "ERROR"
    },
    "TEST_EXECUTION_MISSING": {
      "level": "ERROR"
    },
    "TEST_EXECUTION_MISSING_RESULT": {
      "level": "ERROR"
    },
    "VERIFICATION_TEST_MISSING": {
      "level": "ERROR"
    },
    "VALIDATION_TEST_MISSING": {
      "level": "ERROR"
    },
    "TEST_EXECUTION_FAILED": {
      "level": "ERROR"
    },
    "NOT_INCLUDED_IN_TEST_PLAN": {
      "level": "MUTED"
    },
    "TEST_EXECUTION_NOT_CONTROLLED": {
      "level": "WARNING"
    },
    "RISK_NOT_CONTROLLED": {
      "level": "WARNING"
    },
    "NOT_CONTROLLED": {
      "level": "WARNING"
    }
  },
  "rowStatusMessages": {
    "REQUIREMENT_MISSING": {
      "message": {
        "subject": "Use case",
        "description": "not covered by a design input."
      }
    },
    "SPEC_MISSING": {
      "message": {
        "subject": "Design input",
        "description": "not covered by a design output."
      }
    },
    "VERIFICATION_TEST_MISSING": {
      "message": {
        "subject": "Design output",
        "description": "not covered by a verification test."
      }
    },
    "VALIDATION_TEST_MISSING": {
      "message": {
        "subject": "Design input",
        "description": "not covered by a validation test."
      }
    },
    "MANUAL_TEST_EXECUTION_MISSING": {
      "message": {
        "subject": "Test Case",
        "description": "without manual test executions for automated tests."
      }
    },
    "TEST_EXECUTION_MISSING_RESULT": {
      "message": {
        "subject": "Test Execution",
        "description": "without a test result."
      }
    },
    "TEST_EXECUTION_MISSING": {
      "message": {
        "subject": "Test Case",
        "description": "without a test execution."
      }
    },
    "TEST_EXECUTION_FAILED": {
      "message": {
        "subject": "Test Execution",
        "description": "failed."
      }
    },
    "NOT_CONTROLLED": {
      "aliases": [
        "RISK_NOT_CONTROLLED",
        "TEST_EXECUTION_NOT_CONTROLLED"
      ],
      "message": {
        "subject": "Item",
        "description": "not fully approved yet."
      }
    }
  },
  "cellStatusMessages": [
    {
      "if": [
        "REQUIREMENT_MISSING"
      ],
      "message": "Requirement missing"
    },
    {
      "if": [
        "SPEC_MISSING",
        "VERIFICATION_TEST_MISSING",
        "VALIDATION_TEST_MISSING"
      ],
      "message": "Specification and test cases missing"
    },
    {
      "if": [
        "SPEC_MISSING",
        "VERIFICATION_TEST_MISSING"
      ],
      "message": "Specification and test case missing"
    },
    {
      "if": [
        "SPEC_MISSING",
        "VALIDATION_TEST_MISSING"
      ],
      "message": "Specification and test case missing"
    },
    {
      "if": [
        "SPEC_MISSING"
      ],
      "message": "Specification missing"
    },
    {
      "if": [
        "VERIFICATION_TEST_MISSING"
      ],
      "message": "Verification test case missing"
    },
    {
      "if": [
        "VALIDATION_TEST_MISSING"
      ],
      "message": "Validation test case missing"
    },
    {
      "if": [
        "NOT_INCLUDED_IN_TEST_PLAN"
      ],
      "message": "Not included in test plan"
    },
    {
      "if": [
        "TEST_EXECUTION_MISSING_RESULT"
      ],
      "message": "Test execution result missing"
    },
    {
      "if": [
        "TEST_EXECUTION_MISSING"
      ],
      "message": "Test execution missing"
    },
    {
      "if": [
        "MANUAL_TEST_EXECUTION_MISSING"
      ],
      "message": "Manual test execution missing for automated test"
    },
    {
      "if": [
        "TEST_EXECUTION_FAILED"
      ],
      "message": "Test execution failed"
    },
    {
      "if": [
        "TEST_EXECUTION_NOT_CONTROLLED"
      ],
      "message": "Missing approval for Manual Test Execution"
    },
    {
      "if": [
        "RISK_NOT_CONTROLLED"
      ],
      "message": "Missing approval for Risk"
    },
    {
      "if": [
        "NOT_CONTROLLED"
      ],
      "message": "Missing approval"
    }
  ],
  "columns": [
    {
      "columnId": "useCase",
      "title": "Use cases",
      "kind": "design",
      "itemFilter": "type:RQ and \"Requirement type\":\"Use case\"",
      "matchCrossReferences": false
    },
    {
      "columnId": "designInput",
      "title": "Design Input",
      "kind": "design",
      "itemFilter": "type:RQ and not \"Requirement type\":\"Use case\" and not \"Requirement type\":\"Intended use\"",
      "relations": [
        {
          "kind": "indirect",
          "relationType": "HAS_PARENT",
          "referencedColumnId": "useCase"
        }
      ],
      "matchCrossReferences": false
    },
    {
      "columnId": "designOutput",
      "title": "Design Output",
      "kind": "design",
      "itemFilter": "type:SW or type:HW",
      "relations": [
        {
          "kind": "direct",
          "relationType": "FULFILLS",
          "referencedColumnId": "designInput"
        }
      ],
      "matchCrossReferences": false
    },
    {
      "columnId": "verificationTest",
      "title": "Verification test",
      "kind": "testing",
      "testedItemFilter": "type:SW or type:HW",
      "referencedColumnIds": [
        "designOutput"
      ],
      "matchCrossReferences": false
    },
    {
      "columnId": "validationTest",
      "title": "Validation test",
      "kind": "testing",
      "testedItemFilter": "type:RQ",
      "referencedColumnIds": [
        "designInput"
      ],
      "matchCrossReferences": false
    }
  ],
  "checks": [
    {
      "checkId": "useCasesCovered",
      "kind": "coverage",
      "title": "Use cases",
      "subtitle": "Covered by design input",
      "filterDescription": "use cases not covered by a design input",
      "columnIds": [
        "useCase"
      ],
      "coveredByColumnId": "designInput",
      "checkCrossReferences": false,
      "onFail": {
        "status": "REQUIREMENT_MISSING"
      }
    },
    {
      "checkId": "designInputsCovered",
      "kind": "coverage",
      "title": "Design input",
      "subtitle": "Covered by design outputs",
      "filterDescription": "design inputs not covered by a design output",
      "columnIds": [
        "designInput"
      ],
      "coveredByColumnId": "designOutput",
      "checkCrossReferences": false,
      "onFail": {
        "status": "SPEC_MISSING"
      }
    },
    {
      "checkId": "verificationTestCoverage",
      "kind": "coverage",
      "title": "Verification",
      "subtitle": "Design outputs covered by tests",
      "filterDescription": "design outputs not covered by a verification test case",
      "columnIds": [
        "designOutput"
      ],
      "coveredByColumnId": "verificationTest",
      "checkCrossReferences": false,
      "onFail": {
        "status": "VERIFICATION_TEST_MISSING"
      }
    },
    {
      "checkId": "validationTestCoverage",
      "kind": "coverage",
      "title": "Validation",
      "subtitle": "Design inputs covered by tests",
      "filterDescription": "design inputs not covered by a validation test case",
      "columnIds": [
        "designInput"
      ],
      "coveredByColumnId": "validationTest",
      "checkCrossReferences": false,
      "onFail": {
        "status": "VALIDATION_TEST_MISSING"
      }
    },
    {
      "checkId": "testExecutions",
      "kind": "testExecutions",
      "title": "Test executions",
      "subtitle": "Created within test plan",
      "filterDescription": "test cases missing a test execution or test result",
      "onMissingManualExecution": {
        "status": "MANUAL_TEST_EXECUTION_MISSING"
      },
      "onMissingExecution": {
        "status": "TEST_EXECUTION_MISSING"
      },
      "onMissingExecutionResult": {
        "status": "TEST_EXECUTION_MISSING_RESULT"
      },
      "onNotInTestPlan": {
        "status": "NOT_INCLUDED_IN_TEST_PLAN"
      }
    },
    {
      "checkId": "failedTests",
      "kind": "failedTests",
      "title": "Failing tests",
      "subtitle": "Within test plan",
      "filterDescription": "failing test executions",
      "onFail": {
        "status": "TEST_EXECUTION_FAILED"
      }
    },
    {
      "checkId": "allItemsControlled",
      "kind": "controlled",
      "title": "Controlled",
      "subtitle": "Items fully approved",
      "filterDescription": "uncontrolled items that need to be approved",
      "checkCrossReferences": false,
      "onTestExecutionNotControlled": {
        "status": "TEST_EXECUTION_NOT_CONTROLLED"
      },
      "onRiskNotControlled": {
        "status": "RISK_NOT_CONTROLLED"
      },
      "onItemNotControlled": {
        "status": "NOT_CONTROLLED"
      }
    }
  ],
  "columnForTestsWithoutTestedItem": "validationTest"
}

The example above results in a Traceability page with the configured columns and traceability check cards.

Configuring status definitions, row status messages and cell status messages

A user may configure any kind of status to be represented in an RTM cell and applied through traceability checks.

Statuses are identified by a capitalized status name (e.g. REQUIREMENT_MISSING) and are later transformed into a human-readable representation via the rowStatusMessages and cellStatusMessages configurations.

Example:

{
  "statusDefinitions": {
    "REQUIREMENT_MISSING": {
      "level": "ERROR"
    }
  },
  "rowStatusMessages": {
    "REQUIREMENT_MISSING": {
      "message": {
        "subject": "Use case",
        "description": "not covered by a design input."
      }
    }
  },
  "cellStatusMessages": [
    {
      "if": [
        "REQUIREMENT_MISSING"
      ],
      "message": "Requirement missing"
    }
  ]
}

Based on the example above, the following behavior will apply:

  1. A new status called REQUIREMENT_MISSING is defined. This status may be referred to in a traceability check later on.

  2. A cell with a status of REQUIREMENT_MISSING will show a "Requirement missing" message.

  3. A traceability row containing a cell with a REQUIREMENT_MISSING status will show the status message "1 Use case not covered by a design input", or "X Use cases not covered by a design input" if there are multiple such cells (the message subject is automatically pluralized based on the cell count).

Available status levels

  • ERROR: Critical traceability error that needs to be addressed to fulfill all traceability criteria

  • WARNING: Warning or problem that doesn't necessarily concern the fulfillment criteria of the traceability matrix, but still requires attention to get all items to a releasable state (e.g. missing controlled state)

  • MUTED: The status is used for informational purposes (such as test cases that are not part of a test plan). This status has no impact on the fulfillment criteria of the traceability matrix.
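
For example, the following statusDefinitions snippet, taken from the base example above, assigns a status to each level:

{
  "statusDefinitions": {
    "TEST_EXECUTION_FAILED": {
      "level": "ERROR"
    },
    "NOT_CONTROLLED": {
      "level": "WARNING"
    },
    "NOT_INCLUDED_IN_TEST_PLAN": {
      "level": "MUTED"
    }
  }
}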

Compound cell status messages

Depending on the configured traceability checks, a cell may be subject to one or more traceability statuses. A cell message may be based on a particular set or combination of statuses.

{
  "cellStatusMessages": [
    {
      "if": [
        "SPEC_MISSING",
        "VERIFICATION_TEST_MISSING"
      ],
      "message": "Specification and test cases missing"
    },
    {
      "if": [
        "SPEC_MISSING"
      ],
      "message": "Specification missing"
    }
  ]
}

Based on the example above, the following behavior will apply:

  1. A cell with a status of SPEC_MISSING and VERIFICATION_TEST_MISSING will result in the message "Specification and test cases missing"

  2. A cell with SPEC_MISSING as its only status will result in the message "Specification missing"

A cell must match all statuses defined in an if condition for the corresponding message to be applied. The order of the cellStatusMessages array is also significant: the first entry whose if condition matches is used. It is recommended to put more specific messages (with multiple status conditions) before less specific ones.

Configuring columns

The traceability configuration allows flexible definitions of one or more design and testing columns.

Design columns

A design column represents a set of design control items that are put into relation with items of another design column.

Example:

{
  "columns": [
    {
      "columnId": "useCase",
      "title": "Use cases",
      "kind": "design",
      "itemFilter": "type:RQ and \"Requirement type\":\"Use case\"",
      "matchCrossReferences": false
    },
    {
      "columnId": "designInput",
      "title": "Design Input",
      "kind": "design",
      "itemFilter": "type:RQ and not \"Requirement type\":\"Use case\"",
      "relations": [
        {
          "kind": "indirect",
          "relationType": "HAS_PARENT",
          "referencedColumnId": "useCase"
        }
      ],
      "matchCrossReferences": false
    },
    {
      "columnId": "designOutput",
      "title": "Design Output",
      "kind": "design",
      "itemFilter": "type:SW or type:HW",
      "relations": [
        {
          "kind": "direct",
          "relationType": "FULFILLS",
          "referencedColumnId": "designInput"
        }
      ],
      "matchCrossReferences": false
    }
  ]
}

The snippet above defines a use case, design input, and design output column, where a use case may be a parent of a design input and a design output may fulfill a design input.

  • The title field describes the column title represented in the table

  • The itemFilter field describes all the matching items for this particular column, represented as a KQL query

  • The relations field describes all potential relations from the given column to any other design column

  • The matchCrossReferences field describes whether the itemFilter will also match items from configured referenced projects.

Supported relation kinds

  • direct: All items that directly relate to the given design item

  • indirect: All items that are indirectly related to the given design item through its relations (including direct relations)

Supported relation types

  • FULFILLS: Specification fulfilling a Requirement

  • HAS_PARENT: Specification/Requirement having another Specification/Requirement as a parent
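
For example, a single design column may declare multiple relations that mix both kinds and types. The following is a minimal sketch of a column's relations fragment, reusing the column IDs from the example above:

{
  "relations": [
    {
      "kind": "direct",
      "relationType": "FULFILLS",
      "referencedColumnId": "designInput"
    },
    {
      "kind": "indirect",
      "relationType": "HAS_PARENT",
      "referencedColumnId": "useCase"
    }
  ]
}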

Testing columns

A testing column represents a list of Test cases and/or detected Automated tests that test items of one or more design columns.

{
  "columns": [
    { 
      "columnId": "verificationTest",
      "title": "Verification test",
      "kind": "testing",
      "testedItemFilter": "type:SW or type:HW",
      "referencedColumnIds": [
        "designOutput"
      ],
      "matchCrossReferences": false
    },
    {
      "columnId": "validationTest",
      "title": "Validation test",
      "kind": "testing",
      "testedItemFilter": "type:RQ",
      "referencedColumnIds": [
        "designInput"
      ],
      "matchCrossReferences": false
    }
  ]
}

The snippet above configures two columns to represent verification tests and validation tests.

  • The testedItemFilter field describes, as a KQL query, the set of items within the referenced columns to be matched

  • The referencedColumnIds field describes the design columns whose items may be targeted by the tests represented in the testing column

  • The matchCrossReferences field describes whether the testing column will also list any Test Case and Automated Test that traces to a design item that is part of a cross-referenced project (note that this requires matchCrossReferences to be enabled for the referenced column as well to have an observable effect). If enabled, the testing column will also show dangling Test Cases from cross-referenced projects (see the sketch below).
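
As an illustrative sketch (not part of the default configuration above), enabling cross-reference matching requires setting matchCrossReferences on both the testing column and the design column it references; relations are omitted here for brevity:

{
  "columns": [
    {
      "columnId": "designOutput",
      "title": "Design Output",
      "kind": "design",
      "itemFilter": "type:SW or type:HW",
      "matchCrossReferences": true
    },
    {
      "columnId": "verificationTest",
      "title": "Verification test",
      "kind": "testing",
      "testedItemFilter": "type:SW or type:HW",
      "referencedColumnIds": [
        "designOutput"
      ],
      "matchCrossReferences": true
    }
  ]
}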

Configuring traceability checks

Based on the previously defined statusDefinitions, the Traceability matrix provides ways to enforce traceability checks for design item coverage, test coverage and controlled item state.

Traceability checks are represented as traceability check cards at the top of the Traceability page to indicate progress and to allow effective filtering for items that fail the check.

All checks share a common set of fields available to every check type:

  • The title field represents the main title in the traceability check card

  • The subtitle field represents a small title under the title to give more context on the check

  • The filterDescription field represents the message shown when selecting/filtering the RTM for a particular check

Coverage check

Checks that items of one or more columns are covered by items of another column.

Example:

{
  "checks": [
    {
      "checkId": "useCasesCovered",
      "kind": "coverage",
      "title": "Use cases",
      "subtitle": "Covered by design input",
      "filterDescription": "use cases not covered by a design input",
      "columnIds": [
        "useCase"
      ],
      "coveredByColumnId": "designInput",
      "checkCrossReferences": false,
      "onFail": {
        "status": "REQUIREMENT_MISSING"
      }
    }
  ]
}

The snippet above defines a "Use cases" traceability check that checks that the items of the "Use case" column are covered by an item that's part of the "Design input" column.

  • The columnIds field describes the columns that are the target of a coverage check (i.e. which items within those columns are not covered by at least one item from the coveredByColumnId column?)

  • The coveredByColumnId field defines the column containing the items that potentially cover items in the columnIds columns

  • The checkCrossReferences field describes whether the check will also verify that items of cross-referenced projects are covered by the defined column. This option only has an effect if the checked columnIds have matchCrossReferences enabled (see the sketch after this list).

  • The onFail field describes the traceability status that will be applied to all items that are not covered by any item in the coveredByColumnId column
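
As an illustrative sketch (not the default configuration), the design input coverage check from the base example could additionally cover cross-referenced items; this assumes matchCrossReferences is enabled on the designInput column:

{
  "checks": [
    {
      "checkId": "designInputsCovered",
      "kind": "coverage",
      "title": "Design input",
      "subtitle": "Covered by design outputs",
      "filterDescription": "design inputs not covered by a design output",
      "columnIds": [
        "designInput"
      ],
      "coveredByColumnId": "designOutput",
      "checkCrossReferences": true,
      "onFail": {
        "status": "SPEC_MISSING"
      }
    }
  ]
}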

Test executions check

Checks for Test cases without a Test execution or for Test executions without any test result attached.

{
  "checks": [
    {
      "checkId": "testExecutions",
      "kind": "testExecutions",
      "title": "Test executions",
      "subtitle": "Created within test plan",
      "filterDescription": "test cases missing a test execution or test result",
      "onMissingManualExecution": {
        "status": "MANUAL_TEST_EXECUTION_MISSING"
      },
      "onMissingExecution": {
        "status": "TEST_EXECUTION_MISSING"
      },
      "onMissingExecutionResult": {
        "status": "TEST_EXECUTION_MISSING_RESULT"
      },
      "onNotInTestPlan": {
        "status": "NOT_INCLUDED_IN_TEST_PLAN"
      }
    }
  ]
}

The snippet above defines a "Test executions" traceability check that will mark any Test case with a corresponding status.

  • The onMissingManualExecution field describes a status that will be applied to all Automated tests that are missing a manual test execution (given the corresponding project setting "Require a manual test execution for each effective automated test execution" has been enabled)

  • The onMissingExecution field describes a status that will be applied to all Test cases that are missing a related Test execution item

  • The onMissingExecutionResult field describes a status that will be applied to all Test cases that have a related Test execution, but without a concrete "Test result" value set

  • The onNotInTestPlan field describes a status that will be applied to all Test cases that are considered not part of the test plan (usually this status is configured as a MUTED status, since those tests should not impact the final RTM traceability result)

Failed tests check

Checks for failed test results across all detected Test cases and Automated tests.

{
  "checks": [
    {
      "checkId": "failedTests",
      "kind": "failedTests",
      "title": "Failing tests",
      "subtitle": "Within test plan",
      "filterDescription": "failing test executions",
      "onFail": {
        "status": "TEST_EXECUTION_FAILED"
      }
    }
  ]
}

  • The onFail field describes a status that will be applied to all Test cases or Automated tests that are part of the test plan and have a "Test result" value set to "Failed"

Controlled check

Checks for any item shown in the RTM that is not yet controlled.

This check will only show the controlled states of items shown in the RTM, not necessarily all the items that are part of a release. To get a reliable overview of the controlled state of all the items, use the "Items" page instead.

{
  "checks": [
    {
      "checkId": "allItemsControlled",
      "kind": "controlled",
      "title": "Controlled",
      "subtitle": "Items fully approved",
      "filterDescription": "uncontrolled items that need to be approved",
      "checkCrossReferences": false,
      "onTestExecutionNotControlled": {
        "status": "TEST_EXECUTION_NOT_CONTROLLED"
      },
      "onRiskNotControlled": {
        "status": "RISK_NOT_CONTROLLED"
      },
      "onItemNotControlled": {
        "status": "NOT_CONTROLLED"
      }
    }
  ]
}

  • The onTestExecutionNotControlled field describes a status that will be applied to all detected Test cases with at least one Test execution that is not fully controlled yet

  • The onRiskNotControlled field describes a status that will be applied to any item with a detected Risk that is not fully controlled yet

  • The onItemNotControlled field describes a status that will be applied to any item that is not controlled yet
