Traceability Configuration
Reference for the Ketryx traceability configuration
Ketryx allows full customization of the displayed columns, column relationships (traces), traceability checks (including their corresponding error status messages), and more.
This page serves as a reference for all configurations possible beyond the default traceability setup described in MAN-07 Traceability.
How to configure
The traceability configuration is set in the Advanced Settings >> Traceability configuration field and must be stored as a valid JSON structure.
Base example
The following example represents the default configuration for the strictest Ketryx project schema and is used below to highlight possible configurations in the system.
{
"version": "3",
"rtmApprovalEnabled": false,
"enforceGreenCheck": true,
"defaultColumnId": "designInput",
"statusDefinitions": {
"REQUIREMENT_MISSING": {
"level": "ERROR"
},
"SPEC_MISSING": {
"level": "ERROR"
},
"MANUAL_TEST_EXECUTION_MISSING": {
"level": "ERROR"
},
"TEST_EXECUTION_MISSING": {
"level": "ERROR"
},
"TEST_EXECUTION_MISSING_RESULT": {
"level": "ERROR"
},
"VERIFICATION_TEST_MISSING": {
"level": "ERROR"
},
"VALIDATION_TEST_MISSING": {
"level": "ERROR"
},
"TEST_EXECUTION_FAILED": {
"level": "ERROR"
},
"NOT_INCLUDED_IN_TEST_PLAN": {
"level": "MUTED"
},
"TEST_EXECUTION_NOT_CONTROLLED": {
"level": "WARNING"
},
"RISK_NOT_CONTROLLED": {
"level": "WARNING"
},
"NOT_CONTROLLED": {
"level": "WARNING"
}
},
"rowStatusMessages": {
"REQUIREMENT_MISSING": {
"message": {
"subject": "Use case",
"description": "not covered by a design input."
}
},
"SPEC_MISSING": {
"message": {
"subject": "Design input",
"description": "not covered by a design output."
}
},
"VERIFICATION_TEST_MISSING": {
"message": {
"subject": "Design output",
"description": "not covered by a verification test."
}
},
"VALIDATION_TEST_MISSING": {
"message": {
"subject": "Design input",
"description": "not covered by a validation test."
}
},
"MANUAL_TEST_EXECUTION_MISSING": {
"message": {
"subject": "Test Case",
"description": "without manual test executions for automated tests."
}
},
"TEST_EXECUTION_MISSING_RESULT": {
"message": {
"subject": "Test Execution",
"description": "without a test result."
}
},
"TEST_EXECUTION_MISSING": {
"message": {
"subject": "Test Case",
"description": "without a test execution."
}
},
"TEST_EXECUTION_FAILED": {
"message": {
"subject": "Test Execution",
"description": "failed."
}
},
"NOT_CONTROLLED": {
"aliases": [
"RISK_NOT_CONTROLLED",
"TEST_EXECUTION_NOT_CONTROLLED"
],
"message": {
"subject": "Item",
"description": "not fully approved yet."
}
}
},
"cellStatusMessages": [
{
"if": [
"REQUIREMENT_MISSING"
],
"message": "Requirement missing"
},
{
"if": [
"SPEC_MISSING",
"VERIFICATION_TEST_MISSING",
"VALIDATION_TEST_MISSING"
],
"message": "Specification and test cases missing"
},
{
"if": [
"SPEC_MISSING",
"VERIFICATION_TEST_MISSING"
],
"message": "Specification and test case missing"
},
{
"if": [
"SPEC_MISSING",
"VALIDATION_TEST_MISSING"
],
"message": "Specification and test case missing"
},
{
"if": [
"SPEC_MISSING"
],
"message": "Specification missing"
},
{
"if": [
"VERIFICATION_TEST_MISSING"
],
"message": "Verification test case missing"
},
{
"if": [
"VALIDATION_TEST_MISSING"
],
"message": "Validation test case missing"
},
{
"if": [
"NOT_INCLUDED_IN_TEST_PLAN"
],
"message": "Not included in test plan"
},
{
"if": [
"TEST_EXECUTION_MISSING_RESULT"
],
"message": "Test execution result missing"
},
{
"if": [
"TEST_EXECUTION_MISSING"
],
"message": "Test execution missing"
},
{
"if": [
"MANUAL_TEST_EXECUTION_MISSING"
],
"message": "Manual test execution missing for automated test"
},
{
"if": [
"TEST_EXECUTION_FAILED"
],
"message": "Test execution failed"
},
{
"if": [
"TEST_EXECUTION_NOT_CONTROLLED"
],
"message": "Missing approval for Manual Test Execution"
},
{
"if": [
"RISK_NOT_CONTROLLED"
],
"message": "Missing approval for Risk"
},
{
"if": [
"NOT_CONTROLLED"
],
"message": "Missing approval"
}
],
"columns": [
{
"columnId": "useCase",
"title": "Use cases",
"kind": "design",
"itemFilter": "type:RQ and \"Requirement type\":\"Use case\"",
"matchCrossReferences": false
},
{
"columnId": "designInput",
"title": "Design Input",
"kind": "design",
"itemFilter": "type:RQ and not \"Requirement type\":\"Use case\" and not \"Requirement type\":\"Intended use\"",
"relations": [
{
"kind": "indirect",
"relationType": "HAS_PARENT",
"referencedColumnId": "useCase"
}
],
"matchCrossReferences": false
},
{
"columnId": "designOutput",
"title": "Design Output",
"kind": "design",
"itemFilter": "type:SW or type:HW",
"relations": [
{
"kind": "direct",
"relationType": "FULFILLS",
"referencedColumnId": "designInput"
}
],
"matchCrossReferences": false
},
{
"columnId": "verificationTest",
"title": "Verification test",
"kind": "testing",
"testedItemFilter": "type:SW or type:HW",
"referencedColumnIds": [
"designOutput"
],
"matchCrossReferences": false
},
{
"columnId": "validationTest",
"title": "Validation test",
"kind": "testing",
"testedItemFilter": "type:RQ",
"referencedColumnIds": [
"designInput"
],
"matchCrossReferences": false
}
],
"checks": [
{
"checkId": "useCasesCovered",
"kind": "coverage",
"title": "Use cases",
"subtitle": "Covered by design input",
"filterDescription": "use cases not covered by a design input",
"columnIds": [
"useCase"
],
"coveredByColumnId": "designInput",
"checkCrossReferences": false,
"onFail": {
"status": "REQUIREMENT_MISSING"
}
},
{
"checkId": "designInputsCovered",
"kind": "coverage",
"title": "Design input",
"subtitle": "Covered by design outputs",
"filterDescription": "design inputs not covered by a design output",
"columnIds": [
"designInput"
],
"coveredByColumnId": "designOutput",
"checkCrossReferences": false,
"onFail": {
"status": "SPEC_MISSING"
}
},
{
"checkId": "verificationTestCoverage",
"kind": "coverage",
"title": "Verification",
"subtitle": "Design outputs covered by tests",
"filterDescription": "design outputs not covered by a verification test case",
"columnIds": [
"designOutput"
],
"coveredByColumnId": "verificationTest",
"checkCrossReferences": false,
"onFail": {
"status": "VERIFICATION_TEST_MISSING"
}
},
{
"checkId": "validationTestCoverage",
"kind": "coverage",
"title": "Validation",
"subtitle": "Design inputs covered by tests",
"filterDescription": "design inputs not covered by a validation test case",
"columnIds": [
"designInput"
],
"coveredByColumnId": "validationTest",
"checkCrossReferences": false,
"onFail": {
"status": "VALIDATION_TEST_MISSING"
}
},
{
"checkId": "testExecutions",
"kind": "testExecutions",
"title": "Test executions",
"subtitle": "Created within test plan",
"filterDescription": "test cases missing a test execution or test result",
"onMissingManualExecution": {
"status": "MANUAL_TEST_EXECUTION_MISSING"
},
"onMissingExecution": {
"status": "TEST_EXECUTION_MISSING"
},
"onMissingExecutionResult": {
"status": "TEST_EXECUTION_MISSING_RESULT"
},
"onNotInTestPlan": {
"status": "NOT_INCLUDED_IN_TEST_PLAN"
}
},
{
"checkId": "failedTests",
"kind": "failedTests",
"title": "Failing tests",
"subtitle": "Within test plan",
"filterDescription": "failing test executions",
"onFail": {
"status": "TEST_EXECUTION_FAILED"
}
},
{
"checkId": "allItemsControlled",
"kind": "controlled",
"title": "Controlled",
"subtitle": "Items fully approved",
"filterDescription": "uncontrolled items that need to be approved",
"checkCrossReferences": false,
"onTestExecutionNotControlled": {
"status": "TEST_EXECUTION_NOT_CONTROLLED"
},
"onRiskNotControlled": {
"status": "RISK_NOT_CONTROLLED"
},
"onItemNotControlled": {
"status": "NOT_CONTROLLED"
}
}
],
"columnForTestsWithoutTestedItem": "validationTest"
}
The example above results in a Traceability page similar to the following:

Configuring status definitions, row status messages and cell status messages
A user may configure any kind of status to be represented in an RTM cell and applied through traceability checks.
Statuses are identified by a capitalized status name (e.g. REQUIREMENT_MISSING) and are later transformed into a human-readable representation via the rowStatusMessages and cellStatusMessages configurations.
Example:
{
"statusDefinitions": {
"REQUIREMENT_MISSING": {
"level": "ERROR"
}
},
"rowStatusMessages": {
"REQUIREMENT_MISSING": {
"message": {
"subject": "Use case",
"description": "not covered by a design input."
}
},
},
"cellStatusMessages": [
{
"if": [
"REQUIREMENT_MISSING"
],
"message": "Requirement missing"
}
]
}
Based on the example above, the following behavior will apply:
A new status called REQUIREMENT_MISSING is defined. This status may be referred to in a traceability check later on.
A cell with a status of REQUIREMENT_MISSING will show the message "Requirement missing".
A traceability row containing a cell with a REQUIREMENT_MISSING status will show the status message "1 Use case not covered by a design input", or "X Use cases not covered by a design input" in case there are multiple such cells (the message subject will be automatically pluralized based on the cell count).
Available status levels
ERROR: Critical traceability error that needs to be addressed to fulfill all traceability criteria
WARNING: Warning or problem that doesn't necessarily concern the fulfillment criteria of the traceability matrix, but still requires attention to get all items to a releasable state (e.g. missing controlled state)
MUTED: The status is used for informational purposes (such as test cases that are not part of a test plan). This status has no impact on the fulfillment criteria of the traceability matrix.
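For example, a project that prefers to treat missing validation coverage as a non-blocking problem could, assuming the default status names from the base example above, change the corresponding entry in statusDefinitions from ERROR to WARNING:
{
"statusDefinitions": {
"VALIDATION_TEST_MISSING": {
"level": "WARNING"
}
}
}
Cells with this status would then still be flagged and still require attention, but they would no longer count as critical traceability errors.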
Compound cell status messages
Depending on the configured traceability checks, a cell may be subject to one or more traceability statuses. A cell message may be defined for a particular set or combination of statuses.
{
"cellStatusMessages": [
{
"if": [
"SPEC_MISSING",
"VERIFICATION_TEST_MISSING"
],
"message": "Specification and test cases missing"
},
{
"if": [
"SPEC_MISSING"
],
"message": "Specification missing"
}
]
}
Based on the example above, the following behavior will apply:
A cell with the statuses SPEC_MISSING and VERIFICATION_TEST_MISSING will result in the message "Specification and test cases missing".
A cell with SPEC_MISSING as its only status will result in the message "Specification missing".
Configuring columns
The traceability configuration allows flexible definitions of one or more design and testing columns.
Design columns
A design column represents a set of design control items that are put into relation with items of another design column.
Example:
{
"columns": [
{
"columnId": "useCase",
"title": "Use cases",
"kind": "design",
"itemFilter": "type:RQ and \"Requirement type\":\"Use case\"",
"matchCrossReferences": false,
},
{
"columnId": "designInput",
"title": "Design Input",
"kind": "design",
"itemFilter":
"type:RQ and not \"Requirement type\":\"Use case\"",
"relations": [
{
"kind": "indirect",
"relationType": "HAS_PARENT",
"referencedColumnId": "useCase",
},
],
"matchCrossReferences": false,
},
{
"columnId": "designOutput",
"title": "Design Output",
"kind": "design",
"itemFilter": "type:SW or type:HW",
"relations": [
{
"kind": "direct",
"relationType": "FULFILLS",
"referencedColumnId": "designInput",
},
],
"matchCrossReferences": false,
}
]
}
The snippet above defines a use case, a design input, and a design output column, where a use case may be a parent of a design input, and a design output may fulfill a design input.
The title field describes the column title represented in the table.
The itemFilter field describes all the matching items for this particular column, represented as a KQL query.
The relations field describes all potential relations from the given column to any other design column.
The matchCrossReferences field describes if the itemFilter will also match items from configured referenced projects.
Supported relation kinds
direct: All items that directly relate to the given design item
indirect: All items that are indirectly related to the given design item through its relations (including direct relations)
Supported relation types
FULFILLS: Specification fulfilling a Requirement
HAS_PARENT: Specification/Requirement having another Specification/Requirement as a parent
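As a sketch of how additional design columns could be configured, the single design output column from the base example might be split into separate software and hardware columns, each tracing to the design inputs through a direct FULFILLS relation. The column IDs and titles below are illustrative, not part of the default schema:
{
"columns": [
{
"columnId": "softwareDesignOutput",
"title": "Software Design Output",
"kind": "design",
"itemFilter": "type:SW",
"relations": [
{
"kind": "direct",
"relationType": "FULFILLS",
"referencedColumnId": "designInput"
}
],
"matchCrossReferences": false
},
{
"columnId": "hardwareDesignOutput",
"title": "Hardware Design Output",
"kind": "design",
"itemFilter": "type:HW",
"relations": [
{
"kind": "direct",
"relationType": "FULFILLS",
"referencedColumnId": "designInput"
}
],
"matchCrossReferences": false
}
]
}
Any testing columns or coverage checks that previously referenced designOutput would then need to reference the new column IDs instead.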
Testing columns
A testing column represents a list of Test cases and/or detected Automated tests testing items of one or more design columns.
{
"columns": [
{
"columnId": "verificationTest",
"title": "Verification test",
"kind": "testing",
"testedItemFilter": "type:SW or type:HW",
"referencedColumnIds": [
"designOutput"
],
"matchCrossReferences": false
},
{
"columnId": "validationTest",
"title": "Validation test",
"kind": "testing",
"testedItemFilter": "type:RQ",
"referencedColumnIds": [
"designInput"
],
"matchCrossReferences": false
}
]
}
The snippet above configures two columns to represent verification tests and validation tests.
The testedItemFilter field describes the set of items within the referenced columns to be matched, expressed as a KQL query.
The referencedColumnIds field describes the design columns whose items may be targeted by the tests represented in the testing column.
The matchCrossReferences configuration describes if the testing column will also list any Test Case or Automated Test that traces to a design item that is part of a cross-referenced item (note that this requires matchCrossReferences to be enabled for the referenced column as well to have an observable effect). If enabled, the testing column will also show any dangling Test Cases from cross-referenced projects.
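As an illustrative sketch (the column ID, title, and filter below are hypothetical), a single testing column could reference both design columns of the base example at once; with matchCrossReferences enabled it would also pick up tests tracing to design items of cross-referenced projects, provided the referenced design columns enable matchCrossReferences as well:
{
"columns": [
{
"columnId": "systemTest",
"title": "System test",
"kind": "testing",
"testedItemFilter": "type:RQ or type:SW or type:HW",
"referencedColumnIds": [
"designInput",
"designOutput"
],
"matchCrossReferences": true
}
]
}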
Configuring traceability checks
Based on the previously defined statusDefinitions, the Traceability matrix provides ways to enforce traceability checks for design item coverage, test coverage, and controlled item state.
Traceability checks are represented as traceability check cards at the top of the Traceability page to indicate progress and to allow effective filtering for items that fail a check.
All check types share a common set of fields:
The title field represents the main title in the traceability check card.
The subtitle field represents a smaller title underneath that gives more context on the check.
The filterDescription field represents the message shown when selecting/filtering the RTM for a particular check.
Coverage check
Checks that items of one or more columns are covered by items of another column.
Example:
{
"checks": [
{
"checkId": "useCasesCovered",
"kind": "coverage",
"title": "Use cases",
"subtitle": "Covered by design input",
"filterDescription": "use cases not covered by a design input",
"columnIds": [
"useCase"
],
"coveredByColumnId": "designInput",
"checkCrossReferences": false,
"onFail": {
"status": "REQUIREMENT_MISSING"
}
}
]
}
The snippet above defines a "Use cases" traceability check that verifies that each item of the "Use case" column is covered by an item in the "Design input" column.
The columnIds field describes the columns that are the target of the coverage check (i.e. which items within those columns are not covered by at least one item of the coveredByColumnId column?).
The coveredByColumnId field defines the column containing the items that potentially cover items in the columnIds columns.
The checkCrossReferences field describes if the check will also verify that items of a cross-referenced project are covered by the defined column. This option only has an effect if the checked columnIds have matchCrossReferences enabled.
The onFail field describes the traceability status that will be applied to all items that are not covered by any item in the coveredByColumnId column.
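As a sketch of checkCrossReferences in practice, the design input coverage check from the base example could be extended to also check design inputs contributed by cross-referenced projects; this only has an observable effect if the designInput column has matchCrossReferences enabled:
{
"checks": [
{
"checkId": "designInputsCovered",
"kind": "coverage",
"title": "Design input",
"subtitle": "Covered by design outputs",
"filterDescription": "design inputs not covered by a design output",
"columnIds": [
"designInput"
],
"coveredByColumnId": "designOutput",
"checkCrossReferences": true,
"onFail": {
"status": "SPEC_MISSING"
}
}
]
}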
Test executions check
Checks for Test cases without a Test execution or for Test executions without any test result attached.
{
"checks": [
{
"checkId": "testExecutions",
"kind": "testExecutions",
"title": "Test executions",
"subtitle": "Created within test plan",
"filterDescription": "test cases missing a test execution or test result",
"onMissingManualExecution": {
"status": "MANUAL_TEST_EXECUTION_MISSING"
},
"onMissingExecution": {
"status": "TEST_EXECUTION_MISSING"
},
"onMissingExecutionResult": {
"status": "TEST_EXECUTION_MISSING_RESULT"
},
"onNotInTestPlan": {
"status": "NOT_INCLUDED_IN_TEST_PLAN"
}
}
]
}
The snippet above defines a "Test executions" traceability check that will mark any Test case with a corresponding status.
The onMissingManualExecution field describes a status that will be applied to all Automated tests that are missing a manual test execution (given that the corresponding project setting "Require a manual test execution for each effective automated test execution" has been enabled).
The onMissingExecution field describes a status that will be applied to all Test cases that are missing a related Test execution item.
The onMissingExecutionResult field describes a status that will be applied to all Test cases that have a related Test execution, but without a concrete "Test result" value set.
The onNotInTestPlan field describes a status that will be applied to all Test cases that are considered not part of the test plan (usually this status is configured as a MUTED status, since those tests should not impact the final RTM traceability result).
Failed tests check
Checks for failed test results across all detected Test cases and Automated tests.
{
"checks": [
{
"checkId": "failedTests",
"kind": "failedTests",
"title": "Failing tests",
"subtitle": "Within test plan",
"filterDescription": "failing test executions",
"onFail": {
"status": "TEST_EXECUTION_FAILED"
}
}
]
}
The onFail field describes a status that will be applied to all Test cases or Automated tests that are part of the test plan and have a "Test result" value set to "Failed".
Controlled check
Checks for any item shown in the RTM that is not yet controlled.
{
"checks": [
{
"checkId": "allItemsControlled",
"kind": "controlled",
"title": "Controlled",
"subtitle": "Items fully approved",
"filterDescription": "uncontrolled items that need to be approved",
"checkCrossReferences": false,
"onTestExecutionNotControlled": {
"status": "TEST_EXECUTION_NOT_CONTROLLED"
},
"onRiskNotControlled": {
"status": "RISK_NOT_CONTROLLED"
},
"onItemNotControlled": {
"status": "NOT_CONTROLLED"
}
}
]
}
The onTestExecutionNotControlled field describes a status that will be applied to all detected Test cases with at least one Test execution that is not fully controlled yet.
The onRiskNotControlled field describes a status that will be applied to any item with a detected Risk that is not fully controlled yet.
The onItemNotControlled field describes a status that will be applied to any item that is not controlled yet.
Note on legacy configurations
The configuration described in this document requires {"version": "3"} to be set. If version is not set, or {"version": "2"} is set, refer to MAN-07 Traceability.
See also: Version 2 vs 3