GAMP – Test Incidents – Analysis, Logging & Classification
Incident Analysis and Logging

This article was written by Ilan Shaya, a world specialist in validation, automation, and control.
When a test incident occurs during a particular step, the overall test should not be continued if the failure produces an output that prevents entry into a subsequent step. When a test continues after a failure, the failed step should be clearly identified on the test results sheet.
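The continue/halt rule above can be sketched as a simple step-runner: halt only when a failed step blocks entry into a subsequent step, otherwise flag the failure and carry on. This is an illustrative sketch only; the step names and data model are assumptions, not part of any GAMP-mandated implementation.

```python
# Illustrative sketch (not a GAMP-mandated implementation): run test steps,
# halting only when a failed step blocks entry into a subsequent step.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    passed: bool                # outcome of executing this step
    blocks_next: bool = False   # True if failure prevents entry into the next step

def run_steps(steps):
    """Return (executed step names, failed step names); stop at a blocking failure."""
    executed, failed = [], []
    for step in steps:
        executed.append(step.name)
        if not step.passed:
            failed.append(step.name)  # clearly identify the failed step on the results sheet
            if step.blocks_next:
                break  # failed output prevents entry into subsequent steps: halt the test
    return executed, failed

executed, failed = run_steps([
    Step("login", True),
    Step("print report", False),        # fails, but the next step can still be entered
    Step("archive batch record", True),
])
# executed lists all three steps; failed lists only "print report"
```

A non-blocking failure is recorded but does not stop execution; a blocking one halts the run immediately, matching the rule stated above.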
It is important to fully record the details of all new test incidents and to maintain an index of these incidents.
Test incidents may be fed either into an existing change control system or into a separate process for resolving test incidents. An example of an incident report (summarizing details of the incident, proposed solution and retest requirements, review, implementation, and closure) is given in GAMP 4, Appendix D6.
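As a minimal sketch, the fields such an incident report summarizes could be held in a simple indexed record. All field names and the index structure here are illustrative assumptions, not the GAMP 4 Appendix D6 form layout.

```python
# Minimal sketch of a test incident record and index; field names are
# illustrative assumptions, not the GAMP 4 Appendix D6 form layout.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestIncident:
    incident_id: str                  # index entry, e.g. "INC-0001"
    description: str                  # details of the incident
    proposed_solution: str = ""       # proposed correction
    retest_required: bool = True      # retest requirements
    reviewed_by: Optional[str] = None # review
    implemented: bool = False         # implementation
    closed: bool = False              # closure

incident_index = {}  # maintain an index of incidents by ID

def log_incident(incident: TestIncident) -> None:
    """Record an incident so that it can be tracked through to closure."""
    incident_index[incident.incident_id] = incident

log_incident(TestIncident("INC-0001", "Report total disagrees with expected output"))
```

Whether these records live in the change control system or in a separate incident process, the same life-cycle fields (solution, retest, review, implementation, closure) need to be tracked.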
Test Incident Classification
In addition to correcting an identified fault, it is important to evaluate test incidents in order to determine their most likely cause. Addressing this issue is an important part of any Corrective and Preventive Action (CAPA) process. Metrics on the causes of avoidable test incidents provide a useful indicator of areas within the overall SW development life cycle that may benefit from improvement activities, to reduce the likelihood of recurrence.
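One simple form such a metric can take is a tally of logged incidents by classified cause, highlighting where improvement effort may pay off. The log format and sample data below are assumptions for illustration; the cause labels are taken from the classification list that follows.

```python
# Sketch: tally avoidable test incidents by classified cause to spot
# life-cycle areas that may benefit from improvement (a CAPA-style metric).
from collections import Counter

incident_log = [  # (incident id, classified cause) - illustrative data
    ("INC-0001", "Incorrect Programming/Coding"),
    ("INC-0002", "Incorrect Test Data"),
    ("INC-0003", "Incorrect Programming/Coding"),
    ("INC-0004", "Poorly Specified Test Case/Script"),
]

cause_counts = Counter(cause for _, cause in incident_log)
for cause, count in cause_counts.most_common():
    print(f"{cause}: {count}")
```

With real data, the most frequent cause at the top of the list is the natural first candidate for a preventive action.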
Typical test incident types that occur in SW testing include, but are not limited to, those described below.
Incorrect SW Installation
Errors such as program dumps, abnormal terminations, or inability to access applications often result from a failure in the installation or configuration process, or from the installation of a wrong SW version.
When any of these errors is determined to be the cause of the incident, it is usually necessary to postpone any further testing until the test environment is correctly set up.
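A simple environment check before resuming testing can catch a wrong-version installation early. The component names, version strings, and function below are hypothetical, a sketch of the idea rather than any standard tool.

```python
# Sketch: verify installed SW versions against the approved configuration
# before resuming testing. All names and version strings are illustrative.
EXPECTED_VERSIONS = {"LIMS-core": "4.2.1", "report-module": "1.7.0"}

def environment_ready(installed: dict) -> list:
    """Return a list of mismatches; an empty list means testing may resume."""
    problems = []
    for component, expected in EXPECTED_VERSIONS.items():
        actual = installed.get(component)
        if actual != expected:
            problems.append(f"{component}: expected {expected}, found {actual}")
    return problems

problems = environment_ready({"LIMS-core": "4.2.1", "report-module": "1.6.9"})
# one mismatch reported for report-module; testing should not resume
```

Only when the returned list is empty is the test environment correctly set up and testing able to continue.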
Incorrect Programming/Coding
Incidents of this type arise when actual system outputs fail to agree with the required system outputs. These incidents should be noted and, unless the defects are considered sufficiently important to invalidate the rest of the test steps, the execution of the test case can continue.
Once the cause of a defect is identified, the defect should be corrected and the corrected SW included in a subsequent SW build for retesting.
Incorrect Test Data
Testing failures may occur as a result of a failure to create correct data in the test database in advance of the test case being run.
Inadequate Specification - Incorrect Understanding of Program Functionality
Testing failures may occur because the controlling Design Specification does not state clearly enough what is required from a particular piece of functionality. This may be particularly evident when a custom system is developed to satisfy a new business process that may not yet be fully established.
Poorly Specified Test Case/Script
Tests can fail if the Test Case or Test Script (or other relevant document) produced is incorrect and indicates an outcome different from that documented in the corresponding requirements.
When a Test Case or Test Script has been modified during execution, a test incident should be raised to manage changes to the Test Case or Test Script, and to confirm the pass/fail status of the test.
The incidence of this type of error should be minimized by ensuring independent review of the test case before approval, including a cross-check of the expected output as specified in the controlling requirement.
Incorrect Design Solution
Test errors can arise where the SW works correctly against the design, but the design as implemented does not satisfy the original stated requirements, or fails to reflect subsequently agreed change requests.
Inconsistent Controlling Design Specification
Test incidents can occur where the content of the relevant controlling Design Specification contains inconsistencies. It is, therefore, essential that this specification is corrected to prevent further confusion.
This incident type should not be confused with Incorrect Programming/Coding, where the SW as coded does not match a particular requirement of the controlling Design Specification, and the code needs to be corrected.
Inconsistencies in the controlling Design Specification should be logged in the incident management system so that the specification can be corrected through the appropriate document management process.
Unexpected Test Events
During execution of a Test Case or Test Script, the tester may notice an anomaly in the SW that, although not affecting the success of the overall test objective, nevertheless requires further investigation. This event should be recorded in the incident management system so that the controlling Design Specification and the corresponding Test Case or Test Script can be updated to reflect the presence of the anomaly.
Test Execution Errors
Tests can be classified as failures if the tester fails to follow the steps outlined in the Test Script, or the overall Test Protocol or Test Specification governing that activity.
Missing signatures, missing dates or timestamps, and missing cross-reference information can also cause a test to be considered a failure.
Force Majeure
Test incidents of this nature reflect an unexpected event over which the test or project team has no control, and which brings testing to a premature halt. These events can be raised as issues by the project team, but are generally resolved outside of the project.


