GAMP – Test Incidents – Analysis, Logging & Classification

Incident Analysis and Logging

This article was written by Ilan Shaya, CEO and specialist in validation, automation, and control.

When a test incident occurs during a particular step, the overall test should not be continued if the failure produces an output that prevents entry into a subsequent step. When a test continues after a failure, the failed step should be clearly identified on the test results sheet.

It is important to fully record the details of all new test incidents and to maintain an index of these incidents.

Test incidents may be fed either into an existing change control system or into a separate process for resolving test incidents. An example of an incident report (summarizing details of the incident, the proposed solution and retest requirements, review, implementation, and closure) is given in GAMP 4, Appendix D6.
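
To make the logging and indexing concrete, the sketch below shows one minimal way an incident record and index could be represented in code. This is an illustrative Python sketch, not something prescribed by GAMP; the class, field, and classification names are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical classifications drawn from the incident types discussed below.
CLASSIFICATIONS = {
    "incorrect_installation", "incorrect_coding", "incorrect_test_data",
    "inadequate_specification", "poor_test_case", "incorrect_design",
    "inconsistent_specification", "unexpected_event",
    "test_execution_error", "force_majeure",
}

@dataclass
class TestIncident:
    """One entry in the test incident index (fields are illustrative)."""
    incident_id: str                       # unique index key, e.g. "TI-001"
    test_step: str                         # failed step on the results sheet
    description: str                       # what was observed
    raised_on: date
    classification: Optional[str] = None   # assigned during analysis
    proposed_solution: str = ""
    retest_required: bool = True
    closed: bool = False

class IncidentLog:
    """Minimal in-memory index of incidents, keyed by incident_id."""
    def __init__(self) -> None:
        self._index: dict[str, TestIncident] = {}

    def log(self, incident: TestIncident) -> None:
        if incident.incident_id in self._index:
            raise ValueError(f"duplicate id {incident.incident_id}")
        self._index[incident.incident_id] = incident

    def close(self, incident_id: str) -> None:
        # Closure follows review, implementation and retest (cf. GAMP 4, D6).
        self._index[incident_id].closed = True
```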

                    Test Incident Classification

In addition to correcting an identified fault, it is important to evaluate test incidents in order to determine their most likely cause. Addressing this issue is an important part of any Corrective and Preventive Action (CAPA) process. Metrics on the causes of avoidable test incidents provide a useful indicator of areas within the overall SW development life cycle that may benefit from improvement activities, to reduce the likelihood of recurrence.
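
As a hedged illustration of such metrics, the short sketch below counts incidents per assigned cause classification, using the hypothetical TestIncident records from the previous sketch; a dominant category points at the life cycle area that may benefit most from improvement.

```python
from collections import Counter

def cause_metrics(incidents):
    """Count avoidable test incidents per classification (illustrative)."""
    return Counter(
        i.classification
        for i in incidents
        if i.classification is not None and i.classification != "force_majeure"
    )

# A result such as Counter({'incorrect_test_data': 7, 'poor_test_case': 3})
# would suggest focusing improvement on test data preparation and on
# independent review of test cases.
```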

Typical test incident types that occur in SW testing include, but are not limited to, those described below.

              Incorrect SW Installation

Errors such as program dumps, abnormal terminations, or inability to access applications often result from a failure in the installation or configuration process, or from the installation of a wrong SW version.

When any of these errors is determined to be the cause of the incident, it is usually necessary to postpone further testing until the test environment is correctly set up.

              Incorrect Programming/Coding

Incorrect programming or coding may result in actual system outputs failing to agree with the required system outputs. These incidents should be noted and, unless the defects are considered sufficiently important to invalidate the remaining test steps, execution of the test case can continue.

Once the cause of a defect is identified, the defect should be corrected and the corrected SW included in a subsequent SW build for retesting

               Incorrect Test Data

Testing failures may occur as a result of a failure to create correct data in the test database before the test case is run.

               Inadequate Specification – Incorrect Understanding of Program Functionality

Testing failures may occur because the controlling Design Specification does not state clearly enough what is required of a particular piece of functionality. This may be particularly evident when a custom system is developed to satisfy a new business process that is not yet fully established.

               Poorly Specified Test Case/Script

Tests can fail if the Test Case or Test Script (or other relevant document) produced is incorrect and specifies an outcome different from that documented in the corresponding requirements.

When a Test Case or Test Script has been modified during execution, a test incident should be raised to manage changes to the Test Case or Test Script, and to confirm the pass/fail status of the test

The incidence of this type of error should be minimized by ensuring independent review of the test case before approval, including a cross-check of the expected output as specified in the controlling requirement.

                Incorrect Design Solution

Test errors can arise where the SW works correctly against its design, but the implemented design does not satisfy the originally stated requirements, or fails to reflect subsequently agreed change requests.

               Inconsistent Controlling Design Specification

Test incidents can occur where the content of the relevant controlling Design Specification contains inconsistencies. It is, therefore, essential that this specification be corrected to prevent further confusion.

This incident type should not be confused with Incorrect Programming/Coding, where the SW as coded does not match a particular requirement of the controlling Design Specification, and the code needs to be corrected.

Inconsistencies in the controlling Design Specification should be logged in the incident management system, so that the specification can be corrected in accordance with the appropriate document management process.

           Unexpected Test Events

During execution of a Test Case or Test Script, the tester may notice an anomaly in the SW that, although not affecting the success of the overall test objective, nevertheless requires further investigation. This event should be recorded in the incident management system so that the controlling Design Specification and the corresponding Test Case or Test Script can be updated to reflect the presence of the anomaly.

             Test Execution Errors

Tests can be classified as failures if the tester fails to follow the steps outlined in the Test Script, or in the overall Test Protocol or Test Specification governing that activity.

Missing signatures, dates, timestamps, and other important cross-reference information are another area that can cause a test to be considered a failure.

               Force Majeure

Test incidents of this nature reflect an unexpected event over which the test or project team has no control, and which brings testing to a premature halt. These events can be raised as issues by the project team, but are generally resolved outside of the project.

Validation – GAMP – Definition of Terms

Good Automated Manufacturing Practice (GAMP) Definition of Terms

Definition of Terms Used in Testing Environments

This document provides definitions of a set of testing terms used within the pharmaceutical and other life sciences industries (consistent with those used in GAMP 4), the Information Technology (IT) industry, and the control and automation industries, in order to facilitate understanding of testing environments.

It is recommended that consistent definitions of testing terms be prepared on an organizational or project basis where members of User and Supplier organizations work together. It is helpful to agree on these definitions before contract signing, to ensure that contractual issues are based on a common understanding of activities and milestones.

The definitions listed below are based on three sources:

GAMP 4 – definitions from the GAMP Guide for Validation of Automated Systems

IEEE – definitions from IEEE 100, The Authoritative Dictionary of IEEE Standards Terms

BCS – definitions from the working draft Glossary of Terms Used in Software Testing, version 6.2, produced by the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST)

Terms and Definitions

Each term is listed below with its definition, followed by the source in parentheses.

Acceptance Criteria – Criteria that a system or component must satisfy in order to be accepted by the User, customer, or other authorized entity. (GAMP 4, IEEE)

Acceptance Test – Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, and to enable the customer to determine whether or not to accept the system. See also Factory Acceptance Test (FAT) and Site Acceptance Test (SAT). (GAMP 4, IEEE)

Black Box Testing – See Functional Testing. (IEEE)

Boundary Condition Testing – Testing for correct operation when one or more variables are at a limiting value or a value at the edge of the domain of interest. (IEEE)

Calibration – Set of operations that establishes, under specified conditions, the relationship between values indicated by a measuring instrument or system, or values represented by a material measure or a reference material, and the corresponding values of a quantity realized by a reference standard. (GAMP 4, ISO 10012)

Challenge Testing – Testing to check system behavior under abnormal conditions. Can include stress testing and deliberate challenges, e.g., to the security access system, data formatting rules, possible combinations of operator actions, etc.

Commissioning – Process of providing to the appropriate components the information necessary for the designed communication between them. (IEEE)

Emulation – A model that accepts the same inputs and produces the same outputs as the given system. (IEEE)

Environmental Testing – Testing that evaluates system or component performance up to the specified limits of environmental parameters, e.g., temperature, humidity, or pressure.

Firmware (FW) – Combination of a hardware (HW) device, computer instructions, and data that reside as read-only software (SW) on that device. (IEEE)

Factory Acceptance Test (FAT) – Acceptance Test at the Supplier's factory, usually involving the customer. Contrast with Site Acceptance Test (SAT). (GAMP 4, IEEE)

Functional Testing – Testing that ignores the internal mechanism of a system or component, and focuses solely on the outputs generated in response to inputs and execution conditions. Also known as Black Box Testing. (GAMP 4, IEEE)

Hardware (HW) – (1) Physical equipment used to process, store, or transmit computer programs or data. (2) Physical equipment used in data processing, as opposed to programs, procedures, rules, and associated documentation. (IEEE)

HW Testing – Testing carried out to verify the correct operation of system HW, independent of any custom application SW. (IEEE)

Installation Qualification (IQ) – Documented verification that a system is installed according to written and pre-approved specifications. (GAMP 4, IEEE)

Integration – Process of combining SW components, HW components, or both, into an overall system. Sometimes described as SW Integration and System Integration, respectively. (IEEE)

Integration Testing – (1) Testing in which SW components, HW components, or both, are combined and tested to evaluate the integration between them. (2) Orderly progression of testing of incremental pieces of the SW program, in which SW elements, HW elements, or both, are combined and tested until the entire system has been integrated to show compliance with the program design and the system capabilities and requirements. (IEEE)

Instance – Single installation of a SW application (plus associated databases, tools, and utilities). Usually applied to configurable IT systems.

Load Testing – Stress testing conducted to evaluate a system or component up to the limits of its specified requirements.

Loop Testing – Testing in which control system inputs and outputs are exercised and their functionality verified.

Market Requirements Specification – Statement of generic industry requirements used by the Supplier as an input to its product development life cycle. (IEEE)

Middleware – Combination of HW, computer instructions, and data that provides infrastructure used by other system modules. (IEEE)

Module Testing – Testing of individual HW or SW components, or groups of related components. (IEEE)

Negative Testing – Testing aimed at showing that the SW does not work. (BCS)

Operational and Support Testing – (1) Testing conducted to evaluate a system or component in its operational environment. (2) All testing required to verify system operation in accordance with design after the major component is energized or operated. (IEEE)

Operation Qualification (OQ) – Documented verification that a system operates according to written and pre-approved specifications throughout all specified operating ranges. (GAMP 4, IEEE)

Performance Qualification (PQ) – Documented verification that a system is capable of performing or controlling the activities of the processes it is required to perform or control, according to written and pre-approved specifications, whilst operating in its specified environment. (GAMP 4, IEEE)

Positive Testing – Testing aimed at showing that the SW does meet the defined requirements. (BCS)

Qualification – Process to demonstrate the ability to fulfill specified requirements. (GAMP 4, ISO)

Simulation – A model that behaves or operates like a given system when provided with a set of given inputs. (IEEE)

Site Acceptance Test (SAT) – Acceptance Test at the customer's site, usually involving the customer. Contrast with Factory Acceptance Test (FAT). (GAMP 4, IEEE)

Software (SW) – Computer programs, procedures, and associated documentation and data pertaining to the operation of a computer system. (IEEE)

Stress Testing – Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. (IEEE)

Structural Testing – Examining the internal structure of the source code. Includes low-level and high-level code review, path analysis, auditing of programming procedures and standards actually used, inspection for extraneous "dead code", boundary analysis, and other techniques. Requires specific computer science and programming expertise. Also known as White Box Testing. (GAMP 4)

Supplier – Organization or person that provides a product. (GAMP 4, ISO)

System Testing – Testing conducted on a complete, integrated system to evaluate its compliance with its specified requirements. (GAMP 4)

Test – (1) Procedure in which an activity is executed under specified conditions, the results are observed and recorded, and an evaluation is made of some aspect of the system or component. (2) Determination of one or more characteristics according to a procedure. (GAMP 4, IEEE, ISO)

Test Case – Set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (GAMP 4, IEEE)

Test Plan – Document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel in charge of each task, and any risks requiring contingency planning. (GAMP 4)

Test Procedure – Detailed instructions for the setup, execution, and evaluation of results for a given test case. (GAMP 4, IEEE)

Test Protocol – See Test Specification. (IEEE)

Test Script – Documentation that specifies a sequence of actions for executing a given test. (GAMP 4, IEEE)

Test Specification – Document describing the scope, management, use of procedures, sequencing, test environment, and prerequisites for a specific phase of testing.

Test Strategy – See Test Plan.

Unit Testing – Testing of individual HW or SW units or groups of related units. (IEEE)

Usability Testing – Testing the ease with which Users can learn and use a product. (BCS)

User – Person or persons who operate or interact directly with the system. (GAMP 4)

Validation – Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes. (GAMP 4, FDA)

Verification – Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. (GAMP 4, ISO)

White Box Testing – See Structural Testing. (IEEE)

Validation – URS and FRS Preparation Overview

This article was written by Ilan Shaya, validation, automation, and control expert.

The User Requirements Specification (URS) and Functional Requirements Specification (FRS) are the starting points of a validation process and of the validation documentation file.

The validation process must comply with regulations issued by the United States Food and Drug Administration (FDA).

The FDA regulations that are most relevant to the validation process are:

Good Manufacturing Practice (GMP)

Current Good Manufacturing Practice (cGMP)

Good Automated Manufacturing Practice (GAMP)

The validation process includes design, installation and operation of a monitoring and control system for a production facility, as well as planning and execution of test procedures, to verify that a monitoring and control system meets the FDA standards

Validation documentation is part of the validation process that includes written and/or electronic records regarding the installation and operation of the monitoring and control system, and the corresponding test procedures for this system

Electronic records are often required to fulfill regulations set by the FDA, specifically the scope and application of Part 11 of Title 21 of the Code of Federal Regulations: Electronic Records; Electronic Signatures (21 CFR Part 11). Electronic records may contain any combination of text, graphics, audio, pictures, or other information represented in electronic form, created, modified, maintained, archived, retrieved, or distributed by a computer system.

An electronic signature is a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual's handwritten signature.

Electronic records and signatures are generally used in Closed Systems, in which system access is controlled by the personnel responsible for the content of the electronic records on the system.

The responsibility for writing and approving the URS and FRS is shared in practice between the user, who operates the production facility, and the supplier or vendor, who provides the monitoring and control system that ensures the proper operation of the production facility. Usually, the URS is written by the user and the FRS by the supplier.

Note:

The final contents of the URS and FRS are tailored according to the type and size of the system under validation. Since the URS and FRS discussed herein are generic, they include requirements that may not be necessary for small or simple systems.


Validation – Testing Process Automation Systems

 Testing Responsibilities – Supplier and User

Where a Supplier has been assessed and its quality management system found to be acceptable, the User may benefit from the testing already carried out as part of the product development life cycle in order to minimize additional testing.

Similarly, the User may benefit from the testing carried out by the Supplier as part of the application development, allowing, for example, a small sample of tests to be repeated at a witnessed FAT.

                        Example of Life Cycle for a Custom Application

Supplier Product Life Cycle

For example, in a life cycle where a new application is developed for an embedded control system, the Supplier's product may be the SW or tools used to develop the User-specific application. Where the Supplier of the control system has been assessed and its quality management system found to be acceptable, the User does not need to repeat the testing of product functionality. Testing should instead concentrate on the application of the control system.

     End User Application Life Cycle

Assuming that the application is critical to product quality and includes a custom sequence coded as a sequential function chart, a test approach could be agreed that includes the following elements:

Supplier's module testing of the sequential function chart program.

Supplier's integration testing (100% test) and FAT (sampled) to demonstrate correct interaction and configuration of the control system and correct operation of the process equipment.

SAT and IQ/OQ/PQ of the control system as part of the process equipment.

                      Example of Life Cycle for a Standard Application

                   Supplier Product Life Cycle

In the case of a life cycle where a standard application containing an embedded control system is purchased, and the control system Supplier has been assessed and its quality management system found to be acceptable, the User does not need to repeat the testing of product functionality (including mature library functions such as standard control modules) or of the standard application. Testing has to cover only the setup and use of the application.

                    End User Application Life Cycle

Assuming that the application is still critical to product quality, there is now much lower risk associated with the application development, as the configuration is limited to selecting the required functions and entering setup parameters. A test approach could, therefore, be agreed including the following elements:

FAT to demonstrate correct setup of the control system and correct operation of the process equipment.

SAT and IQ/OQ/PQ of the control system as part of the process equipment.

Validation – GAMP – Test Example – Part 3

Good Automated Manufacturing Practice (GAMP) – Test Example

Testing Process Automation Systems

This article covers the third part of our Good Automated Manufacturing Practice (GAMP) test example. This part covers the typical test phases for the Factory Acceptance Test (FAT), Site Acceptance Test (SAT) / Installation Qualification (IQ), Operation Qualification (OQ), and Performance Qualification (PQ).

Typical Test Phases – GAMP – Test Example

This section describes typical test phases for a complex process automation system. The example assumes that the system is configured by a Supplier and delivered to site after a FAT. The system can also be configured by a system integrator or by the User; in this case, the same coverage is required, but the test phasing and location may differ.

The User and Supplier should work together to develop an overall approach to testing that reflects the risk assessment output and ensures adequate test coverage of the functionality, whilst avoiding unnecessary repeated tests.

Factory Acceptance Test – FAT

Done at the Supplier's premises, after the Supplier's integration testing and before the system is released for delivery to the User's site.

The required coverage should reflect the relative risk priority associated with the system element under test. This coverage can increase if problems are found within the initial sample.

In determining the required coverage, the User needs to base decisions on the risk assessment output taking into account both the potential effect on product quality and safety resulting from the process, and the intrinsic risk likelihood associated with the method of implementation.

Before performing risk assessment to decide on the required coverage, the User should review the supplier's internal tests to confirm that they are adequately documented.

Site Acceptance Test – SAT / Installation Qualification – IQ

Done at the User's premises after installation of the system on site

The required coverage should include:

Checking that the full system, including HW, SW backups, and documentation, has been delivered to site in a condition suitable for its intended purpose

Checking that the site environment suits the specification of the installed equipment: temperature, humidity, pressure, dust, vibration, interference, etc

Checking that the system has been correctly installed.

Demonstrating that the system still operates as it did when accepted during the FAT, typically by re-recording system components and versions, and by repeating a small sample of FAT tests on site

Testing any elements that could not be adequately tested in the FAT environment, typically interfaces to other equipment, networks, etc.

Re-testing, following remedial action, of any element subject to conditional release at the end of the FAT

Operation Qualification – OQ and Performance Qualification – PQ

Done at the User's premises after SAT/IQ

If a system has been fully tested in the FAT, after successful completion of the IQ (along with any additional field functional tests and calibrations), the system is treated as an integrated part of the process equipment qualification. This should ensure that the full system, procedures and trained personnel are ready for production.

End of GAMP – Test Example – part 3.

 

Validation – GAMP – Test Example – Part 2

Good Automated Manufacturing Practice (GAMP) – Test Example

Testing Process Automation Systems

This article covers the second part of our Good Automated Manufacturing Practice (GAMP) test example. This part covers the typical test phases for the Supplier's application SW module testing, the Supplier's module integration testing, and the Supplier's integration testing.

                     Typical Test Phases

This section describes typical test phases for a complex process automation system. The example assumes that the system is configured by a Supplier and delivered to site after a FAT. The system can also be configured by a system integrator or by the User; in this case, the same coverage is required, but the test phasing and location may differ.

The User and Supplier should work together to develop an overall approach to testing that reflects the risk assessment output and ensures adequate test coverage of the functionality, whilst avoiding unnecessary repeated tests.

Typical Test Phases for Process Automation Systems:

 Supplier's Application SW Module Testing

Done at the Supplier's premises, after the module has been placed under configuration control and its code reviewed, and before system integration. Module testing generally covers:

Module data handling

 Interfaces to other modules

 Operator interfaces

 Module functionality

Failure paths and response to fault conditions should be included within the tests.

Supplier's Module Integration Testing

Done at the Supplier's premises, after individual modules have been tested and integrated into a single unit, and before full system integration.

Module integration testing generally covers:

Correct operation of interfaces between modules

Failure paths and response to fault conditions should be included within the tests

Supplier's Integration Testing

Done at the Supplier's premises, after module testing and before the User is invited to witness the FAT.

Integration testing generally covers:

 HW

I/O interfaces

Operator interfaces

Interfaces to other equipment

System functionality

Data handling functions

Failure paths and response to fault conditions should be included within the tests

HW tests typically include:

·      Checking system build against approved HW specifications and drawings

·      Recording system components, version numbers (including SW versions) and capacities

·      Checking electrical supplies and grounding

Supplier's Integration Testing (cont'd)

Checking correct power up of system components

Checking any self test/diagnostic information

Checking correct communication on standard interfaces

I/O interface tests typically include:

Exercising inputs and outputs to check correct configuration of ranges, alarms, etc.
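
As a hedged illustration of such an exercise, the sketch below compares the configured range and alarm limits of a single analog input against expected values. The tag name, range, and limits are invented, and a real test would read the configuration from the actual I/O subsystem rather than from a dictionary.

```python
# Illustrative check of analog input configuration: engineering range and
# alarm limits. The expected values here are hard-coded assumptions; a real
# test would query the control system's I/O configuration.
EXPECTED_CONFIG = {
    "TT-101": {"range": (0.0, 150.0), "hi_alarm": 120.0, "lo_alarm": 5.0},
}

def check_channel(tag: str, actual: dict) -> list[str]:
    """Return a list of discrepancies between actual and expected setup."""
    expected = EXPECTED_CONFIG[tag]
    problems = []
    if actual["range"] != expected["range"]:
        problems.append(f"{tag}: range {actual['range']} != {expected['range']}")
    for limit in ("hi_alarm", "lo_alarm"):
        if actual[limit] != expected[limit]:
            problems.append(f"{tag}: {limit} {actual[limit]} != {expected[limit]}")
    return problems

# Example: a wrongly configured high alarm is reported as a discrepancy.
# check_channel("TT-101", {"range": (0.0, 150.0),
#                          "hi_alarm": 130.0, "lo_alarm": 5.0})
# -> ["TT-101: hi_alarm 130.0 != 120.0"]
```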

Operator interface tests typically include:

System displays and navigation

Security and access controls

Tests for interfaces to other equipment typically include:

Checks of communications protocol setup

Checks that the required data can be transferred

Checks of actions in case of communications failure

Tests for system functionality typically include:

Monitoring functions

Alarm strategies

Control functions (control modules, equipment modules, procedural control)

Power failure and recovery

Component failure and redundancy

Performance checks

Tests for system data handling typically include:

Operator data entry

Data formatting and quality checks

Checks of calculated values

Checks of recipes

Checks of access to current process data, alarms, and events (displays, alarm summaries, etc.)

Checks of access to historical process data, alarms, and events (trends, reports, alarm histories, etc.)

Checks of audit trail functionality

Checks of data capacity and retention times

Checks of archive and restore

Checks of provisions for electronic signatures

Checks of disaster recovery procedures

End of Validation – GAMP – Test Example – Part 2.

Validation – GAMP – Test Example – Part 1

Good Automated Manufacturing Practice (GAMP) – Test Example

Testing Process Automation Systems

This article covers the first part of our Good Automated Manufacturing Practice (GAMP) test example.

                                     Definitions

This section provides brief descriptions of three different types of process control systems.

                  Configurable Equipment

Configurable Equipment is the collective name given to simple configurable instruments/devices, such as 3-term controllers, check scales, bar code readers, etc. Their functionality depends on their configuration setup meeting the process requirements. The software (SW) components of these systems are typically defined as GAMP SW Category 2.

                    Embedded Systems

Embedded Systems is the collective name for systems with a greater degree of configuration and programmability, such as Integrated Circuits (ICs) with configuration setups and Programmable Logic Controllers (PLCs), supplied as an integral part of an item of process equipment, e.g., PLCs controlling a centrifuge or packaging machine, or ICs embedded in High Performance Liquid Chromatography (HPLC) systems. Embedded Systems typically contain SW components belonging to multiple GAMP categories.

                 Stand-Alone Systems

Stand-Alone Systems is the collective name for large programmable control systems with functionality distributed across a network, e.g., Distributed Control Systems (DCSs) and Supervisory Control and Data Acquisition (SCADA) systems. They are engineered as an entity to control a complete plant. Stand-Alone Systems typically contain SW components belonging to multiple GAMP categories.

                      Testing and the GAMP Life Cycle 

     Stand-Alone Systems

A process automation system developed for a new application typically requires some or all of the following test phases:

Supplier's Module Testing

Supplier's Module Integration Testing

Supplier's Integration Testing

Factory Acceptance Test (FAT)

Site Acceptance Test (SAT)

Installation Qualification (IQ)

Operation Qualification (OQ)

Performance Qualification (PQ)

The exact combination of testing required for a particular system should reflect its complexity, the maturity of its underlying SW and hardware (HW) elements, and the risk impact on product quality, patient safety and data integrity. Collectively these will determine the risk priority. The phrase 'low risk' should be understood as 'having a low risk priority, as determined by a formal risk assessment'.

Testing of modifications, patches, or upgrades should be related to the risk priority of the change. For example, it may be appropriate for parameter changes to be applied directly to the production environment, assuming that the system has been range-checked for such parameters.
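
As an illustration of such a range check, the minimal Python sketch below rejects a parameter value outside its validated range; the parameter name and limits are hypothetical.

```python
# Illustrative range check applied before a parameter change is accepted
# into the production environment. The parameter and limits are invented.
PARAMETER_LIMITS = {"batch_temperature_setpoint": (20.0, 80.0)}  # degrees C

def validate_parameter(name: str, value: float) -> None:
    low, high = PARAMETER_LIMITS[name]
    if not (low <= value <= high):
        raise ValueError(f"{name}={value} outside validated range [{low}, {high}]")

validate_parameter("batch_temperature_setpoint", 65.0)   # accepted
# validate_parameter("batch_temperature_setpoint", 95.0) # would raise
```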

End of Validation – GAMP – Test Example – Part 1.

Validation – Good Automated Manufacturing Practice (GAMP) – Test Environments – Test Data Sets

Test data sets are often used where the test environment does not permit the use of real data for reasons of availability or confidentiality, or where the real data are not generic enough to cover certain test types (e.g., challenge testing at boundary conditions, or stress testing).

                          Representative Test Environment

Test data should represent the actual data to be operated on as closely as possible, in terms of volume and range of possible values (including invalid entries, to check that they are correctly handled).
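
One hedged way to assemble such a data set for a single numeric field is sketched below: it combines typical values, boundary values, and deliberately invalid entries. The field, range, and step size are illustrative assumptions, not taken from GAMP.

```python
# Illustrative test data set for a numeric field with a validated range.
# Combines typical values, boundary values and invalid entries, so that
# both normal handling and rejection of bad input can be checked.
def build_test_values(low: float, high: float, step: float = 0.01):
    valid = [low, low + step, (low + high) / 2, high - step, high]
    invalid = [low - step, high + step, float("nan"), None, "not-a-number"]
    return {"valid": valid, "invalid": invalid}

# Example for a hypothetical pH entry field validated for 0.0 to 14.0:
dataset = build_test_values(0.0, 14.0)
```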

Differences between the proposed test data and the expected actual data should be detailed in the Test Specification or Protocol, and subject to impact assessment. If necessary, additional tests should be planned for the production environment in order to cover identified risk scenarios.

                         Control of Test Environment

Test data sets should be placed under configuration management and the version in use recorded.

For automatically generated data, it may also be appropriate to control the utility used to generate the data, as well as the test data set itself.
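
A minimal sketch of one way to record the version of a test data set is shown below: a SHA-256 checksum is stored alongside a version label so that the exact data used in a test run can later be identified. The file name in the usage comment is hypothetical.

```python
import hashlib
from datetime import datetime, timezone

def record_test_data_version(path: str, version: str) -> dict:
    """Record a checksum and version label for a test data set (illustrative)."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "version": version,
        "sha256": digest,
        "recorded": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage; the resulting record would be kept with the test
# documentation:
# entry = record_test_data_version("test_data/batch_recipes_v3.csv", "3.0")
```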

                         Removal from Production Environment

If test data is added in such a way that it may appear in the production environment, this should be documented as a temporary modification to the production system. Removal of the temporary modification should be documented as well.

If the production environment includes automatic audit trailing, then it should be recognized that all audit trail entries from the testing process will remain.

                              Test User Accounts

Test user accounts are often used to permit testers to access the system at different levels, and to ensure that activities carried out during testing are easily identified within any resulting audit trail.

                          Representative Test Environment

Where test user accounts are used, they should be set up to represent each group of users within the system, including the corresponding authorizations. For multi-lingual systems, test user accounts using foreign character sets should be included. Similarly, if existing individual accounts are used for testing, representatives from each group should be included.
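
The sketch below illustrates one way the account setup could be captured: one test account per user group, recorded with its authorizations, plus a foreign-character-set user name for multi-lingual checks. All user names, roles, and permissions are invented for illustration.

```python
# Illustrative test user accounts, one per user group, recorded with their
# authorizations so the setup can be retained as test documentation.
TEST_ACCOUNTS = [
    {"username": "test_operator",   "role": "operator",
     "permissions": ["view", "acknowledge_alarms"]},
    {"username": "test_supervisor", "role": "supervisor",
     "permissions": ["view", "acknowledge_alarms", "change_setpoints"]},
    {"username": "test_admin",      "role": "administrator",
     "permissions": ["view", "configure", "manage_users"]},
    # Foreign character set entry for multi-lingual systems (Hebrew example).
    {"username": "בודק_מערכת",      "role": "operator",
     "permissions": ["view"]},
]

def permissions_for(role: str) -> set[str]:
    """Collect the permissions exercised for a given role during testing."""
    return {p for a in TEST_ACCOUNTS if a["role"] == role for p in a["permissions"]}
```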

                         Control of Test Environment

If test user accounts are used, then the setup of the accounts should be retained as part of the test documentation. Where there are issues of data confidentiality, controls should be exercised to ensure that the use of test accounts does not cause breaches of confidentiality.

                        Removal from Production Environment

If test user accounts are added in such a way that they may appear in the production environment, this should be documented as a temporary modification to the production system. Removal of the temporary modification should be documented as well.

                            Test Documentation

The test environment includes documentation used during testing. This should always include the test documentation (Test Plans and Strategies, Protocols and Test Specifications, Test Cases and Test Scripts) and the controlling Design Specifications. It may also include operating procedures such as SOPs.

The test documentation should be controlled and recorded to a level of detail that would allow it to be retrieved as part of a later review of the test results. This control would, at a minimum, include recording the current document version levels.

Validation – GAMP – Hardware & Software Test Environments

Good Automated Manufacturing Practice (GAMP) – Hardware (HW) Test Environment

HW can be categorized according to both its GAMP 4 HW category (standard or custom) and its function within the test environment. It can be one of three things:

 Part of the system under test, i.e., part of the production HW

 Test HW representing part of the production environment, which may be needed because it is not feasible to include a certain element of the production system in the test environment

A separate test system, which may be used to represent an external system

          Representative Test Environment

As stated previously, the HW environment should represent the production environment as closely as possible.

For example, if the test environment uses a standard network hub of the same type as the production environment, then the substitution introduces a low probability of invalid test results in the production environment. If, however, the network cabling in the test environment uses short patch cables, whilst the real environment has cable runs close to the maximum recommended length, there is clearly a possibility of different network behavior, and additional tests on site may be needed to prove proper network performance.

                    Control of Test Environment

For standard HW (per GAMP 4 HW Category 1), the manufacturer's reference and serial numbers should be recorded.

For custom HW (per GAMP 4 HW Category 2), the version of the item and its controlling specification should also be recorded.

For all test HW, any applicable calibration status should be recorded in the context of the specific tests being performed.

                         Removal from Production Environment

If test HW is added in such a way that it may appear in the production environment, this should be documented as a temporary modification to the production system. Removal of the temporary modification should be documented as well.

Good Automated Manufacturing Practice (GAMP) – Software (SW) Test Environment

Test SW can also be categorized according to both GAMP 4 SW category and its function within the test environment:

Part of the system under test, i.e., part of the production SW

Test SW representing part of the production environment

A separate test system

              Representative Test Environment

The SW environment should represent the production environment as closely as possible.

For example, if the test environment uses a process controller of the same type as the production environment, then the substitution introduces a low probability of invalid test results in the production environment. If, however, a particular interface is simulated in the test environment, a possibility remains that different timing factors or process dynamics could affect operation in the production environment, and additional tests on the production interface may be appropriate.

                        Control of Test Environment

For standard SW (per GAMP 4 SW Category 1, 2 or 3), the manufacturer's reference and version numbers (including installed patches) should be recorded. Any configuration or setup parameters should be controlled.

For configured or custom SW (per GAMP 4 SW Category 4 or 5), the item should be placed under configuration management and the version in use recorded

              Removal from Production Environment

If test SW is added in such a way that it may appear in the production environment, this should be documented as a temporary modification to the production system. Removal of the temporary modification should be documented as well.


Validation – Introduction to Good Automated Manufacturing Practice (GAMP) Test Environments

The environment in which testing is conducted should be determined based on the output of the life cycle risk assessment. The following general principles apply:

  • The proposed test environment should represent the production environment as closely as possible. The differences between the two should be detailed in the Test Specification or Protocol, and should be subject to impact assessment.
  • The test environment should be controlled and recorded to a level of detail that would allow it to be reconstructed if necessary. Such control includes:
      • System hardware (HW) and software (SW) components
      • Test HW (versions, serial numbers, as appropriate)
      • Test SW (version control of any simulations)
      • Test data (version control of any test data sets, test recipes, etc.)
      • Test user accounts
  • Where test HW/SW/data/user accounts are applied as they may appear in the production environment, controls should be available to ensure that they can either be removed cleanly or isolated from use (either logically or in time).
  • Where the test environment is required to undergo a temporary change to facilitate the execution of specific tests, both the change and its removal must be formally documented.

 GAMP Test Environments

In many circumstances it is undesirable to conduct all testing on the final production environment. Common examples include:

  • Non-availability of infrastructure at the point in the project life cycle when testing is carried out.
  • Non-availability of certain interfaces.
  • Requirement to test changes outside of the production environment prior to installation.
  • Requirement to carry out tests which may be destructive to the production environment.

 

The progression of a SW build from a development environment through to the production environment depends on the risk priority associated with the system being installed or the change being made, and on factors such as the ease with which a modification can be removed from the system.

A change to a custom data processing module in a large business system may require progression from a development environment to a test environment, to a validation environment, and then to the production environment. This may be required because the change has a high risk priority and, even if the original module could be restored easily, the test data may remain in the production environment.

Some tests, e.g., Performance Qualification (PQ), or part of it, may need to be conducted in the production environment.