Smartlogic

GAMP – Test Incidents – Analysis, Logging & Classification

Incident Analysis and Logging

This article was written by Ilan Shaya, CEO, a specialist in validation, automation and control

When a test incident occurs during a particular step, the overall test should not be continued if the failure produces an output that prevents entry into a subsequent step. When a test continues after a failure, the failed step should be clearly identified on the test results sheet

It is important to fully record the details of all new test incidents and maintain an index of these incidents

Test incidents may be fed either into an existing change control system or into a separate process for resolving test incidents. An example of an incident report (summarizing details of the incident, proposed solution and retest requirements, review, implementation and closure) is given in GAMP 4, Appendix D6
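GAMP 4, Appendix D6 remains the authoritative template; purely as an illustration, the incident record and index described above can be sketched as a simple data structure. The field names below are assumptions for this sketch, not the GAMP form:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TestIncident:
    """One entry in the test incident index (field names are illustrative)."""
    incident_id: str                  # unique index number
    test_step: str                    # failed step on the test results sheet
    description: str                  # details of the incident
    proposed_solution: str = ""
    retest_required: bool = True
    status: str = "OPEN"              # OPEN -> REVIEWED -> IMPLEMENTED -> CLOSED
    closed_on: Optional[date] = None

incident_index: list[TestIncident] = []

def log_incident(incident: TestIncident) -> None:
    """Record a new incident and keep the index ordered by ID."""
    incident_index.append(incident)
    incident_index.sort(key=lambda i: i.incident_id)

log_incident(TestIncident("INC-001", "Step 4.2", "Output value out of range"))
```

Whether incidents live in a change control system or a separate log, the essential points are the same: a unique index, full details, and a status that is driven to closure.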

Test Incident Classification

In addition to correcting an identified fault, it is important to evaluate test incidents in order to determine their most likely cause. An important part of any Corrective & Preventive Action (CAPA) process is intended to address this issue. Metrics on the causes of avoidable test incidents provide a useful indicator of areas within the overall SW development life cycle that may benefit from improvement activities, to reduce the likelihood of recurrence.
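Such cause metrics amount to a tally over the classified incidents. A minimal sketch (the classification labels are the categories listed in this article; the sample data is invented):

```python
from collections import Counter

# Incident classifications as recorded in the incident log
# (labels follow the categories described below; data is illustrative).
incidents = [
    "Incorrect Programming/Coding",
    "Incorrect Test Data",
    "Incorrect Programming/Coding",
    "Poorly Specified Test Case/Script",
]

def cause_metrics(classified):
    """Count incidents per cause; frequent causes flag life-cycle
    areas that may benefit from improvement activities."""
    return Counter(classified).most_common()

# Most frequent cause first: here, Incorrect Programming/Coding.
print(cause_metrics(incidents))
```

Trending these counts over successive projects shows whether the improvement activities actually reduce recurrence.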

Typical test incident types that occur in SW testing include, but are not limited to, those described below

Incorrect SW Installation

Errors such as program dumps, abnormal terminations, or inability to access applications often result from a failure in the installation or configuration process, or from the installation of a wrong SW version

When any of these errors is determined to be the cause of the incident, it is usually necessary to postpone any further testing until the test environment is correctly set up

Incorrect Programming/Coding

Incidents may arise when actual system outputs fail to agree with the required system outputs. These incidents should be noted, and, unless the defects are considered sufficiently important to invalidate the rest of the test steps, execution of the test case can continue

Once the cause of a defect is identified, the defect should be corrected and the corrected SW included in a subsequent SW build for retesting

Incorrect Test Data

Testing failures may occur as a result of failure to create correct data in the test database, in advance of the test case being run

Inadequate Specification – Incorrect Understanding of Program Functionality

Testing failures may occur because the controlling Design Specification does not state clearly enough what is required from a particular piece of functionality. This may be particularly evident when a custom system is developed to satisfy a new business process that may not yet be fully established

Poorly Specified Test Case/Script

Tests can fail if the Test Case or Test Script (or other relevant document) is incorrect and indicates an outcome different from that documented in the corresponding requirements

When a Test Case or Test Script has been modified during execution, a test incident should be raised to manage changes to the Test Case or Test Script, and to confirm the pass/fail status of the test

The incidence of this type of error should be minimized by ensuring an independent review of the test case before approval, including a cross-check of the expected output as specified in the controlling requirement.

Incorrect Design Solution

Test errors can arise where the SW works correctly against the design, but the design implemented does not satisfy the original stated requirements, or fails to reflect subsequently agreed change requests

Inconsistent Controlling Design Specification

Test incidents can occur where the content of the relevant controlling Design Specification contains inconsistencies. It is, therefore, essential that this specification be corrected to prevent further confusion

This incident type should not be confused with Incorrect Programming/Coding, where the SW coded does not match a particular requirement of the controlling Design Specification, and the code needs to be corrected

The controlling Design Specification inconsistencies should be logged in the incident management system so the specification can be corrected following the appropriate document management system

Unexpected Test Events

During execution of a Test Case or Test Script, the tester may notice an anomaly in the SW that, although not affecting the success of the overall test objective, nevertheless requires further investigation. This event should be recorded in the incident management system in order that the controlling Design Specification, and the corresponding Test Case or Test Script can be updated to reflect the presence of the anomaly

Test Execution Errors

Tests can be classified as failures if the tester fails to follow the steps outlined in the Test Script, or the overall Test Protocol or Test Specification governing that activity

Missing signatures and dates, and missing or incorrect cross-reference information, are other areas that could cause a test to be considered a failure
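A records completeness check of this kind can be automated. The sketch below flags any required field that is missing or empty in a test step record; the field names are assumptions for illustration:

```python
# Fields every executed test step record must carry for the test
# to be considered complete (field names are illustrative).
REQUIRED_FIELDS = ("result", "tester_signature", "date", "cross_reference")

def execution_errors(step_record: dict) -> list[str]:
    """Return the names of required fields that are missing or empty;
    any hit could cause the test to be considered a failure."""
    return [f for f in REQUIRED_FIELDS if not step_record.get(f)]

record = {
    "result": "pass",
    "tester_signature": "I.S.",
    "date": "",                    # left blank by the tester
    "cross_reference": "URS-4.2",
}
print(execution_errors(record))   # the empty date field is flagged
```

Run before a results sheet is approved, a check like this catches execution errors while the tester can still correct them contemporaneously.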

Force Majeure

Test incidents of this nature reflect an unexpected event over which the test or project team has no control, and which brings testing to a premature halt. These events can be raised as issues by the project team, but are generally resolved outside of the project

Validation – GAMP – Definition of Terms

Good Automated Manufacturing Practice (GAMP) Definition of Terms

Definition of Terms Used in Testing Environments

This document defines a set of testing terms used within the pharmaceutical and other life sciences industries (consistent with those used in GAMP 4), the Information Technology (IT) industry, and the control and automation industries, in order to facilitate understanding of testing environments.

It is recommended that a consistent set of testing term definitions be prepared on an organizational or project basis where members of User and Supplier organizations work together. It is helpful to agree on these definitions prior to contract signing, to ensure that contractual issues are based on a common understanding of activities and milestones.

The definition of terms listed below is based on three sources:

GAMP 4 – Definitions from the GAMP Guide for the Validation of Automated Systems

IEEE – Definitions from IEEE 100, the Authoritative Dictionary of IEEE Standard Terms

BCS – Definitions from Working Draft: Glossary of Terms Used in Software Testing, version 6.2, produced by the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST)

Terms and Definitions

Terminology – Definition (Source)

Acceptance Criteria – Criteria that a system or component must satisfy in order to be accepted by the User, customer, or other authorized entity. (GAMP 4, IEEE)

Acceptance Test – Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, and to enable the customer to determine whether or not to accept the system. See also Factory Acceptance Test (FAT) and Site Acceptance Test (SAT). (GAMP 4, IEEE)

Black Box Testing – See Functional Testing. (IEEE)

Boundary Condition Testing – Testing for correct operation when one or more variables are at a limiting value or a value at the edge of the domain of interest. (IEEE)

Calibration – Set of operations that establishes, under specified conditions, the relationship between values indicated by a measuring instrument or system, or values represented by a material measure or a reference material, and the corresponding values of a quantity realized by a reference standard. (GAMP 4, ISO 10012)

Challenge Testing – Testing to check system behavior under abnormal conditions. Can include stress testing and deliberate challenges, e.g., to the security access system, data formatting rules, possible combinations of operator actions, etc.

Commissioning – Process of providing to the appropriate components the information necessary for the designed communication between them. (IEEE)

Emulation – A model that accepts the same inputs and produces the same outputs as the given system. (IEEE)

Environmental Testing – Testing that evaluates system or component performance up to the specified limits of environmental parameters, e.g., temperature, humidity or pressure.

Firmware (FW) – Combination of a hardware (HW) device, computer instructions, and data that reside as read-only software (SW) on that device. (IEEE)

Factory Acceptance Test (FAT) – Acceptance Test in the Supplier's factory, usually involving the customer. Contrast with Site Acceptance Test (SAT). (GAMP 4, IEEE)

Functional Testing – Testing that ignores the internal mechanism of a system or component, and focuses solely on the outputs generated in response to inputs and execution conditions. Also known as Black Box Testing. (GAMP 4, IEEE)

Hardware (HW) – (1) Physical equipment used to process, store, or transmit computer programs or data. (2) Physical equipment used in data processing, as opposed to programs, procedures, rules, and associated documentation. (IEEE)

HW Testing – Testing carried out to verify the correct operation of system HW, independent of any custom application SW. (IEEE)

Installation Qualification (IQ) – Documented verification that a system is installed according to written and pre-approved specifications. (GAMP 4, IEEE)

Integration – Process of combining SW components, HW components, or both, into an overall system; sometimes described as SW Integration and System Integration, respectively. (IEEE)

 

Integration Testing – (1) Testing in which SW components, HW components, or both, are combined and tested to evaluate the interaction between them. (2) Orderly progression of testing of incremental pieces of the SW program, in which SW elements, HW elements, or both, are combined and tested until the entire system has been integrated to show compliance with the program design and the system capabilities and requirements. (IEEE)

Instance – Single installation of a SW application (plus associated databases, tools and utilities). Usually applied to configurable IT systems.

Load Testing – Stress testing conducted to evaluate a system or component up to the limits of its specified requirements.

Loop Testing – Testing in which control system inputs and outputs are exercised and their functionality verified.

Market Requirements Specification – Statement of generic industry requirements used by the Supplier as an input to its product development life cycle. (IEEE)

Middleware – Combination of HW, computer instructions, and data that provides infrastructure used by other system modules. (IEEE)

Module Testing – Testing of individual HW or SW components, or groups of related components. (IEEE)

Negative Testing – Testing aimed at showing that the SW does not work. (BCS)

Operational and Support Testing – (1) Testing conducted to evaluate a system or component in its operational environment. (2) All testing required to verify system operation in accordance with design after the major component is energized or operated. (IEEE)

Operational Qualification (OQ) – Documented verification that a system operates according to written and pre-approved specifications throughout all specified operating ranges. (GAMP 4, IEEE)

Performance Qualification (PQ) – Documented verification that a system is capable of performing or controlling the activities of the processes it is required to perform or control, according to written and pre-approved specifications, whilst operating in its specified environment. (GAMP 4, IEEE)

Positive Testing – Testing aimed at showing that the SW does meet the defined requirements. (BCS)

 

 

Qualification – Process to demonstrate the ability to fulfill specified requirements. (GAMP 4, ISO)

Simulation – A model that behaves or operates like a given system when provided with a set of given inputs. (IEEE)

Site Acceptance Test (SAT) – Acceptance Test at the customer's site, usually involving the customer. Contrast with Factory Acceptance Test (FAT). (GAMP 4, IEEE)

Software (SW) – Computer programs, procedures, and associated documentation and data pertaining to the operation of a computer system. (IEEE)

Stress Testing – Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. (IEEE)

Structural Testing – Examining the internal structure of the source code. Includes low-level and high-level code review, path analysis, auditing of the programming procedures and standards actually used, inspection for extraneous "dead code", boundary analysis, and other techniques. Requires specific computer science and programming expertise. Also known as White Box Testing. (GAMP 4)

Supplier – Organization or person that provides a product. (GAMP 4, ISO)

System Testing – Testing conducted on a complete, integrated system to evaluate its compliance with its specified requirements. (GAMP 4)

Test – (1) Procedure in which an activity is executed under specified conditions, the results are observed and recorded, and an evaluation is made of some aspect of the system or component. (2) Determination of one or more characteristics according to a procedure. (GAMP 4, ISO, IEEE)

Test Case – Set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (GAMP 4, IEEE)

Test Plan – Document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel in charge of each task, and any risks requiring contingency planning. (GAMP 4)

 

Test Procedure – Detailed instructions for the setup, execution, and evaluation of results for a given test case. (GAMP 4, IEEE)

Test Protocol – See Test Specification. (IEEE)

Test Script – Documentation that specifies a sequence of actions for executing a given test. (GAMP 4, IEEE)

Test Specification – Document describing the scope, management, use of procedures, sequencing, test environment, and prerequisites for a specific phase of testing.

Test Strategy – See Test Plan.

Unit Testing – Testing of individual HW or SW units, or groups of related units. (IEEE)

Usability Testing – Testing the ease with which Users can learn and use a product. (BCS)

User – Person or persons who operate or interact directly with the system. (GAMP 4)

Validation – Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes. (GAMP 4, FDA)

Verification – Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. (GAMP 4, ISO)

White Box Testing – See Structural Testing. (IEEE)

Validation – Validation Case Study – Part 3

This article was written by Ilan Shaya, validation, automation and control expert

Documentation for IQ and OQ – to be checked at PDI/FAT

Welding reports

Surfaces finishing test reports

PDI and FAT results

As-built drawings, 3 sets in nominated project language, plus 1 set in English

As-installed versions of all documentation submitted for design review

Back-up software on diskette/CD-ROM, as appropriate, ready for re-installation

Machine configuration/start up, set-up and commissioning data, including tabulation of all change parts and identifications

Full machine parts list

Complete documentation (protocols and method statements) required for equipment DQ, calibration, IQ and OQ specified for manual and automatic operations

Calibration certificates for all required instruments, traceable to NIST

Specification for all parts manufactured by sub-contractors

Full identification of all parts according to the P&ID, including valves, regulators, instruments, pipes, media and flow direction arrows

Tags for electrical and pneumatic wiring

Documentation to ensure qualification in compliance with FDA and EMEA, as outlined above

DQ Protocols Including PC/PLC

Approval

Statement of purpose

System description

Traceability matrix
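A traceability matrix of the kind listed above maps each requirement to the protocol items that verify it. A minimal sketch of how such a matrix can be held and checked for gaps; the requirement and protocol IDs are invented for illustration:

```python
# Map each requirement ID to the protocol items that verify it
# (IDs are hypothetical examples, not from any real project).
traceability = {
    "URS-001": ["DQ-3.1", "OQ-5.2"],
    "URS-002": ["DQ-3.2"],
    "URS-003": [],          # not yet covered by any protocol item
}

def uncovered(matrix):
    """Return requirements with no verifying protocol item --
    exactly the gaps a traceability matrix is meant to expose."""
    return [req for req, items in matrix.items() if not items]

print(uncovered(traceability))  # → ['URS-003']
```

The same structure inverted (protocol item to requirement) supports the opposite question: whether every test traces back to an approved requirement.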

IQ Protocols Including PC/PLC

Approval

Statement of purpose

System description

Specifications

Materials in product contact

Engineering drawings

Subsystem inspection

Components

Piping

Valves

Instrumentation and calibrations

PC/PLC requirement definition

Software development documentation

Manual / technical literature

Test equipment data sheet

Component data sheets

Utility requirements

Exceptional conditions, if required

Summary

OQ Protocols Including PC/PLC

Statement of purpose

System description

Manual and automatic control over all modules through HMI

PC/PLC validation protocols

Step-by-step checking of the Schedule of System Operation (SSO)

Alarm and message reaction

HMI synoptic screen vs. P&ID

System operation tests

Operation tests for HMI to ensure compliance with 21 CFR part 11

Application software certification

HW documentation

SW code

SW components data sheets

HW components data sheets

PLC configuration

Graph printout

Synoptic screen list and printout

Operation screen list and printout

Parameters list screen

Messages and alarms list, and printout

HW inspections

SW inspections and application

Approved schematic description

Ladder diagram validation

PLC capabilities

PLC accuracy

SW development documentation

List of control devices

Exceptional conditions

Reports – verification of authorization inspection

PQ Protocols Including PC/PLC

Statement of purpose

Analysis procedures

Staff instruction

Plan for sampling

Criteria for acceptance

Validation – Validation Case Study – Part 2

This article was written by Ilan Shaya, validation, automation and control expert

Validation Requirements

Documentation for Initial Tender

Project schedule and milestones, with design and construction details

Project quality plan

Supplier’s local subsidiary/agent

Supplier’s documentation that the system / configurable software versions are released to the market and are FDA/EMEA compliant

Compliance with the 21 CFR Part 11 operational requirements – the contractor should ensure that all the requirements described below are covered

Contractor to state the system/product status and implementation planned for each requirement – User's approval pre-delivery

Documentation for Design Review

P&ID for the system

Electrical and pneumatic schematics

Installation data:

General arrangement drawings

Floor loading

Utility requirements

Details of electronic records and approvals that may be subject to regulatory controls under 21 CFR Part 11

Instrumentation documentation

Main components specification:

Equipment

Instrumentation

Valves

Piping

Control system

Pipe welding documentation

General installation book

Procedures

User guide

Security

Preventative maintenance + spare parts list

Operation procedure

Pressure test procedure

Leak test procedure

Passivation procedure

Calibration

Functional Design Specifications for:

Complete manufacturing system

Software (SW) and hardware (HW)

Functional Design Specifications

System Detail Design Specifications for HW and SW.

SW source code, with comments, for customized SW

SW complete version history

HMI alarm list, message list and graphical displays

I/O list

List of materials:

Product contact materials

Potential product contact lubricants

Welding procedures for product direct and indirect contact parts

Pre-delivery Inspection and Factory Acceptance Test (FAT) protocols

Steel certificates and gasket certificates

Documentation Prior to FAT or Pre-Delivery Inspection – PDI

Progress visit report, which will include:

Mechanical and technical development

Automation and SW development

Supplier’s factory test results for:

Unit tests – the test protocols shall be traceable to the low-level design documents and shall be approved by the user prior to execution. Approval of the report shall be performed by representatives of the user's validation team and IT QA.

Automation and SW development

Integration tests – simulation testing

Approved PDI and FAT protocols

Commissioning

MCCR – Mechanical Completion Check Report

Purpose

Scope

Responsibility

Execution instructions

System scope

System description

Drawing verification

Equipment verification

Valve verification

Instrument installation and calibration

Utility verification

Documentations verification

Piping Verification

Sample Point Verification

Safety, health and environment verification

Slopes verification (if relevant)

Electrical and communication activities

Pump installation verification, if relevant

Heat exchanger installation verification, if relevant

Air break verification, if relevant

Dead leg verification, if relevant

CE – Commissioning Execution

Purpose

Scope

Responsibility

Execution instructions

System scope

System description

System startup

Equipment verification

Main equipment operation checks

System FDS (SSO) verification

System performance testing

Validation – Validation Requirements – Case Study – Part 1

This article was written by Ilan Shaya, validation, automation and control expert

Validation Requirements is a document which may be part of the validation documentation that describes the validation strategy for a system or subsystem. This document is generic; the system or subsystem may include a PC with Human/Machine Interface (HMI), a Programmable Logic Controller (PLC), virtual hardware (HW), software (SW), and other components designed to maintain the user's facility in proper conditions specified by the user.

Validation Requirements Document Contents

This document is structured in a relatively standard fashion, with predetermined chapters and sections, where the final contents are tailored according to the type and size of the system under validation.

The main chapters and sections of a Validation Requirements document are:

Responsibility

Validation Requirements

Documentation for Initial Tender

Documentation for Design Review

Documentation Prior to Factory Acceptance Test (FAT) or Pre-Delivery Inspection – PDI

Commissioning

Documentation For Installation and Operational Qualification (IQ and OQ) – to be checked at PDI/FAT

Design Qualification (DQ) Protocols (Including PC/PLC Architecture)

Installation Qualification (IQ) Protocols (Including PC/PLC)

Operational Qualification (OQ) Protocols (Including PC/PLC)

Performance Qualification (PQ) Protocols

Computerized System Validation

Responsibility

This section lists the responsibilities of the contractor and the user, and the required contents of the documents composing the validation file

The contractor is responsible for creating and performing the DQ, Design Review, Commissioning, IQ, and OQ validation protocols

The user is responsible for creating and performing the PQ validation protocols

Design Qualification (DQ) – The design of the system will be documented and checked in the Design Specification. This specification will include details of the system and must be traceable to the URS and BOD documents

Mechanical Completion Check Report (MCCR) of the system will be documented and checked only by the contractor. This document will check system readiness for the IQ

Commissioning Execution (CE) of the system will be documented and checked only by the contractor. This document will check system readiness for the OQ

Installation Qualification –IQ

IQ will establish documented evidence that the system is installed according to the manufacturer's specifications and user requirements, and will assure that the environment is appropriate for its intended purpose

Each IQ protocol will include a deviation report appendix, which describes the deviations (if any exist) of the specific system; the contractor will be responsible for correcting them

Operational Qualification – OQ

OQ will establish documented evidence that the system operates according to the manufacturer's specifications and user requirements, and will assure that the environment is appropriate for its intended purpose

Each OQ protocol will include a deviation report appendix, which describes the deviations (if any exist) of the specific system; the contractor will be responsible for correcting them

Performance Qualification- PQ

PQ will establish documented evidence that the system performs according to the manufacturer's specifications and user requirements, and will assure that the environment is appropriate for its intended purpose

The PQ protocols are user's responsibility

Validation – Functional Design Specification – FDS

This article was written by Ilan Shaya, validation, automation and control expert

The Functional Design Specification (FDS) is part of the validation documentation that details the solution to be provided to meet the user's requirements. It should be approved by the user and should form the basis of both the hardware (HW) and software (SW) designs.

The FDS provides the basis of the system design and is used to verify and validate the system during testing, ensuring all the required functions are present and that they operate correctly. It details all the functions, operator interactions, control, and sequencing associated with the system, thus allowing the user to confirm, before the system is developed, that the proposed solution fully meets its requirements.

FDS Contents

The FDS is structured in a relatively standard fashion, with predetermined chapters and sections, where the final contents are tailored according to the type and size of the system under validation. The FDS presented here includes only the technical contents; it does not include commercial and contractual requirements, which are generally also included.

The main chapters and sections of an FDS protocol are:

Relationship to Other Documents – lists all documentation used in the production of the FDS, including supplier and user documents (such as the URS) and drawings. Each document listed should include the document/drawing number and version number; this preserves traceability as documents are updated throughout the project life cycle, so that any impact on the FDS can be identified.

System Overview

Process Overview – includes a description of the process being controlled; this may be taken from the URS, enhanced to detail the interaction with the control system.

Control System Overview – includes a detailed control system description, covering all the components and the interactions between the systems; block and network diagrams can be used to show the system architecture in detail

Scope and Limits of Supply

Scope of Supply – includes a list of deliverables, panels, computers, software, etc

Limits of Supply – includes all items outside the scope of supply that are required by the project; where interfacing to 3rd party systems is involved, constraints and assumptions should be included

System Functions and Facilities

Operation Modes – includes all modes of operation for the system

Functional Operation – divides each of the sequences/functions into logical areas (determined by the process), and provides a complete description of each area

Operator Requirements – describes the interface between the operator and the detailed function

Human/Machine Interface- HMI – details all points of operation, local terminals, remote terminal, message displays, push button stations, etc.

Report Outputs – details the format of all reports generated by the system; an explanation of the report contents should be included

System Data – all data gathered, generated or calculated by the system should be detailed

System Interfaces – provides complete details of all inputs to and outputs from the control system

System Attributes

Availability – defines expected "working" time of the system between failures

Maintainability – details issues related to maintainability of the plant, in particular for systems that require regular maintenance to ensure reliable operation

Transport and off loading

Power and services required

Connections to existing/3rd party systems

Changes to existing plant or hardware (HW) equipment

Changes to existing software (SW) systems

Training – details the formal and informal training to be supplied under the contract

Design Factors – details special factors relating to the design of the system, standards and methodologies to be followed for both the HW and SW development

Development Factors

Project Control – includes or makes reference to project plans and timescales, along with details of quality requirements, standards, test and integration and configuration management

Resource Requirements – includes the basic project team provided by the supplier, the access required to the customer's premises, and input and timing required by the customer into the project

Test procedures – includes details of all test documentation and the responsibilities for testing, both offline and online

Module and Integration Testing

Factory Acceptance Testing (FAT) – performed at the supplier's premises

Site Acceptance Testing (SAT) – performed on completion of commissioning to demonstrate pre-handover system operation

Note:

As the final contents of the FDS are tailored according to the type and size of the system under validation, and this document is generic, it covers test procedures that may not be necessary in small or simple systems. The following sections cover the FDS issues that require further details

More about the system functions and facilities can be found in our next article: FDS – System Functions & Facilities

Validation – URS Contents

This article was written by Ilan Shaya, validation, automation and control expert

A URS usually presents the user's requirements for installing and operating a system designed to monitor and control specified conditions at its facility

The user's requirements may be divided into four categories:

Installation Requirements

Operation Requirements

Regulatory Requirements

HMI Requirements

Installation Requirements

These requirements are intended to cover all the issues regarding system installation to ensure its proper functionality and reliability. Examples of this type of requirements are:

List of required hardware (HW) components, such as the system PC, a Programmable Logic Controller (PLC), and various environmental-condition sensors and control devices

Labeling and identification requirements for each HW component

Requirements for the software (SW) programs installed on the system PC

Storage capacity

Required connections to various types of sensors, communication units, temperature, humidity and pressure transmitters, illumination devices, etc

Communication compatibility with equipment already installed at the user's facility without extra sensors

Operation Requirements

These requirements cover all the operations that the system must be capable of performing. Examples of this type of requirements are:

Environmental conditions (such as pressure, temperature and humidity) to be monitored and controlled

Type of systems to be monitored and controlled, such as Heating, Ventilation and Air Conditioning (HVAC) system, types of sensors, etc

Computerized system capabilities and starting conditions

System capabilities to recover from failures

Internal tests to be performed regularly, and alarm indications to be issued in case of failure

Provision of current and historical alarms regarding all parameters in any case of deviation from the limits specified in the system

System real-time screens display capabilities

Provision of the following data and HMI displays

Synoptic screens for displaying online values and status

Data logging and storage of historical trends, events and alarms

Tabular screens for displaying events and alarms

Graphical screens for displaying trends

Display of the following information for each alarm

Status – new/acknowledged alarm

Time at which the alarm was activated

Parameter/Tag/Name of the module that activated the alarm

Alarm Description

Alarm Priority
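The alarm fields listed above can be sketched as a simple record type. This is an illustrative Python model only, not the HMI's actual data structure; the field names and the priority convention are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class AlarmStatus(Enum):
    NEW = "new"
    ACKNOWLEDGED = "acknowledged"

@dataclass
class AlarmRecord:
    """Illustrative model of the five alarm display fields listed above."""
    status: AlarmStatus      # new/acknowledged alarm
    activated_at: datetime   # time at which the alarm was activated
    tag: str                 # parameter/tag/name of the module that activated it
    description: str         # alarm description
    priority: int            # alarm priority (assumed convention: 1 = highest)

    def acknowledge(self) -> None:
        self.status = AlarmStatus.ACKNOWLEDGED

# Example: an operator acknowledges a new high-temperature alarm
alarm = AlarmRecord(AlarmStatus.NEW, datetime(2024, 1, 1, 8, 30),
                    "TT-101", "Temperature above high limit", priority=1)
alarm.acknowledge()
```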

Display of alarms to warn the user, collect alarm history, and enable the user to view current and historical alarms. The system alarms shall include

Component malfunction/failure

Irregularity in parameter reading – such as disconnection of communication lines

Parameter values exceeding the high/low limits

Deviations of system operation from predefined parameters/operations

Capability of configuring the graph parameters according to

Date and time

Measured parameters

Predefined number of displayed parameters

Trend graphs with maximum and minimum allowed limits of the monitored parameters

Logging interval defined by the user and configured by the supplier

Capability for authorized user personnel to define low and high limits and a delay time for each alarm parameter
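The limit-and-delay requirement above can be sketched as follows: a reading must stay outside the configured band for the full delay time before the alarm trips, which filters out momentary spikes. This is a hypothetical illustration of the requirement, not the actual controller logic; in practice this check runs in the PLC or SCADA layer.

```python
class LimitAlarm:
    """High/low limit check with a delay time: the value must remain outside
    the [low, high] band for at least delay_s seconds before the alarm trips."""

    def __init__(self, low: float, high: float, delay_s: float):
        self.low, self.high, self.delay_s = low, high, delay_s
        self._out_since = None   # timestamp when the value first left the band
        self.active = False

    def update(self, value: float, now: float) -> bool:
        """Feed one reading with its timestamp; return True while the alarm is active."""
        if self.low <= value <= self.high:
            self._out_since = None   # back in range resets both delay and alarm
            self.active = False
        elif self._out_since is None:
            self._out_since = now    # start the delay timer
        elif now - self._out_since >= self.delay_s:
            self.active = True       # out of range for longer than the delay
        return self.active
```

For example, with limits of 2.0 and 8.0 and a 5-second delay, a reading of 9.0 raises the alarm only once it has persisted for five seconds.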

You can read about URS regulatory and HMI requirements at this link: URS – Regulatory & HMI Requirements

*Here are some examples of the PLCs used by SmartLogic: 6XV1830-0EH10, 6ES7131-4BF00-0AA0, 6ES7193-4CA40-0AA0, 6ES7134-4GD00-0AB0, 6ES7193-4CA40-0AA0, 6ES7138-4CA01-0AA0, 6ES7193-4CC20-0AA0, 6ES7590-1AB60-0AA0, 6ES7511-1AK00-0AB0, 6ES7954-8LP01-0AA0, 6ES7155-6AU00-0BN0


Automation and Control – Create the Generic PLC Model

A PLC is a digital computer used for the automation of electromechanical processes, such as control of machinery on factory assembly lines.

PLCs are usually programmed using application software (SW) on personal computers (PCs). The PC is connected to the PLC through Ethernet, RS-232, RS-485 or RS-422 cabling. Most PLCs used by SmartLogic when designing automation and control systems are Siemens and Allen-Bradley PLCs. The programming SW allows entry and editing of the ladder-style logic. Generally, the SW provides functions for debugging and troubleshooting the PLC software, for example, by highlighting portions of the logic to show current status during operation or via simulation. The SW can also upload and download the PLC program for backup and restoration purposes.

This is how to create the generic PLC model:

Important: Each device included in the project that will be using the alternate configuration must have a STAT_PLC model. If the STAT_PLC model is not selectable from the device configuration screen, it can be added to the CIMPLICITY configuration by editing the IC646TME000.MODEL configuration file located in the BSM_DATA subdirectory of the original CIMPLICITY distribution.

Add the following line to the file using a text editor.

MB_TCPIP|STAT_PLC|35

For existing projects

Important: It is strongly recommended that entries in the .ini file be restricted to devices with a model type of STAT_PLC or Generic PLC.

Refer to the product documentation for instructions for creating the STAT_PLC model.

Create the Generic PLC model to use in a project using the Modbus Ethernet protocol

Click Tools>Command Prompt on the Workbench menu bar.

Type cd master in the Command Prompt window and press Enter.

Type idtpop model and press Enter.

Type notepad model.idt and press Enter.

Add the following lines:

MB_TCPIP|Generic PLC|180

MB_TCPIP|STAT_PLC|35

Save model.idt and close the text editor

Type scpop model at the command prompt and press Enter

Close the Command Prompt window

Perform a configuration update in the project's Workbench

Note: The STAT_PLC model sizing is different from the Generic PLC model if you do one of the following:

Define the parameter Use These Domain Sizes to be 0

Do not specify all of the domains.


Validation – FRS for Compliance with 21 CFR Part 11

Functional Requirements Specification (FRS) Regarding Requirements for Compliance with 21 CFR Part 11

This FRS presents SmartLogic's functional requirements in response to the User Requirements Specification (URS). These functional requirements should be met in order to ensure that the Control and Monitoring System complies with 21 CFR Part 11.

This FRS must be considered for the system design, build, installation, operation and testing requirements, and for traceability purposes along the product life cycle up to the Operational Qualification (OQ) stage.

                              Responsibility

The Validation Engineer is responsible for writing this protocol. The Control, Automation & Validation Engineer is responsible for ensuring the preparation and approval of this protocol.

The Control Engineer, Division Process Engineer and QA Manager are responsible for approving this document before development and on-site implementations.

The following sections list the functional requirements determined by the relevant groups for the system upgrade. Each functional requirement number is followed by the corresponding user requirement paragraph number for design qualification purposes.

                            Glossary

ER – Electronic Record

DB – Database

FRS – Functional Requirements Specification

HMI – Human/Machine Interface

HSP – High Set-Point

HW – Hardware

IQ – Installation Qualification

LSP – Low Set-Point

OQ – Operational Qualification

OS – Operating System

PLC* – Programmable Logic Controller

QA – Quality Assurance

SCR – Screen

SOP – Standard Operating Procedures

SP – Set-Point

SSO – Schedule of System Operation

SW – Software

TP – Test Point

URS – User Requirements Specification

Requirements for Meeting 21 CFR Part 11

                        Top-Level Requirements

This section covers the proposed solutions for meeting 21 CFR Part 11 presented in the URS for a new WinCC HMI System. This system must provide the following five main functionalities:

Ensure the system integrity

Control the access to the system by logical security

Audit events that create and modify electronic records

Apply electronic signatures to the system

Backup and archive data to ensure record integrity in case of failure
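As an illustration of the audit requirement above, one common pattern is a hash-chained, append-only audit trail: each entry records who did what and when, and includes the hash of the previous entry so that after-the-fact modification is detectable. This is a generic sketch under those assumptions, not the WinCC mechanism; all field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(user: str, action: str, record_id: str, prev_hash: str) -> dict:
    """Build one tamper-evident audit-trail entry (illustrative field names).

    The entry is hashed over its canonical JSON form, including the previous
    entry's hash, so altering any earlier entry breaks the chain.
    """
    entry = {
        "user": user,
        "action": action,
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

# Example chain: a set-point change followed by an alarm acknowledgement
e1 = audit_entry("operator1", "modify_setpoint", "TT-101", prev_hash="0" * 64)
e2 = audit_entry("operator1", "acknowledge_alarm", "TT-101", prev_hash=e1["hash"])
```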

                      Detailed Requirements

This section describes SmartLogic's solutions that will meet the detailed requirements listed in the URS. These requirements are divided into the following categories for the sake of clarity:

Electronic Records

Security

Audit Trail

Archive

Backup

* Here are some examples of the PLCs used by SmartLogic: 6XV1830-0EH10, 6ES7131-4BF00-0AA0, 6ES7193-4CA40-0AA0, 6ES7134-4GD00-0AB0, 6ES7193-4CA40-0AA0, 6ES7138-4CA01-0AA0, 6ES7193-4CC20-0AA0, 6ES7590-1AB60-0AA0, 6ES7511-1AK00-0AB0, 6ES7954-8LP01-0AA0, 6ES7155-6AU00-0BN0

Validation – Testing Process Automation Systems

Testing Responsibilities – Supplier and User

Where a Supplier has been assessed and its quality management system found to be acceptable, the User may benefit from the testing already carried out as part of the product development life cycle in order to minimize additional testing.

Similarly, the User may benefit from the testing carried out by the Supplier as part of the application, for example, to allow a small sample of tests to be repeated at witnessed FAT.

                        Example of Life Cycle for a Custom Application

Supplier Product Life Cycle

For example, in a life cycle where a new application is developed for an embedded control system, the Supplier's product may be SW or tools used to develop the User-specific application. The Supplier of the control system has been assessed and its quality management system found to be acceptable. Thus, the User does not need to repeat the testing of product functionality. The testing should instead concentrate on the application of the control system.

     End User Application Life Cycle

Assuming that the application is critical to product quality and includes a custom sequence coded as a sequence function chart, a test approach could be agreed that includes the following elements:

Supplier's module testing of the sequence function chart program.

Supplier's integration testing (100% test) and FAT (sampled) to demonstrate correct interaction and configuration of the control system and correct operation of the process equipment.

SAT and IQ/OQ/PQ of the control system as part of the process equipment.

                      Example of Life Cycle for a Standard Application

                   Supplier Product Life Cycle

In the case of a life cycle where a standard application is purchased containing an embedded control system, the control system Supplier has been assessed and its quality management system found to be acceptable. Thus, the User does not need to repeat the testing of product functionality (including mature library functions such as standard control modules) or of the standard application. The testing has to cover only the setup and use of the application.

                    End User Application Life Cycle

Assuming that the application is still critical to product quality, there is now much lower risk associated with the application development, as the configuration is limited to selecting the required functions and entering setup parameters. A test approach could, therefore, be agreed including the following elements:

FAT to demonstrate correct setup of the control system and correct operation of the process equipment.

SAT and IQ/OQ/PQ of the control system as part of the process equipment.