
Project Delivery Process D800

Blog: Biz-Performance, David Brown

D800 – User / System / Acceptance and Integration Testing


DEFINITION

Plan, prepare and conduct all forms of functional testing – eg User Testing, System Testing, Acceptance Testing, Integration Testing, Parallel Running, Model Office.

SUMMARY

Testing is essential at several stages in the delivery of new systems.  The way in which testing is organised will vary according to:
  • business needs,
  • constraints of the system’s design,
  • constraints of staff resourcing,
  • timing requirements, and
  • cutover strategy.
These needs should be considered and an approach should be agreed.  The results will be defined in an Implementation Paper.

Types of functional testing may include:

  • Informal testing – tests or prototyping exercises performed during design and development which do not have formal testing controls applied to them (and are not therefore defined in this process)
  • Configuration Testing – testing that configuration settings comply with business needs.  In short, do the new table values match the business requirements?
  • Unit Testing – Formal Tests applied to each “unit” of functionality within the system
  • System Testing – complete set of formal functional tests covering all aspects of functionality within the system being built
  • Integration Testing – Test of the entire overall business solution including the passage of data to and from other integrated systems
  • Parallel Running – Additional form of testing whereby the results are compared with identical data passing through the old systems
  • Model Office – informal testing where users try out the system as if it were real
  • User Acceptance Testing – further tests that may be needed if user management were not involved in the formal testing of the system, or if they have demanded specific tests to demonstrate that the system is functioning to their satisfaction
  • Data Load / Data Conversion tests – tests that data prepared for cutover to the new system is acceptable.
Note that this process addresses functional testing – technical testing is covered in Process D810.
The approach defined in the Implementation Paper will subsequently be prepared, conducted, reviewed and agreed as appropriate.
Each type of formal testing should be prepared in advance.  The project team should normally work with the user manager primarily responsible for the given area to ensure that the tests are acceptable.  Preparation would normally comprise:
  • definition and agreement of the objectives for the phase of testing,
  • definition and agreement of the objectives for each test,
  • detailed definition of each test,
  • expected results for each test as appropriate.
Tests will be performed in a controlled manner.  All incidents should be reported, logged and investigated.  If corrections are applied, the test must be repeated along with any other test that could have been affected by the changes applied.  Results will be reviewed by the responsible user manager and signed off.  External and internal auditors may also be involved if appropriate.

PATH PLANNING GUIDANCE

Normal Practice

DEPENDENCIES

Prerequisites (Finish-Start):
  • Delivery Approach Definition (DAD) or similar high-level definition of the overall business requirements and solution
Prerequisites (Finish-Finish):
  • all elements of design and development
Dependent procedures (Finish-Start):
  • data load and live running

RECEIVABLES

  • relevant Implementation Papers defining the system’s design

DELIVERABLES

  • Implementation Paper – Approach to testing / Test Plan
  • Test Objectives
  • Test Definitions
  • Test Control Log
  • Test Incident Reports
  • Test Incident Control Log
  • Test Sign offs

TOOLS

  • Guidelines: Testing Standards and Procedures
  • Guidelines: Layman’s guide to programming
  • Example: System Test Signoff Letter
  • Example: Test Conditions Worksheet

DETAILED DESCRIPTION OF TASKS

The Approach to Testing Implementation Paper

The overall approach to formal functional testing is considered in an environmental implementation paper.  In a similar fashion to other implementation papers, it will review the requirements and options relating to testing, then state and justify a recommended approach.  The detail of this approach will amount to a Test Plan for the project.
Requirements, options, and the agreed approach may need to be discussed and agreed with several parties.  In particular, key input may be required from:
  • the project team members who understand in detail the design and scope of the system
  • the project sponsor and other key decision makers within the client organisation who should approve the overall approach and define any specific requirements they have concerning format and signoff etc,
  • the various managers responsible for specific aspects of the overall system,
  • the client organisation’s external auditors who may have requirements so that they will be satisfied that the organisation can rely on the new systems,
  • the client organisation’s internal auditors.

Requirements for testing

The overall objective of all forms of testing is to prove that the system is suitable for live usage.  This would normally involve testing all reasonable aspects of expected usage of the system, including anticipated abnormal events such as user and data errors.  In terms of the effort involved, there is usually far more work required to cover the abnormal situations than the routine processing.
It is probably not possible, and certainly not reasonable, to test every single set of circumstances that can arise.  The testing needs to strike a reasonable balance between comprehensive coverage and risk.
The project team should primarily address those aspects of the system over which they have some influence.  They should test the way in which the package has been set up and any specific modifications they have made.  The team should focus on whether the functionality within the overall business solution is meeting the client organisation’s defined needs.  They should not attempt to test all the program code included in the basic underlying packaged software – that is primarily the vendor’s responsibility, and it is a large part of what the price paid for the packaged software covers.
Examples of typical rules that might be agreed to define the extent of coverage of functional tests (a scripted illustration follows the list):
  • pick at least one case of each valid transaction type
  • take the valid cases through a complete lifecycle, eg
    • creation,
    • amendment,
    • update,
    • processing,
    • archiving,
    • deletion.
  • take the system through its main types of update run, eg
    • first day of the month,
    • normal daily update,
    • month end,
    • year end,
    • first day of new year
  • test the validation rules on each field of each screen involved, eg mandatory / optional / hidden, error messages – fatal / warning etc
  • where allowed, check that transactions can be reversed with the correct effect on all system data and audit trails
  • pick at least one case of each error type for which there is a specific defined error handling process (in both real-time and batch updating if appropriate)
  • test at least one case for each record type where the fields of data are all at their maximum and/or minimum values
  • put through sufficient transactions such that all reports have at least three records per sub-total and three sub-totals per total
  • generate at least one example of each report (or section of report)
  • generate at least one page per report where there is more than a page of data within one sub-total
  • process data from interfaces
    • normal correct data with all normal record types,
    • with reconciliation errors,
    • with fields blank or missing
    • with invalid data in fields (eg alpha characters in numeric fields),
    • with more than one run’s data in a single update run
    • check reversing out the update in the case of feeder system errors
  • for each type of electronic output of the system, test each record type, eg interface tapes, Electronic Data Interchange (EDI) links, automated credit transfers etc
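
Many of these coverage rules translate naturally into scripted, repeatable tests.  As a minimal sketch (in Python, purely for illustration – validate_field and the rule names are hypothetical stand-ins for the system under test), the field-validation rule might be expressed as a parameterised test:

```python
# A parameterised test sketching the "validation rules on each field" rule.
# validate_field is a hypothetical stand-in for the system under test.
import pytest

def validate_field(field: str, value: str):
    """Hypothetical validation routine: returns (ok, message)."""
    rules = {"quantity": str.isdigit, "customer_id": str.isalnum}
    check = rules.get(field, lambda v: True)
    if value and check(value):
        return True, ""
    return False, "fatal: invalid value"

@pytest.mark.parametrize("field,value,expect_ok", [
    ("quantity", "", False),         # mandatory field left blank
    ("quantity", "ABC", False),      # alpha characters in a numeric field
    ("quantity", "999999", True),    # maximum valid value
    ("customer_id", "C0001", True),  # normal valid case
])
def test_field_validation(field, value, expect_ok):
    ok, message = validate_field(field, value)
    assert ok == expect_ok
    if not ok:
        assert message.startswith("fatal")  # fatal errors flagged as such
```

Expressing each agreed rule as one or more cases in such a table makes the coverage explicit and the tests cheap to repeat after corrections.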
The extent of coverage of the tests should be balanced against the risks involved – clearly the navigation systems on the space shuttle deserve more attention than a typing tutor program.  Aspects to consider include:
  • risks of human injury, eg wrong diagnosis of illness, danger of injury from automated equipment
  • risks of successful fraudulent use, eg poor controls, log in security etc
  • risks of accidental financial loss, eg overpayments, failure to identify debtors
  • risks of loss of customer confidence, eg wrong information on statement, bad presentation of materials, late deliveries
  • risks of inefficient working, eg difficult to find information, time delays
  • risks of poor staff morale, eg system difficult or unrewarding to operate
For each of these factors, the magnitude and likelihood of the failure should be balanced against the costs and time required to perform comprehensive testing.

Options

Testing will normally be conducted in several phases comprising different types of testing.  Types of functional testing may include:
Type: Informal tests or prototyping
Definition: Testing elements of the design to validate that the concept of their use is practical.
Comments: Such tests are normally performed during design and development; they do not have formal testing controls applied to them and are not therefore defined in this process.

Type: Configuration Testing
Definition: Testing that configuration / parameter settings comply with business needs.  In short, do the new table values match the business requirements?
Comments: This is similar to the use of informal testing techniques to ensure that the design meets the business needs.  It is, however, performed with planning, consideration and preparation of what the overall needs are and how they can be verified.

Type: Unit Testing
Definition: Formal tests applied to each “unit” of functionality within the system.
Comments: Can be combined with system testing.

Type: Program tests, link tests, unit tests, module tests etc
Definition: Tests of custom programming work – various types will be defined according to the project’s defined approach to custom development.
Comments: Any custom programming work should be properly tested before the custom module is tested as part of the overall system.

Type: System Testing
Definition: Complete set of formal functional tests covering all aspects of functionality within the system being built.
Comments: This is normally the main overall test of functionality.

Type: Integration Testing
Definition: Test of the entire overall business solution, including the passage of data to and from other integrated systems.
Comments: This confirms that the functionality remains valid when all aspects of the system have been combined.  Testing the modules together, using predetermined scripts and data, enables accurate tracking of data through business processes, and known/expected results allow proper analysis of testing successes and failures.  A minimum of three integration test cycles should be performed for each module added into the application, allowing problems to be redressed and the testing scripts to be reiterated until errors are cleared.

Type: Parallel Running
Definition: Form of testing whereby the results on the new system are compared with identical real data passing through the old systems.  This is normally achieved by duplicating the transactions for a specific time period and reconciling the results with the existing system.
Comments: Very often it is not possible to get parallel results because the new system is not a carbon copy of the old one.  Even where it is possible, parallel running usually requires a great deal of user effort to do things in duplicate and to reconcile the results.  It is also common for problems to be detected in the old system as well as the new one.

Type: Model Office or “Simulated Live” Testing
Definition: Informal testing where users try out the system as if it were real – checking that the weekly/monthly/yearly system processes, operational support and manual procedures operate correctly and work in harmony, using simulated normal work and volumes.
Comments: This is not a “mathematical” way of testing the system – there is no real control over the coverage of the tests.  It does, however, allow users to build up experience and confidence in the new system, and it frequently identifies a few final problems with the system.  This technique can also identify problems where the original specifications were unsuitable.

Type: User Acceptance Testing
Definition: Similar in nature to System Testing and Integration Testing – testing of the individual module by the users to validate that the system operates correctly and meets requirements.
Comments: If user management were not involved in the formal testing of the system, or have demanded specific tests to demonstrate that the system is functioning to their satisfaction, there may be a need for further acceptance tests.  In other cases this type of testing adds nothing to System Testing and Integration Testing (provided those tests were recognised by the users as being for acceptance purposes).

Type: Parallel Pilot
Definition: Test running of a small part of the overall business on the new system to test it under real conditions.  Differs from Parallel Running in that not all input need be duplicated on the existing system, and there is no attempt to reconcile the results between the two systems in a controlled manner.
Comments: Useful for locating practical problems, particularly ones due to poor specification or design (which will not normally be located by formal testing).  It probably requires less additional staffing than Parallel Running, but still requires additional resources, often at an already busy time.  It cannot prove that results are compatible with the old systems, as input and output are not strictly duplicated and controlled.

Type: Live Pilot
Definition: Live running of a small part of the overall business on the new system to test it under real conditions before transferring the remainder of the enterprise to the new system.
Comments: Very useful where it is practical.  It identifies practical problems and allows the team to concentrate on a small user population for training, coaching, support etc.  The test population may, however, be subjected to increased risks and pressures, and there is a risk that they will have to revert to the old system in the event of major difficulties.  Check that it is practical to phase in live running for only one part of the overall solution.

Type: Data Load / Data Conversion tests
Definition: Tests that data prepared for cutover to the new system is acceptable.
Comments: Normally required – should be signed off by the responsible users before live running commences.  Note that data is rarely perfect, so the tests should seek to prove it is acceptable rather than correct.

Type: Regression testing
Definition: The repeating of previous tests to check that the system is still performing in the same way.
Comments: Not normally a formal testing phase as such – used when something has been changed, to check that there is no unexpected damage to other parts of the system.  Where tests were fully automated and well documented this may be achieved with relatively little effort, but manual repetition of the various testing programmes can be very time consuming and may take more effort in total than the original testing.

Type: Volume testing
Definition: Creating sufficient transactions and file sizes to simulate normal and peak workloads, thus verifying that response times and processing times will be satisfactory and that file sizes are sufficiently large.  This also gives firm indications of likely timings, allowing effective run scheduling.
Comments: Normally covered in the technical testing – see Process D810.

Type: Operations testing
Definition: Testing of job processing, system backup and recovery procedures etc.
Comments: Normally covered in the technical testing – see Process D810.

Type: Fallback testing
Definition: Tests the contingency plan for reverting to the old system in the event of a failure of the new one.
Comments: May be a wise precaution if the cutover plan allows a fallback contingency.

Type: Disaster testing
Definition: Alternate processing in case of a system failure.
Comments: Normally covered in the technical testing – see Process D810.
There will often be technical options open to the project team.  Such options may include:
  • simulation of real-time users using special software on the central computer system – this would normally allow for the capture of transactions, duplication of transactions etc, followed by their playback as if they were created by genuine on-line users,
  • simulation of real-time users using PCs to generate transactions, often by capturing real-time conversations and replaying them repeatedly upon demand as if they were real users at real terminals connected via the communications network,
  • tools to generate test data in databases, files and masterfiles (a minimal generator sketch follows this list),
  • use of external special facilities, eg validating the format of bankers’ clearing house tape,
  • simulation of remote computer facilities and communications links.
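
As an illustration of the test data generation option, the following minimal sketch builds a small interface feed containing normal records plus the deliberate error cases listed earlier – a blank field, alpha characters in a numeric field, an unknown record type.  The record layout and field names are illustrative assumptions, not the project’s actual interface format.

```python
# Minimal test data generator: normal interface records plus deliberate
# error cases.  Record layout and field names are illustrative assumptions.
import csv
import random

FIELDS = ["record_type", "account", "amount"]

def generate_records(n: int):
    """Yield n normal records, then known-bad records for error-path tests."""
    for i in range(n):
        yield {"record_type": "SALE", "account": f"A{i:05d}",
               "amount": round(random.uniform(0.01, 9999.99), 2)}
    yield {"record_type": "SALE", "account": "", "amount": 10.00}        # blank field
    yield {"record_type": "SALE", "account": "A99999", "amount": "X.X"}  # alpha in numeric field
    yield {"record_type": "????", "account": "A00001", "amount": 1.00}   # unknown record type

with open("interface_test_feed.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(generate_records(20))
```

Because the feed is generated rather than keyed by hand, the same file can be reproduced exactly for repeated test cycles and later regression runs.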
It is often advantageous to adopt approaches that allow for the easy controlled repetition of tests.  This allows test cycles to be repeated until the results are entirely satisfactory.  It may subsequently also be useful for regression testing and for future phases of development.  However, such approaches can often require expensive tools and substantial effort in setting them up.
There is usually a choice to be made whether the testing team should be separate from the development team.  Clearly the developers will have the best understanding of how to perform the tests and what they expect to happen.  Independent testers, however, provide an independent view and are particularly useful for validating the user procedures and User Manual as they will not know how to do things without reference to the documentation.  In a very large project, a system testing team can work in parallel with the main development team to prepare tests and the testing environment.  The choice will depend on the business needs, costs, benefits and risks.
It may also be appropriate to involve internal and external auditors in the definition, conduct and review of tests.  This may avoid any risk that the auditors might find the new system unacceptable or insufficiently proven.  It may also be possible to use junior audit staff to perform some of the work, particularly where it involves the preparation of test cases, the entry of data in a carefully controlled manner and the detailed checking of actual results against expected results – audit staff are often very good at such tasks.

Recommended approach

There is no advantage in repeating tests which have already been satisfactorily performed.  Accordingly, it is good practice to minimise the testing to as few tests and as few cases as possible, provided the defined requirements are met.
Although many types of testing may be viewed as important, the minimum recommended approach would comprise:
  • System Testing – formal test of each functional area, reviewed and signed off by the responsible user manager
  • Integration Test – test of all aspects of the system working together including links with external systems
  • Data Load – test that the starting data on the live system has been set up satisfactorily
Any other type of functional testing would normally require special justification since it is usually possible to address all normal needs with the types described above.

Detail of approach – Test Plan

The detail of the approach may be laid out as a test plan showing the main phases of testing and their timing.  Activities are considered in the following sections.

Test Environment

It is normal to set up special environments so that testing can be performed in a “clean” environment.  Tests should not suffer from unpredictable results due to other uses being made of the same computer environment, eg data, program versions.
It should also be possible to change versions and scenarios freely to allow efficient progress, particularly when parts of a test need to be repeated after a problem has been resolved.  Because of the amount of system changes that can take place during system testing, backup and recovery procedures should be clearly defined and rigidly enforced.
A well-designed testing environment plus the defined test base may also be useful for validating future system releases and ongoing enhancements.

Responsible User Managers

The user manager who will review and sign off each specific aspect of the testing should be identified and agreed.  The responsible user will work with the project team to see that the tests are comprehensive and satisfactory.  They will need to participate actively, for example, to:
  • ensure that the definitions of the tests provide comprehensive and effective coverage of all reasonable aspects of functionality,
  • check that the expected results have been properly calculated,
  • review the impact of any incidents, problems and issues raised and assist in their resolution,
  • ensure that the final outcomes of the tests are satisfactory.
They will “accept” the results on behalf of the client organisation.

Test Leader

The person responsible for the detailed definition and execution of each part of the testing should be identified and agreed.  Often, but not necessarily, this will be a member of the project team, such as the team leader for the corresponding area of development.  Their responsibility is to progress the testing until its satisfactory conclusion.  They do not necessarily do the work themselves.
It is the responsibility of the Test Leader to:
  • Consult with the responsible user manager and all interested parties to agree the definition of the objectives, and test details for the area of testing.
  • Manage the preparation of test scripts.
  • Manage the conduct of the tests, including detailed planning, booking resources (human/environmental), dependencies with development or other testing activities.
  • Progress any Test Incident Control forms that are produced.
  • Review and agree the results of the testing with the responsible user and other interested parties.
  • Ensure the overall delivery of the successfully completed test.

Quality Assurance

A policy on Quality Assurance should be established.  Typically the Project Manager or a specialist will review the conduct of the tests and results for quality.  The client organisation may also require its own further review of the testing process and results.  It may also be necessary or appropriate to involve internal or external auditors in the review.

Test Objectives

Responsible User Managers and Test Leaders for each part of the testing should have been defined and agreed.  It should now be established that these parties, plus any other defined “interested parties”, have the authority to define and approve the formal testing of the system.
The overall test phases should be broken down into major topics whose objectives, purposes and contents are clearly stated, so that they can be reviewed by the Responsible User Manager and other interested parties.  At this stage it will be possible to review whether the coverage of the tests is appropriate.
Test objectives for the defined areas of testing can be listed on the Test Objectives form.

Test Definitions

Testing should encompass all system functions and processes that will be used in production.  This may include for example:
  • System functions and calculations
  • System tables and parameter settings
  • Content and format of all screens and reports
  • Content of all databases and output files
  • Accuracy of program linkages and system interfaces
  • Accuracy and efficiency of the data entry, processing and operating controls
  • Physical and logical system security
  • Hardware and communication operations (eg remote devices and communications, backup and recovery, local and remote printing)
  • Accuracy and efficiency of system job streams
  • System performance with normal and peak volumes
Where possible, it may be convenient to divide the overall testing into a number of cycles, each building on the results of the previous cycle.  In this way typical life cycles can be followed, for example:
  • the life cycle of business transactions, eg new client, corrections to standing data, sales orders, corrections, deliveries, invoice, overdue payment processing, credit stop, part payment, account close, etc.
  • time sequences, eg new month, normal day, month end, year end
  • sequences of interfaces in and out of the system,
  • day in the life of the various users concerned.
When detailing the tests, a number of specific test cases would be defined to evolve throughout this defined sequence of processing cycles.  These cycles would be run and repeated as necessary until adequate results have been achieved, as sketched below.
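
A minimal sketch of such cycle sequencing, assuming illustrative cycle names and a placeholder execution routine:

```python
# Ordered test cycles, each building on the data state left by the previous
# one.  Cycle names and the execution routine are illustrative placeholders.
TEST_CYCLES = [
    ("C1", "Create standing data: new clients, products, tax codes"),
    ("C2", "Business transactions: sales orders, deliveries, invoices"),
    ("C3", "Exceptions: corrections, credit stop, part payments"),
    ("C4", "Period processing: normal day, month end, year end"),
]

def run_cycle(cycle_id: str, description: str) -> bool:
    """Placeholder: execute all test definitions assigned to this cycle."""
    print(f"Running {cycle_id}: {description}")
    return True  # in reality, the reviewed pass/fail outcome of the cycle

# A cycle is repeated until satisfactory before the next one starts.
for cycle_id, description in TEST_CYCLES:
    while not run_cycle(cycle_id, description):
        pass  # resolve incidents, apply fixes, then re-run the cycle
```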
Test definitions may be set out on a Test Definition form.  They should explain the content of the test in language suitable for an intelligent (but non-expert) tester to follow.  They should refer to draft documentation for detail wherever possible, thus both testing the documentation and reducing the work required.  They should not take short cuts by leaving the tester to think of specific ways to perform the test.  For example, “make sure the tax calculation is correct” is a bad test – the calculation should normally be predictable because the data to be used should have been specified, as the sketch below illustrates.
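
To illustrate the point, a minimal sketch contrasting the vague instruction with a properly specified test – the 17.5% tax rate and the calculate_tax routine are purely illustrative assumptions:

```python
# "Make sure the tax calculation is correct" gives the tester nothing to
# work with.  A well-specified test fixes both the input data and the
# expected result in advance (the 17.5% rate and routine are illustrative).
from decimal import Decimal

def calculate_tax(net: Decimal) -> Decimal:
    """Stand-in for the system's tax calculation."""
    return (net * Decimal("0.175")).quantize(Decimal("0.01"))

def test_tax_on_specified_invoice():
    net_amount = Decimal("200.00")    # specified test data
    expected_tax = Decimal("35.00")   # pre-calculated expected result
    assert calculate_tax(net_amount) == expected_tax
```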
Test definitions should be used to document all aspects of the required system including:
  • real time conversations,
  • batch processes,
  • operational, security and control processes,
  • manual processes.
The expected result for each test defined should be clear.  In some cases the required result is self evident, eg “Check password is not displayed”.  In all other cases the expected result should be documented alongside the detail of the test.  This might be in a summary form (eg “fatal error”) or in a detailed manner (eg the expected total on a report).  Where a large amount of data is concerned, the details can be cross-referred to attached spreadsheets.
When completed, the test definitions should be reviewed with the Responsible User to ensure that they are acceptable.

Conducting the tests

Tests should be conducted as rapidly and efficiently as possible.  There will inevitably be many aspects of testing running together and many issues to resolve.  This requires good management and control.  Standards and procedures should be defined for the testing activities.  An example is contained in the Guidelines document – “Testing Standards and Procedures”.
Test runs should be logged in a Test Control Log (for example see TFORM3).  This is used to keep track of progress per test and to control dependencies in the testing process.
There will be numerous cases where tests are not completed as expected.  Each time this occurs a Test Incident Report (eg TFORM4) should be completed unless the situation is immediately resolvable without adjusting the test definition (eg tester had not logged in with the required security level even though the test was correctly defined).
The progress of test incidents should be tracked and controlled in a Test Incident Control Log (eg TFORM5).
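
A minimal sketch of the control records implied by these forms, expressed as simple data structures (field names are assumptions; the real TFORM3 to TFORM5 layouts may differ):

```python
# Simple data structures mirroring the control records described above.
# Field names are assumptions; the real TFORM3-TFORM5 layouts may differ.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class TestRunEntry:
    """One row of the Test Control Log (cf. TFORM3)."""
    test_id: str
    run_date: date
    tester: str
    result: str                                   # eg "passed", "failed", "blocked"
    depends_on: List[str] = field(default_factory=list)

@dataclass
class TestIncident:
    """A Test Incident Report (cf. TFORM4), tracked in the
    Test Incident Control Log (cf. TFORM5)."""
    incident_id: str
    test_id: str
    raised: date
    description: str
    diagnosis: Optional[str] = None
    retests_required: List[str] = field(default_factory=list)  # tests affected by the fix
    status: str = "open"   # open -> diagnosed -> fixed -> retested -> closed
```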
Testing will normally lead to a high volume of incident reports.  The large majority of these will present no real cause for concern.  They are most likely to be caused by:
  • misunderstanding of the expected results, or
  • miscalculation of the results (eg due to unexpected additional data having been entered in a different test).
In such cases, the remedial action will be simply to correct the test documentation.
In some cases, action may be required to change or correct the design documentation.  This would need to be subjected to the project’s defined change control protocol.
In a minority of cases some action may be required to change the system itself.  In such cases, change control procedures should again be followed.  However, it is important that such changes can be made without undue delay as they could hold up the system testing process.
Changes to the system may comprise:
  • corrections or changes to the parameters,
  • corrections or changes to the master data,
  • corrections to programming errors in custom modules,
  • corrections to programming errors in the vendor’s software.
Where external bodies are involved in performing a required correction (eg a separate MIS department team or the package’s vendor), the appropriate liaison route, error reporting procedures and escalation processes should be followed, but overall progress and resolution will still be tracked by the team.
Where the system is changed in any way, previous tests should be repeated if they might have been affected by the changes.  This should be identified on the Test Incident Report form when the incident is analysed and diagnosed.  The failed test must also be repeated, reviewed and signed off in the usual way.  Progress will be tracked through the Test Incident Control Log.

When areas of testing have been reviewed, the formal sign-offs should be recorded in sign-off documents.
