
Testing from the front lines: myths and reality

Blog: Solitaire Consulting Blog


My colleague Paul Every’s blog on the impact of delayed decisions has got me thinking about the entire cradle-to-grave journey of implementing systems. The decision to implement a new system is often the beginning of the journey rather than the end. It strikes me that, much like the typical ending of a Hollywood film, once the hero rides off into the sunset, or the camera closes in on the leading lady beaming her final “I do” at the altar, the story is only just beginning.

Where does the hero find a hotel in the middle of the Nevada desert at night? 

What happens to the happy couple when their flight to that idyllic honeymoon location is cancelled due to air traffic controllers striking at an inopportune time? 

The mundane is often more interesting than the exceptional.


The User Acceptance Testing phase

When implementing or upgrading a system, the User Acceptance Testing phase is often rushed, overlooked, or ignored altogether. In some cases this is wholly justified (when Microsoft release their regular operating system patches, it is unreasonable to expect every activity we normally carry out on our PCs to be tested thoroughly), but when a critical application such as a document management system or an ERP solution is upgraded, at some point we need to carry out some testing, and this is where the misunderstandings often begin.

Testing is the phase of the project where we can determine whether the product and its implementation can fulfil their brief.

At its heart it is a simple concept: 

  • decide what you are expecting,
  • define some test conditions,
  • execute them,
  • gather evidence of success or non-compliance,
  • assess the severity,
  • fix, rinse and repeat.
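The cycle above can be sketched in a few lines of code. This is purely an illustration of the concept, not a real testing tool; the class and field names are invented for the example:

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    """One test condition: what we expect, and the evidence we gather."""
    name: str
    expected: str                               # decide what you are expecting
    steps: list = field(default_factory=list)   # the defined test conditions
    actual: str = ""                            # evidence gathered on execution
    severity: str = ""                          # assessed only on non-compliance

    def execute(self, observed: str) -> bool:
        """Record the observed result and report success or non-compliance."""
        self.actual = observed
        return self.actual == self.expected


# One round of the cycle: execute, gather evidence, assess, then fix and repeat.
case = TestCase(name="Approve a timesheet",
                expected="Timesheet moves to 'Approved' status")
passed = case.execute("Timesheet stuck in 'Submitted' status")
if not passed:
    case.severity = "High"   # assess the severity before deciding to fix
```

The point of the sketch is that every field is decided *before* execution except the evidence and the severity, which is exactly the discipline the list describes.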

The way in which this activity is managed, recorded, and adapted for future use is the testing phase of the project, and is the point where the rubber meets the road. At its most basic, I would evaluate the level of resource and time expended on this activity in the same way as I view backups, insurance, or a life vest: I have never woken up in the middle of the night in a panic because my life vest was too well-built, or because I have too many backups. Obviously, in the real world we inhabit, trade-offs in the Time / Cost / Quality triangle are inevitable, but we ignore the importance of testing at our peril.

The numbers involved in some extreme cases can be eye-watering: in April 2015 Bloomberg terminals in London crashed due to a software glitch that affected 300,000 traders, and a government £3Bn debt sale had to be postponed. An airbag software sensor failure caused 1M Nissan cars to be recalled in 2014. Testing is no guarantee that such events will not occur, but if defects are going to surface, it is best that they are discovered in a safe testing environment rather than in the real world (or in outer space, in the case of the Hubble space telescope). To be clear, a testing phase that identifies a critical flaw and stops a project is a successful phase, in that it has protected us from greater hazard.

In my experience, testing programmes often struggle because of some basic misconceptions:

    Myth 1 – Testing is a one-off activity to demonstrate the application is error-free.

    There are actually two misconceptions here: testing is neither a one-off activity, nor can it ever demonstrate that an application is free of defects. The two go hand in hand. Taking the latter point first, testing can, at best, demonstrate that the way in which we use a particular piece of software to achieve our goals performs as we expect. To do this, we need to be clear about what we expect it to do, and we can then reproduce these conditions in a “safe” sandbox. Which takes us to the first part: our business processes, and their interaction with the software, are not always well documented or readily available. This means that the testing phase is often the first time we collate in one place our expectations of the application’s behaviour. If I had a pound for every time I have encountered the phrase “it needs to work in the same way as the old system”, I would probably be on a tropical beach rather than in my home office! A description of the testing conditions and scenarios should be a living document, linked to project development and change control, so that when our processes evolve, our test cases follow suit, ready for the next round of testing. In practice, the development of detailed test cases is usually done as part of the testing phase itself, putting at risk the critical concept of “coverage” (the degree to which test cases adequately reflect the entirety of our operations).

    Myth 2 – User Acceptance Testing requires detailed step-by-step scripts.

    As ever, testing is a “horses for courses” activity. There are times when exact step-by-step instructions and expected results are defined in great detail, and others when a simple one-liner is adequate. This needs to reflect the expectations of the target audience. I have seen examples where a script simply says “Create and approve a timesheet”, and others accompanied by ten pages of screenshots. The important point is that the result should be unequivocal. The level of detail often depends on the type of project involved. Which conveniently takes me to a corollary of this myth: often the perfect is the enemy of the good. In 1990 NASA launched the Hubble space telescope. Two months later a “spherical aberration” was discovered on its main mirror (in plain English, it needed glasses). A full optical test before launch would have cost $100M and was not carried out, although a much simpler test could also have identified the flaw, had it been performed.

    Myth 3 – Testing is a technical activity that IT can perform.

    As with all good myths, this one has some basis in truth. IT are often the de facto experts and are usually able to carry out basic “smoke testing” that can weed out basic configuration errors before more experienced testers get to work. This is both time-effective and provides a level of “four-eyes” checking that is advisable. But ultimately only someone who can adequately represent the end user can assess objectively whether a product meets expectations.

    Myth 4 – Testing can be carried out by anyone.

    As with the previous myth, there is a kernel of truth here: once a script has been defined and any prerequisites documented, the test itself can be executed by anyone with the requisite access. This is not the whole story, however, and it misses the “off piste” element of testing. A tester should be free to explore areas at the fringes of the test. An experienced user will recognise (and test) exceptions to the normal “happy path” sequence that the test designer envisaged. It is often these meandering, unscripted variations on the original script that exercise the product better than any formal script could. A tester should first and foremost be knowledgeable about the process involved. I have seen teams of testers parachuted into a project to execute hundreds of scripts who missed some obvious aspects because they lacked the real-world context that was needed.

    Myth 5 – Testing is expensive.

    This myth has equivalents in many walks of life and reminds me of the adage by oil well fire-fighter Red Adair: “If you think it’s expensive to hire a professional, wait until you hire an amateur”. While testing requires an intimate understanding of your own internal business processes, there is a level of expertise that your partners can bring to bear in designing and managing this phase. At Solitaire Consulting we have several decades’ worth of collective experience in this field and would be happy to discuss your specific requirements and challenges. This topic will be the subject of one of our “Lunch and Learn” sessions on 26th February, where we will look at the strategies, governance, tools, and processes used in testing: if this is an area that interests you, please look out for further details!

    Myth 6 – The product is fit for purpose:  it is used by dozens of other customers.

    This can be a difficult one, in that vendors (and customers) often genuinely believe that if a product has been on the market for a number of years, the bulk of its defects have been ironed out. This is possibly (and hopefully) true, but it is best not to assume that others use it in the same way as us. The mantra of user acceptance testing is best described in the words of Ronald Reagan, discussing the process involved in nuclear disarmament treaties: “trust, but verify”.

Having established that testing is important, how to proceed? If the organisation is sufficiently mature that we have at our disposal resources that have experience in testing projects, they should be involved in the design of the solution at the earliest opportunity. If this is not the case, there are people who specialise in managing these phases: please come and talk to me or one of my colleagues at Solitaire Consulting who will be glad to discuss your requirements.


Join us for a special Lunch & Learn Testing Session

How do we ensure that the solution is fit for purpose, or rather how do we identify the missing or defective elements before we are running with live data and real customers?

If you are a:

  • Project sponsor
  • Project manager
  • Business analyst
  • Test team member
  • Developer
  • Anyone with oversight of a testing effort


Join us on Monday 26th February from 12.30pm-2pm


