Detailed Test Cases. Yes or No?

That is the question.

Is there an unambiguous answer?

First, we will go through two extremes so that we can define the optimal approach later.


 

Extreme 1 - Detailed TCs

We all know them: we wrote them, we tested with them, we maintained them, we hate them. Needless to say, we will keep meeting them in one form or another for a long time, and there are situations where their existence is justifiable.

Pros

  • Everybody can execute them
  • There is no resourcing risk
  • "Monkeys" are cheap and easy to outsource
  • Detailed test cases can be a substitute for bad or missing documentation.

Cons

  • They are time-consuming to create and maintain
  • They are usually created by one (senior) person and then executed by "monkeys", so all responsibility rests on the creator
  • Execution is blind, without any thinking or creativity, and does not require business understanding.

When to do it

  • If testing is only verification for you
  • If you want to outsource testing
  • If the customer strictly requires them (in this case you can still do all other QA activities; detailed test cases are simply a "must be done" item and you have to account for them in your QA estimation).

One story from real life before we continue.

Customer to Analyst: All deals in England should be priced in CZK.
Analyst to Team: Customer wants to price all deals in England using CZK currency.
Developer to Tester: I implemented that all deals in England are now priced in CZK. You can test it.
Tester to Analyst: I tested it and it works fine - all deals in England are now priced in CZK.
Customer to Analyst (usually a few months later): Why are all deals in England priced in CZK?
Analyst to Customer: Because you wanted it. Do you remember our meeting a few months ago?
Customer: OK, that was probably a slip of the tongue. But you should understand my domain, and we always do pricing in the local currency.

Looks familiar? Whose fault was that? The most frequent answer is that the analyst should be blamed, because he is the one who should understand the customer's business. Really, only him? Then you degrade all other roles to machines: they do what they are told without any thinking.
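The story above can be sketched in a few lines of Python (the function, currency table, and countries are all hypothetical): a check that only verifies the literal requirement passes, while the customer's real expectation fails.

```python
# Hypothetical implementation of the literal requirement:
# every deal in England is priced in CZK.
def deal_currency(country: str) -> str:
    if country == "England":
        return "CZK"  # exactly what was asked for, and it was "verified"
    return "EUR"

# The tester's check against the stated requirement passes:
assert deal_currency("England") == "CZK"

# But the customer's actual intent was the local currency:
LOCAL_CURRENCY = {"England": "GBP", "Czech Republic": "CZK"}
intended = LOCAL_CURRENCY["England"]          # "GBP"
print(deal_currency("England") == intended)   # False: the real expectation fails
```

Blind verification confirms what was written down, not what was meant; only domain knowledge catches the gap.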

 

Extreme 2 - No Test Cases

Knowledge of the customer's business domain and full understanding of the requirements are the key to assuring the required quality. But do we still need test cases then? Yes, we do (at least in my opinion), but we need to think about their level of abstraction.

Highest level - no test cases at all

How it works

  • There are no TCs at all
  • The Test Scope is defined (I have had good experience with mind maps)
  • The Test Strategy is defined (how we will test)
  • All test activities are closely connected to specific requirements, meaning we test against the requirements definition itself. The precondition is that the requirements are of high quality (note: high quality does not mean exhaustive), for example following FURPS+.
  • Results are reported in terms of the described test scope: you can see the status of each area with linked bugs.
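As an illustration of reporting results by test scope, the mind map could be captured as a simple nested structure (all area names, statuses, and bug IDs below are invented):

```python
# Hypothetical test scope: each area carries a status, linked bugs, and sub-areas.
test_scope = {
    "Pricing": {
        "status": "Failed",
        "bugs": ["BUG-101"],
        "areas": {
            "Local currency": {"status": "Failed", "bugs": ["BUG-101"]},
            "Discounts": {"status": "Passed", "bugs": []},
        },
    },
    "Reporting": {"status": "Passed", "bugs": [], "areas": {}},
}

def failed_areas(scope, path=""):
    """Collect the paths of all areas whose status is Failed."""
    result = []
    for name, node in scope.items():
        full = f"{path}/{name}" if path else name
        if node.get("status") == "Failed":
            result.append(full)
        result.extend(failed_areas(node.get("areas", {}), full))
    return result

print(failed_areas(test_scope))  # ['Pricing', 'Pricing/Local currency']
```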

Pros

  • No test case maintenance
  • The tester has deep business knowledge, so we can assume he tests in a similar way to how the customer will ("walk a mile in the customer's shoes")
  • The tester can clarify a lot of things or improve the requirements themselves. He is also capable of finding missing requirements.

Cons

  • It is very hard to replace the current tester.
  • A new resource needs a lot of time to become productive.
  • It is almost impossible to get any benefit from a newcomer late in the project (typically before UAT).
  • It is hard to demonstrate progress and approach, especially when the customer finds bugs and asks why they were not found during your testing.

When to do it

  • If the budget for QA activities is low and you have a skilled tester you can trust
  • It is more effective on an enterprise project where you customize a standard product for a specific customer. You can expect your testers to know the product and how to test it, so customer customizations are tested only as deltas from the standard. But then you should probably also have standard test cases for the product, which can easily be extended.

 

Optimal way - something in the middle

How it works

  • It combines the pros of both extremes.
  • Knowledge of the customer's business domain is still the key aspect.
  • The scope is defined in an abstract way (mind map).
  • Test cases are based on business flows and end-to-end scenarios.
  • Test cases do not contain details; these are maintained in the requirements themselves (user stories, use cases, documents, whatever).

Pros

  • The business domain is described by the test cases.
  • Test cases are easy to create and maintain because the details are stored outside of them.
  • Newcomers can become productive in a short time because the domain is described by the test cases.
  • Progress and results are clear and easy to demonstrate.

 

What about an example?

QA process (assuming we do agile)

  1. Sprint planning: everybody agrees on a basic understanding of each requirement.
  2. The tester divides the requirements into smaller logical areas using the Test Scope (mind map) and then continues with the creation of high-level test cases.
  3. When a developer wants to start a task, he needs to discuss it with the tester. The main objective is for the tester to present how he is going to test it. This helps the developer understand the "definition of done" for the task.
  4. When the developer has finished the implementation, he should again present it to the tester. The objective is to go through the implementation to make sure the requirement is fully implemented and all cases are covered. This provides quick feedback, and potential issues can be fixed right away, before the developer starts other tasks and while he is still in context.
  5. The tester executes all test cases for the task and provides the results.

Test Scope
A mind map is the best way to visualize the scope and provides an easy overview.

Test Case
The standard structure of a test case is:

  • Summary - a quick description of the TC objectives
  • Precondition - known preconditions for TC execution
  • Steps + Expected results - a description of the flow: what to do, not how to do it.
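The structure above could be represented, for example, as a small Python record (the field names follow the bullet list; the sample data is invented):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """High-level test case: steps say what to do, not how to do it."""
    summary: str
    precondition: str
    steps: list = field(default_factory=list)  # (action, expected result) pairs

# Hypothetical example based on the pricing story earlier in the article.
tc = TestCase(
    summary="Price a deal for an English customer",
    precondition="A customer with deals in England exists",
    steps=[
        ("Open the customer's deal list", "All English deals are listed"),
        ("Check the currency of each deal", "Deals are priced in the local currency"),
    ],
)
assert len(tc.steps) == 2
```

Note that the expected results stay at the business level; the concrete values live in the requirements.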

Test Results

They should be provided on two levels: current Sprint results and overall results.

Sprint results

  • Mind map - visualization of "Passed / Failed / Blocked" plus any other concerns and notes
  • Test plan execution - results of each test case
  • List of all found issues, classified by severity
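Classifying the found issues by severity is a one-liner; a sketch with invented issue data:

```python
from collections import Counter

# Hypothetical list of issues found during the sprint.
issues = [
    {"id": "BUG-101", "severity": "Critical"},
    {"id": "BUG-102", "severity": "Minor"},
    {"id": "BUG-103", "severity": "Critical"},
    {"id": "BUG-104", "severity": "Major"},
]

by_severity = Counter(i["severity"] for i in issues)
for severity in ("Critical", "Major", "Minor"):
    print(f"{severity}: {by_severity.get(severity, 0)}")
# Critical: 2
# Major: 1
# Minor: 1
```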

Project results

  • High-level strategy
  • Overall QA status - number of found bugs with their current status, classified by severity
  • Bug trend - created vs. resolved
  • Optional: list of all issues
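The created-vs-resolved bug trend can be derived from per-sprint counts; a sketch with hypothetical numbers:

```python
from itertools import accumulate

# Hypothetical per-sprint counts of bugs created and resolved.
created_per_sprint  = [5, 8, 6, 3]
resolved_per_sprint = [2, 6, 7, 6]

created_cum  = list(accumulate(created_per_sprint))   # [5, 13, 19, 22]
resolved_cum = list(accumulate(resolved_per_sprint))  # [2, 8, 15, 21]
open_bugs    = [c - r for c, r in zip(created_cum, resolved_cum)]
print(open_bugs)  # [3, 5, 4, 1]
```

A shrinking gap between the two cumulative curves (here the open-bug count falling from 5 to 1) is the signal the customer wants to see before release.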

Such a report provides the customer with details for each Sprint but also informs them about the quality of the whole project.

 

Final word

So what is the correct answer to our opening question? We should do only what is really needed and focus on quality itself. It is always about the expectations of all stakeholders, and we have to negotiate them carefully.

I think that detailed test cases are obsolete and do not add any value. They are good if you want to outsource your testing, but then you degrade testing to mere verification.

The other extreme is not good either, because it is very risky not to have any test cases at all. We always need to find the right level of abstraction. That is the answer.

  • bartavoj 02.05.2013 21:36

    Because different people are best at different skills…
    That is the same as in an orchestra.

  • fil 28.04.2013 10:11

    Good article. I’m missing one point – why do we need a separate role for development and testing? I somehow feel the communication overhead…
