
Software Test Release Metrics

Some commonly used software test release metrics are listed below. Note that they vary from company to company and from project to project.

Test Cases executed

General Guidelines:
1. All test cases should be executed at least once, i.e. 100% test case execution.
2. At least 98% of executed test cases should pass (this threshold can vary).
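The two guidelines above can be sketched as a simple release check. This is a minimal illustration, assuming raw counts of total, executed, and passed test cases; the function name and the sample figures are illustrative, not from any standard tool.

```python
# Sketch: compute test-execution release metrics from raw counts.
# Thresholds (100% executed, >= 98% passed) follow the guidelines above.

def execution_metrics(total, executed, passed):
    """Return (execution %, pass %) for a test cycle."""
    execution_pct = executed / total * 100
    pass_pct = passed / executed * 100  # pass rate over executed cases
    return execution_pct, pass_pct

exec_pct, pass_pct = execution_metrics(total=500, executed=500, passed=492)
print(f"Executed: {exec_pct:.1f}%  Passed: {pass_pct:.1f}%")

# Release gate per the guidelines above:
release_ok = exec_pct == 100 and pass_pct >= 98
```

With 492 of 500 executed cases passing, the pass rate is 98.4%, so this example cycle meets both thresholds.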
 

Effort Distribution

General Guidelines:
1. Sufficient effort should be spent on every phase and on every component/module of the software/product under test.
2. Quantify this as (Effort spent per module / Effort spent on all modules) x 100.
Example: effort should be quantified for each phase, such as requirements analysis, test case design, execution, etc.
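The formula above can be applied per phase as follows. This is a sketch; the phase names and hour figures are made-up examples.

```python
# Sketch: effort distribution per phase as a percentage of total effort,
# using (effort per phase / effort on all phases) * 100 from the text above.
# Phase names and hour figures are illustrative.

effort_hours = {
    "Requirements analysis": 40,
    "Test case design": 120,
    "Execution": 200,
    "Reporting": 40,
}

total = sum(effort_hours.values())
distribution = {phase: hours / total * 100 for phase, hours in effort_hours.items()}

for phase, pct in distribution.items():
    print(f"{phase}: {pct:.1f}%")
```

A lopsided distribution (e.g. almost no hours on requirements analysis) is the signal to investigate before release.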
 

Open Defects with priority

General Guidelines:
1. A plot of the number of open defects against time should show a downward trend.
2. There should be no open show-stoppers/blockers before release: zero blockers in the open state.
3. Critical/major defects should be close to zero before release. In practice this is rarely exactly zero, since fixes that are acceptable to defer are postponed to the next release.
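The guidelines above amount to a release gate on open defect counts by priority. A minimal sketch, assuming a simple count-per-priority dictionary; the priority labels, counts, and the deferral limit are illustrative and project-specific.

```python
# Sketch: release gate on open defects by priority, per the guidelines above.
# Labels, counts, and the critical/major limit are illustrative.

open_defects = {"blocker": 0, "critical": 1, "major": 2, "minor": 14}

def release_gate(defects, critical_major_limit=3):
    """Blockers must be zero; critical + major must be close to zero
    (the exact limit is a project-specific judgment call)."""
    if defects.get("blocker", 0) > 0:
        return False
    return defects.get("critical", 0) + defects.get("major", 0) <= critical_major_limit

print(release_gate(open_defects))  # zero blockers, 3 critical+major: passes
```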
 

Defect Removal Efficiency %(DRE%)

Definition: DRE % indicates how effectively defects are identified and removed, both at the phase level and at the project level.

Conclusion from DRE %:
If the overall project DRE is between 90% and 100%, efficiency is considered High/Good.
If the overall project DRE is between 75% and 90%, efficiency is considered Medium/Moderate.
If the overall project DRE is below 75%, efficiency is considered Low/Alarming.

Note: these thresholds may vary from organization to organization and from project to project.

The DRE for each phase, and for the overall project, is calculated and reported to management at the end of the project.

DRE % = (Defects removed during a phase) / (Defects present when the phase ran, i.e. those removed during the phase plus those that escaped it and were removed later) x 100

Example:

Phase Detected \ Phase Introduced | Requirement | Design | Code/Unit Test
----------------------------------+-------------+--------+---------------
Requirement                       |     10      |   --   |      --
Design                            |      3      |   18   |      --
Coding                            |      0      |    4   |      26
Testing                           |      2      |    5   |       8
Customer                          |      1      |    2   |       7

DRE % Requirement Phase = 10 / (10+3+0+2+1) x 100 = 62.50 %

DRE % Design Phase = (3+18) / (3+0+2+1+18+4+5+2) x 100 = 60.00 %

DRE % Coding Phase = (0+4+26) / (0+2+1+4+5+2+26+8+7) x 100 = 54.55 %

DRE % Testing Phase = (2+5+8) / (2+1+5+2+8+7) x 100 = 60.00 %
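The per-phase calculations above can be driven directly from the detected-vs-introduced matrix. A sketch, assuming the rows and columns of the example table; the function and variable names are illustrative.

```python
# Sketch: per-phase DRE from the detected-vs-introduced matrix above.
# Rows: phase a defect was detected in; columns: phase it was introduced in
# (None marks impossible cells, shown as -- in the table).

detected = ["Requirement", "Design", "Coding", "Testing", "Customer"]
introduced = ["Requirement", "Design", "Code/Unit Test"]
matrix = [
    [10, None, None],  # Requirement
    [3,  18,  None],   # Design
    [0,   4,   26],    # Coding
    [2,   5,    8],    # Testing
    [1,   2,    7],    # Customer
]

def phase_dre(i):
    """DRE % for detection phase i: defects removed in that phase, divided
    by defects present when the phase ran (removed there or escaped to a
    later phase), times 100."""
    cols = min(i + 1, len(introduced))  # injection phases active so far
    removed = sum(v for v in matrix[i][:cols] if v is not None)
    present = sum(v for row in matrix[i:] for v in row[:cols] if v is not None)
    return removed / present * 100

# Reproduces the worked example above (Customer is an escape, not a phase):
for i, phase in enumerate(detected[:-1]):
    print(f"DRE % {phase} Phase = {phase_dre(i):.2f} %")
```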



Defect Removal Efficiency %(DRE%) for the project

DRE % = (Total no. of defects found before release or delivery) / (Total no. of defects for the project) x 100

For the above mentioned example,

Over-all DRE % = (10+3+0+2+18+4+5+2+26+8) / (10+3+0+2+1+18+4+5+2+26+8+7) x 100 = 88.37 %

In other words,

Defect Removal Efficiency = D1 / (D1 + D2), where
D1 = defects found before release/delivery
D2 = defects found after release, by the customer
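In the worked example, D1 is everything the team found internally (rows Requirement through Testing) and D2 is what the customer found. A minimal check of the arithmetic:

```python
# Sketch: overall project DRE in the D1 / (D1 + D2) form above,
# using the figures from the worked example.

d1 = 10 + (3 + 18) + (0 + 4 + 26) + (2 + 5 + 8)  # found before delivery
d2 = 1 + 2 + 7                                   # found by the customer

overall_dre = d1 / (d1 + d2) * 100
print(f"Overall DRE % = {overall_dre:.2f} %")  # matches the 88.37 % above
```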



Effort Metrics

Effort Slippage%= (Actual Effort – Planned Effort)/(Planned Effort) X 100

Note:

  • Effort is always expressed in hours.
  • If the outcome is negative, the actual effort was less than planned.
  • If the outcome is positive, the actual effort was more than planned.
  • In either case, state the reason when reporting the metric to management.

 

Schedule Metrics

Schedule Slippage% = (Actual Schedule-Planned Schedule)/Planned Schedule X 100

Note:

  • The schedule is always expressed in days or hours.
  • If the outcome is negative, the project finished ahead of the planned schedule.
  • If the outcome is positive, the project overran the planned schedule.
  • In either case, state the reason when reporting the metric to management.
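Since the effort and schedule slippage formulas above share the same shape, one helper covers both. A sketch; the hour and day figures are illustrative.

```python
# Sketch: slippage % = (actual - planned) / planned * 100, covering both the
# effort and schedule metrics above. Figures are illustrative.

def slippage_pct(actual, planned):
    return (actual - planned) / planned * 100

effort = slippage_pct(actual=440, planned=400)  # hours
schedule = slippage_pct(actual=45, planned=50)  # days

print(f"Effort slippage:   {effort:+.1f}%")    # positive: more than planned
print(f"Schedule slippage: {schedule:+.1f}%")  # negative: ahead of plan
```

Whatever the sign, the number alone is not enough: report the underlying reason alongside it, as the notes above require.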